Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. 
We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
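As a quick sketch of the two quantities this entry compares: the Welch-Satterthwaite effective degrees of freedom, and (for illustration only, since the abstract does not give the formula) one published Bayesian rescaling of a Type A uncertainty, sqrt(v/(v-2)), which lets a plain normal coverage factor replace the t quantile:

```python
import math
from statistics import NormalDist

def ws_effective_dof(uncertainties, dofs):
    """Welch-Satterthwaite effective degrees of freedom for a combined
    standard uncertainty (unit sensitivity coefficients assumed)."""
    u_c2 = sum(u * u for u in uncertainties)
    return u_c2 ** 2 / sum(u ** 4 / v for u, v in zip(uncertainties, dofs))

def bayesian_u(u, dof):
    # One published Bayesian rescaling of a Type A standard uncertainty,
    # sqrt(v / (v - 2)) for v > 2; shown for illustration only -- it may
    # differ in detail from the proposal in this abstract.
    return u * math.sqrt(dof / (dof - 2))

# Two inputs with equal uncertainties and 5 degrees of freedom each:
# equal contributions double the individual dof (nu_eff = 10).
nu_eff = ws_effective_dof([0.3, 0.3], [5, 5])

# With the normal-based alternative, the coverage factor is simply the
# standard normal quantile instead of a t quantile at nu_eff dof.
k_normal = NormalDist().inv_cdf(0.975)
```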
Logistic Approximation to the Normal: The KL Rationale
ERIC Educational Resources Information Center
Savalei, Victoria
2006-01-01
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
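A minimal illustration of the approximation being optimized: the logistic CDF with a scaling constant tracks the normal CDF closely. The constant 1.702 used here is the classic minimax-style value; the KL-optimal constant derived in the entry above is close but not identical, so treat the exact value as the paper's result:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_cdf(x, c=1.702):
    # c = 1.702 is the classic scaling constant; the KL-optimal constant
    # in the abstract's approach differs slightly.
    return 1.0 / (1.0 + math.exp(-c * x))

# Maximum absolute CDF discrepancy over a grid: below 0.01 for c = 1.702.
max_err = max(abs(normal_cdf(t / 100.0) - logistic_cdf(t / 100.0))
              for t in range(-500, 501))
```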
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and to nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with nonlinear least squares, such as nonexistence and/or multiple solutions, are illustrated as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
Polynomial probability distribution estimation using the method of moments
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
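The core moment-matching step can be sketched as a linear solve: on an interval [a, b], requiring that p(x) = sum c_k x^k reproduce the raw moments m_j gives a linear system in the coefficients. This is a bare-bones version under that assumption; the paper's algorithmic safeguards are not reproduced:

```python
import numpy as np

def polynomial_pdf_coeffs(moments, a, b):
    """Coefficients c_0..c_N of p(x) = sum c_k x^k on [a, b] matching the
    raw moments m_j = integral x^j p(x) dx for j = 0..N (method of moments).
    The system matrix holds the monomial cross-moments on [a, b]."""
    n = len(moments)
    A = np.array([[(b ** (j + k + 1) - a ** (j + k + 1)) / (j + k + 1)
                   for k in range(n)] for j in range(n)])
    return np.linalg.solve(A, np.asarray(moments, dtype=float))

# Sanity check: Uniform(0,1) has raw moments 1/(j+1), so the fitted
# polynomial should be the constant density p(x) = 1.
coeffs = polynomial_pdf_coeffs([1.0, 1 / 2, 1 / 3, 1 / 4], 0.0, 1.0)
```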
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
Weak atmospheric turbulence in an optical wireless communication (OWC) system is captured by a log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves statistically averaging the Gaussian Q-function over a log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained with the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
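The Gauss-Hermite benchmark mentioned above is straightforward to sketch: averaging Q(sqrt(SNR)) over a unit-mean log-normal irradiance turns into a weighted sum at Hermite nodes. The SNR model below is a generic illustration, not the paper's system model:

```python
import math
import numpy as np

def qfunc(x):
    # Gaussian Q-function via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber_gh(snr0, sigma, n=20):
    """E[Q(sqrt(snr0 * I))] with ln I ~ N(-sigma^2/2, sigma^2), i.e. a
    unit-mean log-normal irradiance, via n-point Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(n)
    irr = np.exp(-0.5 * sigma ** 2 + math.sqrt(2.0) * sigma * t)
    vals = np.array([qfunc(math.sqrt(snr0 * i)) for i in irr])
    return float(np.dot(w, vals) / math.sqrt(math.pi))

ber = avg_ber_gh(snr0=25.0, sigma=0.25)
```

As sigma shrinks, the fading average collapses to the unfaded value Q(sqrt(snr0)), which makes a convenient correctness check.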
Box-Cox transformation of firm size data in statistical analysis
NASA Astrophysics Data System (ADS)
Chen, Ting Ting; Takaishi, Tetsuya
2014-03-01
Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. Both deviate considerably from a normal distribution. To improve their normality, we transform them by the Box-Cox transformation with appropriate parameters, determined so that the transformed data best match the kurtosis of a normal distribution. We find that the two measures transformed by the Box-Cox transformation show strong linearity, indicating that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
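The kurtosis-matching criterion described above can be sketched as a grid search over the Box-Cox parameter. The synthetic log-normal "firm sizes" stand in for the study's real data, so recovering lambda near 0 is the expected outcome:

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox transform; the lam -> 0 limit is the log-transformation.
    return np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def best_lambda(x, grid=np.linspace(-1.0, 1.0, 201)):
    # Pick the parameter whose transform has kurtosis closest to the
    # normal value (excess kurtosis 0), as in the study's criterion.
    return float(min(grid, key=lambda l: abs(excess_kurtosis(boxcox(x, l)))))

rng = np.random.default_rng(0)
firm_sizes = rng.lognormal(mean=3.0, sigma=1.0, size=20_000)
lam = best_lambda(firm_sizes)  # near 0 for log-normal data
```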
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
The study on probability density function and distribution function of electricity prices contributes to the power suppliers and purchasers to estimate their own management accurately, and helps the regulator monitor the periods deviating from normal distribution. Based on the assumption of normal distribution load and non-linear characteristic of the aggregate supply curve, this paper has derived the distribution of electricity prices as the function of random variable of load. The conclusion has been validated with the electricity price data of Zhejiang market. The results show that electricity prices obey normal distribution approximately only when supply-demand relationship is loose, whereas the prices deviate from normal distribution and present strong right-skewness characteristic. Finally, the real electricity markets also display the narrow-peak characteristic when undersupply occurs.
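The mechanism described above (normal load pushed through a convex supply curve yields right-skewed prices) is easy to see by simulation. The supply curve below is an invented illustration, not the fitted Zhejiang curve:

```python
import numpy as np

rng = np.random.default_rng(1)
load = rng.normal(loc=100.0, scale=10.0, size=100_000)  # normal load

def price(load, capacity=130.0):
    # Stylized aggregate supply curve: near-linear while supply is ample,
    # sharply convex as load approaches `capacity`. Functional form and
    # numbers are assumptions for illustration only.
    return 20.0 + 0.2 * load + 5.0 * np.exp((load - capacity) / 5.0)

p = price(load)
z = (p - p.mean()) / p.std()
skew = float((z ** 3).mean())  # positive: right-skewed price distribution
```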
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
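For reference, this is the default (bivariate-normal-theory) interval that the methods above adjust: Fisher's z' with the usual 1/sqrt(n-3) standard error, back-transformed by tanh:

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, conf=0.95):
    """Unadjusted confidence interval for a Pearson correlation via
    Fisher's z' = atanh(r), standard error 1/sqrt(n - 3)."""
    z = math.atanh(r)
    half = NormalDist().inv_cdf(0.5 + conf / 2.0) / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

lo, hi = fisher_ci(0.42, 50)
```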
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
Normal and compound poisson approximations for pattern occurrences in NGS reads.
Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu
2012-06-01
Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
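A deliberately simplified version of the counting problem above: the expected number of occurrences of one fixed k-mer across reads under an iid uniform background, and the resulting Poisson probability of zero occurrences. This toy model ignores the overlap structure and the genome-sampling randomness that the paper's normal and compound Poisson approximations account for:

```python
import math

def expected_word_count(num_reads, read_len, k, p_letter=0.25):
    # Each read has (read_len - k + 1) candidate positions; a fixed k-mer
    # matches at a position with probability p_letter ** k.
    positions = num_reads * (read_len - k + 1)
    return positions * p_letter ** k

def poisson_pmf(lam, x):
    return math.exp(-lam) * lam ** x / math.factorial(x)

lam = expected_word_count(num_reads=1000, read_len=100, k=8)
p_absent = poisson_pmf(lam, 0)  # Poisson-approximate P(pattern never occurs)
```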
The Distribution of the Sum of Signed Ranks
ERIC Educational Resources Information Center
Albright, Brian
2012-01-01
We describe the calculation of the distribution of the sum of signed ranks and develop an exact recursive algorithm for the distribution, as well as a normal approximation to it. The results have applications to the non-parametric Wilcoxon signed-rank test.
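Both pieces can be sketched directly: the exact null distribution via the standard subset-sum recursion over ranks 1..n, and the normal approximation using the null mean n(n+1)/4 and variance n(n+1)(2n+1)/24 (whether this matches the article's exact presentation is an assumption):

```python
from math import sqrt, erf

def signed_rank_counts(n):
    """counts[w] = number of subsets of {1..n} summing to w; dividing by
    2**n gives the exact null distribution of the signed-rank statistic."""
    counts = [1]  # empty set has sum 0
    for k in range(1, n + 1):
        new = counts + [0] * k
        for w in range(len(counts)):
            new[w + k] += counts[w]  # either include rank k or not
        counts = new
    return counts

def normal_approx_cdf(w, n):
    # Continuity-corrected normal approximation to P(W <= w).
    mu = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    z = (w + 0.5 - mu) / sqrt(var)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

c3 = signed_rank_counts(3)  # [1, 1, 1, 2, 1, 1, 1]
```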
ERIC Educational Resources Information Center
Edwards, Lynne K.; Meyers, Sarah A.
Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
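A minimal example of the kind of routine catalogued in the two reports above, written in Python rather than the reports' Fortran: generating normal deviates from uniform ones via the Box-Muller transform (the reports do not specify their generator, so this is illustrative only):

```python
import math
import random

def box_muller(rng=random):
    # One standard normal deviate from two uniform deviates (Box-Muller).
    # 1 - random() lies in (0, 1], so the logarithm is always defined.
    u1 = 1.0 - rng.random()
    u2 = rng.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

random.seed(42)
sample = [box_muller() for _ in range(50_000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
```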
ERIC Educational Resources Information Center
Bellera, Carine A.; Julien, Marilyse; Hanley, James A.
2010-01-01
The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
Mroz, T A
1999-10-01
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
Baouche, S; Gamborg, G; Petrunin, V V; Luntz, A C; Baurichter, A; Hornekaer, L
2006-08-28
Highly energetic translational energy distributions are reported for hydrogen and deuterium molecules desorbing associatively from the atomic chemisorption states on highly oriented pyrolytic graphite (HOPG). Laser assisted associative desorption is used to measure the time of flight of molecules desorbing from a hydrogen (deuterium) saturated HOPG surface produced by atomic exposure from a thermal atom source at around 2100 K. The translational energy distributions normal to the surface are very broad, from approximately 0.5 to approximately 3 eV, with a peak at approximately 1.3 eV. The highest translational energy measured is close to the theoretically predicted barrier height. The angular distribution of the desorbing molecules is sharply peaked along the surface normal and is consistent with thermal broadening contributing to energy release parallel to the surface. All results are in qualitative agreement with recent density functional theory calculations suggesting a lowest energy para-type dimer recombination path.
Using Extreme Groups Strategy When Measures Are Not Normally Distributed.
ERIC Educational Resources Information Center
Fowler, Robert L.
1992-01-01
A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)
Estimating insect flight densities from attractive trap catches and flight height distributions
USDA-ARS?s Scientific Manuscript database
Insect species often exhibit a specific mean flight height and vertical flight distribution that approximates a normal distribution with a characteristic standard deviation (SD). Many studies in the literature report catches on passive (non-attractive) traps at several heights. These catches were us...
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, with a range over the 2,394 gray matter pixels of 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives.
The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to Gaussian distribution and high cross-validation can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
An asymptotic analysis of the logrank test.
Strawderman, R L
1997-01-01
Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
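A one-dimensional sketch of the Robbins-Monro scheme that the abstract generalizes (its multidimensional, locally contractive setting and the mixture-estimation application are not reproduced here): find the root of a regression function from noisy evaluations, with step sizes c/n:

```python
import random

def robbins_monro(noisy_g, x0, steps=20_000, c=1.0, seed=7):
    # Root-finding from noisy observations:
    #   x_{n+1} = x_n - (c / n) * Y_n,  where E[Y_n | x_n] = g(x_n).
    rng = random.Random(seed)
    x = x0
    for n in range(1, steps + 1):
        x -= (c / n) * noisy_g(x, rng)
    return x

theta = 2.5  # unknown root to be recovered
root = robbins_monro(lambda x, rng: (x - theta) + rng.gauss(0.0, 1.0), x0=0.0)
```

With g(x) = x - theta and unit-variance noise, the iterate settles near theta with error shrinking roughly like 1/sqrt(n).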
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
van Albada, S J; Robinson, P A
2007-04-15
Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
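The transformation described above, exact for continuous distributions, is in essence the probability integral transform composed with the normal quantile function, x -> Phi^{-1}(F(x)). A rank-based sample version (an assumption about the paper's exact estimator, and with ties left unaddressed) looks like:

```python
from statistics import NormalDist

def inverse_normal_transform(values):
    # Replace each value by the standard normal quantile of its mid-rank
    # plotting position (rank + 0.5) / n, approximating Phi^{-1}(F(x)).
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out

# Heavily skewed input maps to symmetric, normal-scale scores.
z = inverse_normal_transform([1.0, 10.0, 100.0, 1000.0, 10000.0])
```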
A Noncentral "t" Regression Model for Meta-Analysis
ERIC Educational Resources Information Center
Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi
2010-01-01
In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…
Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer
2016-01-01
Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both performed well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised", and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality, with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or LOESS smoothing increased that proportion to 82.4 and 82.7 % respectively.
This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
Estimation of Item Parameters and the GEM Algorithm.
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
The models and procedures discussed in this paper are related to those presented in Bock and Aitkin (1981), where they considered the 2-parameter probit model and approximated a normally distributed prior distribution of abilities by a finite and discrete distribution. One purpose of this paper is to clarify the nature of the general EM (GEM)…
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
The transmembrane gradient of the dielectric constant influences the DPH lifetime distribution.
Konopásek, I; Kvasnicka, P; Amler, E; Kotyk, A; Curatola, G
1995-11-06
The fluorescence lifetime distribution of 1,6-diphenyl-1,3,5-hexatriene (DPH) and 1-[4-(trimethylamino)phenyl]-6-phenyl-1,3,5-hexatriene (TMA-DPH) in egg-phosphatidylcholine liposomes was measured in normal and heavy water. The lower dielectric constant (by approximately 12%) of heavy water compared with normal water was employed to provide direct evidence that the drop of the dielectric constant along the membrane normal shifts the centers of the distributions of both DPH and TMA-DPH to higher values and sharpens the widths of the distributions. The profile of the dielectric constant along the membrane normal was not found to be a linear gradient (in contrast to [1]) but a more complex function. The presence of cholesterol in liposomes further shifted the centers of the distributions to higher values and sharpened them. In addition, it resulted in a more gradient-like profile of the dielectric constant (i.e. linearization) along the normal of the membrane. The effect of the change of dielectric constant on the membrane proteins is discussed.
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background: Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. 
Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
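The curve-fitting step described in these abstracts (an exponential fit with a high coefficient of determination on a log scale) can be sketched as a linear least-squares fit to log-transformed values. This is a minimal illustration, not the authors' code; the function name and the synthetic data in the usage note are invented.

```python
import math

def fit_exponential(xs, ys):
    """Fit y = A * exp(B * x) by linear least squares on log(y).

    Returns (A, B, r_squared), where r_squared is the coefficient of
    determination of the linear fit in log space.
    """
    n = len(xs)
    ls = [math.log(y) for y in ys]
    mx = sum(xs) / n
    ml = sum(ls) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxl = sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
    b = sxl / sxx
    a = ml - b * mx
    ss_res = sum((l - (a + b * x)) ** 2 for x, l in zip(xs, ls))
    ss_tot = sum((l - ml) ** 2 for l in ls)
    return math.exp(a), b, 1.0 - ss_res / ss_tot
```

On data that are exactly exponential the fit recovers the parameters and r_squared is 1; on real score counts the r_squared comparison against linear and quadratic fits would proceed as in the abstract.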
Boltzmann-conserving classical dynamics in quantum time-correlation functions: "Matsubara dynamics".
Hele, Timothy J H; Willatt, Michael J; Muolo, Andrea; Althorpe, Stuart C
2015-04-07
We show that a single change in the derivation of the linearized semiclassical-initial value representation (LSC-IVR or "classical Wigner approximation") results in a classical dynamics which conserves the quantum Boltzmann distribution. We rederive the (standard) LSC-IVR approach by writing the (exact) quantum time-correlation function in terms of the normal modes of a free ring-polymer (i.e., a discrete imaginary-time Feynman path), taking the limit that the number of polymer beads N → ∞, such that the lowest normal-mode frequencies take their "Matsubara" values. The change we propose is to truncate the quantum Liouvillian, not explicitly in powers of ħ^2 at ħ^0 (which gives back the standard LSC-IVR approximation), but in the normal-mode derivatives corresponding to the lowest Matsubara frequencies. The resulting "Matsubara" dynamics is inherently classical (since all terms O(ħ^2) disappear from the Matsubara Liouvillian in the limit N → ∞) and conserves the quantum Boltzmann distribution because the Matsubara Hamiltonian is symmetric with respect to imaginary-time translation. Numerical tests show that the Matsubara approximation to the quantum time-correlation function converges with respect to the number of modes and gives better agreement than LSC-IVR with the exact quantum result. Matsubara dynamics is too computationally expensive to be applied to complex systems, but its further approximation may lead to practical methods.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
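The relaxed iteration described in these two abstracts (the known successive-approximations procedure when the step-size is 1) can be sketched for a two-component univariate normal mixture: compute the plain EM update, then move only a fraction w of the way toward it. The parameterization and the seeded test data are invented for illustration, not taken from the papers.

```python
import math, random

def em_step(data, w, m, s2, p):
    """One relaxed EM update for a two-component 1-D normal mixture:
    theta_new = theta + w * (theta_EM - theta); w = 1 is standard EM."""
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    # E-step: responsibilities of component 1.
    r = []
    for x in data:
        a = p * pdf(x, m[0], s2[0])
        b = (1 - p) * pdf(x, m[1], s2[1])
        r.append(a / (a + b))
    n1 = sum(r)
    n2 = len(data) - n1
    # M-step targets (the plain EM update).
    m_em = [sum(ri * x for ri, x in zip(r, data)) / n1,
            sum((1 - ri) * x for ri, x in zip(r, data)) / n2]
    s2_em = [sum(ri * (x - m_em[0]) ** 2 for ri, x in zip(r, data)) / n1,
             sum((1 - ri) * (x - m_em[1]) ** 2 for ri, x in zip(r, data)) / n2]
    p_em = n1 / len(data)
    # Deflected step toward the EM target.
    m = [mi + w * (ti - mi) for mi, ti in zip(m, m_em)]
    s2 = [si + w * (ti - si) for si, ti in zip(s2, s2_em)]
    p = p + w * (p_em - p)
    return m, s2, p
```

With well-separated components the iteration converges quickly from a reasonable starting point, consistent with the papers' observation that the optimal step-size depends on the separation of the component densities.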
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
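A minimal integrate-and-fire sketch of the kind described above: each time step adds a constant drive plus random noise to the accumulated "charge," and an impulse fires when a threshold is crossed. All parameter values are illustrative (not from the paper); only Gaussian noise is shown, and the gamma or uniform noise of the study would be swapped in at the same point.

```python
import random

def spike_intervals(n_intervals, rng, threshold=20.0, drive=1.0, noise_sd=0.5):
    """Digital integrate-and-fire: accumulate drive + noise each step until
    the threshold is crossed, then record the interval and reset.
    Parameter values are illustrative, not taken from the paper."""
    intervals = []
    v = 0.0
    steps = 0
    while len(intervals) < n_intervals:
        v += drive + rng.gauss(0.0, noise_sd)
        steps += 1
        if v >= threshold:
            intervals.append(steps)
            v = 0.0
            steps = 0
    return intervals
```

The mean interval is close to threshold/drive, and the coefficient of variation stays well below unity for this parameter choice, matching the abstract's remark that good interval models need not be constrained to CV ≤ 1 while simple ones often are.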
CDC6600 subroutine for normal random variables. [RVNORM(RMU, SIG)]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, D.E.
1977-04-01
A value y for a uniform variable on (0,1) is generated, and a table of 96 percentage points for the N(0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 <= y <= 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = x*sigma + mu gives the X(mu, sigma) variable.
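The two-regime scheme can be sketched as follows. The 0.01-spaced central grid is an assumption (the report's exact table is not given), and the tail uses the classical Abramowitz-Stegun 26.2.23 rational approximation in t = sqrt(-2 ln p) rather than the report's (unspecified) rational Chebyshev approximation.

```python
import math
from statistics import NormalDist

_ND = NormalDist()
# Central table: assumed 0.01-spaced grid spanning the report's 0.02 <= y <= 0.98 range.
_YS = [0.02 + 0.01 * i for i in range(97)]
_XS = [_ND.inv_cdf(y) for y in _YS]

def _tail_inv(p):
    # Abramowitz & Stegun 26.2.23 rational approximation, lower tail (p < 0.5).
    t = math.sqrt(-2.0 * math.log(p))
    num = 2.515517 + t * (0.802853 + t * 0.010328)
    den = 1.0 + t * (1.432788 + t * (0.189269 + t * 0.001308))
    return -(t - num / den)

def inv_normal(y):
    """Inverse standard-normal CDF: table interpolation in the centre,
    rational approximation in the tails."""
    if 0.02 <= y <= 0.98:
        i = min(int((y - 0.02) / 0.01), len(_YS) - 2)
        frac = (y - _YS[i]) / 0.01
        return _XS[i] + frac * (_XS[i + 1] - _XS[i])
    return _tail_inv(y) if y < 0.5 else -_tail_inv(1.0 - y)

def rvnorm(rng, mu, sigma):
    # X = x*sigma + mu, as in the abstract.
    return inv_normal(rng.random()) * sigma + mu
```

Linear interpolation on a 0.01 grid is accurate to roughly 0.01 in the centre, and the tail formula is accurate to about 4.5e-4 in x.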
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indiscriminately, using the same fit statistics and likelihood ratio tests. This…
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n x n correlation matrix of the chi_i and the standardized multivariate…
On the estimation of spread rate for a biological population
Jim Clark; Lajos Horváth; Mark Lewis
2001-01-01
We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
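Simultaneous fixed-point equations of the kind mentioned above can be solved by straightforward alternating iteration when the map is a contraction. The two-equation toy system in the test is invented for illustration and has the exact solution x = y = 1; it is not an example from the paper.

```python
def solve_fixed_point(g, h, x0, y0, tol=1e-12, max_iter=1000):
    """Iterate x = g(y), y = h(x) until a simultaneous fixed point is reached
    (converges when the combined map is a contraction)."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new, y_new = g(y), h(x)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```

In the paper's setting g and h would be the conditional-mean functions; here any contraction pair illustrates the mechanics.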
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) an ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) at 120 kVp and 0.5 mAs is the maximum pi value for which a definitive maximum-likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically, cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach has been proposed that detects clusters of events by testing individual areas that may be combined with their neighbours. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events is large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. 
A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
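A sketch of the comparison: a Monte Carlo tail probability for a compound Poisson event total versus a moment-matched normal approximation with continuity correction. The rates and threshold below are invented, and the paper's actual test statistic and strata handling are not reproduced; for G ~ Poisson(r) per case and N ~ Poisson(lam) cases, the total has mean lam·r and variance lam·E[G²] = lam·(r + r²).

```python
import math, random
from statistics import NormalDist

def poisson(lam, rng):
    # Knuth's method; fine for the small means used here.
    l = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def tail_mc(pop_rate, event_rate, threshold, n_sims, rng):
    """Monte Carlo P(S >= threshold), S = compound Poisson total:
    N ~ Poisson(pop_rate) cases, each contributing Poisson(event_rate) events."""
    hits = 0
    for _ in range(n_sims):
        n = poisson(pop_rate, rng)
        s = sum(poisson(event_rate, rng) for _ in range(n))
        if s >= threshold:
            hits += 1
    return hits / n_sims

def tail_normal(pop_rate, event_rate, threshold):
    """Moment-matched normal approximation with continuity correction."""
    mean = pop_rate * event_rate
    var = pop_rate * (event_rate + event_rate ** 2)
    z = (threshold - 0.5 - mean) / math.sqrt(var)
    return 1.0 - NormalDist().cdf(z)
```

For moderately large population rates the two tail probabilities agree closely, which is the regime where the paper recommends the normal approach.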
NASA Astrophysics Data System (ADS)
Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.
2017-04-01
The concept of the soil line can be used to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points on the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014 after a certain transformation. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank the normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I^2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. 
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness on the observed treatment effect estimates. Further critical evaluation of the method is needed.
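The Box-Cox step can be sketched by profiling the transformation parameter over a grid and choosing the lambda that best normalises the observed estimates. This is a generic maximum-likelihood sketch (the standard profile log-likelihood with normal errors after transformation), not the authors' Bayesian fit; the seeded log-normal test data are invented.

```python
import math, random

def boxcox(y, lmbda):
    # Box-Cox transform of a positive observation; lambda = 0 is the log.
    return math.log(y) if abs(lmbda) < 1e-12 else (y ** lmbda - 1.0) / lmbda

def profile_loglik(ys, lmbda):
    """Profile log-likelihood of the Box-Cox model with normal errors:
    -n/2 * log(sigma_hat^2(lambda)) + (lambda - 1) * sum(log y)."""
    n = len(ys)
    z = [boxcox(y, lmbda) for y in ys]
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lmbda - 1.0) * sum(math.log(y) for y in ys)

def best_lambda(ys, grid):
    return max(grid, key=lambda lm: profile_loglik(ys, lm))
```

For data that are log-normal the chosen lambda is close to 0 (the log transform), illustrating how the transformation adapts to the observed skewness.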
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
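A quick check of the kind of non-normality at issue: sample skewness of raw versus log-transformed response times. The log-normal test data are invented for illustration; the paper's factor model distinguishing heteroscedastic residuals from a skewed speed factor is not reproduced here.

```python
import math, random

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)
```

Raw log-normal response times are strongly right-skewed; after the log transform the skewness is near zero, which is the situation where the usual linearity and homoscedasticity assumptions are defensible. Residual skewness after transformation is what the paper's model is built to diagnose.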
NASA Astrophysics Data System (ADS)
Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.
2018-05-01
Indentation of an elastic half-space with functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses. Tangential stresses are represented in a form of Fourier series. The problem is reduced to the solution of two dual integral equations over even and odd functions describing distribution of unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximated analytical expressions for contact normal stresses are provided.
The missing impact craters on Venus
NASA Technical Reports Server (NTRS)
Speidel, D. H.
1993-01-01
The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may thus be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one, that it is not craters that are missing but the impactors. What you see is what you got. The implication that Venus crossing impactors would have the same type of log normal distribution is consistent with recently described distributions for terrestrial craters and Earth-crossing asteroids.
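Fitting a log-normal to a set of crater diameters reduces to the mean and standard deviation of the log-diameters; exp(mu) then gives the central (median) diameter. The synthetic sample below is invented, sized to match the 842 craters mentioned above, with a log-scale spread chosen arbitrarily.

```python
import math, random

def lognormal_fit(diameters):
    """ML fit of a log-normal: mu and sigma are the mean and sd of the
    log-diameters; the implied central (median) diameter is exp(mu)."""
    logs = [math.log(d) for d in diameters]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
    return mu, sigma, math.exp(mu)
```

With 842 observations the central diameter is recovered to within a few percent, so a sample of this size pins down the distribution's center well even when individual diameters vary over more than an order of magnitude.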
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as the Gaussian (normal), produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
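The simulation described above is easy to reproduce for two independent uniform factors; the grade and tonnage ranges below are illustrative, not taken from the cited database. For independent factors the mean of the product equals the product of the factor means, which gives a simple check on the sampler.

```python
import random

def simulate_products(n, rng, grade_rng=(0.4, 0.8), tonnage_rng=(10.0, 100.0)):
    """Monte Carlo sample of grade x tonnage products from independent
    uniform factors. The ranges are illustrative, not from any real
    deposit database."""
    products = []
    for _ in range(n):
        grade = rng.uniform(*grade_rng)
        tonnage = rng.uniform(*tonnage_rng)
        products.append(grade * tonnage)
    return products
```

With the ranges above, the product of the factor means (and medians) is 0.6 * 55 = 33, and the sampled products are confined to [4, 80]; a histogram of the sample shows the central peak the abstract describes.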
New approach application of data transformation in mean centering of ratio spectra method
NASA Astrophysics Data System (ADS)
Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.
2015-05-01
Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one transformation frequently used. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
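The transformation argued for above amounts to computing the geometric mean, i.e. exponentiating the mean of the logs:

```python
import math

def geometric_mean(values):
    """Geometric mean via the log transform: exp(mean(log y)).
    Requires strictly positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

For strictly positive data the geometric mean never exceeds the arithmetic mean, and for multiplicative (log-normally distributed) errors it is the natural measure of central tendency.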
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus) originating from two types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
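One standard small-sample modification of odds-ratio confidence intervals is the Haldane-Anscombe continuity correction (add 0.5 to every cell of the 2x2 table before forming the Wald interval on the log scale). The abstract does not state which modification the authors used, so treat this as an illustrative choice, not their method.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96, corr=0.5):
    """Odds ratio and Wald CI from a 2x2 table [[a, b], [c, d]], with the
    Haldane-Anscombe continuity correction added to every cell (a common
    sparse-data adjustment; not necessarily the paper's exact one)."""
    a, b, c, d = (x + corr for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

The correction keeps the estimate and interval finite even when a cell count is zero, which is exactly the sparse-table situation the abstract describes.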
Improving the chi-squared approximation for bivariate normal tolerance regions
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.
1993-01-01
Let X be a two-dimensional random variable distributed according to N_2(mu, Sigma), and let bar-X and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability beta and a level of confidence gamma, we seek a number c, depending only on N, beta, and gamma, such that the ellipsoid R = {x : (x - bar-X)' S^(-1) (x - bar-X) <= c} is a tolerance region of content beta and level gamma; i.e., R has probability gamma of containing at least 100 beta percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by a simple adjustment using a factor A which depends on beta and gamma. This paper provides values of A for various beta and gamma so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte Carlo simulation and simple regression modelling can be used to improve an existing approximation.
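The calibration idea (checking a candidate c by simulation) can be sketched with a nested Monte Carlo: outer replications draw the sample that defines the ellipsoid, an inner sample estimates its content, and the level is the fraction of replications achieving content beta. The sample sizes and the two candidate c values in the test are illustrative; the paper's regression-based adjustment factor A is not reproduced.

```python
import random

def mean_cov_2d(pts):
    """Sample mean and (unbiased) 2x2 covariance of a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pts) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / (n - 1)
    return (mx, my), (sxx, sxy, syy)

def mahalanobis2(p, centre, cov):
    """(p - centre)' S^(-1) (p - centre) via the explicit 2x2 inverse."""
    sxx, sxy, syy = cov
    det = sxx * syy - sxy * sxy
    dx, dy = p[0] - centre[0], p[1] - centre[1]
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

def estimated_level(c, n, beta, outer, inner, rng):
    """Fraction of replications in which the ellipsoid covers >= beta of a
    standard bivariate normal; content is estimated by an inner MC sample."""
    ok = 0
    for _ in range(outer):
        pts = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
        centre, cov = mean_cov_2d(pts)
        inside = sum(
            mahalanobis2((rng.gauss(0, 1), rng.gauss(0, 1)), centre, cov) <= c
            for _ in range(inner))
        if inside / inner >= beta:
            ok += 1
    return ok / outer
```

A generous c (well above the chi-squared quantile chi2_{2,beta} ≈ 4.6 for beta = 0.9) yields level near 1, while a c below that quantile yields level near 0; sweeping c and regressing on (beta, gamma) is the spirit of the paper's calibration.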
Lin, Guoxing
2016-11-21
Anomalous diffusion exists widely in polymer and biological systems. Pulsed-field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in nuclear magnetic resonance and magnetic resonance imaging. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, the exact signal attenuation expression including the finite gradient pulse width effect has not been obtained based on fractional derivatives for PFG anomalous diffusion. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional derivative model, both the probability density function (PDF) of a spin in real space and the PDF of the spin's accumulated phase shift in virtual phase space are MLP distributions. The MLP phase distribution leads to a Mittag-Leffler function based PFG signal attenuation, which differs significantly from the exponential attenuation for normal diffusion and from the stretched exponential attenuation for fractional diffusion based on the fractal derivative model. A complete signal attenuation expression E_alpha(-D_f b*_{alpha,beta}) including the finite gradient pulse width effect was obtained and it can handle all three types of PFG fractional diffusion. The result was also extended in a straightforward way to give a signal attenuation expression for fractional diffusion in PFG intramolecular multiple quantum coherence experiments, which has an n^beta dependence upon the order of coherence, different from the familiar n^2 dependence in normal diffusion. The results obtained in this study are in agreement with results from the literature. The results in this paper provide a set of new, convenient approximation formalisms to interpret complex PFG fractional diffusion experiments.
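A minimal sketch (not the paper's code) of the Mittag-Leffler attenuation: the one-parameter series E_alpha(z) = sum_k z^k / Gamma(alpha k + 1) reduces to the ordinary exponential of normal diffusion at alpha = 1 and decays more slowly for alpha < 1. The b-value below is an arbitrary illustrative number with D_f absorbed into it.

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) via its power series.
    Converges well for modest |z|; not suitable for large negative arguments."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 recovers the ordinary exponential (normal diffusion)
for x in (0.5, 1.0, 2.0):
    assert abs(mittag_leffler(1.0, -x) - math.exp(-x)) < 1e-9

# sub-exponential decay for alpha < 1 (illustrative attenuation values)
b = 1.0                              # hypothetical generalized b-value
E_frac = mittag_leffler(0.7, -b)     # fractional-diffusion attenuation
E_norm = math.exp(-b)                # normal-diffusion attenuation
print(E_frac, E_norm)                # fractional decay is slower at this b
```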
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
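The log-normality claim is easy to illustrate with synthetic data: for samples whose logarithm is Gaussian, skewness is large in linear space but near zero in log space, and a small fraction of samples carries a large share of the total, consistent with a few locations dominating the budget. The parameters below are hypothetical, not fitted to ocean data.

```python
import math, random

random.seed(7)
mu, sigma = -2.0, 1.5          # hypothetical log-scale parameters
eps = [math.exp(mu + sigma * random.gauss(0, 1)) for _ in range(50_000)]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

sk_lin = skewness(eps)                        # strongly right-skewed
sk_log = skewness([math.log(x) for x in eps]) # near zero: log is ~ Gaussian
top1 = sorted(eps)[-len(eps) // 100:]         # top 1% of samples
share = sum(top1) / sum(eps)                  # their share of the total
print(sk_lin, sk_log, share)
```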
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and the Makabe-Morimura approximation are extremely poor compared with this approximation.
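For context, the exact Poisson CDF can be compared with two textbook normal approximations: a continuity-corrected direct approximation and a square-root variance-stabilizing transform. These are standard transformations used for illustration here, not Kao's 1978 proposal or the power transformation studied in the paper.

```python
import math

def poisson_cdf(k, lam):
    """Exact P(X <= k) by direct summation of Poisson probabilities."""
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

lam, k = 10.0, 14
exact = poisson_cdf(k, lam)
# direct normal approximation with continuity correction
direct = phi((k + 0.5 - lam) / math.sqrt(lam))
# square-root transform: sqrt(X) is approximately N(sqrt(lam), 1/4)
sqrt_ap = phi(2 * (math.sqrt(k + 0.5) - math.sqrt(lam)))
print(exact, direct, sqrt_ap)
```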
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shortis, M.; Johnston, G.
1997-11-01
In a previous paper, the results of photogrammetric measurements of a number of paraboloidal reflecting surfaces were presented. These results showed that photogrammetry can provide three-dimensional surface characterizations of such solar concentrators. The present paper describes the assessment of the quality of these surfaces as a derivation of the photogrammetrically produced surface coordinates. Statistical analysis of the z-coordinate error distribution indicates that the errors generally conform to a univariate Gaussian distribution, while numerical assessment of the surface normal vectors indicates that the surface normal deviations follow an approximately bivariate Gaussian distribution. Ray tracing of the measured surfaces to predict the expected flux distribution at the focal point of the 400 m² dish shows a close correlation with the videographically measured flux distribution at the focal point of the dish.
Distribution of curvature of 3D nonrotational surfaces approximating the corneal topography
NASA Astrophysics Data System (ADS)
Kasprzak, Henryk T.
1998-10-01
The first part of the paper presents the analytical curves used to approximate the corneal profile. Next, some definitions of 3D surface curvature, such as principal normal sections and principal radii of curvature and their orientations, are given. Four nonrotational 3D surfaces approximating the corneal topography are proposed: an ellipsoid, a surface based on the hyperbolic cosine function, a sphero-cylindrical surface and a toroidal surface. The 3D surface and the contour plots of the principal radii of curvature and their orientations are shown for the four nonrotational approximations of the cornea. Results of the calculations are discussed from the point of view of videokeratometric images.
Prisk, G K; Guy, H J; Elliott, A R; Paiva, M; West, J B
1995-02-01
We used multiple-breath N2 washouts (MBNW) to study the inhomogeneity of ventilation in four normal humans (mean age 42.5 yr) before, during, and after 9 days of exposure to microgravity on Spacelab Life Sciences-1. Subjects performed 20-breath MBNW at tidal volumes of approximately 700 ml and 12-breath MBNW at tidal volumes of approximately 1,250 ml. Six indexes of ventilatory inhomogeneity were derived from data from 1) distribution of specific ventilation (SV) from mixed-expired and 2) end-tidal N2, 3) change of slope of N2 washout (semilog plot) with time, 4) change of slope of normalized phase III of successive breaths, 5) anatomic dead space, and 6) Bohr dead space. Significant ventilatory inhomogeneity was seen in the standing position at normal gravity (1 G). When we compared standing 1 G with microgravity, the distributions of SV became slightly narrower, but the difference was not significant. Also, there were no significant changes in the change of slope of the N2 washout, change of normalized phase III slopes, or the anatomic and Bohr dead spaces. By contrast, transition from the standing to supine position in 1 G resulted in significantly broader distributions of SV (P < 0.05) and significantly greater changes in the changes in slope of the N2 washouts (P < 0.001), indicating more ventilatory inhomogeneity in that posture. Thus these techniques can detect relatively small changes in ventilatory inhomogeneity. We conclude that the primary determinants of ventilatory inhomogeneity during tidal breathing in the upright posture are not gravitational in origin.
Huijing, P A; van Lookeren Campagne, A A; Koper, J F
1989-01-01
Rat gastrocnemius medialis (GM) and semimembranosus (SM) muscles have a very different morphology. GM is a very pennate muscle, combining relatively short muscle fibre length with sizable fibre angles and long muscle and aponeurosis lengths. SM is a more parallel-fibred muscle, combining a relatively long fibre length with a small fibre angle and short aponeurosis length. The mechanisms of fibre shortening as well as angle increase are operational in GM as well as SM. However, as a consequence of isometric contraction, changes of fibre length and angle are greater for GM than for SM at any relative muscle length. These differences are particularly notable at short muscle lengths: at 80% of optimum muscle length, fibre length changes of approximately 30% are coupled to fibre angle changes of 15 degrees in GM, while for SM these changes are 4% and 0.6 degrees, respectively. A considerable difference was found for normalized active slack muscle length (GM approximately 80% and SM approximately 45%). This is explained by differences in the degree of pennation as well as by factors related to differences found for estimated fibre length-force characteristics. Estimated normalized active fibre slack length was considerably smaller for SM than for GM (approximately 40% and 60%, respectively). The most likely explanation of these findings is differences in the distribution of optimum fibre lengths, possibly in combination with differences in myofilament lengths and/or fibre length distributions.
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
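A sketch of Wilkinson's (Fenton-Wilkinson) moment matching for a sum of correlated log-normals, with a Monte Carlo sanity check on the matched mean. The two sub-channel parameters and the common correlation coefficient below are illustrative, not the paper's system values.

```python
import math, random

def fenton_wilkinson(mus, sigmas, rho):
    """Approximate the sum of log-normals exp(N(mu_i, sigma_i^2)), with a
    common pairwise correlation rho on the underlying Gaussians, by a single
    log-normal (mu_z, var_z) matching the first two moments of the sum."""
    m1 = sum(math.exp(m + s * s / 2) for m, s in zip(mus, sigmas))
    m2 = 0.0
    for i, (mi, si) in enumerate(zip(mus, sigmas)):
        for j, (mj, sj) in enumerate(zip(mus, sigmas)):
            r = 1.0 if i == j else rho
            m2 += math.exp(mi + mj + (si * si + sj * sj) / 2 + r * si * sj)
    var_z = math.log(m2 / (m1 * m1))
    mu_z = math.log(m1) - var_z / 2
    return mu_z, var_z

mu_z, var_z = fenton_wilkinson([0.0, 0.0], [0.5, 0.5], rho=0.6)

# Monte Carlo check of the matched mean of the sum
random.seed(3)
n, tot = 200_000, 0.0
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = 0.6 * z1 + math.sqrt(1 - 0.36) * random.gauss(0, 1)  # corr 0.6
    tot += math.exp(0.5 * z1) + math.exp(0.5 * z2)
mc_mean = tot / n
fw_mean = math.exp(mu_z + var_z / 2)
print(mc_mean, fw_mean)   # should agree closely
```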
NASA Technical Reports Server (NTRS)
Harris, C. D.
1974-01-01
Refinements in a 10 percent thick supercritical airfoil produced improvements in the overall drag characteristics at normal force coefficients from about 0.30 to 0.65 compared with earlier supercritical airfoils which were developed for a normal force coefficient of 0.7. The drag divergence Mach number of the improved supercritical airfoil (airfoil 26a) varied from approximately 0.82 at a normal force coefficient of 0.30 to 0.78 at a normal force coefficient of 0.80, with no drag creep evident. Integrated section force and moment data, surface pressure distributions, and typical wake survey profiles are presented.
Calculations of lattice vibrational mode lifetimes using Jazz: a Python wrapper for LAMMPS
NASA Astrophysics Data System (ADS)
Gao, Y.; Wang, H.; Daw, M. S.
2015-06-01
Jazz is a new python wrapper for LAMMPS [1], implemented to calculate the lifetimes of vibrational normal modes based on forces as calculated for any interatomic potential available in that package. The anharmonic character of the normal modes is analyzed via the Monte Carlo-based moments approximation as is described in Gao and Daw [2]. It is distributed as open-source software and can be downloaded from the website http://jazz.sourceforge.net/.
Banerjee, Abhirup; Maji, Pradipta
2015-12-01
The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D
2005-09-07
The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 epilepsy sufferers and 10 healthy subjects. The recordings were obtained with a one-channel biomagnetometer SQUID in a magnetically shielded room. Using the method of chi-squared fitting it was found that the MEG amplitudes in epileptic patients and normal subjects had Poisson and Gauss distributions, respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes of the same order as the recorded MEG from the 20 participants. The approximation of the theoretical neural net model to real MEG data provides information about the structure of brain function in epileptic and normal states, encouraging further studies to be conducted.
Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo
2011-01-01
Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. Microalbuminuria and macroalbuminuria were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. The UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased the UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or larger UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes.
To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
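The first property above can be made concrete: for a log-normal DSD with total number concentration N_T and log-scale parameters mu and sigma^2, the n-th moment is M_n = N_T exp(n mu + n^2 sigma^2 / 2), so ln M_n = ln N_T + n mu + (n^2/2) sigma^2 is linear in the parameters (ln N_T, mu, sigma^2). A numerical check with illustrative parameter values, not fitted DSD data:

```python
import math

def log_moment(n, ln_Nt, mu, sigma2):
    """ln of the n-th moment of a log-normal DSD: a linear combination
    of the parameters (ln N_T, mu, sigma^2)."""
    return ln_Nt + n * mu + 0.5 * n * n * sigma2

def moment_numeric(n, Nt, mu, sigma, lo=1e-4, hi=50.0, steps=200_000):
    """Direct midpoint-rule integration of D^n N(D) dD for comparison."""
    h = (hi - lo) / steps
    tot = 0.0
    for i in range(steps):
        D = lo + (i + 0.5) * h
        nD = Nt / (math.sqrt(2 * math.pi) * sigma * D) * math.exp(
            -((math.log(D) - mu) ** 2) / (2 * sigma ** 2))
        tot += D ** n * nD * h
    return tot

Nt, mu, sigma = 1000.0, 0.2, 0.4   # illustrative values
rel_errs = []
for n in (3, 6):                   # roughly water-content and reflectivity moments
    analytic = math.exp(log_moment(n, math.log(Nt), mu, sigma ** 2))
    numeric = moment_numeric(n, Nt, mu, sigma)
    rel_errs.append(abs(numeric - analytic) / analytic)
    print(n, analytic, numeric)
```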
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximates are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
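A minimal Monte Carlo sketch in the spirit of the algorithm (not the paper's own approximations): sample the three Cartesian components of Delta v with zero mean and possibly unequal standard deviations, then tabulate the mean, standard deviation and an inverse-cumulative point of the magnitude. The standard deviations below are hypothetical.

```python
import math, random

random.seed(42)
sigmas = (1.0, 1.0, 3.0)   # hypothetical per-axis standard deviations, m/s

n = 100_000
mags = sorted(
    math.sqrt(sum(random.gauss(0, s) ** 2 for s in sigmas))
    for _ in range(n)
)
mean = sum(mags) / n
std = math.sqrt(sum((m - mean) ** 2 for m in mags) / (n - 1))
p99 = mags[int(0.99 * n)]   # empirical 99th percentile (inverse CDF point)
print(f"E|dv| ~ {mean:.3f}, sd ~ {std:.3f}, 99th pct ~ {p99:.3f}")
```

With equal standard deviations the magnitude follows a Maxwell distribution; the unequal case handled here is what makes closed forms awkward and motivates the Monte Carlo-based approximations.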
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, a right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) an adequate approximation to a Gaussian distribution can be achieved using LORETA with a log10 or Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
Simulations of large acoustic scintillations in the straits of Florida.
Tang, Xin; Tappert, F D; Creamer, Dennis B
2006-12-01
Using a full-wave acoustic model, Monte Carlo numerical studies of intensity fluctuations in a realistic shallow water environment that simulates the Straits of Florida, including internal wave fluctuations and bottom roughness, have been performed. Results show that the sound intensity at distant receivers scintillates dramatically. The acoustic scintillation index SI increases rapidly with propagation range and is significantly greater than unity at ranges beyond about 10 km. This result supports a theoretical prediction by one of the authors. Statistical analyses show that the distribution of intensity of the random wave field saturates to the expected Rayleigh distribution with SI = 1 at short range due to multipath interference effects, and then SI continues to increase to large values. This effect, which is denoted supersaturation, is universal at long ranges in waveguides having lossy boundaries (where there is differential mode attenuation). The intensity distribution approaches a log-normal distribution to an excellent approximation; it may not be a universal distribution, and a comparison is also made with a K distribution. The long tails of the log-normal distribution cause "acoustic intermittency" in which very high, but rare, intensities occur.
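The saturation and supersaturation regimes can be illustrated numerically: a complex-Gaussian (fully saturated multipath) field gives exponentially distributed intensity with SI = 1, while a log-normal intensity with log-variance s^2 gives SI = exp(s^2) - 1, which exceeds unity. The value of s^2 below is illustrative, not taken from the simulations.

```python
import math, random

def scint_index(intensities):
    """Scintillation index SI = <I^2>/<I>^2 - 1."""
    n = len(intensities)
    m1 = sum(intensities) / n
    m2 = sum(x * x for x in intensities) / n
    return m2 / (m1 * m1) - 1.0

random.seed(5)
n = 200_000
# saturated multipath: complex Gaussian amplitude -> exponential intensity
rayleigh_I = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2
              for _ in range(n)]
# log-normal intensity with log-variance s2 (unit mean)
s2 = 1.2
lognorm_I = [math.exp(math.sqrt(s2) * random.gauss(0, 1) - s2 / 2)
             for _ in range(n)]

si_ray = scint_index(rayleigh_I)   # ~ 1 (saturation)
si_ln = scint_index(lognorm_I)     # ~ exp(1.2) - 1 > 1 (supersaturation)
print(si_ray, si_ln)
```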
Angular and velocity distributions of tungsten sputtered by low energy argon ions
NASA Astrophysics Data System (ADS)
Marenkov, E.; Nordlund, K.; Sorokin, I.; Eksaeva, A.; Gutorov, K.; Jussila, J.; Granberg, F.; Borodin, D.
2017-12-01
Sputtering by ions with low, near-threshold energies is investigated. Experiments and simulations are conducted for tungsten sputtering by low-energy, 85-200 eV Ar atoms. The angular distributions of sputtered particles are measured. A new method for molecular dynamics simulation of sputtering that takes into account random crystallographic surface orientation is developed and applied to the case under consideration. The simulations approximate the experimental results well. At low energies the distributions acquire a "butterfly-like" shape, with lower sputtering yields at near-normal angles compared with the cosine distribution. The energy distributions of sputtered particles were simulated. The Thompson distribution remains valid down to the near-threshold 85 eV case.
A New Distribution Family for Microarray Data †
Kelmansky, Diana Mabel; Ricci, Lila
2017-01-01
The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal indexed by p∈R is introduced and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles, in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R codes are available from the authors upon request. PMID:28208652
Small-Scale Dayside Magnetic Reconnection Analysis via MMS
NASA Astrophysics Data System (ADS)
Pritchard, K. R.; Burch, J. L.; Fuselier, S. A.; Webster, J.; Genestreti, K.; Torbert, R. B.; Rager, A. C.; Phan, T.; Argall, M. R.; Le Contel, O.; Russell, C. T.; Strangeway, R. J.; Giles, B. L.
2017-12-01
The Magnetospheric Multiscale (MMS) mission has the primary objective of understanding the physics of the reconnection electron diffusion region (EDR), where magnetic energy is transformed into particle energy. In this poster, we present data from an EDR encounter that occurred in late December 2016 at approximately 11:00 MLT with a moderate guide field. The spacecraft were in a tetrahedral formation with an average inter-spacecraft distance of approximately 7 kilometers. During this event, crescent-shaped electron distributions were observed in the electron stagnation region, as is typical for asymmetric reconnection. Based on the observed ion velocity jets, the spacecraft traveled just south of the EDR. Because of the close spacecraft separation, fairly accurate computation of the Hall, electron pressure divergence, and electron inertia components of the reconnection electric field could be made. In the region of the crescent distributions, good agreement was observed, with the strongest component being the normal electric field and the most significant sources being the electron pressure divergence and the Hall electric field. While the strongest currents were in the out-of-plane direction, the dissipation was strongest in the normal direction because of the larger magnitude of the normal electric field component. These results are discussed in light of recent 3D PIC simulations performed by other groups.
Optical and Nanoparticle Analysis of Normal and Cancer Cells by Light Transmission Spectroscopy
NASA Astrophysics Data System (ADS)
Deatsch, Alison; Sun, Nan; Johnson, Jeffery; Stack, Sharon; Szajko, John; Sander, Christopher; Rebuyon, Roland; Easton, Judah; Tanner, Carol; Ruggiero, Steven
2015-03-01
We have investigated the optical properties of human oral and ovarian cancer and normal cells. Specifically, we have measured the absolute optical extinction for intra-cellular material (lysates) in aqueous suspension. Measurements were conducted over a wavelength range of 250 to 1000 nm with 1 nm resolution using Light Transmission Spectroscopy (LTS). This provides both the absolute extinction of materials under study and, with Mie inversion, the absolute number of particles of a given diameter as a function of diameter in the range of 1 to 3000 nm. Our preliminary studies show significant differences in both the extinction and particle size distributions associated with cancer versus normal cells, which appear to be correlated with differences in the particle size distribution in the range of approximately 50 to 250 nm. Especially significant is a clearly higher density of particles at about 100 nm and smaller for normal cells. Department of Physics, Harper Cancer Research Institute, and the Office of Research at the University of Notre Dame.
Sakuraba, Kazuko; Hayashi, Nobukazu; Kawashima, Makoto; Imokawa, Genji
2004-08-01
In pigmented basal cell epithelioma (BCE), there seems to be an abnormal transfer of melanized melanosomes from proliferating melanocytes to basaloid tumor cells. In this study, the interruption of that melanosome transfer was studied with special respect to the altered function of a phagocytic receptor, protease-activated receptor (PAR)-2 in the basaloid tumor cells. We used electron microscopy to clarify the disrupted transfer at the ultrastructural level and then performed immunohistochemistry and reverse transcription-polymerase chain reaction (RT-PCR) to examine the regulation of a phagocytic receptor, PAR-2, expressed on basaloid tumor cells. Electron microscopic analysis revealed that basaloid tumor cells of pigmented BCE have a significantly lower population of melanosomes ( approximately 16.4%) than do normal keratinocytes located in the perilesional normal epidermis ( approximately 91.0%). In contrast, in pigmented seborrheic keratosis (SK), a similarly pigmented epidermal tumor, the distribution of melanin granules does not differ between the lesional ( approximately 93.9%) and the perilesional normal epidermis ( approximately 92.2 %), indicating that interrupted melanosome transfer occurs in BCE but not in all pigmented epithelial tumors. RT-PCR analysis demonstrated that the expression of PAR-2 mRNA transcripts in basaloid cells is significantly decreased in pigmented BCE compared with the perilesional normal epidermis. In contrast, in pigmented SK, where melanosome transfer to basaloid tumor cells is not interrupted, the expression of PAR-2 mRNA transcripts is comparable between the basaloid tumor cells and the perilesional normal epidermis. Immunohistochemistry demonstrated that basaloid cells in pigmented BCE have less immunostaining for PAR-2 than do keratinocytes in the perilesional normal epidermis whereas in pigmented SK, there is no difference in immunostaining for PAR-2 between the basaloid tumor and the perilesional normal epidermis. 
These findings suggest that the decreased expression of PAR-2 in the basaloid cells is associated in part with the observed interruption of melanosome transfer in pigmented BCE.
Optical clock distribution in supercomputers using polyimide-based waveguides
NASA Astrophysics Data System (ADS)
Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.
1999-04-01
Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock-signal distribution on a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert the optical signal to an electrical signal. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
NASA Astrophysics Data System (ADS)
Al-Hawat, Sh; Naddaf, M.
2005-04-01
The electron energy distribution function (EEDF) was determined from the second derivative of the I-V Langmuir probe characteristics and, thereafter, calculated theoretically by solving the plasma kinetic equation, using the black wall (BW) approximation, in the positive column of a neon glow discharge. The pressure was varied from 0.5 to 4 Torr and the current from 10 to 30 mA. The measured electron temperature, density and electric field strength were used as input data for solving the kinetic equation. Comparisons were made between the EEDFs obtained from experiment, the BW approach, the Maxwellian distribution and the Rutcher solution of the kinetic equation in the elastic energy range. The BW approach is found to work best under the discharge conditions of current density jd = 4.45 mA cm-2 and normalized electric field strength E/p = 1.88 V cm-1 Torr-1.
Architectures of Kepler Planet Systems with Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Morehead, Robert C.; Ford, Eric B.
2015-12-01
The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
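The rejection flavour of ABC described above can be sketched in a few lines. The forward model here is a deliberately simple stand-in (a normal population with known spread, summarized by its sample mean), not the authors' planet-system simulator; all numbers are hypothetical.

```python
import random
import statistics

def abc_rejection(observed, n_obs, n_draws=20000, tol=0.05, seed=1):
    """Toy ABC rejection sampler: infer the mean of a normal population
    (known sd = 1) from the observed sample mean.  A hypothetical
    illustration, not the authors' exoplanet forward model."""
    rng = random.Random(seed)
    obs_summary = statistics.fmean(observed)
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5, 5)                             # draw from the prior
        sim = [rng.gauss(mu, 1) for _ in range(n_obs)]      # forward model
        if abs(statistics.fmean(sim) - obs_summary) < tol:  # distance to data
            accepted.append(mu)
    return accepted                                         # ~ posterior sample

rng = random.Random(0)
data = [rng.gauss(2.0, 1.0) for _ in range(50)]
post = abc_rejection(data, n_obs=50)
print(round(statistics.fmean(post), 1))   # posterior mean should sit near 2
```

Tightening `tol` sharpens the approximation at the cost of fewer acceptances, which is why the sequential ABC algorithms mentioned in the abstract shrink the tolerance adaptively.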
Prediction of Mean and Design Fatigue Lives of Self Compacting Concrete Beams in Flexure
NASA Astrophysics Data System (ADS)
Goel, S.; Singh, S. P.; Singh, P.; Kaushik, S. K.
2012-02-01
In this paper, results of an investigation conducted to study the flexural fatigue characteristics of self compacting concrete (SCC) beams in flexure are presented. An experimental programme was planned in which approximately 60 SCC beam specimens of size 100 × 100 × 500 mm were tested under flexural fatigue loading. Approximately 45 static flexural tests were also conducted to facilitate fatigue testing. The flexural fatigue and static flexural strength tests were conducted on a 100 kN servo-controlled actuator. The fatigue life data thus obtained have been used to establish the probability distributions of fatigue life of SCC using the two-parameter Weibull distribution. The parameters of the Weibull distribution have been obtained by different methods of analysis. Using the distribution parameters, the mean and design fatigue lives of SCC have been estimated and compared with normally vibrated concrete (NVC), the data for which have been taken from the literature. It has been observed that SCC exhibits higher mean and design fatigue lives compared to NVC.
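The mean and design lives quoted above follow from standard two-parameter Weibull formulas once the shape and scale parameters are fitted; a minimal sketch, using illustrative parameter values rather than the fitted SCC parameters:

```python
import math

def weibull_mean_life(shape_b, scale_e):
    """Mean fatigue life of a two-parameter Weibull model:
    E[N] = eta * Gamma(1 + 1/beta)."""
    return scale_e * math.gamma(1.0 + 1.0 / shape_b)

def weibull_design_life(shape_b, scale_e, reliability):
    """Design life at survival probability p, from S(N) = exp(-(N/eta)^beta):
    N_p = eta * (-ln p)**(1/beta)."""
    return scale_e * (-math.log(reliability)) ** (1.0 / shape_b)

# illustrative parameters (not the paper's fitted values)
beta, eta = 2.0, 1.0e6     # shape, characteristic life in cycles
print(weibull_mean_life(beta, eta))          # ~886227 cycles
print(weibull_design_life(beta, eta, 0.90))  # life at 90% survival probability
```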
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1978-01-01
Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to a good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour periods showing the maximum and minimum numbers of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 hours and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution provided these frequencies are not widely separated.
Proton Straggling in Thick Silicon Detectors
NASA Technical Reports Server (NTRS)
Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.
2017-01-01
Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of the simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov kappa greater than or equal to 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
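Matching a log-normal to prescribed moments, as done for the straggling functions above, reduces to two closed-form parameter equations for the first two moments; a sketch with hypothetical mean and spread values (the paper matches four moments, which requires the shifted, three-parameter family):

```python
import math
import random

def lognormal_params(mean, sd):
    """Moment-match a log-normal to a target mean and standard deviation:
    sigma^2 = ln(1 + (sd/mean)^2),  mu = ln(mean) - sigma^2 / 2."""
    s2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - 0.5 * s2, math.sqrt(s2)

# hypothetical energy-loss summary moments (MeV), not detector data
mu, sigma = lognormal_params(mean=5.0, sd=1.2)
rng = random.Random(0)
sample = [rng.lognormvariate(mu, sigma) for _ in range(200000)]
print(round(sum(sample) / len(sample), 2))   # sample mean close to 5.0
```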
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
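The Haar-Fisz idea can be illustrated with the classical Poisson case, where the mean-variance relationship is known in closed form; DDHFm's contribution is estimating that relationship from the data, which this toy sketch deliberately does not do:

```python
import math
import random
import statistics

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for moderate lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fisz_details(xs):
    """One-level Haar-Fisz-style detail coefficients for Poisson-like data:
    (x1 - x2) / sqrt(x1 + x2) is approximately N(0, 1) for Poisson counts.
    DDHFm instead *estimates* the mean-variance function from the data."""
    return [(a - b) / math.sqrt(a + b)
            for a, b in zip(xs[0::2], xs[1::2]) if a + b > 0]

rng = random.Random(42)
counts = [poisson(rng, 30.0) for _ in range(2000)]   # replicate-like counts
z = fisz_details(counts)
print(round(statistics.pstdev(z), 1))   # variance-stabilized: spread near 1.0
```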
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, D.R.; Kraml, M.; Cayen, M.N.
The kinetics of tolrestat, a potent inhibitor of aldose reductase, were examined. Serum concentrations of tolrestat and of total ¹⁴C were measured after dosing normal subjects and subjects with diabetes with ¹⁴C-labeled tolrestat. In normal subjects, tolrestat was rapidly absorbed and disappearance from serum was biphasic. Distribution and elimination half-lives were approximately 2 and 10 to 12 hr, respectively, after single and multiple doses. Unchanged tolrestat accounted for the major portion of ¹⁴C in serum. Radioactivity was rapidly and completely excreted in urine and feces in an approximate ratio of 2:1. Findings were much the same in subjects with diabetes. In normal subjects, the kinetics of oral tolrestat were independent of dose in the 10 to 800 mg range. Repetitive dosing did not result in unexpected accumulation. Tolrestat was more than 99% bound to serum protein; it did not compete with warfarin for binding sites but was displaced to some extent by high concentrations of tolbutamide or salicylate.
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.
Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
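Rosenthal's fail-safe number itself has a simple closed form; a sketch with hypothetical study z-scores (the paper's contribution, the confidence intervals around this estimator, is not reproduced here):

```python
from statistics import NormalDist

def rosenthal_fail_safe(z_values, alpha=0.05):
    """Rosenthal's fail-safe N: how many unseen null-result studies would be
    needed to drag the combined one-tailed p-value above alpha.
    N_fs = (sum of z)^2 / z_alpha^2 - k, for k observed studies."""
    k = len(z_values)
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # 1.645 for alpha = 0.05
    return sum(z_values) ** 2 / z_alpha ** 2 - k

# hypothetical z-scores from k = 5 studies in a small meta-analysis
zs = [2.1, 1.8, 2.5, 1.2, 2.9]
print(round(rosenthal_fail_safe(zs)))   # → 36 hidden null studies
```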
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
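The diagnostic rests on a standard fact: if the posterior is correct, the posterior CDF evaluated at the true signal is uniformly distributed over repeated experiments. A toy Gaussian conjugate sketch of that idea (not the authors' actual test quantity):

```python
import random
from statistics import NormalDist, fmean

rng = random.Random(7)
us = []
for _ in range(5000):
    s = rng.gauss(0, 1)                  # true signal drawn from the prior
    d = s + rng.gauss(0, 1)              # data = signal + unit Gaussian noise
    post = NormalDist(mu=d / 2, sigma=0.5 ** 0.5)   # exact Gaussian posterior
    us.append(post.cdf(s))               # posterior CDF evaluated at the truth
print(round(fmean(us), 2))   # ~0.50 if the posterior is correctly calibrated
```

Replacing the exact posterior with a deliberately mis-scaled or shifted one skews the distribution of `us` away from uniform, which is exactly the signature the diagnostic exploits.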
A diffusion approximation for ocean wave scatterings by randomly distributed ice floes
NASA Astrophysics Data System (ADS)
Zhao, Xin; Shen, Hayley
2016-11-01
This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.
Ju, Daeyoung; Young, Thomas M.; Ginn, Timothy R.
2012-01-01
An innovative method is proposed for approximation of the set of radial diffusion equations governing mass exchange between the aqueous bulk phase and the intra-particle phase for a hetero-disperse mixture of particles such as occur in suspension in surface water, in riverine/estuarine sediment beds, in soils, and in aquifer materials. For this purpose the temporal variation of concentration at several uniformly distributed points within a normalized representative particle with spherical, cylindrical, or planar shape is fitted with a 2-domain linear reversible mass exchange model. The approximation method is then superposed in order to generalize the model to a hetero-disperse mixture of particles. The method can significantly reduce the computational effort needed in solving the intra-particle mass exchange of a hetero-disperse mixture of particles, and the error due to the approximation is shown to be relatively small. The method is applied to describe batch desorption experiments of 1,2-dichlorobenzene from four different soils with known particle size distributions and produced good agreement with experimental data. PMID:18304692
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
1978-07-01
Hutton, Engineering Geologist. Impoundment of water began in 1970. h. Normal Operating Procedure. Normal rainfall, runoff, transpiration, and evaporation all combine to maintain a relatively stable water surface elevation. 1.3 PERTINENT DATA a. Drainage Area - 9,900 acres, of which approximately 15
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well; however, the Wan et al. method is best for estimating standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
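Two of the compared estimators have simple closed forms and can be sketched directly; the summary numbers below are hypothetical:

```python
from statistics import NormalDist

def hozo_mean(a, m, b):
    """Hozo et al. (2005): mean ~ (a + 2m + b) / 4, from a study's reported
    minimum a, median m, and maximum b."""
    return (a + 2 * m + b) / 4.0

def wan_sd(a, b, n):
    """Wan et al. (2014) range-based estimator:
    sd ~ (b - a) / (2 * Phi^{-1}((n - 0.375) / (n + 0.25)))."""
    return (b - a) / (2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25)))

# hypothetical reported summaries: min 10, median 20, max 34, n = 50
print(hozo_mean(10, 20, 34))          # → 21.0
print(round(wan_sd(10, 34, 50), 2))   # about 5.35
```

The ABC approach of the abstract replaces such closed-form rules with simulation: candidate (mean, sd) pairs are accepted when their simulated summary statistics come close to the reported ones.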
Imaging elemental distribution and ion transport in cultured cells with ion microscopy.
Chandra, S; Morrison, G H
1985-06-28
Both elemental distribution and ion transport in cultured cells have been imaged by ion microscopy. Morphological and chemical information was obtained with a spatial resolution of approximately 0.5 micron for sodium, potassium, calcium, and magnesium in freeze-fixed, cryofractured, and freeze-dried normal rat kidney cells and Chinese hamster ovary cells. Ion transport was successfully demonstrated by imaging Na+-K+ fluxes after the inhibition of Na+- and K+ -dependent adenosine triphosphatase with ouabain. This method allows measurements of elemental (isotopic) distribution to be related to cell morphology, thereby providing the means for studying ion distribution and ion transport under different physiological, pathological, and toxicological conditions in cell culture systems.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Decorin and biglycan of normal and pathologic human corneas
NASA Technical Reports Server (NTRS)
Funderburgh, J. L.; Hevelone, N. D.; Roth, M. R.; Funderburgh, M. L.; Rodrigues, M. R.; Nirankari, V. S.; Conrad, G. W.
1998-01-01
PURPOSE: Corneas with scars and certain chronic pathologic conditions contain highly sulfated dermatan sulfate, but little is known of the core proteins that carry these atypical glycosaminoglycans. In this study the proteoglycan proteins attached to dermatan sulfate in normal and pathologic human corneas were examined to identify primary genes involved in the pathobiology of corneal scarring. METHODS: Proteoglycans from human corneas with chronic edema, bullous keratopathy, and keratoconus and from normal corneas were analyzed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), quantitative immunoblotting, and immunohistology with peptide antibodies to decorin and biglycan. RESULTS: Proteoglycans from pathologic corneas exhibit increased size heterogeneity and binding of the cationic dye alcian blue compared with those in normal corneas. Decorin and biglycan extracted from normal and diseased corneas exhibited similar molecular size distribution patterns. In approximately half of the pathologic corneas, the level of biglycan was elevated an average of seven times above normal, and decorin was elevated approximately three times above normal. The increases were associated with highly charged molecular forms of decorin and biglycan, indicating modification of the proteins with dermatan sulfate chains of increased sulfation. Immunostaining of corneal sections showed an abnormal stromal localization of biglycan in pathologic corneas. CONCLUSIONS: The increased dermatan sulfate associated with chronic corneal pathologic conditions results from stromal accumulation of decorin and particularly of biglycan in the affected corneas. These proteins bear dermatan sulfate chains with increased sulfation compared with normal stromal proteoglycans.
[Low-frequency vibrations of a Mg pyropheophorbide-histidine complex].
Klevanic, A V; Shuvalov, V A
2001-01-01
The spectrum of vibrations and normal modes for the Mg pyropheophorbide-histidine complex was calculated using the MNDO-PM3 (MOPAC) semiempirical quantum chemical method. The delocalization index and the distribution function were introduced to describe the shape of normal vibrations. The greatest part (approximately 65%) of the low-frequency vibrations (1-400 cm-1) was shown to be delocalized over both the His and Mg pyropheophorbide molecules. Leu, Met, and Asp were also studied as the fifth ligand to the Mg pyropheophorbide molecule. It is concluded that the fifth amino acid ligand to porphyrin molecules causes marked geometrical distortions in the porphyrin and induces a new spectrum of normal modes compared with the four-coordinated pigment.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
Dieterich, J.H.; Kilgore, B.D.
1996-01-01
A procedure has been developed to obtain microscope images of regions of contact between roughened surfaces of transparent materials, while the surfaces are subjected to static loads or undergoing frictional slip. Static loading experiments with quartz, calcite, soda-lime glass and acrylic plastic at normal stresses up to 30 MPa yield power law distributions of contact areas from the smallest contacts that can be resolved (3.5 μm2) up to a limiting size that correlates with the grain size of the abrasive grit used to roughen the surfaces. In each material, increasing normal stress results in a roughly linear increase of the real area of contact. Mechanisms of contact area increase are by growth of existing contacts, coalescence of contacts and appearance of new contacts. Mean contact stresses are consistent with the indentation strength of each material. Contact size distributions are insensitive to normal stress, indicating that the increase of contact area is approximately self-similar. The contact images and contact distributions are modeled using simulations of surfaces with random fractal topographies. The contact process for model fractal surfaces is represented by the simple expedient of removing material at regions where surface irregularities overlap. Synthetic contact images created by this approach reproduce observed characteristics of the contacts and demonstrate that the exponent in the power law distributions depends on the scaling exponent used to generate the surface topography.
Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe
2013-08-01
Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subjects variability. This makes it natural to use mixed-effects models to estimate population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations) devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation used to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetics models and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients.
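The normal approximation of the posterior around the MAP (the Laplace method) can be sketched in one dimension with numeric derivatives; the toy Poisson model below is an illustrative stand-in, far simpler than an ODE mixed-effects model:

```python
import math

def laplace_approx(logpost, x0, h=1e-5, iters=50):
    """Normal (Laplace) approximation of a 1-D posterior: locate the mode
    with Newton steps using numeric derivatives, then set the variance to
    -1 / l''(mode).  A sketch of the idea behind MAP-based inference, not
    NIMROD's actual machinery."""
    x = x0
    for _ in range(iters):
        d1 = (logpost(x + h) - logpost(x - h)) / (2 * h)
        d2 = (logpost(x + h) - 2 * logpost(x) + logpost(x - h)) / h ** 2
        x -= d1 / d2                       # Newton step toward the mode
    d2 = (logpost(x + h) - 2 * logpost(x) + logpost(x - h)) / h ** 2
    return x, math.sqrt(-1.0 / d2)         # (MAP, approximate posterior sd)

# toy model: Poisson counts with an Exp(1) prior on the rate parameter
data = [3, 5, 4, 6, 2]
def logpost(lam):
    return sum(data) * math.log(lam) - len(data) * lam - lam

mode, sd = laplace_approx(logpost, x0=2.0)
print(round(mode, 3))   # analytic MAP is sum(data) / (n + 1) = 20/6 ≈ 3.333
```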
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. 
The preload is predominantly affected by the applied torque and the coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
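The Monte Carlo route to a preload CDF can be sketched with the short-form torque equation F = T/(K*d), treating torque and friction as random inputs. All numbers below are illustrative stand-ins, not the study's FE-derived values; the simplified nut factor K = 1.33*mu is an assumption for the sketch.

```python
import random, statistics

random.seed(1)

def preload_sample(mu_mean, mu_sd, torque_mean=0.32, torque_sd=0.016, d=0.002):
    """One Monte Carlo draw of preload F = T/(K*d), N. Inputs are
    normally distributed; the nut factor K ~ 1.33*mu is illustrative."""
    T = random.gauss(torque_mean, torque_sd)   # applied torque, N*m
    mu = random.gauss(mu_mean, mu_sd)          # coefficient of friction
    K = 1.33 * mu                              # simplified nut factor
    return T / (K * d)

dry = [preload_sample(0.50, 0.050) for _ in range(20000)]   # dry threads
lub = [preload_sample(0.26, 0.026) for _ in range(20000)]   # lubricated

# a target-range probability read directly off the empirical CDF
p_target_lub = sum(500.0 <= f <= 750.0 for f in lub) / len(lub)
```

As in the abstract, lubrication shifts the whole preload distribution upward, and the probability of landing in any fixed target window falls straight out of the sampled CDF.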
NOx profile around a signalized intersection of busy roadway
NASA Astrophysics Data System (ADS)
Kim, Kyung Hwan; Lee, Seung-Bok; Woo, Sung Ho; Bae, Gwi-Nam
2014-11-01
The NOx pollution profile around a signalized intersection of a busy roadway was investigated to understand the effect of traffic control on urban air pollution. Traffic flow patterns were classified into three categories of quasi-cruising, a combination of deceleration and acceleration, and a combination of deceleration, idling, and acceleration. The spatial distribution of air pollution levels around an intersection could be represented as a quasi-normal distribution, whose peak height was aggravated by increased emissions due to transient driving patterns. The peak concentration of NOx around the signalized intersection for the deceleration, idling, and acceleration category was five times higher than that for the quasi-cruising category. Severe levels of NOx pollution tailed off approximately 400 m from the center of the intersection. Approximately 200-1000 ppb of additional NOx was observed when traffic was decelerating, idling, and accelerating within the intersection zone, resulting in high exposure levels for pedestrians around the intersection. We propose a fluctuating horizontal distribution of motor vehicle-induced air pollutants as a function of time.
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Graves, R. A., Jr.
1973-01-01
A method for the rapid calculation of the inviscid shock layer about blunt axisymmetric bodies at an angle of attack of 0 deg has been developed. The procedure is of an inverse nature, that is, a shock wave is assumed and calculations proceed along rays normal to the shock. The solution is iterated until the given body is computed. The flow field solution procedure is programmed at the Langley Research Center for the Control Data 6600 computer. The geometries specified in the program are spheres, ellipsoids, paraboloids, and hyperboloids, which may have conical afterbodies. The normal momentum equation is replaced with an approximate algebraic expression. This simplification significantly reduces machine computation time. Comparisons of the present results with shock shapes and surface pressure distributions obtained by the more exact methods indicate that the program provides reasonably accurate results for smooth bodies in axisymmetric flow. However, further research is required to establish the proper approximate form of the normal momentum equation for the two-dimensional case.
Baldi, Pierre
2010-01-01
As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework allows one to predict also the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments carried in part with large sets of molecules from the ChemDB show remarkable agreement between theory and empirical results. PMID:20540577
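The Tanimoto score on binary fingerprints is just intersection over union of the "on" bits, and its empirical distribution for random fingerprints can be sampled directly. Bit length and bit density below are arbitrary choices for the sketch, not the paper's ChemDB settings.

```python
import random

random.seed(0)
N_BITS, P_ON = 256, 0.1   # illustrative fingerprint length and bit density

def random_fp():
    # random binary fingerprint with independent bits
    return [random.random() < P_ON for _ in range(N_BITS)]

def tanimoto(a, b):
    """Tanimoto similarity: |A & B| / |A | B| for binary fingerprints."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 0.0

scores = [tanimoto(random_fp(), random_fp()) for _ in range(5000)]
mean_score = sum(scores) / len(scores)
max_score = max(scores)
```

For independent bits the typical score is roughly p^2 / (2p - p^2), the ratio of the expected intersection and union counts, which is the ratio-of-correlated-counts structure the paper's normal-ratio approximation exploits; extreme values like `max_score` are what the Weibull tail models.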
USDA-ARS?s Scientific Manuscript database
The greater white-toothed shrew (Crocidura russula) is an invasive mammalian species that was first recorded in Ireland in 2007. It currently occupies an area of approximately 7,600 km2 on the island. C. russula is naturally distributed in Northern Africa and Western Europe, and was previously absent...
The Italian primary school-size distribution and the city-size: a complex nexus
NASA Astrophysics Data System (ADS)
Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.
2014-06-01
We characterize the statistical law governing the distribution of Italian primary school sizes. We find that the school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially, and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape, suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.
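Fitting a log-normal body like the one described above is simple because the log of a log-normal variable is normal, so method-of-moments in log space suffices. The simulated "school sizes" below are synthetic, with arbitrary parameters, purely to show the fitting step.

```python
import math, random, statistics

random.seed(42)
# synthetic log-normal "school sizes" (parameters are illustrative)
sizes = [math.exp(random.gauss(5.0, 0.7)) for _ in range(10000)]

# Method-of-moments fit in log space: mu and sigma of the log-normal
# are just the mean and sd of log(size).
logs = [math.log(s) for s in sizes]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.stdev(logs)
median_size = math.exp(mu_hat)   # the log-normal median is exp(mu)
```

A fat lower tail or exponential upper tail, as found for the Italian data, would show up as systematic departures of the empirical quantiles from this fitted log-normal.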
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
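The first-order moment method above propagates input variances through sensitivity derivatives: var(f) ≈ Σ (∂f/∂x_i)² var(x_i). The sketch below uses a cheap analytic stand-in for the CFD output (an assumption; the real method differentiates the Euler code) and checks the approximation against Monte Carlo, as in the paper's validation.

```python
import math, random

def f(x1, x2):
    # stand-in for a CFD output: any smooth function of the inputs
    return x1 ** 2 + 3.0 * math.sin(x2)

mu = (1.0, 0.5)     # input means (illustrative)
sd = (0.05, 0.05)   # input standard deviations (illustrative)

# first-order moment method with finite-difference sensitivity derivatives
h = 1e-5
d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)
sd_first_order = math.sqrt((d1 * sd[0]) ** 2 + (d2 * sd[1]) ** 2)

# Monte Carlo check of the propagated output uncertainty
random.seed(7)
samples = [f(random.gauss(mu[0], sd[0]), random.gauss(mu[1], sd[1]))
           for _ in range(50000)]
m = sum(samples) / len(samples)
sd_mc = math.sqrt(sum((s - m) ** 2 for s in samples) / (len(samples) - 1))
```

For small input variances the two estimates agree closely, which is exactly the regime where the moment method is a cheap substitute for Monte Carlo.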
Theory of the intermediate stage of crystal growth with applications to insulin crystallization
NASA Astrophysics Data System (ADS)
Barlow, D. A.
2017-07-01
A theory for the intermediate stage of crystal growth, comprising two defining equations, one for population continuity and another for mass balance, is used to study the kinetics of the supersaturation decay, the homogeneous nucleation rate, the linear growth rate and the final distribution of crystal sizes for the crystallization of bovine and porcine insulin from solution. The cited experimental reports suggest that the crystal linear growth rate is directly proportional to the square of the insulin concentration in solution for bovine insulin and to the cube of concentration for porcine. In a previous work, it was shown that the above-mentioned system could be solved for the case where the growth rate is directly proportional to the normalized supersaturation. Here a more general solution is presented, valid for cases where the growth rate is directly proportional to the normalized supersaturation raised to the power of any positive integer. The resulting expressions for the time-dependent normalized supersaturation and crystal size distribution are compared with experimental reports for insulin crystallization. An approximation for the maximum crystal size at the end of the intermediate stage is derived. The results suggest that the largest crystal size in the distribution at the end of the intermediate stage is maximized when nucleation is restricted to be only homogeneous. Further, the largest size in the final distribution depends only weakly upon the initial supersaturation.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
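The closed forms quoted above are easy to evaluate: Φ(β/2) for the probit link, exp(β)/[1+exp(β)] for the log-log link, and the approximate exp(β/2)/[1+exp(β/2)] for the logit link. A small sketch, using only the formulas stated in the abstract:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordinal_superiority(beta, link="probit"):
    """P(an observation from group 1 exceeds an independent one from
    group 2), for group effect beta, per the closed forms above."""
    if link == "probit":
        return Phi(beta / 2.0)
    if link == "loglog":
        return math.exp(beta) / (1.0 + math.exp(beta))
    if link == "logit":   # approximate form
        return math.exp(beta / 2.0) / (1.0 + math.exp(beta / 2.0))
    raise ValueError(link)
```

All three measures equal 0.5 at beta = 0 (no group difference) and increase monotonically with the group effect.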
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for
Asymptotic Normality of Poly-T Densities with Bayesian Applications.
1987-10-01
be extended to the case of many t-like factors in a straightforward manner. Obviously, the computational complexity will increase rapidly as the number... York: Marcel-Dekker. Broemeling, L.D. and Abdullah, M.Y. (1984). An approximation to the poly-t distribution. Communications in Statistics A, 11, 1407...
Improvement of Reynolds-Stress and Triple-Product Lag Models
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Lillard, Randolph P.
2017-01-01
The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise and wall-normal stresses, and a ratio of r(sub w) = 0.3k in the log-layer region of high Reynolds number flat plate flow, which implies R11(+) = 4/[(9/2)(0.3)] ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. The first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.
Some blood chemistry values for the Rainbow Trout (Salmo gairdneri)
Wedemeyer, Gary; Chatterton, K.
1970-01-01
Normal distribution curves were graphically fitted to approximately 1400 clinical test values obtained from the plasma or kidney tissue of more than 200 yearling rainbow trout (Salmo gairdneri). Estimated normal ranges were ascorbate, 102–214 μg/g; blood urea nitrogen (BUN), 0.9–4.5 mg/100 ml; chloride, 84–132 mEq/liter; cholesterol, 161–365 mg/100 ml; cortisol, 1.5–18.5 μg/100 ml; glucose, 41–151 mg/100 ml; and total protein, 2–6 g/100 ml.
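If one assumes the quoted ranges are the usual mean ± 1.96 SD limits of the fitted normal curves (an assumption; the report does not state the coverage used), the underlying mean and SD can be recovered by simple arithmetic. Using the glucose range from above:

```python
# Back out mean and SD from a reported normal range, assuming the limits
# are mean +/- 1.96 SD of the fitted curve (assumption, not stated above).
low, high = 41.0, 151.0            # glucose range, mg/100 ml
mean = (low + high) / 2.0          # centre of a symmetric normal range
sd = (high - low) / (2.0 * 1.96)   # half-width divided by 1.96
```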
Duffaut Espinosa, L A; Posadas, A N; Carbajal, M; Quiroz, R
2017-01-01
In this paper, a multifractal downscaling technique is applied to adequately transformed and lag corrected normalized difference vegetation index (NDVI) in order to obtain daily estimates of rainfall in an area of the Peruvian Andean high plateau. This downscaling procedure is temporal in nature since the original NDVI information is provided at an irregular temporal sampling period between 8 and 11 days, and the desired final scale is 1 day. The spatial resolution of approximately 1 km remains the same throughout the downscaling process. The results were validated against on-site measurements of meteorological stations distributed in the area under study.
Gestational age estimates from singleton births conceived using assisted reproductive technology.
Callaghan, William M; Schieve, Laura A; Dietz, Patricia M
2007-09-01
Information on gestational age for public health research and surveillance in the US is usually obtained from vital records and is primarily based on the first day of the woman's last menstrual period (LMP). However, using LMP as a marker of conception is subject to a variety of errors and results in misclassification of gestational age. Pregnancies conceived through assisted reproductive technology (ART) are unique in that the estimates of gestational age are not based on the LMP, but on the date when fertilisation actually occurred, and thus most gestational age errors are likely to be due to errors introduced in recording and data entry. The purpose of this paper was to examine the birthweight distribution by gestational age for ART singleton livebirths reported to a national ART surveillance system. Gestational age was categorised as 20-27, 28-31, 32-36 and 37-44 weeks; birthweight distributions were plotted for each category. The distributions of very-low-birthweight (VLBW; <1500 g), moderately low-birthweight (1500-2499 g) and normal-birthweight infants for each gestational week were examined. At both 20-27 and 28-31 weeks, there was an extended right tail to the distribution and a small second mode. At 32-36 weeks, there were long tails in either direction and at 37-44 weeks, an extended tail to the left. There was a high proportion of VLBW infants at low gestational ages and a decreasing proportion of VLBW infants with increasing gestational age. However, there was also a fairly constant proportion of normal-birthweight infants at every gestational age below 34 weeks, which suggested misclassification of gestational age. Approximately 12% of ART births classified as 28-31 weeks' gestation had a birthweight in the second mode of the birthweight distribution compared with approximately 29% in national vital statistics data. 
Even when the birthweight and dates of conception and birth are known, questions remain regarding the residual amount of misclassification and the true nature of the birthweight distributions.
The Stress-Dependent Activation Parameters for Dislocation Nucleation in Molybdenum Nanoparticles.
Chachamovitz, Doron; Mordehai, Dan
2018-03-02
Many specimens at the nanoscale are free of dislocations, the line defects that are the main carriers of plasticity. As a result, they exhibit extremely high strengths which are dislocation-nucleation controlled. Since nucleation is a thermally activated process, it is essential to quantify the stress-dependent activation parameters for dislocation nucleation in order to study the strength of specimens at the nanoscale and its distribution. In this work, we calculate the strength of Mo nanoparticles in molecular dynamics simulations and we propose a method to extract the activation free-energy barrier for dislocation nucleation from the distribution of the results. We show that by deforming the nanoparticles at a constant strain rate, their strength distribution can be approximated by a normal distribution, from which the activation volumes at different stresses and temperatures are calculated directly. We found that the activation energy dependency on the stress near spontaneous nucleation conditions obeys a power-law with a critical exponent of approximately 3/2, which is in accordance with critical exponents found in other thermally activated processes but never for dislocation nucleation. Additionally, significant activation entropies were calculated. Finally, we generalize the approach to calculate the activation parameters for other driving-force dependent thermally activated processes.
Growth hormone receptor deficiency (Laron syndrome): clinical and genetic characteristics.
Guevara-Aguirre, J; Rosenbloom, A L; Vaccarello, M A; Fielder, P J; de la Vega, A; Diamond, F B; Rosenfeld, R G
1991-01-01
Approximately 60 cases of GHRD (Laron syndrome) were reported before 1990 and half of these were from Israel. We have described 47 additional patients from an inbred population of South Ecuador and have emphasized certain clinical features including: markedly advanced osseous maturation for height age; normal body proportions in childhood but child-like proportions in adults; much greater deviation of stature than head size, giving an appearance of large cranium and small facies; underweight in childhood despite the appearance of obesity and true obesity in adulthood; blue scleras; and limited elbow extension. The Ecuadorean patients differed markedly and most importantly from the other large concentration, in Israel, by being of normal or superior intelligence, suggesting a unique linkage in the Ecuadorean population. The Ecuadorean population also differed in that those patients coming from Loja province had a markedly skewed sex ratio (19 females: 2 males), while those from El Oro province had a normal sex distribution (14 females: 12 males). The phenotypic similarity between the El Oro and Loja patients indicates that this abnormal sex distribution is not a direct result of the GHRD.
Dem Generation with WORLDVIEW-2 Images
NASA Astrophysics Data System (ADS)
Büyüksalih, G.; Baz, I.; Alkan, M.; Jacobsen, K.
2012-07-01
For planning purposes, a 42 km stretch of the Black Sea coast line, starting at the Bosporus and extending westward with a width of approximately 5 km, was imaged by WorldView-2. Three stereo scenes were oriented, at first by 3D affine transformation and later by a bias-corrected RPC solution. The results are nearly the same, but are limited by the identification of the control points in the images. Nevertheless, after blunder elimination by data snooping, root mean square discrepancies below 1 pixel were reached. The root mean square height discrepancy at control points ranged from 0.5 m to 1.3 m, with base-to-height ratios between 1:1.26 and 1:1.80. Digital surface models (DSM) with 4 m spacing were generated by least squares matching with region growing, supported by image pyramids. A high percentage of the mountainous area is covered by forest, requiring the approximation based on image pyramids; in the forest area, approximation by region growing alone leads to larger gaps in the DSM. Owing to the good image quality of WorldView-2, the correlation coefficients reached by least squares matching are high, and even in most forest areas a satisfying density of accepted points was reached. Two stereo models have an overlapping area of 1.6 km by 6.7 km, allowing an accuracy evaluation. Small but nevertheless significant differences in scene orientation were eliminated by a least squares shift of the two overlapping height models onto each other. The root mean square difference of the two independent DSMs is 1.06 m or, as a function of terrain inclination, 0.74 m + 0.55 m × tangent(slope). The terrain inclination averages 7°, with 12% exceeding 17°. The frequency distribution of height discrepancies is not far from a normal distribution but, as usual, large discrepancies occur more often than a normal distribution predicts.
This can also be seen in the normalized median absolute deviation (NMAD), related to the 68% probability level, of 0.83 m, which is significantly smaller than the root mean square difference. The results indicate a standard deviation for a single height model of 0.75 m, or 0.52 m + 0.39 m × tangent(slope), corresponding to approximately 0.6 pixels for the x-parallax in flat terrain, which is very satisfying for the available land cover. Interpolation over 10 m enlarged the root mean square difference of the two height models by nearly 50%.
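The NMAD-versus-RMS comparison used above is easy to reproduce: for purely normal errors the two statistics agree, but a few large blunders inflate the RMS while barely moving the robust NMAD. The simulated height differences below are synthetic, with arbitrary noise levels, purely to show the effect.

```python
import math, random, statistics

random.seed(3)
# Synthetic DSM height differences: mostly normal noise plus a few
# blunders, mimicking heavier-than-normal tails (values illustrative).
diffs = ([random.gauss(0.0, 0.75) for _ in range(5000)]
         + [random.gauss(0.0, 5.0) for _ in range(50)])

rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))

med = statistics.median(diffs)
# NMAD = 1.4826 * median(|d - median|): equals the SD for pure normal
# noise, but is robust against the blunders.
nmad = 1.4826 * statistics.median(abs(d - med) for d in diffs)
```

With 1% blunders the NMAD stays near the clean-noise level while the RMS is visibly inflated, the same signature reported for the WorldView-2 DSM differences.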
Log-amplitude statistics for Beck-Cohen superstatistics
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Konno, Hidetoshi
2013-05-01
As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrated that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
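A χ² superstatistical sample is generated by drawing a local inverse variance β from a scaled χ² distribution and then drawing a Gaussian with that variance; the marginal is a Student-t (a q-Gaussian). The sketch below uses degree 10 rather than the abstract's degree 2 so that the tail count is numerically stable (degree 2 gives the much heavier q = 5/3 tails).

```python
import random, math

random.seed(11)

def superstat_sample(n_deg=10):
    """Beck-Cohen chi^2 superstatistics: x ~ N(0, 1/beta) with
    beta ~ chi^2_{n_deg}/n_deg; the marginal is Student-t with n_deg dof."""
    beta = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n_deg)) / n_deg
    return random.gauss(0.0, 1.0 / math.sqrt(beta))

xs = [superstat_sample() for _ in range(20000)]
# heavier tails than a Gaussian: P(|x| > 3) is ~0.0027 for N(0, 1)
tail_frac = sum(abs(x) > 3.0 for x in xs) / len(xs)
```

The variance heterogeneity alone produces the non-Gaussian tails: the local fluctuations are Gaussian at every instant, yet the mixture over β is heavy-tailed, which is the mechanism the log-amplitude cumulants quantify.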
Non-laser-based scanner for three-dimensional digitization of historical artifacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Daniel V.; Baldwin, Kevin C.; Duncan, Donald D
2007-05-20
A 3D scanner, based on incoherent illumination techniques, and associated data-processing algorithms are presented that can be used to scan objects at lateral resolutions ranging from 5 to100 {mu}m (or more) and depth resolutions of approximately 2 {mu}m.The scanner was designed with the specific intent to scan cuneiform tablets but can be utilized for other applications. Photometric stereo techniques are used to obtain both a surface normal map and a parameterized model of the object's bidirectional reflectance distribution function. The normal map is combined with height information,gathered by structured light techniques, to form a consistent 3D surface. Data from Lambertianmore » and specularly diffuse spherical objects are presented and used to quantify the accuracy of the techniques. Scans of a cuneiform tablet are also presented. All presented data are at a lateral resolution of 26.8 {mu}m as this is approximately the minimum resolution deemed necessary to accurately represent cuneiform.« less
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is the determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable to identify potential districts which comprise a high percentage of building lots with narrow frontage after subdivision and to reconsider the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and the density of buildings and roads has not been fully researched. In this paper, based on the empirical study in the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the statistical distribution of building lot frontages statistically follows a log-normal distribution, whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages, and that of the ratio of the number of building lot frontages to that of buildings are approximately equal to 0.60 and 1.19, respectively.
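Given a mean frontage and the coefficient of variation of about 0.60 found above, the log-normal parameters follow from σ² = ln(1 + CV²) and μ = ln(mean) − σ²/2, and any planning quantile can then be read off the log-normal CDF. The mean frontage and the 4 m threshold below are hypothetical choices for the sketch.

```python
import math

mean_frontage = 5.0   # metres; hypothetical district average
cv = 0.60             # coefficient of variation reported above

sigma2 = math.log(1.0 + cv ** 2)             # sigma^2 = ln(1 + CV^2)
mu = math.log(mean_frontage) - sigma2 / 2.0  # reproduces the given mean
median_frontage = math.exp(mu)               # log-normal median

def lognorm_cdf(x):
    """Log-normal CDF: Phi((ln x - mu) / sigma)."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / math.sqrt(2.0 * sigma2)))

# share of building lots with frontage below a planning threshold
share_narrow = lognorm_cdf(4.0)
```

This is the kind of calculation a practitioner could use to flag districts with a high expected share of narrow-frontage lots after subdivision.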
NASA Technical Reports Server (NTRS)
Tyson, R. W.; Muraca, R. J.
1975-01-01
The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distribution on slender bodies at free-stream Mach numbers from .8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.
Technical Reports Prepared Under Contract N00014-76-C-0475.
1987-05-29
264 Approximations to Densities in Geometric H. Solomon 10/27/78 Probability M.A. Stephens 3. Technical Relort No. Title Author Date 265 Sequential ...Certain Multivariate S. Iyengar 8/12/82 Normal Probabilities 323 EDF Statistics for Testing for the Gamma M.A. Stephens 8/13/82 Distribution with...20-85 Nets 360 Random Sequential Coding By Hamming Distance Yoshiaki Itoh 07-11-85 Herbert Solomon 361 Transforming Censored Samples And Testing Fit
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs worse than exact computation overall, others, particularly the normal approximation, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
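The classical normal approximation that the exact method replaces is short: under the null, the difference of two Friedman rank sums over n blocks and k treatments has variance n*k*(k+1)/6, giving a two-sided z-test. A sketch of that baseline (the large-sample test, not the paper's exact combinatorial computation):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def friedman_pairwise_pvalue_normal(d, n, k):
    """Two-sided normal-approximation p-value for the absolute difference
    d of two Friedman rank sums, with n blocks and k treatments."""
    var = n * k * (k + 1) / 6.0          # null variance of the difference
    z = d / math.sqrt(var)
    return 2.0 * (1.0 - Phi(z))
```

It is exactly in the small-p-value tail of this approximation that the abstract reports the largest inaccuracies, motivating the exact calculation.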
[Natural selection associated with color vision defects in some population groups of Eurasia].
Evsiukov, A N
2014-01-01
Fitness coefficients and other quantitative parameters of selection associated with the generalized color blindness gene CB+ were obtained for three ethnogeographic population groups, including Belarusians from Belarus, ethnic populations of the Volga-Ural region, and ethnic populations of Siberia and the Far East of Russia. All abnormalities encoded by the OPN1LW and OPN1MW loci were treated as deviations from normal color perception. Coefficients were estimated from an approximation of the observed CB+ frequency distributions to the theoretical stationary distribution for the Wright island model. This model takes into account the pressure of migrations, selection, and random genetic drift, while the selection parameters are represented in the form of the distribution parameters. In the populations of Siberia and the Far East, directional selection in favor of normal color vision and the corresponding allele CB- was observed. In the Belarusian populations and the ethnic populations of the Volga-Ural region, stabilizing selection was observed. The selection intensity was 0.03 in the Belarusian populations, 0.22 in the ethnic populations of the Volga-Ural region, and 0.24 in the ethnic populations of Siberia and the Far East.
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Gebo, Norman; Rowland, John
1988-01-01
This effort describes cumulative rain rate distributions for a network of nine tipping-bucket rain gauge systems located in the mid-Atlantic coast region in the vicinity of the NASA Wallops Flight Facility, Wallops Island, Virginia. The rain gauges are situated within a gridded region measuring 47 km east-west by 70 km north-south. Distributions are presented for the individual site measurements and for the network average over the one-year period June 1, 1986 through May 31, 1987. A previous six-year average distribution derived from measurements at one of the sites is also presented. Comparisons are given of the network average, the CCIR (International Radio Consultative Committee) climatic zone distribution, and the CCIR functional model distribution, the last of which approximates a lognormal at lower rain rates and a gamma function at higher rates.
Biological monitoring of environmental quality: The use of developmental instability
Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.
1994-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution of random phenotypic variation approximating an inverse power law. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate: it consists of a lognormal body joined to upper and lower power-law tails.
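Reed's construction of the DPLN lends itself to a simple sampling sketch: the log of a DPLN variate is a normal variate plus an asymmetric Laplace variate, and the latter can be written as a difference of two exponentials. The parameter names below are illustrative conventions, not taken from the paper:

```python
import math
import random

def sample_dpln(mu, sigma, alpha, beta, n, seed=1):
    """Draw n variates from a double Pareto-lognormal distribution:
    log X = N(mu, sigma^2) + E1/alpha - E2/beta with E1, E2 ~ Exp(1).
    alpha controls the upper power-law tail, beta the lower one, and
    the lognormal(mu, sigma) component forms the body."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        logx = (rng.gauss(mu, sigma)
                + rng.expovariate(1.0) / alpha   # upper Pareto tail
                - rng.expovariate(1.0) / beta)   # lower Pareto tail
        out.append(math.exp(logx))
    return out
```

Since E[log X] = mu + 1/alpha - 1/beta, the sample log-mean gives a quick sanity check on the generator.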
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power-law exponent converges strongly to $-1$ as the alphabet size increases when the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2) on a logarithmic scale, the version of the model with a finite word-length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem of Shao and Hahn for the logarithm of sample spacings, and the second follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word-length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
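The monkey model itself is easy to enumerate: with a space typed with probability q and letter i with probability (1 - q)p_i, a word has probability q times the product of its letter probabilities. The sketch below (alphabet size, cutoff, and rank window chosen arbitrarily by us) builds the word distribution from random spacings and fits the log-log rank slope:

```python
import math
import random
from itertools import product

def monkey_word_probs(letter_probs, q, max_len):
    """Probabilities of all words of 1..max_len letters in the monkey
    model: P(word) = q * prod((1 - q) * p_i) over its letters."""
    probs = []
    for m in range(1, max_len + 1):
        for word in product(range(len(letter_probs)), repeat=m):
            p = q
            for letter in word:
                p *= (1.0 - q) * letter_probs[letter]
            probs.append(p)
    return sorted(probs, reverse=True)

def zipf_slope(probs, lo, hi):
    """Least-squares slope of log P versus log rank over ranks lo..hi."""
    xs = [math.log(r) for r in range(lo, hi + 1)]
    ys = [math.log(probs[r - 1]) for r in range(lo, hi + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Letter probabilities as spacings of a random division of [0, 1].
rng = random.Random(0)
cuts = sorted(rng.random() for _ in range(14))
spacings = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
probs = monkey_word_probs(spacings, q=0.2, max_len=3)
slope = zipf_slope(probs, 10, 1000)
```

With a 15-letter alphabet the fitted slope already sits in the neighborhood of the limiting value $-1$ described in the abstract, though the small alphabet and short cutoff leave visible step structure.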
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. In experimental setups, however, the outcome probability distributions are estimated from a finite number of samples. The nonsignaling conditions are therefore only approximately satisfied, and the robustness of the violation depends on the chosen inequality variant. We explain this phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabelings of parties, measurement settings, and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling, or having uniform marginals.
Power laws in microrheology experiments on living cells: Comparative analysis and modeling.
Balland, Martial; Desprat, Nicolas; Icard, Delphine; Féréol, Sophie; Asnacios, Atef; Browaeys, Julien; Hénon, Sylvie; Gallet, François
2006-08-01
We compare and synthesize the results of two microrheological experiments on the cytoskeleton of single cells. In the first, the creep function J(t) of a cell stretched between two glass plates is measured after applying a constant force step. In the second, a microbead specifically bound to transmembrane receptors is driven by an oscillating optical trap, and the viscoelastic coefficient Ge(ω) is retrieved. Both J(t) and Ge(ω) exhibit power-law behavior: J(t) = A0(t/t0)^α and |Ge(ω)| = G0(ω/ω0)^α, with the same exponent α ≈ 0.2. This power-law behavior is very robust: α is distributed over a narrow range and shows almost no dependence on the cell type, on the nature of the protein complex that transmits the mechanical stress, or on the typical length scale of the experiment. By contrast, the prefactors A0 and G0 are very sensitive to these parameters. Whereas the exponents α are normally distributed over the cell population, the prefactors A0 and G0 follow a log-normal distribution. These results are compared with other data published in the literature. We propose a global interpretation, based on a semiphenomenological model, which involves a broad distribution of relaxation times in the system. The model predicts the power-law behavior and the statistical distribution of the mechanical parameters, as observed experimentally for the cells. Moreover, it leads to an estimate of the largest response time in the cytoskeletal network: τm ≈ 1000 s.
The Italian primary school-size distribution and the city-size: a complex nexus
Belmonte, Alessandro; Di Clemente, Riccardo; Buldyrev, Sergey V.
2014-01-01
We characterize the statistical law according to which the Italian primary school size distributes. We find that school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially, and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape, suggesting some source of heterogeneity in school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features. PMID:24954714
Electrostatically confined nanoparticle interactions and dynamics.
Eichmann, Shannon L; Anekal, Samartha G; Bevan, Michael A
2008-02-05
We report integrated evanescent wave and video microscopy measurements of three-dimensional trajectories of 50, 100, and 250 nm gold nanoparticles electrostatically confined between parallel planar glass surfaces separated by 350 and 600 nm silica colloid spacers. Equilibrium analyses of single and ensemble particle height distributions normal to the confining walls produce net electrostatic potentials in excellent agreement with theoretical predictions. Dynamic analyses indicate lateral particle diffusion coefficients approximately 30-50% smaller than expected from predictions including the effects of the equilibrium particle distribution within the gap and multibody hydrodynamic interactions with the confining walls. Consistent analyses of equilibrium and dynamic information in each measurement do not indicate any role for particle heating or hydrodynamic slip at the particle or wall surfaces, both of which would increase diffusivities. Instead, the lower-than-expected diffusivities are speculated to arise from electroviscous effects enhanced by the relative extent (κa ≈ 1-3) and overlap (κh ≈ 2-4) of electrostatic double layers on the particle and wall surfaces. These results demonstrate direct, quantitative measurements and a consistent interpretation of metal nanoparticle electrostatic interactions and dynamics in a confined geometry, which provides a basis for future similar measurements involving other colloidal forces and specific biomolecular interactions.
Wavelet entropy characterization of elevated intracranial pressure.
Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Chad, Miller; Hu, Xiao
2008-01-01
Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, tumor, etc. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy to study, for the first time, the difference between normal and hypertensive states of ICP. The wavelet entropy revealed findings similar to those of approximate entropy: entropy during the ICH state is smaller than in the normal state. Moreover, wavelet entropy shows that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distribution across the wavelet bands differs between the two states. Based on these results, we suggest that ICH may be formed by the re-allocation of oscillation energy within the brain.
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5 Ēd, where Ēd is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean, like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
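The distributional diagnostics used here (sample skewness, excess kurtosis, and a lognormal fit to the body below the 90th percentile) can be sketched as follows; the synthetic lognormal data merely stand in for irradiance samples, and all parameter choices are ours:

```python
import math
import random

def moments(xs):
    """Return (skewness, excess kurtosis) of a sample."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s2 ** 2) - 3.0
    return skew, kurt

def lognormal_body_fit(xs):
    """Fit a lognormal to the values below the 90th percentile by
    matching log-moments: a body-only description that deliberately
    ignores the poorly modeled right tail."""
    body = sorted(xs)[: int(0.9 * len(xs))]
    logs = [math.log(x) for x in body]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

# Synthetic right-skewed "irradiance" sample (illustrative only).
rng = random.Random(42)
sample = [math.exp(rng.gauss(0.0, 0.6)) for _ in range(50000)]
skew, kurt = moments(sample)
mu, sigma = lognormal_body_fit(sample)
```

Note that fitting only the body biases mu and sigma downward relative to the full-sample parameters, which is acceptable here because the goal is a description of the sub-90th-percentile range.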
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
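The first-order moment-matching idea can be sketched generically: propagate input variances through linearized sensitivities and check the result against Monte Carlo. The toy response function below is ours and stands in for the CFD output; it is a sketch of the method, not the paper's code:

```python
import math
import random

def first_order_moments(f, means, sigmas, h=1e-6):
    """First-order moment matching: mean ~= f(means) and
    var ~= sum_i (df/dx_i)^2 * sigma_i^2, with sensitivity
    derivatives taken by central finite differences."""
    mu = f(means)
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2.0 * h)
        var += (dfdx * s) ** 2
    return mu, math.sqrt(var)

# Toy "output" function standing in for the CFD response.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
means, sigmas = [2.0, 1.0], [0.05, 0.02]
mu, sd = first_order_moments(f, means, sigmas)

# Monte Carlo check with independent normally distributed inputs.
rng = random.Random(0)
vals = [f([rng.gauss(m, s) for m, s in zip(means, sigmas)])
        for _ in range(100000)]
mc_mu = sum(vals) / len(vals)
mc_sd = math.sqrt(sum((v - mc_mu) ** 2 for v in vals) / len(vals))
```

As in the paper, the approximation is good when the input uncertainties are small relative to the curvature of the response about the mean values.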
Plasma Theory and Simulation Group Annual Progress Report for 1991
1991-12-31
beam formation analytically: (i) the resistance of the (low-density) ... to the final, high-density cylindrical wall can be approximated by the ... regime ... A model is developed that predicts the ion angular distribution function in a highly collisional sheath. In a previous study, the normal ion velocity ... gives a linear dispersion relation of the form ω² = k²(Te + Ti)/(me + mi) (Eq. 40), which predicts ion acoustic waves. These waves have the highest frequency
2015-09-01
Extremely Lightweight Intrusion Detection (ELIDe) algorithm on an Android-based mobile device. Our results show that the hashing and inner product ... approximately 2.5 megabits per second (assuming a normal distribution of packet sizes) with no significant packet loss. ... system (OS). To run ELIDe, the current version was ported for use on Android. ... After ELIDe was ported to the Android mobile
Histopathological and Digital Morphometrical Evaluation of Uterine Leiomyoma in Brazilian Women
da Silva, Ana Paula Fernandes; Mello, Luciano de Albuquerque; dos Santos, Erlene Roberta Ribeiro; Paz, Silvania Tavares; Cavalcanti, Carmelita Lima Bezerra; de Melo-Junior, Mario Ribeiro
2016-01-01
The current study aims to evaluate histopathological and digital morphometrical aspects associated with uterine leiomyomas in one hundred and fifty (150) patients diagnosed with leiomyoma. Uterine tissues were subjected to histopathological and digital morphometric analyses of the interstitial collagen distribution. The analysis of medical records indicates that most of the women diagnosed with uterine leiomyomas (68.7%) are between 37 and 48 years old. As for the anatomic location of the tumors, approximately 61.4% of the patients had intramural and subserosal lesions. In 50% of the studied cases, the patients developed uterine leiomyomatosis (with more than eight tumors). As for the morphometric study, interstitial collagen occupied on average approximately 28.53% of the captured area in tumor tissue, versus 7.43% in the normal tissue adjacent to the tumor. Another important finding of the current study was the high rate of young women subjected to total hysterectomy, which resulted in early and definitive sterility. PMID:27293441
Czopyk, L; Olko, P
2006-01-01
The analytical model of Xapsos for calculating microdosimetric spectra is based on the observation that energy-loss straggling can be approximated by a log-normal distribution of energy deposition. The model was applied to calculate microdosimetric spectra in spherical targets of nanometer dimensions for heavy ions at energies between 0.3 and 500 MeV amu⁻¹. We recalculated the originally assumed 1/E² initial delta-electron spectrum by applying the continuous slowing down approximation for secondary electrons. We also modified the energy deposition from electrons of energy below 100 keV, taking into account the effective path length of the scattered electrons. Results of our model calculations agree favourably with results of Monte Carlo track structure simulations using MOCA-14 for light ions (Z = 1-8) of energy ranging from E = 0.3 to 10.0 MeV amu⁻¹, as well as with results of Nikjoo for a wall-less proportional counter (Z = 18).
Experiment S001: Zodiacal Light Photography
NASA Technical Reports Server (NTRS)
Ney, E. P.; Huch, W. F.
1971-01-01
Observations made during the Gemini 5, 9, and 10 missions in the context of their relation to ground-based and balloon-based experiments on dim-light phenomena are reported. Zodiacal light is the visible manifestation of dust grains in orbit around the sun. The negatives that were exposed on the Gemini 9 mission were studied by the use of an isodensitracer to produce intensity isophotes. Data on the following factors were obtained: (1) intensity distribution of the zodiacal light, both morning and evening; (2) the height and intensity of the airglow at various geographic positions; and (3) intensity distribution of the Milky Way in the region of the sky near Cygnus. Also, a previously unreported phenomenon was discovered. This phenomenon appeared as an upward extension of the normal 90-kilometer airglow layer. The extension was in the form of wisps or plumes approximately 5 deg wide and extending upward approximately 5 deg. The results obtained from pictures exposed on the Gemini 10 mission were of qualitative or geometrical value only.
Perkins, Bradford G; Häber, Thomas; Nesbitt, David J
2005-09-01
An apparatus for detailed study of quantum state-resolved inelastic energy transfer dynamics at the gas-liquid interface is described. The approach relies on supersonic jet-cooled molecular beams impinging on a continuously renewable liquid surface in a vacuum and exploits sub-Doppler high-resolution laser absorption methods to probe rotational, vibrational, and translational distributions in the scattered flux. First results are presented for skimmed beams of jet-cooled CO2 (Tbeam ≈ 15 K) colliding at normal incidence with a liquid perfluoropolyether (PFPE) surface at Einc = 10.6(8) kcal/mol. The experiment uses a tunable Pb-salt diode laser for direct absorption on the CO2 ν3 asymmetric stretch. Measured rotational distributions in both the 00⁰0 and 01¹0 vibrational manifolds indicate that CO2 inelastically scatters from the liquid surface into a clearly non-Boltzmann distribution, revealing nonequilibrium dynamics with average rotational energies in excess of the liquid temperature (Ts = 300 K). Furthermore, high-resolution analysis of the absorption profiles reveals Doppler widths corresponding to temperatures significantly warmer than Ts that increase systematically with the rotational state J. These rotational and translational distributions are consistent with two distinct gas-liquid collision pathways: (i) a T ≈ 300 K component due to trapping-desorption (TD) and (ii) a much hotter distribution (T ≈ 750 K) due to "prompt" impulsive scattering (IS) from the gas-liquid interface. By way of contrast, vibrational populations in the CO2 bending mode are inefficiently excited by scattering from the liquid, presumably reflecting much slower T-V collisional energy transfer rates.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method based on a hybrid evolutionary optimization algorithm (HEOA) is presented for retrieving a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46 ≤ α ≤ 150. The HEOA ensures convergence to a near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that the HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira
2012-05-01
Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance of bilinguals on language-format, but not visual-perceptual-format, tests. This was studied by comparing neuropsychological false-positive rates, using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level, against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual Hispanic Americans. Participants were tested in separate sessions in Spanish and English, in counterbalanced order, on La Bateria Neuropsicologica and the original English-language tests on which this battery was based. For language-format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives closest to that expected from the normal distribution, whereas the 50th percentile of conventional English-language norms did the same for visual-perceptual-format measures. When using conventional Spanish or English monolingual norms for language-format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.
Taylor, Adam G.
2018-01-01
New solutions of potential functions for the bilinear vertical traction boundary condition are derived and presented. The discretization and interpolation of higher-order tractions and the superposition of the bilinear solutions provide a method of forming approximate and continuous solutions for the equilibrium state of a homogeneous and isotropic elastic half-space subjected to arbitrary normal surface tractions. Past experimental measurements of contact pressure distributions in granular media are reviewed in conjunction with the application of the proposed solution method to analysis of elastic settlement in shallow foundations. A numerical example is presented for an empirical ‘saddle-shaped’ traction distribution at the contact interface between a rigid square footing and a supporting soil medium. Non-dimensional soil resistance is computed as the reciprocal of normalized surface displacements under this empirical traction boundary condition, and the resulting internal stresses are compared to classical solutions to uniform traction boundary conditions. PMID:29892456
Roux, C Z
2009-05-01
Short phylogenetic distances between taxa occur, for example, in studies of ribosomal RNA genes with slow substitution rates. For consistently short distances, it is proved that, in the completely singular limit of the covariance matrix, ordinary least squares (OLS) estimates are minimum-variance, or best linear unbiased (BLU), estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test is inapplicable, as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test, or an analogous bootstrap approach, provides optimal branch-length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths are equal to zero, the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
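The closing product-rule argument can be illustrated directly: with asymptotically independent branch-length estimates, per-branch one-sided normal tests combine multiplicatively into a simultaneous confidence probability. The branch estimates and standard errors below are invented for illustration:

```python
from statistics import NormalDist

def branch_confidence(estimate, std_error):
    """One-sided probability that a branch length is positive,
    from a normal test of estimate / std_error."""
    return NormalDist().cdf(estimate / std_error)

# Hypothetical interior branch estimates and standard errors.
branches = [(0.012, 0.004), (0.008, 0.003), (0.015, 0.005)]
per_branch = [branch_confidence(b, se) for b, se in branches]

# Product rule: approximate simultaneous confidence that all
# interior branches are positive (valid when the asymptotic
# covariances between branch lengths vanish, as proved above).
simultaneous = 1.0
for p in per_branch:
    simultaneous *= p
```

The product is always at most the smallest per-branch confidence, which is why even a single weakly supported branch dominates the simultaneous statement.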
Percentiles of the product of uncertainty factors for establishing probabilistic reference doses.
Gaylor, D W; Kodell, R L
2000-04-01
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) that is determined by dividing a no-observed-adverse-effect level (NOAEL), lowest-observed-adverse-effect level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk by a product of uncertainty factors. The uncertainty factors for animal-to-human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases are examined that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects, to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormal. The logarithm of the product of independent uncertainty factors is then approximately distributed as a sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the product of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, default values of 10 appear reasonable and conservative. For the databases examined, the following simple "Rule of 3s" is suggested, which exceeds the estimated 95th percentile of the product of uncertainty factors: if only a single uncertainty factor is required, use 33; for any two uncertainty factors, use 3 × 33 ≈ 100; for any three uncertainty factors, use a combined factor of 3 × 100 = 300; and if all four uncertainty factors are needed, use a total factor of 3 × 300 = 900. If near the 99th percentile is desired, use another factor of 3.
An additional factor may be needed for inadequate data or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
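The lognormal-product argument is easy to make concrete: if each factor is lognormal, the log of the product is normal with summed log-means and summed log-variances, so any percentile follows in closed form. The geometric means and geometric standard deviations below are invented for illustration, not drawn from the databases examined in the paper:

```python
import math
from statistics import NormalDist

def product_percentile(gms, gsds, pct):
    """Percentile of a product of independent lognormal uncertainty
    factors with geometric means gms and geometric SDs gsds:
    log-product ~ N(sum(log gm), sum(log gsd)^2 summed in variance)."""
    mu = sum(math.log(g) for g in gms)
    sigma = math.sqrt(sum(math.log(s) ** 2 for s in gsds))
    z = NormalDist().inv_cdf(pct)
    return math.exp(mu + z * sigma)

# Three hypothetical factors, each with geometric mean 10.
p95 = product_percentile([10.0, 10.0, 10.0], [2.0, 2.5, 2.0], 0.95)
```

The median of the product is simply the product of the geometric means, while upper percentiles grow with the root-sum-square of the log-spreads rather than their product, which is the statistical basis for combined factors like 300 being adequately conservative.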
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as dynamic Bayesian networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into binary decision diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow.
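For reference, the generic particle filter that such a proposal distribution plugs into can be sketched for a toy one-dimensional state-space model. The model, parameters, and proposal below are ours (a bootstrap filter proposing from the transition model); the paper's probabilistic-logical representation is not reproduced here:

```python
import math
import random

def particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0, 0.5^2),
    y_t = x_t + N(0, 0.5^2); returns the posterior-mean estimates."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propose from the transition model (bootstrap proposal).
        particles = [0.9 * x + rng.gauss(0.0, 0.5) for x in particles]
        # Weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2)
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=weights,
                                k=n_particles)
    return estimates

obs = [0.5, 0.7, 0.4, 0.9, 1.1]
est = particle_filter(obs)
```

Replacing the transition-model proposal with one closer to the true posterior, which is what the BDD compilation enables in the paper's setting, reduces weight variance and hence the number of particles needed.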
Spatiotemporal Fractionation Schemes for Irradiating Large Cerebral Arteriovenous Malformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bussière, Marc R.; Chapman, Paul H.
2016-07-01
Purpose: To optimally exploit fractionation effects in the context of radiosurgery treatments of large cerebral arteriovenous malformations (AVMs). In current practice, fractionated treatments divide the dose evenly into several fractions, which generally leads to low obliteration rates. In this work, we investigate the potential benefit of delivering distinct dose distributions in different fractions. Methods and Materials: Five patients with large cerebral AVMs were reviewed and replanned for intensity modulated arc therapy delivered with conventional photon beams. Treatment plans allowing for different dose distributions in all fractions were obtained by performing treatment plan optimization based on the cumulative biologically effective dose delivered at the end of treatment. Results: We show that distinct treatment plans can be designed for different fractions, such that high single-fraction doses are delivered to complementary parts of the AVM. All plans create a similar dose bath in the surrounding normal brain and thereby exploit the fractionation effect. This partial hypofractionation in the AVM, along with fractionation in normal brain, achieves a net improvement of the therapeutic ratio. We show that a biological dose reduction of approximately 10% in the healthy brain can be achieved compared with reference treatment schedules that deliver the same dose distribution in all fractions. Conclusions: Boosting complementary parts of the target volume in different fractions may provide a therapeutic advantage in fractionated radiosurgery treatments of large cerebral AVMs. The strategy allows for a mean dose reduction in normal brain that may be valuable for a patient population with an otherwise normal life expectancy.
K+-nucleus scattering using K → μν decays as a normalization check
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael, R.; Hicks, K.; Bart, S.
1995-04-01
Elastic scattering of 720 and 620 MeV/c positive kaons from targets of ¹²C and ⁶Li has been measured up to laboratory angles of 42°. Since the magnitude of the cross sections is sensitive to nuclear medium effects, the K → μν decay mode has been used to check the normalization. GEANT has been used to mimic the kaon decays over a path length of 12 cm, with a correlated beam structure matching the experimental kaon beam. The corresponding muon distribution has been passed through Monte Carlo simulations of the Moby Dick spectrometer. The results are compared with the experimental number of decay muons, with good agreement. These results also agree with the normalization found using p-p elastic scattering. The normalized K⁺ elastic data are compared to recent optical model predictions based on both Klein-Gordon and KDP equations in the impulse approximation.
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.
Bishara, Anthony J; Hittner, James B
2015-10-01
It is more common for educational and psychological data to be nonnormal than approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data and compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, the results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
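A rankit (normal-scores) transformation of the kind evaluated here can be sketched as follows; `rankit_corr` and the helper names are ours, not functions from the paper, and the midrank convention for ties is one common choice among several:

```python
import math
from statistics import NormalDist

def _ranks(xs):
    """Midranks: tied values receive the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for m in range(i, j + 1):
            ranks[order[m]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def rankit(xs):
    """Map values to normal scores: Phi^-1((rank - 0.5) / n)."""
    nd, n = NormalDist(), len(xs)
    return [nd.inv_cdf((r - 0.5) / n) for r in _ranks(xs)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    dy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return num / (dx * dy)

def rankit_corr(xs, ys):
    """Pearson correlation of the rankit-transformed samples."""
    return pearson(rankit(xs), rankit(ys))
```

Because the transformation depends only on ranks, a single heavy-tailed outlier that distorts the raw Pearson estimate leaves the rankit correlation unchanged.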
Dekkers, A L M; Slob, W
2012-10-01
In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles; we solved this problem by modifying GQ. The method with the log-transformation assumes that the daily intakes are log-normally distributed; even if this condition is not fulfilled, it performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
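The back-transformation step can be sketched with Gauss-Hermite quadrature, a standard form of Gaussian quadrature for integrals against a normal density (the function and parameter names below are illustrative, not the paper's implementation):

```python
import numpy as np

def usual_intake_mean(mu, sigma, lam, n_nodes=20):
    """Approximate E[g_inv(Z)] for Z ~ N(mu, sigma^2), where g_inv is the
    inverse Box-Cox transform x = (lam*z + 1)**(1/lam)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z = mu + np.sqrt(2.0) * sigma * nodes                     # change of variables for N(mu, sigma^2)
    x = np.power(np.maximum(lam * z + 1.0, 0.0), 1.0 / lam)   # inverse Box-Cox, floored at 0
    return float(np.sum(weights * x) / np.sqrt(np.pi))
```

For lam = 1 the inverse transform is linear, so the quadrature recovers mu + 1 essentially exactly.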
Electrophoretic cell separation by means of microspheres
NASA Technical Reports Server (NTRS)
Smolka, A. J. K.; Nerren, B. H.; Margel, S.; Rembaum, A.
1979-01-01
The electrophoretic mobility of fixed human erythrocytes immunologically labeled with poly(vinylpyridine) or poly(glutaraldehyde) microspheres was reduced by approximately 40%. This observation was utilized in preparative scale electrophoretic separations of fixed human and turkey erythrocytes, the mobilities of which under normal physiological conditions do not differ sufficiently to allow their separation by continuous flow electrophoresis. We suggest that resolution in the electrophoretic separation of cell subpopulations, currently limited by finite and often overlapping mobility distributions, may be significantly enhanced by immunospecific labeling of target populations using microspheres.
Collective purchase behavior toward retail price changes
NASA Astrophysics Data System (ADS)
Ueno, Hiromichi; Watanabe, Tsutomu; Takayasu, Hideki; Takayasu, Misako
2011-02-01
By analyzing a huge amount of point-of-sale data collected from Japanese supermarkets, we find power law relationships between price and sales numbers. The estimated values of the exponents of these power laws depend on the category of products; however, they are independent of the stores, implying the existence of universal human purchase behavior. The scatter of sales numbers around these power laws is generally well approximated by log-normal distributions, implying that there are hidden random parameters which might proportionally affect the purchase activity.
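A power law with log-normal scatter of this kind can be checked by linear regression in log-log coordinates; the sketch below uses synthetic data with an assumed exponent, not the point-of-sale data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: sales ~ price^(-beta) with multiplicative log-normal scatter
beta_true = 1.5
prices = np.exp(rng.uniform(np.log(50.0), np.log(500.0), size=2000))
sales = 1e6 * prices ** (-beta_true) * rng.lognormal(mean=0.0, sigma=0.3, size=2000)

# A power law is a straight line in log-log coordinates: log sales = c - beta * log price
slope, intercept = np.polyfit(np.log(prices), np.log(sales), 1)
beta_hat = -slope

# Scatter around the power law should look normal on the log scale (log-normal scatter)
residuals = np.log(sales) - (slope * np.log(prices) + intercept)
```

The exponent estimate and the residual spread recover the generating parameters, illustrating how a log-normal fluctuation band around a power law can be quantified.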
1983-10-01
failure envelopes compound the problems relating to the distribution of normal stress. Any given linear approximation of a curvilinear envelope will be... Solution of cementing agents such as calcite and carbonates results in subsequent strength losses. Oxidation to form new chemical compounds within...
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
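The simulation procedure described can be sketched as follows, with a simple t-ratio threshold standing in for the paper's deletion strategy (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_pop = np.array([2.0, 1.0, 0.5, 0.0, 0.0, 0.0])   # population model: last terms inert

# Simulated experiment: population values plus pseudo-random normally distributed errors
y = X @ beta_pop + rng.normal(scale=1.0, size=n)

# Fit the full model, then delete terms whose t-like ratio is small
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - p)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * sigma2)
keep = np.abs(beta_hat / se) > 2.0

# Refit the reduced model and compare predictions with the true population values
beta_red = np.zeros(p)
beta_red[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
pred_err = np.mean((X @ beta_red - X @ beta_pop) ** 2)
```

Comparing `pred_err` across different deletion thresholds is the kind of comparison that identifies approximately optimal strategies for minimizing prediction error.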
Padé approximant for normal stress differences in large-amplitude oscillatory shear flow
NASA Astrophysics Data System (ADS)
Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.
2018-04-01
Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
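The construction of a Padé approximant from a truncated expansion can be illustrated with SciPy on a generic series (exp(x) here, not the corotational Jeffreys expansion from the paper):

```python
from math import factorial

import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through x^7: enough to build a [3,4] approximant
an = [1.0 / factorial(k) for k in range(8)]
p, q = pade(an, 4)            # denominator degree 4 -> numerator degree 3

x = 1.0
rational = p(x) / q(x)                                # [3,4] Padé value at x
truncated = sum(c * x**k for k, c in enumerate(an))   # plain truncated series
```

At x = 1 the [3,4] approximant is substantially closer to exp(1) than the truncated series it was built from, the same accuracy gain the paper exploits for the normal stress difference expansions.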
Interim Report on Fatigue Characteristics of a Typical Metal Wing
NASA Technical Reports Server (NTRS)
Kepert, J L; Payne, A O
1956-01-01
Constant amplitude fatigue tests of seventy-two P-51D "Mustang" wings are reported. The tests were performed by a vibrational loading system and by a hydraulic loading device for conditions with and without varying amounts of pre-load. The results indicate that: (a) the frequency of occurrence of fatigue at any one location is related to the range of the loads applied, (b) the rate of propagation of visible cracks is more or less constant for a large portion of the life of the specimen, (c) the fatigue strength of the structure is similar to that of notched material having a theoretical stress concentration factor of more than 3.0, (d) the frequency distribution of fatigue life is approximately logarithmic normal, (e) the relative increase in fatigue life for a given pre-load depends on the maximum load of the loading cycle only, while the optimum pre-load value is approximately 85 percent of the ultimate failing load, and (f) normal design procedure will not permit the determination of local stress levels with sufficient accuracy to determine the fatigue strength of an element of a redundant structure.
An unsteady lifting surface method for single rotation propellers
NASA Technical Reports Server (NTRS)
Williams, Marc H.
1990-01-01
The mathematical formulation of a lifting surface method for evaluating the steady and unsteady loads induced on single rotation propellers by blade vibration and inflow distortion is described. The scheme is based on 3-D linearized compressible aerodynamics and presumes that all disturbances are simple harmonic in time. This approximation leads to a direct linear integral relation between the normal velocity on the blade (which is determined from the blade geometry and motion) and the distribution of pressure difference across the blade. This linear relation is discretized by breaking the blade up into subareas (panels) on which the pressure difference is treated as approximately constant, and constraining the normal velocity at one (control) point on each panel. The piece-wise constant loads can then be determined by Gaussian elimination. The resulting blade loads can be used in performance, stability and forced response predictions for the rotor. Mathematical and numerical aspects of the method are examined. A selection of results obtained from the method is presented. The appendices include various details of the derivation that were felt to be secondary to the main development in Section 1.
The Angular Three-Point Correlation Function in the Quasi-linear Regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchalter, Ari; Kamionkowski, Marc; Jaffe, Andrew H.
2000-02-10
We calculate the normalized angular three-point correlation function (3PCF), q, as well as the normalized angular skewness, s_3, assuming the small-angle approximation, for a biased mass distribution in flat and open cold dark matter (CDM) models with Gaussian initial conditions. The leading-order perturbative results incorporate the explicit dependence on the cosmological parameters, the shape of the CDM transfer function, the linear evolution of the power spectrum, the form of the assumed redshift distribution function, and linear and nonlinear biasing, which may be evolving. Results are presented for different redshift distributions, including that appropriate for the APM Galaxy Survey, as well as for a survey with a mean redshift of z ≈ 1 (such as the VLA FIRST Survey). Qualitatively, many of the results found for s_3 and q are similar to those obtained in a related treatment of the spatial skewness and 3PCF, such as a leading-order correction to the standard result for s_3 in the case of nonlinear bias (as defined for unsmoothed density fields), and the sensitivity of the configuration dependence of q to both cosmological and biasing models. We show that since angular correlation functions (CFs) are sensitive to clustering over a range of redshifts, the various evolutionary dependences included in our predictions imply that measurements of q in a deep survey might better discriminate between models with different histories, such as evolving versus nonevolving bias, that can have similar spatial CFs at low redshift. Our calculations employ a derived equation, valid for open, closed, and flat models, to obtain the angular bispectrum from the spatial bispectrum in the small-angle approximation. (c) 2000 The American Astronomical Society.
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
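The sample-size cutoff rule described (the smallest n whose standard error falls within a set fraction of the mean) reduces to a one-liner; the numbers in the usage note are hypothetical, not the study's measurements:

```python
import math

def sample_size_cutoff(mean, sd, rel_error=0.15):
    """Smallest n whose standard error sd/sqrt(n) is within rel_error of the mean."""
    return math.ceil((sd / (rel_error * mean)) ** 2)
```

For example, a hypothetical vessel population with mean velocity 0.5 mm/s and standard deviation 0.29 mm/s would need `sample_size_cutoff(0.5, 0.29) = 15` vessels, the same order as the cutoff reported in the abstract.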
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
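A minimal univariate version of the moment-matching idea, inverting mean, variance, and skewness for SciPy's skew normal, can be sketched as follows (the paper works with multivariate skew normals and moments derived from the master equation; this is only the one-dimensional analogue):

```python
import numpy as np
from scipy import stats

def skewnorm_from_moments(mean, var, skew):
    """Method-of-moments parameters (a, loc, scale) for scipy.stats.skewnorm."""
    k = ((4.0 - np.pi) / 2.0) ** (2.0 / 3.0)
    t = np.abs(skew) ** (2.0 / 3.0)
    delta = np.sign(skew) * np.sqrt((np.pi / 2.0) * t / (t + k))  # exact inversion of the skewness formula
    a = delta / np.sqrt(1.0 - delta**2)                            # shape parameter
    scale = np.sqrt(var / (1.0 - 2.0 * delta**2 / np.pi))
    loc = mean - scale * delta * np.sqrt(2.0 / np.pi)
    return a, loc, scale
```

Feeding the exact moments of a known skew normal back through this inversion recovers its parameters, which is the consistency property a moment-matching scheme relies on.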
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
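In practice the compared tests are one-line calls in SciPy; the toy groups below are illustrative, not the paper's simulated data sets:

```python
from scipy import stats

# Three clearly separated groups (illustrative data, not the paper's simulated samples)
g1 = [1, 2, 3, 4, 5]
g2 = [6, 7, 8, 9, 10]
g3 = [11, 12, 13, 14, 15]

f_stat, p_anova = stats.f_oneway(g1, g2, g3)   # classical one-way ANOVA
h_stat, p_kw = stats.kruskal(g1, g2, g3)       # rank-based Kruskal-Wallis alternative
```

Both tests detect this location difference; the abstract's point is that when populations are distinctly non-normal, the rank-based test should be the primary tool.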
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. 
Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
Evidence for the Gompertz curve in the income distribution of Brazil 1978-2005
NASA Astrophysics Data System (ADS)
Moura, N. J., Jr.; Ribeiro, M. B.
2009-01-01
This work presents an empirical study of the evolution of the personal income distribution in Brazil. Yearly samples available from 1978 to 2005 were studied and evidence was found that the complementary cumulative distribution of personal income for 99% of the economically less favorable population is well represented by a Gompertz curve of the form G(x) = exp [exp (A-Bx)], where x is the normalized individual income. The complementary cumulative distribution of the remaining 1% richest part of the population is well represented by a Pareto power law distribution P(x) = βx^(-α). This result means that, similarly to other countries, Brazil's income distribution is characterized by a well-defined two-class system. The parameters A, B, α, β were determined by a mixture of boundary conditions, normalization and fitting methods for every year in the time span of this study. Since the Gompertz curve is characteristic of growth models, its presence here suggests that these patterns in income distribution could be a consequence of the growth dynamics of the underlying economic system. In addition, we found that the percentage share of both the Gompertzian and Paretian components relative to the total income shows an approximate cycling pattern, with periods of about 4 years, whose maximum and minimum peaks in each component alternate about every 2 years. This finding suggests that the growth dynamics of Brazil's economic system might possibly follow a Goodwin-type class model dynamics based on the application of the Lotka-Volterra equation to economic growth and cycle.
Strongly interacting high-partial-wave Bose gas
NASA Astrophysics Data System (ADS)
Yao, Juan; Qi, Ran; Zhang, Pengfei
2018-04-01
Motivated by recent experimental progress, we investigate p- and d-wave resonant Bose gases. An explanation of the Nozières and Schmitt-Rink (NSR) scheme in terms of a two-channel model is provided. Different from the s-wave case, a high-partial-wave interaction supports a quasibound state in the weak-coupling regime. Within the NSR approximation, we study the equation of state, critical temperature, and particle population distributions. We clarify the effect of the quasibound state on the phase diagram and on dimer production. A multicritical point where the normal phase, atomic superfluid phase, and molecular superfluid phase meet is predicted within the phase diagram. We also show the occurrence of a resonant conversion between solitary atoms and dimers when the temperature k_BT approximates the quasibound energy.
A Predictive Model of Permeability for Fractal-Based Rough Rock Fractures During Shear
NASA Astrophysics Data System (ADS)
Huang, Na; Jiang, Yujing; Liu, Richeng; Li, Bo; Zhang, Zhenyu
This study investigates the roles of fracture roughness, normal stress and shear displacement on the fluid flow characteristics through three-dimensional (3D) self-affine fractal rock fractures, whose surfaces are generated using the modified successive random additions (SRA) algorithm. A series of numerical shear-flow tests under different normal stresses were conducted on rough rock fractures to calculate the evolutions of fracture aperture and permeability. The results show that the rough surfaces of fractal-based fractures can be described using the Hurst exponent (H), where H = 3 - Df and Df is the fractal dimension of 3D single fractures. The joint roughness coefficient (JRC) distribution of fracture profiles follows a Gauss function, with a negative linear relationship between H and average JRC. The frequency curves of aperture distributions change from sharp to flat with increasing shear displacement, indicating a more anisotropic and heterogeneous flow pattern. Both the mean aperture and the permeability of the fracture increase with increasing surface roughness and decreasing normal stress. At the beginning of shear, the permeability increases remarkably and then gradually becomes steady. A predictive model of permeability using the mean mechanical aperture is proposed, and its validity is verified by comparison with experimental results reported in the literature. The proposed model provides a simple method to approximate the permeability of fractal-based rough rock fractures during shear using the fracture aperture distribution, which can be easily obtained from digitized fracture surface information.
About normal distribution on SO(3) group in texture analysis
NASA Astrophysics Data System (ADS)
Savyolova, T. I.; Filatov, S. V.
2017-12-01
This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. Those NDs are: the Fisher normal distribution (FND), the Bunge normal distribution (BND), the central normal distribution (CND) and the wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on the SO(3) group. CND is a subcase of normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). WND is motivated by the CLT in R^3 and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
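The WND construction, a normal distribution in R^3 wrapped onto SO(3), can be sketched with SciPy's rotation utilities (the concentration parameter and sample size are arbitrary choices):

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
sigma = 0.4                                   # spread of the rotation angle, radians

# Draw rotation vectors from an isotropic normal in R^3, then map them onto SO(3)
rotvecs = rng.normal(scale=sigma, size=(1000, 3))
samples = Rotation.from_rotvec(rotvecs)       # wrapped-normal-motivated sample on SO(3)
matrices = samples.as_matrix()                # proper orthogonal 3x3 matrices
```

Every sample is a valid rotation (orthogonal with unit determinant), which is the kind of Monte Carlo draw used to model texture components.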
NASA Astrophysics Data System (ADS)
Xu, Yu-Lin
The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and they are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first-order approximation, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.
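The modified-Newton idea, falling back to shorter descent steps when the full Newton step fails to reduce the residual, can be illustrated on a toy nonlinear least-squares problem (a generic sketch, not Jefferys' algorithm or the orbit condition equations):

```python
import numpy as np

def fit_exponential(t, y, a0, b0, max_iter=50):
    """Fit y ~ a*exp(b*t) by Gauss-Newton, halving the step when it fails to descend."""
    a, b = a0, b0

    def cost(a_, b_):
        r = y - a_ * np.exp(b_ * t)
        return r @ r

    for _ in range(max_iter):
        e = np.exp(b * t)
        r = y - a * e
        J = np.column_stack([-e, -a * t * e])         # Jacobian of the residuals
        step = np.linalg.solve(J.T @ J, -J.T @ r)     # full (Gauss-)Newton step
        lam = 1.0
        while cost(a + lam * step[0], b + lam * step[1]) > cost(a, b) and lam > 1e-8:
            lam *= 0.5                                # fall back to a shorter descent step
        a += lam * step[0]
        b += lam * step[1]
    return a, b
```

The step-halving safeguard plays the role described in the abstract: the plain Newton iteration converges fast near a good starting point, while the damped steps rescue the iteration when the full step would diverge.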
Elastic microfibril distribution in the cornea: Differences between normal and keratoconic stroma.
White, Tomas L; Lewis, Philip N; Young, Robert D; Kitazawa, Koji; Inatomi, Tsutomu; Kinoshita, Shigeru; Meek, Keith M
2017-06-01
The optical and biomechanical properties of the cornea are largely governed by the collagen-rich stroma, a layer that represents approximately 90% of the total thickness. Within the stroma, the specific arrangement of superimposed lamellae provides the tissue with tensile strength, whilst the spatial arrangement of individual collagen fibrils within the lamellae confers transparency. In keratoconus, this precise stromal arrangement is lost, resulting in ectasia and visual impairment. In the normal cornea, we previously characterised the three-dimensional arrangement of an elastic fiber network spanning the posterior stroma from limbus-to-limbus. In the peripheral cornea/limbus there are elastin-containing sheets or broad fibers, most of which become microfibril bundles (MBs) with little or no elastin component when reaching the central cornea. The purpose of the current study was to compare this network with the elastic fiber distribution in post-surgical keratoconic corneal buttons, using serial block face scanning electron microscopy and transmission electron microscopy. We have demonstrated that the MB distribution is very different in keratoconus. MBs are absent from a region of stroma anterior to Descemet's membrane, an area that is densely populated in normal cornea, whilst being concentrated below the epithelium, an area in which they are absent in normal cornea. We contend that these latter microfibrils are produced as a biomechanical response to provide additional strength to the anterior stroma in order to prevent tissue rupture at the apex of the cone. A lack of MBs anterior to Descemet's membrane in keratoconus would alter the biomechanical properties of the tissue, potentially contributing to the pathogenesis of the disease. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Boardsen, Scott A.; Hospodarsky, George B.; Kletzing, Craig A.; Engebretson, Mark J.; Pfaff, Robert F.; Wygant, John R.; Kurth, William S.; Averkamp, Terrance F.; Bounds, Scott R.; Green, Jim L.;
2016-01-01
We present a statistical survey of the latitudinal structure of the fast magnetosonic wave mode detected by the Van Allen Probes spanning the time interval of 21 September 2012 to 1 August 2014. We show that statistically, the latitudinal occurrence of the wave frequency (f) normalized by the local proton cyclotron frequency (f_cP) has a distinct funnel-shaped appearance in latitude about the magnetic equator, similar to that found in case studies. By comparing the observed E/B ratios with the model E/B ratio, using the observed plasma density and background magnetic field magnitude as input to the model, we show that this mode is consistent with the extraordinary (whistler) mode at wave normal angles (θ_k) near 90°. Performing polarization analysis on synthetic waveforms composed from a superposition of extraordinary-mode plane waves with θ_k randomly chosen between 87° and 90°, we show that the uncertainty in the derived wave normal is substantially broadened, with a tail extending down to θ_k of 60°, suggesting that another approach is necessary to estimate the true distribution of θ_k. We find that the histograms of the synthetically derived ellipticities and θ_k are consistent with the observations of ellipticities and θ_k derived using polarization analysis. We make estimates of the median equatorial θ_k by comparing observed and model ray-tracing frequency-dependent probability occurrence with latitude, and we give preliminary frequency-dependent estimates of the equatorial θ_k distribution around noon and 4 R_E, with the median approximately 4° to 7° from 90° at f/f_cP = 2 and dropping to approximately 0.5° from 90° at f/f_cP = 30. The occurrence of waves in this mode peaks around noon near the equator at all radial distances, and we find that the overall intensity of these waves increases with AE*, similar to the findings of other studies.
Thornton, B S; Hung, W T; Irving, J
1991-01-01
The response decay data of living cells subjected to electric polarization are associated with their relaxation distribution function (RDF), which can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than for normal cells and might be used as parameters to differentiate them and their associated tissues.
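Extracting the full RDF requires the Laguerre-based inverse Laplace machinery; as a simplified stand-in, a discrete two-relaxation-time fit to decay data shows how an average relaxation time can be derived (synthetic data and illustrative parameters, not the paper's method):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a1, tau1, a2, tau2):
    """Discrete two-term stand-in for a continuous relaxation distribution."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 10.0, 200)
y = decay(t, 0.7, 0.5, 0.3, 3.0)                 # synthetic, noise-free decay data

params, _ = curve_fit(decay, t, y, p0=[0.5, 0.3, 0.5, 2.0])
a1, tau1, a2, tau2 = params
mean_tau = (a1 * tau1 + a2 * tau2) / (a1 + a2)   # amplitude-weighted average relaxation time
```

The amplitude-weighted `mean_tau` is the discrete analogue of the average relaxation time the paper extracts from the RDF.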
Spectral analysis of groove spacing on Ganymede
NASA Technical Reports Server (NTRS)
Grimm, R. E.; Squyres, S. W.
1985-01-01
A quantitative analysis of groove spacing on Ganymede is described. Fourier transforms of a large number of photometric profiles across groove sets are calculated and the resulting power spectra are examined for the position and strength of peaks representing topographic periodicities. The geographic and global statistical distribution of groove wavelengths are examined, and these data are related to models of groove tectonism. It is found that groove spacing on Ganymede shows an approximately log-normal distribution with a minimum of about 3.5 km, a maximum of about 17 km, and a mean of 8.4 km. Groove spacing tends to be quite regular within a single groove set but can vary substantially from one groove set to another within a single geographic region.
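The spectral procedure, Fourier transforming a topographic profile and locating the dominant peak, can be sketched on a synthetic profile (the 8.4 km spacing is borrowed from the abstract's mean; the sampling choices are arbitrary):

```python
import numpy as np

dx = 0.5                                        # sample spacing along the profile, km
x = np.arange(512) * dx
spacing_km = 8.4                                # mean groove spacing from the abstract
profile = np.cos(2.0 * np.pi * x / spacing_km)  # idealized periodic groove topography

power = np.abs(np.fft.rfft(profile)) ** 2       # power spectrum of the profile
freqs = np.fft.rfftfreq(profile.size, d=dx)     # spatial frequency, cycles per km

peak = np.argmax(power[1:]) + 1                 # skip the zero-frequency (mean) bin
recovered_km = 1.0 / freqs[peak]                # dominant topographic wavelength
```

The location of the spectral peak recovers the groove spacing to within the frequency resolution of the profile, which is the quantity tabulated across groove sets in the study.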
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiles, A. N.; Loyalka, S. K.; Izaguirre, E. W.
Purpose: To develop a tissue model of Cherenkov radiation emitted from the skin surface during external beam radiotherapy. Imaging Cherenkov radiation emitted from human skin allows visualization of the beam position and potentially surface dose estimates, and our goal is to characterize the optical properties of these emissions. Methods: We developed a Monte Carlo model of Cherenkov radiation generated in a semi-infinite tissue slab by megavoltage x-ray beams with optical transmission properties determined by a two-layered skin model. We separate the skin into a dermal and an epidermal layer in our model, where distinct molecular absorbers modify the Cherenkov intensitymore » spectrum in each layer while we approximate the scattering properties with Mie and Rayleigh scattering from the highly structured molecular organization found in human skin. Results: We report on the estimated distributions of the Cherenkov wavelength spectrum, emission angles, and surface distribution for the modeled irradiated skin surface. The expected intensity distribution of Cherenkov radiation emitted from skin shows a distinct intensity peak around 475 nm, the blue region of the visible spectrum, between a pair of optical absorption bands in hemoglobin and a broad plateau beginning near 600 nm and extending to at least 700 nm where melanin and hemoglobin absorption are both low. We also find that the Cherenkov intensity decreases with increasing angle from the surface normal, the majority being emitted within 20 degrees of the surface normal. Conclusion: Our estimate of the spectral distribution of Cherenkov radiation emitted from skin indicates an advantage to using imaging devices with long wavelength spectral responsivity. 
We also expect the most efficient imaging to be near the surface normal where the intensity is greatest; although for contoured surfaces, the relative intensity across the surface may appear to vary due to decreasing Cherenkov intensity with increased angle from the skin normal. This research was supported in part by a GAANN Fellowship from the Department of Education.
NASA Astrophysics Data System (ADS)
Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.
2012-04-01
Data from a field survey of the 2011 tsunami in the Sanriku area of Japan are presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated by a theoretical log-normal curve [Choi et al., 2002]. The characteristics of the distribution functions derived from the runup-heights data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
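The log-normal fit to runup heights can be sketched as below, with synthetic data standing in for the survey measurements; the scale and spread parameters are assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import norm

# Sketch with synthetic data (not the survey measurements): fit a log-normal
# curve by the moments of log-heights and compare it with the empirical
# distribution, as the abstract's distribution-function analysis does.
rng = np.random.default_rng(0)
heights = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=5000)  # metres, assumed

mu = float(np.log(heights).mean())      # fitted log-scale mean
sigma = float(np.log(heights).std())    # fitted log-scale standard deviation

# Empirical vs. fitted exceedance probability at a 20 m runup level
empirical = float((heights > 20.0).mean())
fitted = float(1.0 - norm.cdf((np.log(20.0) - mu) / sigma))
```

With enough independent observations the empirical and fitted exceedance curves agree closely; the abstract's point is that clustered coastal measurements reduce the effective number of independent points.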
Wave turbulence in shallow water models.
Clark di Leoni, P; Cobelli, P J; Mininni, P D
2014-06-01
We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to 2048^2 points. In all simulations, the Froude number varies between 0.015 and 0.05, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, nonlinear waves are nondispersive and the spectrum of potential energy is compatible with ∼k^{-2} scaling. For deeper (Boussinesq) flows, the nonlinear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, ∼k^{-4/3}. In this latter case, the nonlinear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgess, R.M.; McKinney, R.A.; Brown, W.A.
1996-08-01
In this study, the three phase distributions (i.e., dissolved, colloidal, and particulate) of approximately 75 PCB congeners were measured in a marine sediment core from New Bedford Harbor, MA. These distributions are the first report of colloid-PCB interactions in an environmentally contaminated sediment. Colloids <1.2 μm in size were isolated from interstitial waters using reverse-phase chromatography with size-selected C18. Regardless of solubility or chlorination, the majority of PCBs were associated with the particulate phase. PCBs were distributed in filtered interstitial waters between colloidal and dissolved phases as a function of solubility and degree of chlorination. Interstitial dissolved PCB concentrations generally agreed with literature-reported solubilities. The magnitude of colloid-PCB interactions increased with decreasing PCB solubility and increasing PCB chlorination. Di- and trichlorinated PCBs were approximately 40% and 65% colloidally bound, respectively, while tetra-, penta-, hexa-, hepta-, and octachlorinated PCBs were about 80% colloidally bound. As core depth increased, the magnitude of PCB-colloid interactions also increased. The relationships of the organic carbon-normalized colloidal partitioning coefficient (K_coc) to K_ow for several PCB congeners were not linear and suggest that interstitial waters were not equilibrated. 62 refs., 8 figs., 3 tabs.
Range and Energy Straggling in Ion Beam Transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tai, Hsiang
2000-01-01
A first-order approximation to the range and energy straggling of ion beams is given as a normal distribution for which the standard deviation is estimated from the fluctuations in energy loss events. The standard deviation is calculated by assuming scattering from free electrons with a long range cutoff parameter that depends on the mean excitation energy of the medium. The present formalism is derived by extrapolating Payne's formalism to low energy by systematic energy scaling and to greater depths of penetration by a second-order perturbation. Limited comparisons are made with experimental data.
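The first-order picture above, energy straggling as an approximately normal distribution whose variance accumulates from many small loss events, can be illustrated with a toy simulation; the event-loss scale and distribution below are assumptions, not the paper's formalism.

```python
import numpy as np

# Toy check of the first-order picture (illustrative numbers only): total
# energy loss is a sum of many small collision losses, so by the central
# limit theorem it is approximately normal, with variance equal to the
# accumulated single-event variance.
rng = np.random.default_rng(3)
losses = rng.exponential(scale=0.05, size=(5000, 1000))  # MeV per event, assumed
total = losses.sum(axis=1)                               # straggled total loss

mean, sigma = total.mean(), total.std()
# Fraction within one sigma should be close to the normal value of 0.683
within_one_sigma = float(np.mean(np.abs(total - mean) < sigma))
```

The normal approximation degrades for thin absorbers (few collision events), which is why the abstract restricts the claim to a first-order approximation.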
Characteristics of random inlet pressure fluctuations during flights of F-111A airplane
NASA Technical Reports Server (NTRS)
Costakis, W. G.
1977-01-01
Compressor face dynamic total pressures from four F-111 flights were analyzed. Statistics of the nonstationary data were investigated by analyzing the data in a quasi-stationary manner. Changes in the character of the dynamic signal are investigated as functions of flight conditions, time in flight, and location at the compressor face. The results, which are presented in the form of rms values, histograms, and power spectrum plots, show that the shape of the power spectra remains relatively flat while the histograms have an approximate normal distribution.
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
NASA Astrophysics Data System (ADS)
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
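The Monte Carlo approximation of EDF critical values can be sketched as follows. This is a simplified stand-in: the study fits the generalized normal by L-moments, whereas the sketch uses a plain normal with moment estimates so it stays self-contained; the essential point, re-estimating parameters from each simulated sample before computing the statistic, is the same.

```python
import numpy as np
from scipy.stats import norm

# Simplified sketch of Monte Carlo critical values for an EDF statistic
# when parameters are re-estimated from each sample (here Kolmogorov-
# Smirnov against a fitted normal; the paper uses the GNO with L-moments).
def ks_stat(sample):
    """KS distance from the empirical CDF to a normal CDF fitted to the sample."""
    x = np.sort(sample)
    n = x.size
    cdf = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(1)
n, reps = 50, 2000
stats = [ks_stat(rng.standard_normal(n)) for _ in range(reps)]
crit_95 = float(np.quantile(stats, 0.95))   # approximate 5% critical value
```

Because parameters are estimated from the data, this critical value is well below the classical KS table value, which is exactly why simulated critical values (as in the study) are needed.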
NASA Technical Reports Server (NTRS)
Glenny, R. W.; Robertson, H. T.; Hlastala, M. P.
2000-01-01
To determine whether vasoregulation is an important cause of pulmonary perfusion heterogeneity, we measured regional blood flow and gas exchange before and after giving prostacyclin (PGI₂) to baboons. Four animals were anesthetized with ketamine and mechanically ventilated. Fluorescent microspheres were used to mark regional perfusion before and after PGI₂ infusion. The lungs were subsequently excised, dried inflated, and diced into approximately 2-cm³ pieces (n = 1,208-1,629 per animal) with the spatial coordinates recorded for each piece. Blood flow to each piece was determined for each condition from the fluorescent signals. Blood flow heterogeneity did not change with PGI₂ infusion. Two other measures of spatial blood flow distribution, the fractal dimension and the spatial correlation, did not change with PGI₂ infusion. Alveolar-arterial O₂ differences did not change with PGI₂ infusion. We conclude that, in normal primate lungs during normoxia, vasomotor tone is not a significant cause of perfusion heterogeneity. Despite the heterogeneous distribution of blood flow, active regulation of regional perfusion is not required for efficient gas exchange.
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
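The core idea, estimating the Box-Cox transformation factor by maximum likelihood from the data rather than choosing it by hand, can be sketched with `scipy.stats.boxcox`; the synthetic "phenotype" below is an assumption for illustration (log-normal, for which the true factor is 0, i.e. a log transformation).

```python
import numpy as np
from scipy.stats import boxcox

# Minimal sketch: ML estimation of the Box-Cox transformation factor.
# The synthetic phenotype is log-normal, so the true factor is 0.
rng = np.random.default_rng(2)
phenotype = rng.lognormal(mean=1.0, sigma=0.4, size=500)

transformed, lam = boxcox(phenotype)   # lam: ML estimate of the factor
```

In the QTL setting described, this factor would be estimated jointly with the QTL parameters inside the likelihood; the sketch shows only the transformation step.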
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P (rho) of the effective resistance rho in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form sigma proportional to exp (-kappar) , where kappa is a measure of disorder and r is a random number, 0< or = r < or =1 . We find that in both the usual strong-disorder regime L/ kappa(nu) >1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/ kappa(nu) <1 (very sensitive to such a removal) the distribution depends only on L/kappa(nu) and can be well approximated by a log-normal function with dispersion b kappa(nu) /L , where b is a coefficient which depends on the type of lattice, and nu is the correlation critical exponent.
Lung Cancer Pathological Image Analysis Using a Hidden Potts Model
Li, Qianyun; Yi, Faliu; Wang, Tao; Xiao, Guanghua; Liang, Faming
2017-01-01
Nowadays, many biological data are acquired via images. In this article, we study the pathological images scanned from 205 patients with lung cancer with the goal to find out the relationship between the survival time and the spatial distribution of different types of cells, including lymphocyte, stroma, and tumor cells. Toward this goal, we model the spatial distribution of different types of cells using a modified Potts model for which the parameters represent interactions between different types of cells and estimate the parameters of the Potts model using the double Metropolis-Hastings algorithm. The double Metropolis-Hastings algorithm allows us to simulate samples approximately from a distribution with an intractable normalizing constant. Our numerical results indicate that the spatial interaction between the lymphocyte and tumor cells is significantly associated with the patient’s survival time, and it can be used together with the cell count information to predict the survival of the patients. PMID:28615918
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
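The combination rule described has a closed form: if each factor is approximately log-normal, the product is log-normal with log-mean and log-variance equal to the sums, so any upper percentile is available directly. The (log-mean, log-sd) pairs below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the combined uncertainty factor: percentile of a product of
# approximately log-normal factors. The four (log-mean, log-sd) pairs are
# illustrative, not the paper's estimates.
log_params = [(np.log(10) / 2, 0.8), (np.log(10) / 2, 0.6),
              (np.log(10) / 2, 0.7), (np.log(10) / 2, 0.5)]

mu = sum(m for m, s in log_params)                     # log-mean of the product
sd = float(np.sqrt(sum(s**2 for m, s in log_params)))  # log-sd of the product

combined_95 = float(np.exp(mu + norm.ppf(0.95) * sd))  # 95th-percentile factor
default_product = 10.0 ** 4                            # four default factors of 10
```

With these illustrative numbers the 95th percentile of the product falls well below the conventional product of four default factors of 10, the kind of comparison the approach is designed to enable.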
Proliferation and apoptosis in malignant and normal cells in B-cell non-Hodgkin's lymphomas.
Stokke, T.; Holte, H.; Smedshammer, L.; Smeland, E. B.; Kaalhus, O.; Steen, H. B.
1998-01-01
We have examined apoptosis and proliferation in lymph node cell suspensions from patients with B-cell non-Hodgkin's lymphoma using flow cytometry. A method was developed which allowed estimation of the fractions of apoptotic cells and cells in the S-phase of the cell cycle simultaneously with tumour-characteristic light chain expression. Analysis of the tumour S-phase fraction and the tumour apoptotic fraction in lymph node cell suspensions from 95 B-cell non-Hodgkin's lymphoma (NHL) patients revealed a non-normal distribution for both parameters. The median fraction of apoptotic tumour cells was 1.1% (25th/75th percentiles 0.5%, 2.7%). In the same samples, the median fraction of apoptotic normal cells was higher than for the tumour cells (1.9%; 25th/75th percentiles 0.7%, 4.0%; P = 0.03). The median fraction of tumour cells in S-phase was 1.4% (25th/75th percentiles 0.8%, 4.8%), the median fraction of normal cells in S-phase was significantly lower than for the tumour cells (1.0%; 25th/75th percentiles 0.6%, 1.9%; P = 0.004). When the number of cases was plotted against the logarithm of the S-phase fraction of the tumour cells, a distribution with two Gaussian peaks was needed to fit the data. One peak was centred around an S-phase fraction of 0.9%; the other was centred around 7%. These peaks were separated by a valley at approximately 3%, indicating that the S-phase fraction in NHL can be classified as 'low' (< 3%) or 'high' (> 3%), independent of the median S-phase fraction. The apoptotic fractions were log-normally distributed. The median apoptotic fraction was higher (1.5%) in the 'high' S-phase group than in the 'low' S-phase group (0.8%; P = 0.02). However, there was no significant correlation between the two parameters (P > 0.05). PMID:9667654
Gorodkin, Jan; Cirera, Susanna; Hedegaard, Jakob; Gilchrist, Michael J; Panitz, Frank; Jørgensen, Claus; Scheibye-Knudsen, Karsten; Arvin, Troels; Lumholdt, Steen; Sawera, Milena; Green, Trine; Nielsen, Bente J; Havgaard, Jakob H; Rosenkilde, Carina; Wang, Jun; Li, Heng; Li, Ruiqiang; Liu, Bin; Hu, Songnian; Dong, Wei; Li, Wei; Yu, Jun; Wang, Jian; Stærfeldt, Hans-Henrik; Wernersson, Rasmus; Madsen, Lone B; Thomsen, Bo; Hornshøj, Henrik; Bujie, Zhan; Wang, Xuegang; Wang, Xuefei; Bolund, Lars; Brunak, Søren; Yang, Huanming; Bendixen, Christian; Fredholm, Merete
2007-01-01
Background Knowledge of the structure of gene expression is essential for mammalian transcriptomics research. We analyzed a collection of more than one million porcine expressed sequence tags (ESTs), of which two-thirds were generated in the Sino-Danish Pig Genome Project and one-third are from public databases. The Sino-Danish ESTs were generated from one normalized and 97 non-normalized cDNA libraries representing 35 different tissues and three developmental stages. Results Using the Distiller package, the ESTs were assembled to roughly 48,000 contigs and 73,000 singletons, of which approximately 25% have a high confidence match to UniProt. Approximately 6,000 new porcine gene clusters were identified. Expression analysis based on the non-normalized libraries resulted in the following findings. The distribution of cluster sizes is scaling invariant. Brain and testes are among the tissues with the greatest number of different expressed genes, whereas tissues with more specialized function, such as developing liver, have fewer expressed genes. There are at least 65 high confidence housekeeping gene candidates and 876 cDNA library-specific gene candidates. We identified differential expression of genes between different tissues, in particular brain/spinal cord, and found patterns of correlation between genes that share expression in pairs of libraries. Finally, there was remarkable agreement in expression between specialized tissues according to Gene Ontology categories. Conclusion This EST collection, the largest to date in pig, represents an essential resource for annotation, comparative genomics, assembly of the pig genome sequence, and further porcine transcription studies. PMID:17407547
Zavgorodni, S F
2001-09-01
With modern urbanization trends, situations occur where a general-purpose multi-storey building would have to be constructed adjacent to a radiotherapy facility. In cases where the building would not be in the primary x-ray beam, "skyshine" radiation is normally accounted for. The radiation scattered from the roof side-wise towards the building can also be a major contributing factor. However, neither the NCRP reports nor recently published literature considered this. The current paper presents a simple formula to calculate the dose contribution from scattered radiation in such circumstances. This equation includes workload, roof thickness, field size, distance to the reference point and a normalized angular photon distribution function f(θ), where θ is the angle between the central axis of the primary beam and the photon direction. The latter was calculated by the Monte Carlo method (EGS4 code) for each treatment machine in our department. For angles θ exceeding approximately 20 degrees (i.e., outside the primary beam and its penumbra) the angular distribution function f(θ) was found to have little dependence on the shielding barrier thickness and the beam energy. An analytical approximation of this function has been obtained. Measurements have been performed to verify this calculation technique. An agreement within 40% was found between calculated and measured dose rates. The latter combined the scattered radiation and the dose from "skyshine" radiation. Some overestimation of the dose resulted from uncertainties in the radiotherapy building drawings and in evaluation of the "skyshine" contribution.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. 
Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
Xing, Chao; Elston, Robert C
2006-07-01
The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed with mean and variance that depend on marker informativity, marker density, specified genetic model, number of pedigrees, pedigree structure, and pattern of affection status. When the multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. It is, however, proven in (Stat Med 23:1023-1038, 2004) that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
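The quantity on which "promising" is defined can be sketched as follows: conditional power at an interim analysis under a normal model, assuming the trend estimated at the interim continues. This is the standard one-sided formula, not a criterion from the cited papers; the significance level and information fraction are assumptions.

```python
from math import sqrt
from scipy.stats import norm

# Hedged sketch: conditional power at information fraction t given the
# interim test statistic z_interim, with the drift estimated from the
# current trend (standard one-sided formulation; alpha is assumed).
def conditional_power(z_interim, t, alpha=0.025):
    theta = z_interim / sqrt(t)          # estimated drift at full information
    z_alpha = norm.ppf(1 - alpha)
    num = z_alpha - sqrt(t) * z_interim - theta * (1 - t)
    return float(1 - norm.cdf(num / sqrt(1 - t)))
```

For example, an interim z of 1.5 at half information gives conditional power above 50% under this formula, i.e. a "promising" interim result in the sense described above.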
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power aspects of transformed methods in a common trend two-phase regression setting are assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased the power of detection, in comparison with the corresponding original (untransformed) methods. The transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
Zheng, Karen S; Small, William C; Mittal, Pardeep K; Cai, Qingpo; Kang, Jian; Moreno, Courtney C
2016-01-01
The purpose was to determine the normal distribution of distended colon volumes as a guide for rectal contrast material administration protocols. All computed tomography colonography studies performed at Emory University Hospital, Atlanta, Georgia, between January 2009 and January 2015, were reviewed retrospectively. In total, 85 subjects were included in the analysis (64% [54 of 85] female and 36% [31 of 85] male). Mean patient age was 65 years (range: 42-86 y). Distended colon volumes were determined from colon length and transaxial diameter measurements made using a 3-dimensional workstation. Age, sex, race, height, weight, and body mass index were recorded. The normal distributions of distended colon volumes and lengths were determined. Correlations between colonic volume and colonic length, and demographic variables were assessed. Mean colon volume was 2.1 L (range: 0.7-4.4 L). Nearly 17% of patients had a distended colonic volume of >3 L. Mean colon length was 197 cm (range: 118-285 cm). A weak negative correlation was found between age and colonic volume (r = -0.221; P = 0.04). A weak positive correlation was found between body mass index and colonic length (r = 0.368; P = 0.007). Otherwise, no significant correlations were found for distended colonic volume or length and demographic variables. In conclusion, an average of approximately 2 L of contrast material may be necessary to achieve full colonic opacification. This volume is larger than previously reported volumes (0.8-1.5 L) for rectal contrast material administration protocols. Copyright © 2015 Mosby, Inc. All rights reserved.
Miedema, H M; Oudshoorn, C G
2001-01-01
We present a model of the distribution of noise annoyance with the mean varying as a function of the noise exposure. Day-night level (DNL) and day-evening-night level (DENL) were used as noise descriptors. Because the entire annoyance distribution has been modeled, any annoyance measure that summarizes this distribution can be calculated from the model. We fitted the model to data from noise annoyance studies for aircraft, road traffic, and railways separately. Polynomial approximations of relationships implied by the model for the combinations of the following exposure and annoyance measures are presented: DNL or DENL, and percentage "highly annoyed" (cutoff at 72 on a scale of 0-100), percentage "annoyed" (cutoff at 50 on a scale of 0-100), or percentage (at least) "a little annoyed" (cutoff at 28 on a scale of 0-100). These approximations are very good, and they are easier to use for practical calculations than the model itself, because the model involves a normal distribution. Our results are based on the same data set that was used earlier to establish relationships between DNL and percentage highly annoyed. In this paper we provide better estimates of the confidence intervals due to the improved model of the relationship between annoyance and noise exposure. Moreover, relationships using descriptors other than DNL and percentage highly annoyed, which are presented here, have not been established earlier on the basis of a large dataset. PMID:11335190
On the distribution of scaling hydraulic parameters in a spatially anisotropic banana field
NASA Astrophysics Data System (ADS)
Regalado, Carlos M.
2005-06-01
When modeling soil hydraulic properties at field scale it is desirable to approximate the variability in a given area by means of some scaling transformations which relate spatially variable local hydraulic properties to global reference characteristics. Seventy soil cores were sampled within a drip irrigated banana plantation greenhouse on a 14×5 array of 2.5 m×5 m rectangles at 15 cm depth, to represent the field scale variability of flow related properties. Saturated hydraulic conductivity and water retention characteristics were measured in these 70 soil cores. van Genuchten water retention curves (WRC) with optimized m (m ≠ 1 − 1/n) were fitted to the WR data and a general Mualem-van Genuchten model was used to predict hydraulic conductivity functions for each soil core. A scaling law of the form ν_i = α_i ν_i* was fitted to the soil hydraulic data, such that the original hydraulic parameters ν_i were scaled down to a reference curve with parameters ν_i*. An analytical expression, in terms of Beta functions, for the average suction value h_c necessary to apply the above scaling method was obtained. A robust optimization procedure with fast convergence to the global minimum is used to find the optimum h_c, such that dispersion is minimized in the scaled data set. Via the Box-Cox transformation P(τ) = (α_i^τ − 1)/τ, Box-Cox normality plots showed that the scaling factors for suction (α_h) and hydraulic conductivity (α_K) were approximately log-normally distributed (i.e. τ = 0), as would be expected for such dynamic properties involving flow. By contrast, static soil-related properties such as α_θ were found to be closely Gaussian, although a power τ = 3/4 was best for approaching normality. 
Application of four different normality tests (Anderson-Darling, Shapiro-Wilk, Kolmogorov-Smirnov and χ² goodness-of-fit tests) rendered some contradictory results among them, thus suggesting that this widely extended practice is not recommended for providing a suitable probability density function for the scaling parameters α_i. Some indications of the origin of these disagreements, in terms of population size and test constraints, are pointed out. Visual inspection of normal probability plots can also lead to erroneous results. The scaling parameters α_θ and α_K show a sinusoidal spatial variation coincident with the underlying alignment of banana plants on the field. Such anisotropic distribution is explained in terms of porosity variations due to processes promoting soil degradation, such as surface desiccation and soil compaction induced by tillage and localized irrigation of banana plants, and it is quantified by means of cross-correlograms.
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
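The long-time (steady-state) limit that the search procedure targets can be illustrated on a toy network. The 5-site transition matrix below is a random placeholder, not the autopsy-derived 50-site matrix:

```python
import numpy as np

# Toy network: 5 sites instead of the paper's 50; transition probabilities
# are random placeholders, not the autopsy-derived values.
rng = np.random.default_rng(1)
n = 5
T = rng.random((n, n))
T /= T.sum(axis=1, keepdims=True)   # make each row a probability distribution

# Long-time limit of an ensemble of random walkers: iterate the chain
pi = np.full(n, 1.0 / n)            # initial distribution over sites
for _ in range(500):
    pi = pi @ T                     # one step of the Markov chain
```

After enough steps, `pi` no longer changes under multiplication by `T`; the paper's inverse problem runs this logic backwards, adjusting the entries of `T` until this fixed point matches the observed metastatic distribution.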
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H0, that the distribution is a pure power law: F(x) ∝ x^(-α). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law in which the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>Cmax) - log Cmax distribution.
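As a sketch of the null-hypothesis setup (not the bending statistic itself, whose definition is in the paper), a pure power-law sample can be generated by inverse-transform sampling and its slope recovered by maximum likelihood; all parameter values here are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, xmin, n = 2.5, 1.0, 5000   # assumed slope and lower cutoff

# Inverse-transform sampling from f(x) ~ x^(-alpha) for x >= xmin
u = rng.random(n)
x = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood estimate of the slope (Hill estimator)
alpha_hat = 1.0 + n / np.log(x / xmin).sum()
```

A test like the bending statistic asks whether the locally estimated slope drifts systematically with x, rather than just whether a single global slope fits.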
R/S analysis of reaction time in Neuron Type Test for human activity in civil aviation
NASA Astrophysics Data System (ADS)
Zhang, Hong-Yan; Kang, Ming-Cui; Li, Jing-Qiang; Liu, Hai-Tao
2017-03-01
Human factors have become the most serious problem leading to accidents in civil aviation, which stimulates the design and analysis of the Neuron Type Test (NTT) system to explore the intrinsic properties and patterns behind the behaviors of professionals and students in civil aviation. In the experiment, normal practitioners' reaction time sequences, collected from the NTT, approximately exhibit a log-normal distribution. We apply the χ2 test to compute the goodness-of-fit, transforming the time sequences with the Box-Cox transformation, in order to cluster practitioners. The long-term correlation of each individual practitioner's time sequence is represented by the Hurst exponent via Rescaled Range Analysis, also known as Range/Standard deviation (R/S) analysis. The differing Hurst exponents suggest the existence of different collective behaviors and different intrinsic patterns of human factors in civil aviation.
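A minimal sketch of R/S analysis, with synthetic white noise standing in for the reaction-time sequences (for an uncorrelated sequence the Hurst exponent should come out near 0.5; persistent long-term correlation pushes it toward 1):

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())           # cumulative deviation
            r = dev.max() - dev.min()               # range R
            s = w.std()                             # standard deviation S
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]          # slope of log(R/S) vs log(n)

rng = np.random.default_rng(3)
white = rng.normal(size=4096)   # uncorrelated stand-in series
H = hurst_rs(white, [16, 32, 64, 128, 256])
```

Note that the classical R/S estimator is biased slightly upward for short windows, so estimates a little above 0.5 are expected even for white noise.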
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
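The nonparametric AUC, and the attenuation that ignoring measurement error produces, can be sketched as follows; the biomarker distributions and the error variance are illustrative assumptions, and this is not the correction method proposed in the article:

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative biomarker values; with these parameters the true AUC is
# Phi(1/sqrt(2)) ~ 0.76.
cases = rng.normal(1.0, 1.0, 200)      # subjects who develop disease
controls = rng.normal(0.0, 1.0, 200)   # subjects who do not

# Nonparametric AUC: P(case marker > control marker), ties counted half
diff = cases[:, None] - controls[None, :]
auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# Adding measurement error attenuates the observed AUC toward 0.5
noisy_cases = cases + rng.normal(0.0, 1.0, 200)
noisy_controls = controls + rng.normal(0.0, 1.0, 200)
auc_noisy = np.mean((noisy_cases[:, None] - noisy_controls[None, :]) > 0)
```

The drop from `auc` to `auc_noisy` is the bias that a measurement-error correction aims to undo.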
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability distribution function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
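Property (2) can be checked with a minimal simulation, assuming zero-mean, equal-variance, uncorrelated components (the special case of the bivariate normal under which the speed is exactly Rayleigh); the standard deviation is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 3.0   # assumed common standard deviation of the components (m/s)

# Zero-mean, equal-variance, uncorrelated bivariate normal wind components
u = rng.normal(0.0, sigma, 100_000)   # eastward component
v = rng.normal(0.0, sigma, 100_000)   # northward component

speed = np.hypot(u, v)                           # modulus: Rayleigh distributed
rayleigh_mean = sigma * np.sqrt(np.pi / 2.0)     # theoretical Rayleigh mean
```

The sample mean of `speed` should match the theoretical Rayleigh mean sigma*sqrt(pi/2); with nonzero component means the speed would instead follow a Rice distribution.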
Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.
Kobayashi, Katsumi
2005-05-01
The data obtained from toxicity studies are examined for homogeneity of variance but usually not for normality of distribution. In this study I examined the measured items of a carcinogenicity/chronic toxicity study in rats for both homogeneity of variance and normality of distribution. Many hematology and biochemistry items showed a non-normal distribution. To test the normality of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.
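A sketch of such a normality screen on a control group; the Jarque-Bera statistic used here is an assumption (the abstract does not name a specific test), built only from sample skewness and kurtosis, and the two samples are invented:

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera normality statistic; ~ chi2(2) under normality."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    excess_kurt = np.mean(z**4) - 3.0
    return len(x) / 6.0 * (skew**2 + excess_kurt**2 / 4.0)

rng = np.random.default_rng(6)
control = rng.normal(50.0, 5.0, 200)     # a roughly normal control-group item
biochem = rng.lognormal(3.9, 0.4, 200)   # a right-skewed biochemistry item

jb_control = jarque_bera(control)   # typically small under normality
jb_biochem = jarque_bera(biochem)   # large: normality clearly violated
```

Values far above the chi2(2) 95th percentile (about 5.99) would flag the item as non-normal, pointing toward a non-parametric test.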
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence it is acknowledged by many that the normality assumption is often not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter λ = -1 leads to the best results.
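A minimal sketch of this idea, assuming illustrative Ex-Gaussian parameters for the RT sample (an Ex-Gaussian draw is simply a normal plus an exponential); the λ = -1 power transform is the reciprocal-type transform referred to above:

```python
import numpy as np

rng = np.random.default_rng(7)
# Ex-Gaussian RT sample: Normal(mu, sigma) + Exponential(tau).
# mu=400 ms, sigma=40 ms, tau=150 ms are illustrative, not from the paper.
rt = rng.normal(400.0, 40.0, 5000) + rng.exponential(150.0, 5000)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**3)

# Power transform with lambda = -1 (a reciprocal-type, Box-Cox-style transform)
lam = -1.0
rt_trans = (rt**lam - 1.0) / lam

skew_raw = skewness(rt)        # strongly positive for Ex-Gaussian data
skew_trans = skewness(rt_trans)
```

The transformed sample should be markedly less skewed than the raw RTs, which is the sense in which the transformation "normalizes" the data.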
Neutron Capture Energies for Flux Normalization and Approximate Model for Gamma-Smeared Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Liu, Yuxuan
The Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications (VERA) neutronics simulator MPACT has used a single recoverable fission energy for each fissionable nuclide, assuming that all recoverable energies come only from the fission reaction, for which capture energy is merged with fission energy. This approach involves approximations and requires improvement by separating capture energy from the merged effective recoverable energy. This report documents the procedure to generate recoverable neutron capture energies and the development of a program called CapKappa to generate capture energies. Recoverable neutron capture energies have been generated by using CapKappa with the evaluated nuclear data file (ENDF)/B-7.0 and 7.1 cross-section and decay libraries. The new capture kappas were compared to the current SCALE-6.2 and CASMO-5 capture kappas. These new capture kappas have been incorporated into the Simplified AMPX 51- and 252-group libraries, and they can be used for the AMPX multigroup (MG) libraries and the SCALE code package. The CASL VERA neutronics simulator MPACT does not include a gamma transport capability, which limits its ability to explicitly estimate local energy deposition from fission, neutron and gamma slowing down, and capture. Since the mean free path of gamma rays is typically much longer than that of neutrons, and the total gamma energy is about 10% of the total energy, the gamma-smeared power distribution differs from the fission power distribution. Explicit calculation of local energy deposition through neutron and gamma transport is important in multi-physics whole-core simulation with thermal-hydraulic feedback. Therefore, the gamma transport capability should be incorporated into the CASL neutronics simulator MPACT. However, this task will be time-consuming, as it requires developing neutron-induced gamma production and gamma cross-section libraries.
This study investigates an approximate model to estimate the gamma-smeared power distribution without performing any gamma transport calculation. A simple approximate gamma-smearing model has been investigated, based on the facts that pinwise gamma energy depositions are almost flat over a fuel assembly and that assembly-wise gamma energy deposition is proportional to kappa-fission energy deposition. The approximate gamma-smearing model works well for single-assembly cases and can partly improve the gamma-smeared power distribution for the whole-core model. Although the power distributions can be improved by the approximate gamma-smearing model, there is still an issue in explicitly obtaining local energy deposition. A new simple approach, or a gamma transport/diffusion capability, may need to be incorporated into MPACT to estimate local energy deposition for more robust multi-physics simulation.
NASA Astrophysics Data System (ADS)
Fernández-Oliveras, Alicia; Costa, Manuel F. M.; Pecho, Oscar E.; Rubiño, Manuel; Pérez, María. M.
2013-11-01
Surface properties are essential for a complete characterization of biomaterials. In restorative dentistry, the study of the surface properties of materials meant to replace dental tissues in an irreversibly diseased tooth is important to avoid harmful changes in future treatments. We have experimentally analyzed the surface characterization parameters of two different types of dental-resin composites and of pre-sintered and sintered zirconia ceramics. We studied two shades of both composite types and two sintered zirconia ceramics: colored and uncolored. Moreover, a surface treatment was applied to one specimen of each dental-resin composite. All the samples were submitted to rugometric and microtopographic non-invasive inspection with the MICROTOP.06.MFC laser microtopographer in order to gather meaningful statistical parameters such as the average roughness (Ra), the root-mean-square deviation (Rq), the skewness (Rsk), and the kurtosis of the surface height distribution (Rku). For a comparison of the different biomaterials, the uncertainties associated with the surface parameters were also determined. With respect to Ra and Rq, significant differences between the composite shades were found. Among the dental resins, the nanocomposite presented the highest values and, for the zirconia ceramics, the pre-sintered sample registered the lowest ones. The composite performance may have been due to cluster-formation variations. Except for the composites with the surface treatment, the sample surfaces had approximately a normal distribution of heights. The surface treatment applied to the composites increased the average roughness and moved the height distribution farther away from the normal distribution. The zirconia-sintering process resulted in higher average roughness without affecting the height distribution.
Zeng, Fan-Hua; Wang, Zhi-Ming; Wang, Mian-Zhen; Lan, Ya-Jia
2004-12-01
To establish a normative scale of occupational stress for professionals and put it into practice. T scores were linear transformations of raw scores, derived to have a mean of 50 and a standard deviation of 10. The scale standard of the norm was formulated in line with the principle of the normal distribution. (1) For the occupational role questionnaire (ORQ) and personal strain questionnaire (PSQ) scales, high scores suggested significant levels of occupational stress and psychological strain, respectively. T scores ≥ 70 indicated a strong probability of maladaptive stress, debilitating strain, or both. T scores of 60-69 suggested mild levels of maladaptive stress and strain, and scores of 40-59 were within one standard deviation of the mean and should be interpreted as being within the normal range. T scores < 40 indicated a relative absence of occupational stress or psychological strain. For the personal resources questionnaire (PRQ) scales, high scores indicated highly developed coping resources. T scores < 30 indicated a significant lack of coping resources. T scores of 30-39 suggested mild deficits in coping skills, scores of 40-59 indicated average coping resources, whereas higher scores (i.e., ≥ 60) indicated increasingly strong coping resources. (2) This study provided raw-score to T-score conversion tables for each OSI-R scale for the total normative sample as well as for gender and several occupational groups, including professional engineering, health care, economic business, financial business, law, education and news. OSI-R profile forms for the total normative sample, gender and occupation were also offered according to the conversion tables. The norm of occupational stress can be used as a screening tool, for organizational/occupational assessment, and as a guide to occupational choice and intervention measures.
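The raw-score to T-score conversion and the stress-scale interpretation bands can be sketched as follows, with a hypothetical normative sample standing in for the survey data:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical raw scores of one scale in a normative sample (assumed values)
raw = rng.normal(100.0, 15.0, 1000)

# Linear T-score transform: mean 50, SD 10 in the normative sample
t_scores = 50.0 + 10.0 * (raw - raw.mean()) / raw.std()

def orq_band(t):
    """Interpretation bands for the stress/strain (ORQ/PSQ) T scores."""
    if t >= 70:
        return "strong"    # strong probability of maladaptive stress/strain
    elif t >= 60:
        return "mild"      # mild maladaptive stress/strain
    elif t >= 40:
        return "normal"    # within one SD of the mean
    else:
        return "absent"    # relative absence of stress/strain
```

In practice the conversion tables published with the norm replace the linear formula for each gender and occupational group.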
NASA Astrophysics Data System (ADS)
Pu, Yang; Chen, Jun; Wang, Wubao
2014-02-01
The scattering coefficient, μs, the anisotropy factor, g, the scattering phase function, p(θ), and the angular dependence of scattering intensity distributions of human cancerous and normal prostate tissues were systematically investigated as functions of wavelength, scattering angle and scattering particle size, using Mie theory and experimental parameters. Matlab-based codes using Mie theory for both spherical and cylindrical models were developed and applied to study light propagation and the key scattering properties of prostate tissues. The optical and structural parameters of the tissue, such as the index of refraction of the cytoplasm, the size of nuclei, and the diameter of the nucleoli for cancerous and normal human prostate tissues, obtained from previous biological, biomedical and bio-optic studies, were used for the Mie theory simulation and calculation. The wavelength dependence of the scattering coefficient and anisotropy factor was investigated over the wide spectral range from 300 nm to 1200 nm. The scattering-particle-size dependence of μs, g, and the scattering angular distributions was studied for cancerous and normal prostate tissues. The results show that cancerous prostate tissue, which contains larger scattering particles, contributes more to forward scattering than normal prostate tissue. In addition to the conventional simulation model that approximates the scattering particle as a sphere, the cylinder model, which is more suitable for fiber-like tissue frame components such as collagen and elastin, was used to develop a computation code for studying the angular dependence of scattering in prostate tissues. To the best of our knowledge, this is the first study to deal with both spherical and cylindrical scattering particles in prostate tissues.
NASA Astrophysics Data System (ADS)
Khalil, Nagi
2018-04-01
The homogeneous cooling state (HCS) of a granular gas described by the inelastic Boltzmann equation is reconsidered. As usual, particles are taken as inelastic hard disks or spheres, but now the coefficient of normal restitution α is allowed to take negative values, which is a simple way of modeling more complicated inelastic interactions. The distribution function of the HCS is studied in the long-time limit, as well as at intermediate times. In the long-time limit, the relevant information of the HCS is given by a scaling distribution function, where the time dependence occurs through a dimensionless velocity c. For positive α, the scaling function remains close to the Gaussian distribution in the thermal region, its cumulants and exponential tails being well described by the first Sonine approximation. In contrast, for negative α, the distribution function becomes multimodal, with its maxima located away from the origin and its observable tails algebraic. The latter is a consequence of an unbalanced relaxation-dissipation competition, and is demonstrated analytically thanks to a reduction of the Boltzmann equation to a Fokker-Planck-like equation. Finally, a generalized scaling solution to the Boltzmann equation is also found. Apart from the time dependence occurring through the dimensionless velocity, it depends on time through a new parameter β measuring the departure of the HCS from its long-time limit. It is shown that this solution describes the time evolution of the HCS for almost all times. The relevance of the new scaling is also discussed.
An Integrable Approximation for the Fermi Pasta Ulam Lattice
NASA Astrophysics Data System (ADS)
Rink, Bob
This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.
NASA Astrophysics Data System (ADS)
Wei, Linsheng; Xu, Min; Yuan, Dingkun; Zhang, Yafang; Hu, Zhaoji; Tan, Zhihong
2014-10-01
The electron drift velocity, electron energy distribution function (EEDF), density-normalized effective ionization coefficient and density-normalized longitudinal diffusion velocity are calculated in SF6-O2 and SF6-Air mixtures. Experimental results from a pulsed Townsend discharge are plotted for comparison with the numerical results. The reduced field strength varies from 40 Td to 500 Td (1 Townsend = 10^-17 V·cm^2) and the SF6 concentration ranges from 10% to 100%. A Boltzmann equation with the two-term spherical harmonic expansion approximation is used to obtain the swarm parameters in the steady-state Townsend regime. Results show that the accuracy of the Boltzmann solution with a two-term expansion in calculating the electron drift velocity, electron energy distribution function, and density-normalized effective ionization coefficient is acceptable. The effective ionization coefficient shows a distinct dependence on the SF6 content of the mixtures. Moreover, the E/Ncr values in SF6-Air mixtures are higher than those in SF6-O2 mixtures, and the calculated E/Ncr values in SF6-O2 and SF6-Air mixtures are lower than the measured value in SF6-N2. Parametric studies conducted on these parameters using the Boltzmann analysis offer substantial insight into the plasma physics, as well as a basis for exploring the ozone generation process.
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion, and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its introduction two decades ago. The hurdle to applying the DP is its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposes a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM, with its normalizing constant approximated by the new method, can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers modeling count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
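One way to see the normalizing-constant problem is to compute it by brute-force truncated summation in Efron's 1986 parameterization and compare it against Efron's closed-form approximation; note this is a sketch of the problem, not the new approximation method the study proposes, and μ and θ are assumed values:

```python
import math

def dp_unnorm(y, mu, theta):
    """Unnormalized double-Poisson pmf term (Efron 1986 parameterization)."""
    if y == 0:
        return math.sqrt(theta) * math.exp(-theta * mu)
    logp = (0.5 * math.log(theta) - theta * mu
            - y + y * math.log(y) - math.lgamma(y + 1)
            + theta * y * (1.0 + math.log(mu / y)))
    return math.exp(logp)

def dp_norm_const(mu, theta, tol=1e-12, ymax=10_000):
    """Normalizing constant c = 1 / sum_y p_un(y), by truncated summation."""
    total, y = 0.0, 0
    while y <= ymax:
        term = dp_unnorm(y, mu, theta)
        total += term
        if y > mu and term < tol:   # terms decay fast past the mean
            break
        y += 1
    return 1.0 / total

mu, theta = 5.0, 0.7   # theta < 1: an over-dispersed case (assumed values)
c_exact = dp_norm_const(mu, theta)
# Efron's closed-form approximation: 1/c ~ 1 + (1-theta)/(12*mu*theta)*(1 + 1/(mu*theta))
c_approx = 1.0 / (1.0 + (1.0 - theta) / (12.0 * mu * theta) * (1.0 + 1.0 / (mu * theta)))
```

Inside a GLM fit this sum must be evaluated at every observation on every iteration, which is why a fast, accurate approximation matters.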
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture distributions model. First, we present the application of the model in empirical finance, fitting it to our real data. Second, we present its application in risk analysis, using it to evaluate VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
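VaR and CVaR under a two-component normal mixture can be sketched as follows; the weights and component parameters are illustrative placeholders, not the FBMKLCI estimates:

```python
import numpy as np
from statistics import NormalDist

# Two-component normal mixture for returns (illustrative parameters):
# a calm component and a rarer, volatile, negative-mean component.
w = [0.8, 0.2]
mu = [0.01, -0.02]
sd = [0.03, 0.09]

def mixture_cdf(x):
    return sum(wi * NormalDist(mi, si).cdf(x) for wi, mi, si in zip(w, mu, sd))

def quantile(p, lo=-1.0, hi=1.0, iters=100):
    """p-quantile of the mixture by bisection on the CDF."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return mid

q05 = quantile(0.05)
var_95 = -q05   # 95% value at risk (loss is the negative return)

# CVaR by Monte Carlo: mean loss conditional on exceeding the VaR threshold
rng = np.random.default_rng(9)
comp = rng.choice(2, size=100_000, p=w)
r = rng.normal(np.array(mu)[comp], np.array(sd)[comp])
cvar_95 = -r[r <= q05].mean()
```

By construction CVaR is at least as large as VaR, since it averages only the losses beyond the VaR threshold.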
Breakup effects on alpha spectroscopic factors of 16O
NASA Astrophysics Data System (ADS)
Adhikari, S.; Basu, C.; Sugathan, P.; Jhinghan, A.; Behera, B. R.; Saneesh, N.; Kaur, G.; Thakur, M.; Mahajan, R.; Dubey, R.; Mitra, A. K.
2017-01-01
The triton angular distribution for the 12C(7Li,t)16O* reaction is measured at 20 MeV, populating discrete states of 16O. Continuum discretized coupled reaction channel calculations are used to extract the alpha spectroscopic properties of 16O states, instead of distorted-wave Born approximation theory, so as to include the effects of breakup on the transfer process. The alpha reduced widths, spectroscopic factors and asymptotic normalization constants (ANC) of 16O states are extracted. The error in the spectroscopic factor is about 35%, and that in the ANC about 27%.
Experimental quantum cryptography with qutrits
NASA Astrophysics Data System (ADS)
Gröblacher, Simon; Jennewein, Thomas; Vaziri, Alipasha; Weihs, Gregor; Zeilinger, Anton
2006-05-01
We produce two identical keys using, for the first time, entangled trinary quantum systems (qutrits) for quantum key distribution. The advantage of qutrits over the normally used binary quantum systems is an increased coding density and a higher security margin. The qutrits are encoded into the orbital angular momentum of photons, namely Laguerre-Gaussian modes with azimuthal index l = +1, 0 and -1, respectively. The orbital angular momentum is controlled with phase holograms. In an Ekert-type protocol, the violation of a three-dimensional Bell inequality verifies the security of the generated keys. A key is obtained with a qutrit error rate of approximately 10%.
NASA Technical Reports Server (NTRS)
Mchugh, James G
1937-01-01
Report presents the results of pressure-distribution measurements on a 1/40-scale model of the U. S. Airship "Akron" conducted in the NACA 20-foot wind tunnel. The measurements were made on the starboard fin of each of four sets of horizontal tail surfaces, all of approximately the same area but differing in span-chord ratio, for five angles of pitch varying from 11.6 degrees to 34 degrees, for four elevator angles, and at air speeds ranging from 56 to 77 miles per hour. Pressures were also measured at 13 stations along the rear half of the port side of the hull at one elevator setting for the same five angles of pitch and at an air speed of approximately 91 miles per hour. The normal force on the fin and the moment of forces about the fin root were determined. The results indicate that, ignoring the effect on drag, it would be advantageous from structural considerations to use a fin of lower span-chord ratio than that used on the "Akron."
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict the type I errors of T_ML reported in the literature, and they perform well.
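The principle of a Bartlett-style mean correction can be sketched on simulated statistics; the 30% inflation factor below is an invented stand-in for the large-p over-rejection of the uncorrected statistic, not the paper's empirical formula:

```python
import numpy as np

rng = np.random.default_rng(10)
df = 10        # nominal degrees of freedom of the model test
n_rep = 5000

# Simulated "inflated" statistics: chi2(df) scaled up by 30%, mimicking a
# statistic that rejects the correct model too often.
raw_stat = 1.3 * rng.chisquare(df, size=n_rep)

# Bartlett-style empirical correction: rescale so the mean equals df
corrected = raw_stat * (df / raw_stat.mean())

crit = 18.307  # 95th percentile of chi2(10)
reject_raw = np.mean(raw_stat > crit)
reject_corrected = np.mean(corrected > crit)
```

After rescaling, the empirical rejection rate at the nominal chi-square critical value drops back toward the intended 5% level, which is the effect the empirical correction aims for.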
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: “rarely,” “some,” “occasionally,” and “most of the time.” Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of “rarely,” leading us to hypothesize that the item responses at the level of “rarely” may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of “rarely” to “some” in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores compared to those predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores compared to those predicted. Conclusions The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
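The mixture argument above is easy to reproduce numerically: mixing subgroups whose means rise while spreads shrink yields negative skewness (closer to normal), while the opposite pairing yields positive skewness (log-normal-like). The component means, spreads, and sample sizes below are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Late puberty (per the abstract's scenario): larger means come with
# smaller variations -> long left tail -> negative skew.
late = np.concatenate([rng.normal(m, s, 5000)
                       for m, s in [(160, 8), (168, 5), (172, 3)]])

# Early puberty: larger means come with larger variations -> positive skew.
early = np.concatenate([rng.normal(m, s, 5000)
                        for m, s in [(140, 3), (146, 5), (152, 8)]])

skew_late = stats.skew(late)    # negative: resembles a normal distribution
skew_early = stats.skew(early)  # positive: resembles a log-normal
```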
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chasapis, Alexandros; Matthaeus, W. H.; Parashar, T. N.
Using data from the Magnetospheric Multiscale (MMS) and Cluster missions obtained in the solar wind, we examine second-order and fourth-order structure functions at varying spatial lags normalized to ion inertial scales. The analysis includes direct two-spacecraft results and single-spacecraft results employing the familiar Taylor frozen-in flow approximation. Several familiar statistical results, including the spectral distribution of energy and the scale-dependent kurtosis, are extended down to unprecedented spatial scales of ∼6 km, approaching electron scales. The Taylor approximation is also confirmed at those small scales, although small deviations are present in the kinetic range. The kurtosis is seen to attain very high values at sub-proton scales, supporting the previously reported suggestion that monofractal behavior may be due to high-frequency plasma waves at kinetic scales.
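The quantities named above, second- and fourth-order structure functions and the scale-dependent kurtosis K = S4 / S2², can be sketched for a single 1-D series. This toy omits the two-spacecraft and Taylor-hypothesis machinery entirely; the Brownian-like stand-in signal has Gaussian increments, so its kurtosis stays near 3 at all lags (real solar-wind turbulence shows K growing at small scales).

```python
import numpy as np

def structure_functions(b, lags):
    """S2(l), S4(l) of a 1-D signal, and the scale-dependent kurtosis
    K(l) = S4 / S2**2 (K = 3 for Gaussian increments)."""
    s2, s4 = [], []
    for ell in lags:
        db = b[ell:] - b[:-ell]        # increments at lag ell
        s2.append(np.mean(db ** 2))
        s4.append(np.mean(db ** 4))
    s2, s4 = np.array(s2), np.array(s4)
    return s2, s4, s4 / s2 ** 2

rng = np.random.default_rng(10)
b = np.cumsum(rng.standard_normal(100000))   # Brownian-like stand-in signal
lags = np.array([1, 2, 4, 8, 16, 32])
s2, s4, kurt = structure_functions(b, lags)
```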
Understanding a Normal Distribution of Data.
Maltenfort, Mitchell G
2015-12-01
Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed, or nearly so, in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
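The normality checks described (Shapiro-Wilk test plus kurtosis) take a few lines with SciPy. The MD values below are synthetic stand-ins, not the study's data; excess kurtosis as reported by SciPy is 0 for an exactly normal distribution.

```python
import numpy as np
from scipy import stats

def md_normality_summary(md_values):
    """Shapiro-Wilk statistic and p-value, plus excess kurtosis, for a
    set of retest MD values (a sketch of the abstract's analysis)."""
    w, p = stats.shapiro(md_values)
    return w, p, stats.kurtosis(md_values)

rng = np.random.default_rng(2)
md = rng.normal(-1.0, 0.6, size=40)   # one observer's synthetic MDs (dB)
w, p, excess_kurt = md_normality_summary(md)
# p > 0.05 would indicate no detectable departure from normality
```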
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of its type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CVe), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CVe (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CVe. In contrast, a relatively large CVe (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
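The dependence of skewness on CV can be made exact: for a log-normal variable, CV² = exp(σ²) − 1, so the skewness is (CV² + 3)·CV. The sketch below evaluates this identity for a small and a large CV; the two CV values are illustrative, not the paper's figures.

```python
def lognormal_skewness(cv):
    """Skewness of a log-normal variable in terms of its coefficient of
    variation: skew = 3*CV + CV**3 (exact identity, since
    CV**2 = exp(sigma**2) - 1 and skew = (exp(sigma**2) + 2) * CV)."""
    return 3 * cv + cv ** 3

# Small biological variation (e.g. plasma sodium, CV ~ 1%): nearly
# symmetric, visually indistinguishable from a Gaussian.
skew_small_cv = lognormal_skewness(0.01)

# Larger variation (illustrative CV ~ 30%): clearly right-skewed.
skew_large_cv = lognormal_skewness(0.30)
```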
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero. In contrast, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
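Testing a sample's skewness against the zero skew of a normal distribution is a standard procedure (e.g. D'Agostino's skewness test). A minimal sketch with synthetic stand-ins for a symmetric and a right-skewed electrolyte follows; the distributional parameters are invented, and this is not the authors' code.

```python
import numpy as np
from scipy import stats

def skew_vs_normal(x, alpha=0.01):
    """Sample skewness, and whether D'Agostino's skewness test rejects
    zero skew at level alpha."""
    stat, p = stats.skewtest(x)
    return stats.skew(x), p < alpha

rng = np.random.default_rng(3)
na = rng.normal(140, 2, 237)              # symmetric Na+ stand-in (mmol/L)
k = rng.lognormal(np.log(4.2), 0.3, 237)  # right-skewed K+ stand-in (mmol/L)

na_skew, na_sig = skew_vs_normal(na)
k_skew, k_sig = skew_vs_normal(k)
```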
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.
2017-01-01
Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of total score distribution and item responses were analyzed using graphical analysis and exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
NASA Technical Reports Server (NTRS)
Low, P. A.; Denq, J. C.; Opfer-Gehrking, T. L.; Dyck, P. J.; O'Brien, P. C.; Slezak, J. M.
1997-01-01
Normative data are limited on autonomic function tests, especially beyond age 60 years. We therefore evaluated these tests in a total of 557 normal subjects evenly distributed by age and gender from 10 to 83 years. Heart rate (HR) response to deep breathing fell with increasing age. Valsalva ratio varied with both age and gender. QSART (quantitative sudomotor axon-reflex test) volume was consistently greater in men (approximately double) and progressively declined with age for all three lower extremity sites but not the forearm site. Orthostatic blood pressure reduction was greater with increasing age. HR at rest was significantly higher in women, and the increment with head-up tilt fell with increasing age. For no tests did we find a regression to zero, and some tests seem to level off with increasing age, indicating that diagnosis of autonomic failure was possible to over 80 years of age.
Statistical characterization of thermal plumes in turbulent thermal convection
NASA Astrophysics Data System (ADS)
Zhou, Sheng-Qi; Xie, Yi-Chao; Sun, Chao; Xia, Ke-Qing
2016-09-01
We report an experimental study on the statistical properties of the thermal plumes in turbulent thermal convection. A method has been proposed to extract the basic characteristics of thermal plumes from temporal temperature measurements inside the convection cell. It has been found that both the plume amplitude A and the cap width w are, in the time domain, approximately log-normally distributed. In particular, the normalized most probable front width is found to be a characteristic scale of thermal plumes, which is much larger than the thermal boundary layer thickness. Over a wide range of the Rayleigh number, the statistical characterizations of the thermal fluctuations of plumes and the turbulent background, the plume front width, and the plume spacing have been discussed and compared with theoretical predictions and morphological observations. For the most part, good agreement has been found with the direct observations.
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
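The core of the method is replacing the χ² objective with the negative Poisson log-likelihood. A minimal sketch, assuming a toy Gaussian-line-on-flat-background model (the paper's actual fit functions are not given in the abstract):

```python
import numpy as np
from scipy.optimize import minimize

def poisson_nll(params, x, counts, model):
    """Negative Poisson log-likelihood, dropping the constant log(n!)
    term that does not depend on the parameters."""
    mu = np.clip(model(x, *params), 1e-9, None)  # expected counts > 0
    return np.sum(mu - counts * np.log(mu))

def line(x, amp, center, width, bg):
    # Assumed toy model: Gaussian line on a flat background.
    return bg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# Simulated low-count data (well under ~20 counts per measurement)
rng = np.random.default_rng(4)
x = np.linspace(-5, 5, 60)
counts = rng.poisson(line(x, 8.0, 0.5, 1.0, 2.0))

res = minimize(poisson_nll, x0=(5.0, 0.0, 1.5, 1.0),
               args=(x, counts, line), method="Nelder-Mead")
amp, center, width, bg = res.x
```

Minimizing this objective is exactly maximizing the product of Poisson probabilities over the data points, which is what makes it valid where the Gaussian assumption behind χ² breaks down.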
Pressure Distribution Over a Symmetrical Airfoil Section with Trailing Edge Flap
NASA Technical Reports Server (NTRS)
Jacobs, Eastman N; Pinkerton, Robert M
1931-01-01
Measurements were made to determine the distribution of pressure over one section of an R. A. F. 30 (symmetrical) airfoil with trailing edge flaps. In order to study the effect of scale, measurements were made with air densities of approximately 1 and 20 atmospheres. Isometric diagrams of pressure distribution are given to show the effect of change in incidence, flap displacement, and scale upon the distribution. Plots of normal force coefficient versus angle of attack for different flap displacements are given to show the effect of a displaced flap. Plots are given of both the experimental and theoretical characteristic coefficients versus flap angle, in order to provide a comparison with the theory. It is concluded that for small flap displacements the agreement for the pitching and hinge moments is such that it warrants the use of the theoretical parameters. However, the agreement for the lift is not as good, particularly for the smaller flaps. In an appendix, an example is given of the calculation of the load and moments on an airfoil with hinged flap from these parameters.
Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix
NASA Astrophysics Data System (ADS)
Saxena, Nupur; Kumar, Pragati; Gupta, Vinay
2015-05-01
The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and that the particles grow to a nearly uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. The effect of the particle size distribution on the PL spectrum is discussed based on the tight-binding approximation (TBA) method, using Gaussian and log-normal particle size distributions. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution, which can be controlled by the annealing parameters.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and reduces the ill-conditioning of the coefficient matrix of the linear systems, thereby enhancing the noise robustness of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
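The non-parametric retrieval reduces to a discretized linear system tau = A·n, where n is the binned ASD, solved with LSQR. A minimal sketch with SciPy's `lsqr` follows; the extinction kernel below is a smooth stand-in, not the abstract's ADA kernel, and the grid sizes and noise level are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

m_wavelengths, n_bins = 12, 30
lam = np.linspace(0.4, 1.0, m_wavelengths)[:, None]  # wavelengths (um)
d = np.linspace(0.1, 2.0, n_bins)[None, :]           # size bins (um)

# Stand-in extinction kernel A[i, j] (the real one would come from ADA)
A = (d / lam) ** 2 / (1.0 + (d / lam) ** 2)

# Synthetic log-normal-shaped "true" ASD and noisy spectral extinction
true_n = np.exp(-0.5 * ((np.log(d[0]) - np.log(0.6)) / 0.4) ** 2)
tau = A @ true_n
noise = np.random.default_rng(5).standard_normal(m_wavelengths)
tau_noisy = tau * (1 + 0.01 * noise)

# damp > 0 adds Tikhonov-style regularization, countering the
# ill-conditioning the abstract's wavelength selection works to reduce.
n_est = lsqr(A, tau_noisy, damp=1e-3)[0]
```

Because the system is underdetermined (12 measurements, 30 bins), the damping term is what keeps the solution stable under noise.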
NASA Technical Reports Server (NTRS)
Sakuraba, K.; Tsuruda, Y.; Hanada, T.; Liou, J.-C.; Akahoshi, Y.
2007-01-01
This paper summarizes two new satellite impact tests conducted to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low velocity of 1.5 km/s using a 40-gram aluminum alloy sphere, whereas the second was performed at a hyper-velocity of 4.4 km/s using a 4-gram aluminum alloy sphere launched by a two-stage light-gas gun at Kyushu Institute of Technology. To date, approximately 1,500 fragments from each impact test have been collected for detailed analysis. Each piece was analyzed based on the method used in the NASA Standard Breakup Model 2000 revision. The detailed analysis supports two conclusions: 1) the similarity in the mass distribution of fragments between low- and hyper-velocity impacts encourages the development of a general-purpose distribution model applicable over a wide impact-velocity range, and 2) the difference in the area-to-mass ratio distribution between the impact experiments and the NASA standard breakup model suggests describing the area-to-mass ratio by a bi-normal distribution.
Photoballistics of volcanic jet activity at Stromboli, Italy
NASA Technical Reports Server (NTRS)
Chouet, B.; Hamisevicz, N.; Mcgetchin, T. R.
1974-01-01
Two night eruptions of the volcano Stromboli were studied through 70-mm photography. Single-camera techniques were used. Particle sphericity, constant velocity in the frame, and radial symmetry were assumed. Properties of the particulate phase found through analysis include: particle size, velocity, total number of particles ejected, angular dispersion and distribution in the jet, time variation of particle size and apparent velocity distribution, averaged volume flux, and kinetic energy carried by the condensed phase. The frequency distributions of particle size and apparent velocities are found to be approximately log normal. The properties of the gas phase were inferred from the fact that it was the transporting medium for the condensed phase. Gas velocity and time variation, volume flux of gas, dynamic pressure, mass erupted, and density were estimated. A CO2-H2O mixture is possible for the observed eruptions. The flow was subsonic. Velocity variations may be explained by an organ pipe resonance. Particle collimation may be produced by a Magnus effect.
Radiation exposure assessment for portsmouth naval shipyard health studies.
Daniels, R D; Taulbee, T D; Chen, P
2004-01-01
Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.
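The abstract's summary statistics are internally consistent under a plain log-normal model, which can be checked directly: the reported mean and median fix the two log-normal parameters, and the implied tail fractions match the "more than half < 10 mSv" and "> 95% < 100 mSv" statements. Note the paper's actual model is a *hybrid* log-normal, which this sketch does not implement.

```python
import numpy as np

# Reported summary statistics (mSv)
median, mean = 3.24, 20.59

# For a plain log-normal: median = exp(mu), mean = exp(mu + sigma**2 / 2)
mu = np.log(median)
sigma = np.sqrt(2 * np.log(mean / median))

rng = np.random.default_rng(6)
doses = rng.lognormal(mu, sigma, 200000)

frac_below_10 = (doses < 10).mean()    # abstract: more than half
frac_below_100 = (doses < 100).mean()  # abstract: > 95%
```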
NASA Astrophysics Data System (ADS)
Gershenson, Carlos
Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (the most used word in English is 'the', rank 1; the second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (the most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on a log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
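The Zipf-style power-law approximation mentioned above amounts to a linear fit in log-log space: freq ≈ C · rank^(−a). A sketch on synthetic rank-frequency data (the data and noise level are invented; real corpora only follow this roughly, as the abstract notes):

```python
import numpy as np

rng = np.random.default_rng(8)
ranks = np.arange(1, 201)

# Synthetic Zipfian frequencies with exponent 1 and mild multiplicative noise
freqs = 1000.0 * ranks ** -1.0 * np.exp(0.05 * rng.standard_normal(200))

# Power law appears linear in log-log space; the slope gives -a
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
zipf_exponent = -slope
```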
Forecasting the impact of transport improvements on commuting and residential choice
NASA Astrophysics Data System (ADS)
Elhorst, J. Paul; Oosterhaven, Jan
2006-03-01
This paper develops a probabilistic, competing-destinations, assignment model that predicts changes in the spatial pattern of the working population as a result of transport improvements. The choice of residence is explained by a new non-parametric model, which represents an alternative to the popular multinomial logit model. Travel times between zones are approximated by a normal distribution function with a different mean and variance for each pair of zones, whereas previous models only use average travel times. The model's forecast error of the spatial distribution of the Dutch working population is 7% when tested on 1998 base-year data. To incorporate endogenous changes in its causal variables, an almost ideal demand system is estimated to explain the choice of transport mode, and a new economic geography inter-industry model (RAEM) is estimated to explain the spatial distribution of employment. In the application, the model is used to forecast the impact of six mutually exclusive Dutch core-periphery railway proposals in the projection year 2020.
NASA Astrophysics Data System (ADS)
Zhang, Qian; Wang, Yizhe; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi
2017-03-01
To provide a reference for HIFU clinical therapeutic planning, the temperature distribution and lesion volume are analyzed by numerical simulation. The simulation is based on a transcranial ultrasound therapy model, including an 8 annular-element curved phased-array transducer. The acoustic pressure and temperature elevation are calculated using the Westervelt equation approximation and the Pennes bioheat transfer equation. In addition, time reversal theory and a hot-spot elimination technique are combined to optimize the temperature distribution. For different input powers and exposure times, the lesion volume is evaluated based on temperature threshold theory. The lesion region can be restored at the expected location by time reversal. Although the lesion volume is reduced after eliminating the peak temperature in the skull, and more input power and exposure time are required, the injury to normal tissue around the skull can be reduced during HIFU therapy. The prediction of thermal deposition in the skull and the lesion region can provide a reference for the clinical therapeutic dose.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
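In the normal-distribution setting, the decision-cost part of the problem can be sketched directly: weight the false-positive and false-negative rates by their costs and the prevalence, then minimize over the threshold. The cost function, costs, and prevalence below are illustrative assumptions (and the sketch omits the paper's sampling-uncertainty term).

```python
from scipy import stats
from scipy.optimize import minimize_scalar

def expected_cost(c, mu0, sd0, mu1, sd1, prev=0.5, c_fp=1.0, c_fn=1.0):
    """Expected decision cost of threshold c for normal markers:
    non-diseased ~ N(mu0, sd0), diseased ~ N(mu1, sd1)."""
    fp = 1 - stats.norm.cdf(c, mu0, sd0)  # non-diseased classified diseased
    fn = stats.norm.cdf(c, mu1, sd1)      # diseased classified non-diseased
    return prev * c_fn * fn + (1 - prev) * c_fp * fp

# Equal variances, equal costs, prevalence 0.5 -> optimum at the midpoint
res = minimize_scalar(expected_cost, bounds=(0, 3), method="bounded",
                      args=(0.0, 1.0, 2.0, 1.0))
threshold = res.x
```

With unequal costs or prevalence, the optimum shifts toward whichever error is cheaper to make, which is the point of the cost-function formulation.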
Ikarashi, Nobutomo; Kagami, Mai; Kobayashi, Yasushi; Ishii, Makoto; Toda, Takahiro; Ochiai, Wataru; Sugiyama, Kiyoshi
2011-06-01
In humans, digoxin is mainly eliminated through the kidneys unchanged, and renal clearance represents approximately 70% of the total clearance. In this study, we used mouse models to examine digoxin pharmacokinetics in polyuria induced by diabetes mellitus and by lithium carbonate (Li2CO3) administration, including mechanistic evaluation of the contributions of glomerular filtration, tubular secretion, and tubular reabsorption. After digoxin administration to streptozotocin (STZ)-induced diabetic mice, digoxin CL/F increased to approximately 2.2 times that in normal mice. After treatment with Li2CO3 (0.2%) for 10 days, the CL/F increased approximately 1.1 times in normal mice and 1.6 times in STZ mice. Creatinine clearance (CLcr) and the renal mRNA expression levels of mdr1a did not differ significantly between the normal, STZ, and Li2CO3-treated mice. The urine volume of STZ mice was approximately 26 mL/day, 22 times that of normal mice. The urine volume of Li2CO3-treated mice increased approximately 7.3 times in normal mice and 2.3 times in STZ mice. These results suggest that the therapeutic effect of digoxin may be significantly reduced in the presence of polyuria either induced by diabetes mellitus or manifested as an adverse effect of Li2CO3 in diabetic patients, along with increased urine volume.
Current distribution in tissues with conducted electrical weapons operated in drive-stun mode.
Panescu, Dorin; Kroll, Mark W; Brave, Michael
2016-08-01
The TASER® conducted electrical weapon (CEW) is best known for delivering electrical pulses that can temporarily incapacitate subjects by overriding normal motor control. The alternative drive-stun mode is less well understood, and the goal of this paper is to analyze the distribution of currents in tissues when the CEW is operated in this mode. Finite element modeling (FEM) was used to approximate current density in tissues with boundary electrical sources placed 40 mm apart. This separation was equivalent to the distance between drive-stun mode TASER X26™, X26P, and X2 CEW electrodes located on the device itself and between those located on the expended CEW cartridge. The FEMs estimated the amount of current flowing through various body tissues located underneath the electrodes, and simulated the attenuating effects of both a thin and a normal layer of fat. The resulting current density distributions were used to compute the residual amount of current flowing through deeper layers of tissue. Numerical modeling estimated that the skin, fat and skeletal muscle layers passed at least 86% or 91% of the total CEW current, assuming a thin or normal fat layer thickness, respectively. The current density and electric field strength exceeded the thresholds associated with an increased probability of ventricular fibrillation (VF) or of cardiac capture only in the skin and the subdermal fat layers. The fat layer provided significant attenuation of drive-stun CEW currents. Beyond the skeletal muscle layer, only fractional amounts of the total CEW current were estimated to flow. The regions presenting a risk of VF induction or cardiac capture were well away from the typical heart depth.
Comparison of methods for assessing photoprotection against ultraviolet A in vivo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaidbey, K.; Gange, R.W.
Photoprotection against ultraviolet A (UVA) by three sunscreens was evaluated in humans, with erythema and pigmentation used as end points in normal skin and in skin sensitized with 8-methoxypsoralen and anthracene. The test sunscreens were Parsol 1789 (2%), Eusolex 8020 (2%), and oxybenzone (3%). UVA was obtained from two filtered xenon-arc sources. UVA protection factors were found to be significantly higher in sensitized skin than in normal skin. Both Parsol and Eusolex provided better, and comparable, photoprotection (approximately 3.0) than oxybenzone (approximately 2.0) in sensitized skin, regardless of whether 8-methoxypsoralen or anthracene was used. In normal unsensitized skin, Parsol 1789 and Eusolex 8020 were also comparable and provided slightly better photoprotection (approximately 1.8) than oxybenzone (approximately 1.4) when pigmentation was used as an end point. The three sunscreens, however, were similar in providing photoprotection against UVA-induced erythema. Protection factors obtained in artificially sensitized skin are probably not relevant to normal skin. It is concluded that pigmentation, either immediate or delayed, is a reproducible and useful end point for the routine assessment of photoprotection of normal skin against UVA.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
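As a toy illustration of why score matching obviates the normalizing constant: for a one-dimensional zero-mean Gaussian with precision θ, the empirical score-matching objective is J(θ) = (θ²/2)·mean(x²) − θ, which is minimized in closed form at θ̂ = 1/mean(x²) with no partition function involved. The stdlib sketch below (sample size, seed, and true variance are arbitrary choices, not from the thesis) checks this numerically:

```python
import math
import random

def score_matching_precision(xs):
    """Estimate the precision (1/variance) of a zero-mean Gaussian by
    minimizing the empirical score-matching objective
        J(theta) = (theta**2 / 2) * mean(x**2) - theta,
    which never touches the normalizing constant of the density."""
    m2 = sum(x * x for x in xs) / len(xs)
    return 1.0 / m2  # closed-form minimizer of J

random.seed(0)
data = [random.gauss(0.0, 2.0) for _ in range(50_000)]  # true precision = 0.25
theta_hat = score_matching_precision(data)
print(round(theta_hat, 2))
```

The same structure carries over to general continuous exponential families: the objective stays quadratic in the natural parameters, which is what makes the regularized score matching problem a convex program.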
Brain Entropy Mapping Using fMRI
Wang, Ze; Li, Yin; Childress, Anna Rose; Detre, John A.
2014-01-01
Entropy is an important trait of life as well as of the human brain. Characterizing brain entropy (BEN) may provide an informative tool to assess brain states and brain functions. Yet little is known about the distribution and regional organization of BEN in the normal brain. The purpose of this study was to examine whole-brain entropy patterns using a large cohort of normal subjects. A series of experiments was first performed to validate an approximate entropy measure regarding its sensitivity, specificity, and reliability using synthetic data and fMRI data. Resting-state fMRI data from a large cohort of normal subjects (n = 1049) from multiple sites were then used to derive a 3-dimensional BEN map, showing a sharp low-high entropy contrast between the neocortex and the rest of the brain. The spatial heterogeneity of resting BEN was further studied using a data-driven clustering method, and the entire brain was found to be organized into 7 hierarchical regional BEN networks that are consistent with known structural and functional brain parcellations. These findings suggest BEN mapping as a physiologically and functionally meaningful measure for studying brain functions. PMID:24657999
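A minimal, stdlib-only sketch of the approximate entropy (ApEn) statistic referenced above; the embedding dimension m, tolerance r, and the synthetic signals are illustrative assumptions rather than the study's actual settings:

```python
import math
import random

def approx_entropy(ts, m=2, r=0.2):
    """Approximate entropy (Pincus ApEn): low for regular signals,
    high for irregular ones. O(n^2), fine for short series."""
    n = len(ts)
    def phi(mm):
        pats = [ts[i:i + mm] for i in range(n - mm + 1)]
        logs = []
        for p in pats:
            # count patterns within tolerance r (self-match included)
            c = sum(1 for q in pats
                    if max(abs(a - b) for a, b in zip(p, q)) <= r)
            logs.append(math.log(c / len(pats)))
        return sum(logs) / len(pats)
    return phi(m) - phi(m + 1)

random.seed(1)
regular = [math.sin(0.5 * i) for i in range(120)]             # smooth oscillation
irregular = [random.uniform(-1.0, 1.0) for _ in range(120)]   # noise
print(approx_entropy(regular) < approx_entropy(irregular))
```

The regular signal should score well below the noise, which is the sensitivity/specificity property the study validates before mapping BEN across the brain.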
Atomisation and droplet formation mechanisms in a model two-phase mixing layer
NASA Astrophysics Data System (ADS)
Zaleski, Stephane; Ling, Yue; Fuster, Daniel; Tryggvason, Gretar
2017-11-01
We study atomization in a turbulent two-phase mixing layer inspired by the Grenoble air-water experiments. A planar gas jet of large velocity is emitted on top of a planar liquid jet of smaller velocity. The density and momentum ratios are both set to 20 in the numerical simulation in order to keep the computation tractable. We use a Volume-Of-Fluid method with good parallelisation properties, implemented in our code http://parissimulator.sf.net. Our simulations show two distinct droplet formation mechanisms: one in which thin liquid sheets are punctured to form rapidly expanding holes, and another in which ligaments of irregular shape form and break up in a manner similar, but not identical, to jets undergoing Rayleigh-Plateau-Savart instabilities. Observed distributions of particle sizes are extracted for a sequence of ever more refined grids, the largest containing approximately eight billion points. Although their accuracy is limited at small sizes by the grid resolution and at large sizes by statistical effects, the distributions overlap in the central region. The observed distributions are much closer to log-normal distributions than to gamma distributions, as is also the case in experiments.
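The log-normal-versus-gamma comparison can be mimicked in a stdlib-only sketch: fit both families to synthetic droplet "sizes" (here deliberately drawn log-normally; sample size and parameters are arbitrary, and the gamma fit uses a cheap method-of-moments stand-in for maximum likelihood) and compare the resulting log-likelihoods:

```python
import math
import random

def lognorm_loglik(xs):
    """Log-likelihood of a log-normal fit (mu, sigma from log moments)."""
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / len(logs)
    s2 = sum((l - mu) ** 2 for l in logs) / len(logs)
    return sum(-math.log(x * math.sqrt(2 * math.pi * s2))
               - (math.log(x) - mu) ** 2 / (2 * s2) for x in xs)

def gamma_loglik(xs):
    """Log-likelihood of a gamma fit; shape/scale by method of moments."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    k, theta = m * m / v, v / m
    return sum((k - 1) * math.log(x) - x / theta
               - math.lgamma(k) - k * math.log(theta) for x in xs)

random.seed(2)
drops = [math.exp(random.gauss(0.0, 0.8)) for _ in range(20_000)]
print(lognorm_loglik(drops) > gamma_loglik(drops))
```

Since the synthetic sample really is log-normal, the log-normal fit wins; on real droplet data the same comparison is what distinguishes the two hypotheses.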
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychophysical data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
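A log-normal probability-of-detection curve of the kind described is just the log-normal CDF evaluated at the stimulus level. The threshold parameters below are hypothetical, chosen only to show the shape:

```python
import math

def lognormal_cdf(x, mu, sigma):
    """P(X <= x) for a log-normal variable; models probability of
    detection as a function of, e.g., target contrast or size."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# hypothetical parameters: median threshold contrast 0.1, log-sd 0.5
mu, sigma = math.log(0.1), 0.5
pd_at_median = lognormal_cdf(0.1, mu, sigma)
print(round(pd_at_median, 2))  # -> 0.5: detection is 50% at the median threshold
```

The curve is bounded to positive stimulus values and rises monotonically from 0 to 1, which is exactly the property the abstract highlights for plotting probability of detection in linear coordinates.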
A Novel Generalized Normal Distribution for Human Longevity and other Negatively Skewed Data
Robertson, Henry T.; Allison, David B.
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
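A minimal sketch of the core idea, swapping the error distribution inside a Bayesian model: a random-walk Metropolis sampler for a location parameter under Student-t (heavy-tailed) errors, written in stdlib Python rather than the SAS MCMC procedure the paper uses. The data, degrees of freedom, and tuning constants are all illustrative assumptions:

```python
import math
import random

def t_loglik(mu, xs, nu=3.0):
    """Log-likelihood (up to a constant) under Student-t errors,
    a heavy-tailed alternative to the usual normality assumption."""
    return sum(-(nu + 1.0) / 2.0 * math.log(1.0 + (x - mu) ** 2 / nu)
               for x in xs)

random.seed(3)
# data: true location 1.0 plus noise, contaminated with gross outliers
xs = [1.0 + random.gauss(0.0, 1.0) for _ in range(200)] + [25.0, -30.0, 40.0]

mu, chain = 0.0, []
for step in range(5000):                      # random-walk Metropolis, flat prior
    prop = mu + random.gauss(0.0, 0.3)
    if math.log(random.random()) < t_loglik(prop, xs) - t_loglik(mu, xs):
        mu = prop
    if step >= 1000:                          # discard burn-in
        chain.append(mu)

post_mean = sum(chain) / len(chain)
print(round(post_mean, 1))
```

Because the t likelihood downweights the outliers, the posterior mean stays near the true location of 1.0, whereas a model that wrongly assumed normal errors would be pulled toward the contaminating values.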
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2011-01-01
This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k^eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff, since log-normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized-mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power-law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
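The geometric-mean result for log-normal fields is easy to check numerically. The stdlib sketch below also shows a symmetric-trimming correction in the spirit of what the abstract calls a Winsorized mean that "neglects large and small values symmetrically"; the field parameters are illustrative, not from the paper:

```python
import math
import random

def geometric_mean(vals):
    """exp of the mean log: the exact effective k for 2-D isotropic
    log-normal permeability fields."""
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

def trimmed_geometric_mean(vals, frac=0.05):
    """Geometric mean after symmetrically discarding the largest and
    smallest `frac` of values (the paper's 'Winsorized mean' idea)."""
    s = sorted(vals)
    k = int(frac * len(s))
    s = s[k:len(s) - k]
    return math.exp(sum(math.log(v) for v in s) / len(s))

random.seed(4)
# synthetic permeability field with ln k ~ N(0, 1), so the exact
# geometric mean of the underlying distribution is exp(0) = 1
k_field = [math.exp(random.gauss(0.0, 1.0)) for _ in range(10_000)]
g = geometric_mean(k_field)
w = trimmed_geometric_mean(k_field)
print(round(g, 2), round(w, 2))
```

For a symmetric log-distribution both averages agree; the trimming only changes the answer when the phase permeability field K_j departs from log-normality, which is the high-variance regime the paper corrects for.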
Closed-form solutions of performability. [in computer systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1982-01-01
It is noted that if computing system performance is degradable, then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous is examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.
Elastic electron scattering from formamide
NASA Astrophysics Data System (ADS)
Buk, M. V.; Bardela, F. P.; da Silva, L. A.; Iga, I.; Homem, M. G. P.
2018-05-01
Differential cross sections for elastic electron scattering by formamide (NH2CHO) were measured in the 30–800 eV and 10°–120° ranges. The angular distribution of scattered electrons was obtained using a crossed electron beam-molecular beam geometry. The relative flow technique was applied to normalize our data. Integral and momentum-transfer cross sections were derived from the measured differential cross sections. Theoretical results in the framework of the independent-atom model at the static-exchange-polarization plus absorption level of approximation are also given. The present measured and calculated results are compared with those available in the literature showing a generally good agreement.
Self-diffusion in a stochastically heated two-dimensional dusty plasma
NASA Astrophysics Data System (ADS)
Sheridan, T. E.
2016-09-01
Diffusion in a two-dimensional dusty plasma liquid (i.e., a Yukawa liquid) is studied experimentally. The dusty plasma liquid is heated stochastically by a surrounding three-dimensional toroidal dusty plasma gas which acts as a thermal reservoir. The measured dust velocity distribution functions are isotropic Maxwellians, giving a well-defined kinetic temperature. The mean-square displacement for dust particles is found to increase linearly with time, indicating normal diffusion. The measured diffusion coefficients increase approximately linearly with temperature. The effective collision rate is dominated by collective dust-dust interactions rather than neutral gas drag, and is comparable to the dusty-plasma frequency.
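Normal diffusion, as reported here, means the mean-square displacement grows linearly in time: MSD = 4·D·t in two dimensions. A stdlib random-walk sketch (step variance, particle count, and time step are arbitrary choices) recovers the diffusion coefficient from that relation:

```python
import random

random.seed(5)
n_particles, n_steps, dt = 2000, 200, 1.0

# 2-D unbiased random walk with unit-variance Gaussian steps per axis
positions = [[0.0, 0.0] for _ in range(n_particles)]
for _ in range(n_steps):
    for p in positions:
        p[0] += random.gauss(0.0, 1.0)
        p[1] += random.gauss(0.0, 1.0)

# ensemble-averaged mean-square displacement at the final time
msd = sum(x * x + y * y for x, y in positions) / n_particles

# normal diffusion in 2-D: MSD = 4 D t, so D = MSD / (4 t)
D = msd / (4.0 * n_steps * dt)
print(round(D, 2))
```

With unit step variance per axis, the expected MSD is 2·n_steps, so the recovered D is 0.5 in these units; a dust temperature dependence would enter through the step variance.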
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
NASA Astrophysics Data System (ADS)
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage against univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
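The Box-Cox transform used above to bring skewed runoff closer to normality is simple to state: y = (x^λ − 1)/λ for λ ≠ 0, and y = log x at λ = 0. A small stdlib sketch with its inverse (the runoff value and λ here are made up for illustration):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive value x; reduces to log at lam = 0."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def inv_box_cox(y, lam):
    """Inverse transform, mapping a postprocessed forecast back to runoff."""
    if abs(lam) < 1e-12:
        return math.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

runoff = 37.5                      # hypothetical discharge value
y = box_cox(runoff, 0.2)           # forecast statistics are fit on this scale
assert abs(inv_box_cox(y, 0.2) - runoff) < 1e-9
print(round(y, 3))
```

In the BMA setting, the normal mixture components are fit on the transformed scale and forecasts are mapped back through the inverse transform.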
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
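One standard way to make a model output a valid CDF is to constrain it to be a convex combination of increasing units, e.g. sigmoids with positive slopes: monotonicity and the [0, 1] range then hold by construction. This stdlib sketch uses hypothetical parameters for a bimodal grain-size curve, and is an analogy to, not a reproduction of, the paper's constrained network architecture:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cdf_mix(x, weights, centers, slopes):
    """Convex combination of positive-slope sigmoids: each term is
    nondecreasing in x and maps to (0, 1), so the mixture is a valid,
    monotone CDF -- the property enforced here by construction rather
    than learned."""
    assert all(s > 0 for s in slopes)
    total = sum(weights)
    return sum(w / total * sigmoid(s * (x - c))
               for w, c, s in zip(weights, centers, slopes))

# hypothetical bimodal grain-size distribution (two sediment fractions)
w, c, s = [0.4, 0.6], [0.1, 2.0], [8.0, 3.0]
xs = [i * 0.05 for i in range(-40, 101)]
ys = [cdf_mix(x, w, c, s) for x in xs]
print(all(a <= b for a, b in zip(ys, ys[1:])))  # monotone by construction
```

Constraining weights (here, positive slopes and a convex combination) plays the same role as the paper's weight and architecture constraints: no parameter setting can produce an invalid distribution function.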
Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sales are used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of the log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid, with some modification, for very large firms. We also consider the possible mechanisms behind this distribution.
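A Zipf plot ranks observations from largest to smallest and plots log size against log rank; a pure power law gives a straight line, while a log-normal produces curvature, and a gradual truncation bends the tail downward. A stdlib sketch with hypothetical firm sales (the parameters are arbitrary, not fit to the paper's data):

```python
import math
import random

random.seed(6)
# hypothetical firm sales, log-normally distributed (Gibrat-style growth)
sales = sorted((math.exp(random.gauss(10.0, 1.5)) for _ in range(5000)),
               reverse=True)

# Zipf plot coordinates: (log rank, log size) for the ranked sample
zipf = [(math.log(rank), math.log(s))
        for rank, s in enumerate(sales, start=1)]

# rank 1 is the largest firm, at log-rank 0
print(zipf[0][0] == 0.0)
```

Departures of the empirical points from a straight line in this plot are what motivate the gradually truncated log-normal for the largest USA firms.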
Hot gas in the cold dark matter scenario: X-ray clusters from a high-resolution numerical simulation
NASA Technical Reports Server (NTRS)
Kang, Hyesung; Cen, Renyue; Ostriker, Jeremiah P.; Ryu, Dongsu
1994-01-01
A new, three-dimensional, shock-capturing hydrodynamic code is utilized to determine the distribution of hot gas in a standard cold dark matter (CDM) model of the universe. Periodic boundary conditions are assumed: a box with size 85 h(exp -1) Mpc having cell size 0.31 h(exp -1) Mpc is followed in a simulation with 270(exp 3) = 10(exp 7.3) cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, sigma(sub 8) = 1.05, omega(sub b) = 0.06, and assuming h = 0.5, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. We find that most of the total X-ray emissivity in our box originates in a relatively small number of identifiable clusters which occupy approximately 10(exp -3) of the box volume. This standard CDM model, normalized to COBE, produces approximately 5 times too much emission from clusters having L(sub x) greater than 10(exp 43) ergs/s, a not-unexpected result. If all other parameters were unchanged, we would expect adequate agreement for sigma(sub 8) = 0.6. This provides a new and independent argument for lower small-scale power than standard CDM at the 8 h(exp -1) Mpc scale. The background radiation field at 1 keV due to clusters in this model is approximately one-third of the observed background, which, after correction for numerical effects, again indicates approximately 5 times too much emission and the appropriateness of sigma(sub 8) = 0.6. Had we used the observed ratio of gas to total mass in clusters, rather than basing the mean density on light-element nucleosynthesis, the computed luminosity of each cluster would have increased still further, by a factor of approximately 10.
The number density of clusters increases to z approximately 1, but the luminosity per typical cluster decreases, with the result that evolution in the number density of bright clusters is moderate in this redshift range, showing a broad peak near z = 0.7, and then a rapid decline above redshift z = 3. Detailed computations of the luminosity functions in the range L(sub x) = 10(exp 40) - 10(exp 44) ergs/s in various energy bands are presented for both cluster central regions and total luminosities to be used in comparison with ROSAT and other observational data sets. The quantitative results found disagree significantly with those found by other investigators using semianalytic techniques. We find little dependence of core radius on cluster luminosity and a dependence of temperature on luminosity given by log kT(sub x) = A + B log L(sub x), which is slightly steeper (B = 0.38) than is indicated by observations. Computed temperatures are somewhat higher than observed, as expected, in that COBE-normalized CDM has too much power on the relevant scales. A modest average temperature gradient is found, with temperatures dropping to 90% of central values at 0.4 h(exp -1) Mpc and 70% of central values at 0.9 h(exp -1) Mpc. Examining the ratio of gas to total mass in the clusters normalized to Omega(sub B) h(exp 2) = 0.015, and comparing with observations, we conclude, in agreement with White (1991), that the cluster observations argue for an open universe.
NASA Astrophysics Data System (ADS)
Špičák, Aleš; Hanuš, Václav; Vaněk, Jiří; Běhounková, Marie
2007-09-01
Relocated Engdahl et al. (1998) global seismological data for 10 aftershock sequences were used to analyze the internal tectonic structure of the Central American subduction zone; the main shocks of several of these were the most destructive and often referenced earthquakes in the region (e.g., the 1970 Chiapas, 1983 Osa, 1992 Nicaragua, 1999 Quepos, 2001 El Salvador earthquakes). The spatial analysis of aftershock foci distribution was performed in a rotated Cartesian coordinate system (x, y, z) related to the Wadati-Benioff zone, and not in a standard coordinate system (ϕ, λ, h are latitude, longitude, focal depth, respectively). Available fault plane solutions were also transformed into the plane approximating the Wadati-Benioff zone. The spatial distribution of earthquakes in each aftershock sequence was modeled as either a plane fit using a least squares approximation or a volume fit with a minimum thickness rectangular box. The analysis points to a quasi-planar distribution of earthquake foci in all aftershock sequences, manifesting the appurtenance of aftershocks to fracture zones. Geometrical parameters of fracture zones (strike, dip, and dimensions) hosting individual sequences were calculated and compared with the seafloor morphology of the Cocos Plate. The smooth character of the seafloor correlates with the aftershock fracture zones oriented parallel to the trench and commonly subparallel to the subducting slab, whereas subduction of the Cocos Ridge and seamounts around the Quepos Plateau coincides with steeply dipping fracture zones. Transformed focal mechanisms are almost exclusively (>90%) of normal character.
Whittington, J; Holland, A; Webb, T
2009-05-01
Genetic disorders occasionally provide the means to uncover potential mechanisms linking gene expression and physical or cognitive characteristics or behaviour. Prader-Willi syndrome (PWS) is one such genetic disorder in which differences between the two main genetic subtypes have been documented (e.g. higher verbal IQ in one vs. higher performance IQ in the other; slower than normal reaction time in one vs. normal in the other). In a population study of PWS, the IQ distribution of people with PWS was approximately normal. This raises the question of whether this distribution arose from a systematic effect of PWS on IQ (hypothesis 1) or whether it was the fortuitous result of random effects (hypothesis 2). The correlation between PWS and sibling IQ was determined in order to discriminate between the two hypotheses. In the first case we would expect the correlation to be similar to that found in the general population (0.5); in the second case it would be zero. It was found that the overall PWS-sibling IQ correlation was 0.3 but that the two main genetic subtypes of PWS differed in their familial IQ relationships. As expected, the IQs of normal siblings correlated 0.5, and this was also the case with one genetic subtype of PWS (uniparental disomy) and their siblings, while the other subtype IQ correlated -0.07 with sibling IQ. This is a potentially powerful result that gives another clue to the role of genes on chromosome 15 in the determination of IQ. It is another systematic difference between the genetic subtypes of PWS, which needs an explanation in terms of the very small genetic differences between them.
Response time accuracy in Apple Macintosh computers.
Neath, Ian; Earle, Avery; Hallett, Darcy; Surprenant, Aimée M
2011-06-01
The accuracy and variability of response times (RTs) collected on stock Apple Macintosh computers using USB keyboards was assessed. A photodiode detected a change in the screen's luminosity and triggered a solenoid that pressed a key on the keyboard. The RTs collected in this way were reliable, but could be as much as 100 ms too long. The standard deviation of the measured RTs varied between 2.5 and 10 ms, and the distributions approximated a normal distribution. Surprisingly, two recent Apple-branded USB keyboards differed in their accuracy by as much as 20 ms. The most accurate RTs were collected when an external CRT was used to display the stimuli and Psychtoolbox was able to synchronize presentation with the screen refresh. We conclude that RTs collected on stock iMacs can detect a difference as small as 5-10 ms under realistic conditions, and this dictates which types of research should or should not use these systems.
Carbon distribution profiles in lunar fines
NASA Technical Reports Server (NTRS)
Hart, R. K.
1977-01-01
Radial distribution profiles of elemental carbon in lunar soils consisting of particles in the size range of 50 to 150 microns were investigated. Initial experiments on specimen preparation and the analysis of prepared specimens by Auger electron spectrometry (AES) and scanning electron microscopy (SEM) are described. Results from splits of samples 61501,84 and 64421,11, which were mounted in various ways in several specimen holders, are presented. A low carbon content was observed in AES spectra from soil particles that were subjected to sputter-ion cleaning with 960 eV argon ions for periods of up to one hour of total exposure. This ion charge was sufficient to remove approximately 70 nm of material from the surface. All of the physically adsorbed carbon (as well as water vapor, etc.) would normally be removed in the first few minutes, leaving only carbon in the specimen and the metal support structure to be detected thereafter.
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
NASA Technical Reports Server (NTRS)
Otterman, J.; Brakke, T.
1986-01-01
The projections of leaf areas onto a horizontal plane and onto a vertical plane are examined for their utility in characterizing canopies for sunlight penetration (direct beam only) models. These projections exactly specify the penetration if the projections on the principal plane of the normals to the top surfaces of the leaves are in the same quadrant as the sun. Inferring the total leaf area from these projections (and therefore the penetration as a function of the total leaf area) is possible only with a large uncertainty (up to + or - 32 percent) because the projections are a specific measure of the total leaf area only if the leaf angle distribution is known. It is expected that this uncertainty could be reduced to more acceptable levels by making an approximate assessment of whether the zenith angle distribution is that of an erectophile canopy.
Mean-field kinetic theory approach to evaporation of a binary liquid into vacuum
NASA Astrophysics Data System (ADS)
Frezzotti, A.; Gibelli, L.; Lockerby, D. A.; Sprittles, J. E.
2018-05-01
Evaporation of a binary liquid into near-vacuum conditions has been studied using numerical solutions of a system of two coupled Enskog-Vlasov equations. Liquid-vapor coexistence curves have been mapped out for different liquid compositions. The evaporation process has been investigated over a range of liquid temperatures sufficiently far below the critical temperature that the vapor does not deviate significantly from ideal behavior. It is found that the shape of the distribution functions of evaporating atoms is well approximated by an anisotropic Maxwellian distribution with different characteristic temperatures for the velocity components normal and parallel to the liquid-vapor interface. The anisotropy reduces as the evaporation temperature decreases. Evaporation coefficients are computed based on the separation temperature and the maximum concentration of the less volatile component close to the liquid-vapor interface. This choice leads to values which are almost constant in the simulation conditions.
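An anisotropic Maxwellian in this sense is a product of Gaussians with different temperatures for the normal and parallel velocity components. The sketch below samples one (the temperatures are illustrative, units are chosen so mass and k_B are 1, and the drift of the true evaporating flux is ignored) and recovers the temperature ratio from the component variances:

```python
import random

random.seed(7)
T_normal, T_parallel = 2.0, 1.0   # hypothetical characteristic temperatures

# anisotropic Maxwellian: independent Gaussians whose variances equal
# the temperatures of the normal and parallel velocity components
v_norm = [random.gauss(0.0, T_normal ** 0.5) for _ in range(100_000)]
v_par = [random.gauss(0.0, T_parallel ** 0.5) for _ in range(100_000)]

def variance(v):
    return sum(x * x for x in v) / len(v)

anisotropy = variance(v_norm) / variance(v_par)
print(round(anisotropy, 1))
```

The recovered ratio matches T_normal / T_parallel; in the paper this ratio tends toward 1 (an isotropic Maxwellian) as the evaporation temperature decreases.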
Method of moments for the dilute granular flow of inelastic spheres
NASA Astrophysics Data System (ADS)
Strumendo, Matteo; Canu, Paolo
2002-10-01
Some peculiar features of granular materials (smooth, identical spheres) in rapid flow are the normal pressure differences and the related anisotropy of the velocity distribution function f^(1). Kinetic theories have been proposed that account for the anisotropy, mostly based on a generalization of the Chapman-Enskog expansion [N. Sela and I. Goldhirsch, J. Fluid Mech. 361, 41 (1998)]. In the present paper, we approach the problem differently by means of the method of moments; previously, similar theories had been constructed for the nearly elastic behavior of granular matter but were not able to predict the normal pressure differences. To overcome these restrictions, we use as an approximation of f^(1) a truncated series expansion in Hermite polynomials around the Maxwellian distribution function. We used the approximated f^(1) to evaluate the collisional source term and calculated all the resulting integrals; the difference in the mean velocity of the two colliding particles has also been taken into account. To simulate the granular flows, all the second-order moment balances are considered together with the mass and momentum balances. In the balance equations of the Nth-order moments, the (N+1)th-order moments (and their derivatives) appear: we therefore introduced closure equations to express them as functions of lower-order moments by a generalization of the "elementary kinetic theory," instead of the classical procedure of neglecting the (N+1)th-order moments and their derivatives. We applied the model to the translational flow on an inclined chute, obtaining the profiles of the solid volumetric fraction, the mean velocity, and all the second-order moments. The theoretical results have been compared with experimental data [E. Azanza, F. Chevoir, and P. Moucheront, J. Fluid Mech. 400, 199 (1999); T. G. Drake, J. Fluid Mech. 225, 121 (1991)] and all the features of the flow are reflected by the model: the decreasing exponential profile of the solid volumetric fraction, the parabolic shape of the mean velocity, and the constancy of the granular temperature and of its components. In addition, the model predicts the normal pressure differences typical of granular materials.
Quasi-linear diffusion coefficients for highly oblique whistler mode waves
NASA Astrophysics Data System (ADS)
Albert, J. M.
2017-05-01
Quasi-linear diffusion coefficients are considered for highly oblique whistler mode waves, which exhibit a singular "resonance cone" in cold plasma theory. The refractive index becomes both very large and rapidly varying as a function of wave parameters, making the diffusion coefficients difficult to calculate and to characterize. Since such waves have been repeatedly observed both outside and inside the plasmasphere, this problem has received renewed attention. Here the diffusion equations are analytically treated in the limit of large refractive index μ. It is shown that a common approximation to the refractive index allows the associated "normalization integral" to be evaluated in closed form and that this can be exploited in the numerical evaluation of the exact expression. The overall diffusion coefficient formulas for large μ are then reduced to a very simple form, and the remaining integral and sum over resonances are approximated analytically. These formulas are typically written for a modeled distribution of wave magnetic field intensity, but this may not be appropriate for highly oblique whistlers, which become quasi-electrostatic. Thus, the analysis is also presented in terms of wave electric field intensity. The final results depend strongly on the maximum μ (or μ∥) used to model the wave distribution, so realistic determination of these limiting values becomes paramount.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-14
...-high, containing approximately 100 pump/turbine/generator units having a total installed capacity of...,400 acre-feet at normal water surface elevation of +1,000 feet Project Datum; (2) a lower reservoir..., with a surface area of about 50 acres and volume of approximately 2,400 acre-feet at normal water...
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular the multivariate normal and Wishart distributions, is considered in this article. The method involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
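The abstract does not spell out the transformation; a common concrete instance (an assumption on our part, not necessarily the authors' construction) uses the Cholesky factor of the covariance matrix to map independent standard normals onto the correlated vector, which is easy to check numerically:

```python
import numpy as np

# Sketch (assumed details): represent X ~ N(mu, Sigma) as X = mu + L @ Z, with
# Sigma = L L^T (Cholesky) and Z a vector of independent standard normals, so
# moments of X reduce to integrals over independent one-dimensional normals.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 2))   # independent N(0, 1) components
X = mu + Z @ L.T                        # correlated N(mu, Sigma) samples

sample_mean = X.mean(axis=0)            # should approach mu
sample_cov = np.cov(X, rowvar=False)    # should approach Sigma
```

Because the components of Z are independent, expectations of functions of X reduce to products of one-dimensional normal integrals, which is the kind of simplification such a transformation provides.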
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
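A minimal simulation of the phenomenon (our own illustration with log-normal summands, not the specific class of variables studied in the paper): the sum of many positive, skewed terms still has clearly positive skewness, while its logarithm is nearly symmetric, i.e. the sum looks log-normal rather than Gaussian.

```python
import numpy as np

# Illustrative sketch: sums of i.i.d. positive, skewed summands can look
# log-normal long before the Gaussian CLT limit is reached.
rng = np.random.default_rng(1)
n_terms, n_sums = 20, 20_000
summands = rng.lognormal(mean=0.0, sigma=1.2, size=(n_sums, n_terms))
s = summands.sum(axis=1)

def skewness(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d**3).mean() / d.std()**3

skew_sum = skewness(s)          # still clearly positive: not Gaussian yet
skew_log = skewness(np.log(s))  # near zero: the log of the sum looks normal
```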
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges chosen to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
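As a rough numerical illustration of the mid-field comparison (a hypothetical three-charge set with a water-like geometry; these are not the paper's OPCA charges), one can measure the RMS relative error of the point-dipole potential on a sphere of radius 2.8 Å:

```python
import numpy as np

# Hypothetical net-neutral charge set (Gaussian units, k = 1, lengths in Å).
charges = np.array([-0.8, 0.4, 0.4])
positions = np.array([[0.0, 0.0, 0.0],
                      [0.76, 0.59, 0.0],
                      [-0.76, 0.59, 0.0]])

total_q = charges.sum()                          # 0: the monopole term vanishes
p = (charges[:, None] * positions).sum(axis=0)   # dipole moment vector

rng = np.random.default_rng(2)
pts = rng.standard_normal((2000, 3))
pts = 2.8 * pts / np.linalg.norm(pts, axis=1, keepdims=True)  # mid-field sphere

exact = sum(q / np.linalg.norm(pts - r0, axis=1)
            for q, r0 in zip(charges, positions))
dipole = pts @ p / np.linalg.norm(pts, axis=1) ** 3  # point-dipole potential

rms_rel_err = (np.sqrt(np.mean((dipole - exact) ** 2))
               / np.sqrt(np.mean(exact ** 2)))
```

The nonzero residual error is what the OPCA (and the quadrupole and octupole corrections) are designed to reduce.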
Statistics of baryon correlation functions in lattice QCD
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration
2017-12-01
A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
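The circular-statistics point can be sketched with generic synthetic data (not lattice QCD output): if the complex phase is (wrapped) normal with width sigma, the ensemble average of exp(i·theta) has magnitude exp(-sigma²/2), so the signal shrinks exponentially in sigma² while per-sample magnitudes stay equal to 1, which is the essence of a phase/sign problem.

```python
import numpy as np

# Mean resultant length of a wrapped-normal phase: |<exp(i theta)>| = exp(-s^2/2).
# Sampling an ordinary normal and exponentiating is equivalent, since the
# complex exponential wraps the phase automatically.
rng = np.random.default_rng(3)
n = 200_000
sigmas = np.array([0.5, 1.0, 2.0])
estimates = np.array([abs(np.mean(np.exp(1j * rng.normal(0.0, s, n))))
                      for s in sigmas])
predicted = np.exp(-sigmas ** 2 / 2)   # exponentially small signal at large s
```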
Prospective treatment planning to improve locoregional hyperthermia for oesophageal cancer.
Kok, H P; van Haaren, P M A; van de Kamer, J B; Zum Vörde Sive Vörding, P J; Wiersma, J; Hulshof, M C C M; Geijsen, E D; van Lanschot, J J B; Crezee, J
2006-08-01
In the Academic Medical Center (AMC) Amsterdam, locoregional hyperthermia for oesophageal tumours is applied using the 70 MHz AMC-4 phased array system. Due to the occurrence of treatment-limiting hot spots in normal tissue and systemic stress at high power, the thermal dose achieved in the tumour can be sub-optimal. The large number of degrees of freedom of the heating device, i.e. the amplitudes and phases of the antennae, makes it difficult to avoid treatment-limiting hot spots by intuitive amplitude/phase steering. Prospective hyperthermia treatment planning combined with high resolution temperature-based optimization was applied to improve hyperthermia treatment of patients with oesophageal cancer. All hyperthermia treatments were performed with 'standard' clinical settings. Temperatures were measured systemically, at the location of the tumour and near the spinal cord, which is an organ at risk. For 16 patients numerically optimized settings were obtained from treatment planning with temperature-based optimization. Steady state tumour temperatures were maximized, subject to constraints on normal tissue temperatures. At the start of 48 hyperthermia treatments in these 16 patients, temperature rise (ΔT) measurements were performed by applying a short power pulse with the numerically optimized amplitude/phase settings, with the clinical settings and with mixed settings, i.e. numerically optimized amplitudes combined with clinical phases. The heating efficiency of the three settings was determined by the measured ΔT values and the ΔT-ratio between the ΔT in the tumour (ΔToes) and near the spinal cord (ΔTcord). For a single patient the steady state temperature distribution was computed retrospectively for all three settings, since the temperature distributions may be quite different.
To illustrate that the choice of the optimization strategy is decisive for the obtained settings, a numerical optimization on the ΔT-ratio was performed for this patient and the steady state temperature distribution for the obtained settings was computed. A higher ΔToes was measured with the mixed settings compared to the calculated and clinical settings; ΔTcord was higher with the mixed settings compared to the clinical settings. The ΔT-ratio was approximately 1.5 for all three settings. These results indicate that the most effective tumour heating can be achieved with the mixed settings. ΔT is proportional to the Specific Absorption Rate (SAR), and a higher SAR results in a higher steady state temperature, which implies that mixed settings are likely to provide the most effective heating at steady state as well. The steady state temperature distributions for the clinical and mixed settings, computed for the single patient, showed some locations where temperatures exceeded the normal tissue constraints used in the optimization. This demonstrates that the numerical optimization did not prescribe the mixed settings, because it had to comply with the constraints set on the normal tissue temperatures. However, the predicted hot spots are not necessarily clinically relevant. Numerical optimization on the ΔT-ratio for this patient yielded a very high ΔT-ratio (approximately 380), albeit at the cost of excessive heating of normal tissue and lower steady state tumour temperatures compared to the conventional optimization. Treatment planning can be valuable to improve hyperthermia treatments. A thorough discussion on clinically relevant objectives and constraints is essential.
Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J
Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is achieved by electrical stimulation and the estimate is obtained by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to provide a comprehensible overview of the methodology of the Poisson MUNE technique and to obtain normal values for the extensor digitorum brevis muscle (EDB) in a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for all of them was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group (p < 0.0001), with the MUNE showing a better correlation with age than the CMAP amplitude (-0.5002 and -0.4142, respectively; p < 0.0001). The statistical MUNE method is an important tool for assessing the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than the CMAP amplitude does.
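The abstract leaves the statistical machinery implicit; the usual Poisson-MUNE identity (the variance/mean ratio of response amplitudes estimates the single-unit size when unit activation is Poisson) can be sketched on simulated data with illustrative parameters, not the study's protocol:

```python
import numpy as np

# Simulated submaximal stimulation: the number of activated units per stimulus
# is ~Poisson, each contributing a fixed single-unit amplitude (single-unit
# size variability is ignored for simplicity).
rng = np.random.default_rng(4)
true_units, smup = 180, 0.05            # unit count and single-unit size (mV)
cmap_max = true_units * smup            # maximal CMAP amplitude

lam = 12.0                              # mean number of units activated
k = rng.poisson(lam, size=5000)        # activated-unit counts per stimulus
responses = k * smup                    # response amplitude per stimulus

# For Poisson counts, var/mean of the responses recovers the single-unit size.
smup_hat = responses.var() / responses.mean()
mune = cmap_max / smup_hat              # estimated motor unit number
```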
Selenite sorption by carbonate substituted apatite
Moore, Robert C.; Rigali, Mark J.; Brady, Patrick
2016-08-31
The sorption of selenite, SeO3^2-, by carbonate-substituted hydroxylapatite was investigated using batch kinetic and equilibrium experiments. The carbonate-substituted hydroxylapatite was prepared by a precipitation method and characterized by SEM, XRD, FT-IR, TGA, BET and solubility measurements. The material is poorly crystalline, contains approximately 9.4% carbonate by weight and has a surface area of 210.2 m^2/g. Uptake of selenite by the carbonated hydroxylapatite was approximately an order of magnitude higher than the uptake by uncarbonated hydroxylapatite reported in the literature. Distribution coefficients, Kd, determined for the carbonated apatite in this work ranged from approximately 4200 to over 14,000 L/kg. A comparison of the results from kinetic experiments performed in this work and literature kinetic data indicates the carbonated apatite synthesized in this study sorbed selenite 23 times faster than uncarbonated hydroxylapatite, based on values normalized to the surface area of each material. Furthermore, the results indicate carbonated apatite is a potential candidate for use as a sorbent for pump-and-treat technologies, soil amendments or permeable reactive barriers for the remediation of selenium-contaminated sediments and groundwaters.
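For reference, a batch-experiment distribution coefficient follows from the standard formula Kd = ((C0 - Ce)/Ce)·(V/m); the numbers below are hypothetical, chosen only to land in the reported range, and are not the paper's data:

```python
# Kd from a batch sorption experiment: C0 and Ce are the initial and
# equilibrium solution concentrations, V the solution volume, m the sorbent mass.
def kd(c0_mg_per_l, ce_mg_per_l, volume_l, mass_kg):
    """Distribution coefficient in L/kg."""
    sorbed_per_kg = (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_kg  # mg/kg
    return sorbed_per_kg / ce_mg_per_l                                # L/kg

# Hypothetical example: 10 mg/L selenite reduced to 0.12 mg/L by 0.02 kg of
# sorbent in 1 L of solution -- a Kd of roughly 4100 L/kg, the order reported.
example_kd = kd(10.0, 0.12, 1.0, 0.02)
```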
NASA Astrophysics Data System (ADS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2014-11-01
Currently, inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomography (TOF-PET) devices. The J-PET collaboration is investigating the possibility of constructing a PET scanner from plastic scintillators, which would allow single-bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals, and an example of an application of the method to a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. A sampling scheme is introduced to generate a vector of samples of a PET event waveform with respect to four user-defined amplitudes. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of energy 511 keV. A statistical test for a multivariate normal (MVN) distribution of the measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in the voltage domain are approximately normally distributed variables. With the presented method of analysis of vectors of waveform samples acquired at four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).
Wren, Jonathan D; Conway, Tyrrell
2006-01-01
The goals of this study were to gain a better quantitative understanding of the dynamic range of transcriptional and translational responses observed in biological systems and to examine the reporting of regulatory events for trends and biases. A straightforward pattern-matching routine extracted 3,408 independent observations regarding transcriptional fold-changes and 1,125 regarding translational fold-changes from over 15 million MEDLINE abstracts. Approximately 95% of reported changes were ≥2-fold. Further, the historical trend of reporting individual fold-changes is declining in favor of high-throughput methods for transcription, but not for translation. Where it was possible to compare the average fold-changes in transcription and translation for the same gene/product (203 examples), approximately 53% differed by ≤2-fold, suggesting a loose tendency for the two to be coupled in magnitude. We also found that approximately three-fourths of reported regulatory events have been at the transcriptional level. The frequency distribution appears to be normally distributed and peaks near 2-fold, suggesting that nature selects for a low-energy solution to regulatory responses. Because high-throughput technologies ordinarily sacrifice measurement quality for quantity, this also suggests that many regulatory events may not be reliably detectable by such technologies. Text mining of regulatory events and responses provides additional information incorporable into microarray analysis, such as prior fold-change observations and flagging of genes that are regulated post-transcriptionally. All extracted regulation and response patterns can be downloaded at the following website: www.ou.edu/microarray/oumcf/Meta_analysis.xls.
Distribution entropy analysis of epileptic EEG signals.
Li, Peng; Yan, Chang; Karmakar, Chandan; Liu, Changchun
2015-01-01
It is an open-ended challenge to accurately detect epileptic seizures through electroencephalogram (EEG) signals. Recently published studies have made elaborate attempts to distinguish between normal and epileptic EEG signals by advanced nonlinear entropy methods, such as the approximate entropy, sample entropy, fuzzy entropy, and permutation entropy. Most recently, a novel distribution entropy (DistEn) has been reported to have superior performance compared with the conventional entropy methods, especially for short data. We thus aimed, in the present study, to show the potential of DistEn in the analysis of epileptic EEG signals. The publicly accessible Bonn database, which consists of normal, interictal, and ictal EEG signals, was used in this study. Three different measurement protocols were set to better understand the performance of DistEn: i) calculate the DistEn of a specific EEG signal using the full recording; ii) calculate the DistEn by averaging the results for all its possible non-overlapped 5 second segments; and iii) calculate it by averaging the DistEn values for all the possible non-overlapped segments of 1 second length. Results for all three protocols indicated a statistically significantly increased DistEn for the ictal class compared with both the normal and interictal classes. Besides, the results obtained under the third protocol, which used only very short (1 s) segments of EEG recordings, showed a significantly (p < 0.05) increased DistEn for the interictal class in comparison with the normal class, whereas both analyses using relatively long EEG signals failed to track this difference, possibly because of a nonstationarity effect on the entropy algorithm. The capability of discriminating between normal and interictal EEG signals is of great clinical relevance, since it may provide helpful tools for the detection of seizure onset.
Therefore, our study suggests that the DistEn analysis of EEG signals is very promising for clinical and even portable EEG monitoring.
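A compact sketch of the DistEn computation as commonly defined (the embedding dimension m and bin count are free parameters; this is our reading of the published algorithm, not code from the study): embed the series, take all pairwise Chebyshev distances between embedded vectors, estimate their distribution with a fixed-bin histogram, and return its normalized Shannon entropy.

```python
import numpy as np

def dist_en(x, m=2, n_bins=64):
    """Distribution entropy of a 1-D series (minimal sketch)."""
    x = np.asarray(x, dtype=float)
    emb = np.lib.stride_tricks.sliding_window_view(x, m)       # embedded vectors
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev dists
    iu = np.triu_indices(len(emb), k=1)                        # distinct pairs
    p, _ = np.histogram(d[iu], bins=n_bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(n_bins)           # normalized to [0, 1]

rng = np.random.default_rng(5)
de_noise = dist_en(rng.standard_normal(500))                   # irregular signal
de_sine = dist_en(np.sin(np.linspace(0, 8 * np.pi, 500)))      # regular signal
```

In the study's protocols this quantity is computed on full recordings or averaged over short non-overlapping segments.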
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Astrophysics Data System (ADS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-08-01
We describe and apply a method for directly computing the power spectrum of the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k > 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n ≈ -2.1 on small scales (λ ≤ 25 h^-1 Mpc) and n ≈ -1.1 on scales 30 < λ < 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open-universe CDM model (Ωh = 0.2) and a nonzero cosmological constant (ΛCDM) model (Ωh = 0.24, λ0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (Ωh = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales λ > 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (Ωh = 0.5, b = 1.4, σ8(mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales λ ≈ 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum.
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M_lim > M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M < M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (≈2σ significance level).
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics, and psychology. Normal variables have a constant kurtosis (κ = 3), independently of the values of their two parameters, the mean and the variance. In fact, the excess kurtosis is defined as κ - 3, so the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and of the correlation between them; the range of the excess kurtosis is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
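The endpoint of the stated range is easy to verify: for independent standard normals X and Y, E[(XY)^4] = E[X^4]·E[Y^4] = 3·3 = 9 while E[(XY)^2] = 1, so the excess kurtosis of the product is 9 - 3 = 6. A quick Monte Carlo check:

```python
import numpy as np

# Product of two independent standard normals: excess kurtosis should be 6.
rng = np.random.default_rng(6)
n = 2_000_000
z = rng.standard_normal(n) * rng.standard_normal(n)

m2 = np.mean(z**2)                    # population value 1
m4 = np.mean(z**4)                    # population value 9
excess_kurtosis = m4 / m2**2 - 3.0    # population value exactly 6
```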
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law distribution or a normal distribution. As a demonstration of the applicability of these findings to an improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
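The kind of model comparison described can be sketched with maximum-likelihood fits; the data below are synthetic (log-normal by construction, with a sample size matching the 120 measurements), not the SAD observations:

```python
import numpy as np

# Compare maximum log-likelihoods of a log-normal and a pure power-law (Pareto)
# fit to positive "area" data; the better model has the higher log-likelihood.
rng = np.random.default_rng(7)
areas = rng.lognormal(mean=1.0, sigma=0.5, size=120)   # synthetic sample

def loglik_lognormal(x):
    mu, sig = np.log(x).mean(), np.log(x).std()        # MLE parameters
    return np.sum(-np.log(x * sig * np.sqrt(2 * np.pi))
                  - (np.log(x) - mu) ** 2 / (2 * sig ** 2))

def loglik_pareto(x):
    xm = x.min()                                       # MLE lower cutoff
    alpha = len(x) / np.sum(np.log(x / xm))            # MLE exponent
    return np.sum(np.log(alpha) + alpha * np.log(xm) - (alpha + 1) * np.log(x))

better_lognormal = loglik_lognormal(areas) > loglik_pareto(areas)
```

In practice one would add a goodness-of-fit test (e.g. Kolmogorov-Smirnov) on top of the likelihood comparison, as distribution-fitting studies of this kind typically do.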
The evolution of cooperation on geographical networks
NASA Astrophysics Data System (ADS)
Li, Yixiao; Wang, Yi; Sheng, Jichuan
2017-11-01
We study the evolutionary public goods game on geographical networks, i.e., complex networks whose nodes are located on a geographical plane. The geographical character enters in two ways: in one way, the geographically induced network structure influences the overall evolutionary dynamics, and, in the other, the geographical length of an edge influences the cost incurred when the two players at its ends interact. For the latter effect, we design a new cost function for cooperators, which simply assumes that the longer the distance between two players, the higher the cost the cooperator(s) among them must pay. In this study, network substrates are generated by a previous spatial network model with a cost-benefit parameter controlling the network topology. Our simulations show that the greatest promotion of cooperation is achieved in the intermediate regime of the parameter, in which empirical estimates for various railway networks fall. Further, we investigate how the distribution of edges' geographical costs influences the evolutionary dynamics, considering three patterns of the distribution: an approximately equal distribution, a diverse distribution, and a polarized distribution. For normal geographical networks, generated using intermediate values of the cost-benefit parameter, a diverse distribution hinders the evolution of cooperation, whereas a polarized distribution lowers the threshold value of the amplification factor for cooperation in the public goods game. These results are helpful for understanding the evolution of cooperation on real-world geographical networks.
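A toy version of the distance-dependent cost idea (the star-shaped group and the specific cost form are our simplifying assumptions, not the paper's exact cost function): contributions are multiplied by the amplification factor r and shared equally, while each cooperator's cost grows with its edge length.

```python
import numpy as np

def payoffs(strategies, lengths, r=3.0, c=1.0):
    """Public goods payoffs. strategies: 1 = cooperate, 0 = defect;
    lengths: geographical edge length from each player to a common hub
    (star-shaped group, assumed for simplicity)."""
    strategies = np.asarray(strategies)
    pot = r * c * strategies.sum() / len(strategies)  # shared public good
    cost = c * strategies * (1.0 + lengths)           # longer edge, higher cost
    return pot - cost

# Two cooperators, two defectors; one cooperator sits on a long (costly) edge.
p = payoffs([1, 1, 0, 0], np.array([0.2, 1.5, 0.2, 1.5]))
# Defectors collect the shared pot at no cost; the distant cooperator pays most.
```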
Aldega, L.; Eberl, D.D.
2005-01-01
Illite crystals in siliciclastic sediments are heterogeneous assemblages of detrital material coming from various source rocks and, at paleotemperatures >70 °C, of superimposed diagenetic modification in the parent sediment. We distinguished the relative proportions of 2M1 detrital illite and possible diagenetic 1Md + 1M illite by a combined analysis of crystal-size distribution and illite polytype quantification. We found that the proportions of 1Md + 1M and 2M1 illite could be determined from crystallite thickness measurements (BWA method, using the MudMaster program) by unmixing the measured crystallite thickness distributions using theoretical and calculated log-normal and/or asymptotic distributions. The end-member components that we used to unmix the measured distributions were three asymptotic-shaped distributions (assumed to be the diagenetic component of the mixture, the 1Md + 1M polytypes) calculated using the Galoper program (Phase A was simulated using 500 crystals per cycle of nucleation and growth, Phase B = 333/cycle, and Phase C = 250/cycle), and one theoretical log-normal distribution (Phase D, assumed to approximate the detrital 2M1 component of the mixture). In addition, quantitative polytype analysis was carried out using the RockJock software for comparison. The two techniques gave comparable results (r^2 = 0.93), which indicates that the unmixing method permits one to calculate the proportion of illite polytypes and, therefore, the proportion of 2M1 detrital illite, from crystallite thickness measurements. The overall illite crystallite thicknesses in the samples were found to be a function of the relative proportions of thick 2M1 and thin 1Md + 1M illite. The percentage of illite layers in I-S mixed layers correlates with the mean crystallite thickness of the 1Md + 1M polytypes, indicating that these polytypes, rather than the 2M1 polytype, participate in I-S mixed layering.
ERIC Educational Resources Information Center
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
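Since a finite normal mixture (a linear combination of densities, not of random variables) has no closed-form quantile function, a standard computation inverts the mixture CDF numerically; a minimal sketch using bisection:

```python
import math

def mixture_cdf(x, weights, mus, sigmas):
    """CDF of a finite mixture of normal densities."""
    return sum(w * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for w, m, s in zip(weights, mus, sigmas))

def mixture_quantile(p, weights, mus, sigmas, lo=-50.0, hi=50.0, tol=1e-10):
    """Invert the mixture CDF by bisection (the CDF is strictly increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, weights, mus, sigmas) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric two-component mixture: its median must be 0.
med = mixture_quantile(0.5, [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
```

By contrast, the linear combination of the underlying random variables would itself be normal, with quantiles available in closed form; this is exactly the distinction the note emphasizes.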
NASA Astrophysics Data System (ADS)
Diveyev, Bohdan; Konyk, Solomija; Crocker, Malcolm J.
2018-01-01
The main aim of this study is to predict the elastic and damping properties of composite laminated plates. This problem has an exact elasticity solution for simple uniform bending and transverse loading conditions. This paper presents a new stress analysis method for the accurate determination of the detailed stress distributions in laminated plates subjected to cylindrical bending. Some approximate methods for predicting the stress state of laminated plates are also presented. The present method is adaptive and does not rely on strong assumptions about the model of the plate. The theoretical model described here incorporates the deformations of each sheet of the laminate, accounting for the effects of transverse shear deformation, transverse normal stress-strain and nonlinear variation of the displacements with respect to the thickness coordinate. Predictions of the dynamic and damping values of laminated plates for various geometrical, mechanical and fastening properties are presented. Comparison with the Timoshenko beam theory is made systematically for the analytical and approximate variants.
NASA Astrophysics Data System (ADS)
Zarlenga, A.; Janković, I.; Fiori, A.; Dagan, G.
2018-03-01
Uniform mean flow takes place in a 3-D heterogeneous formation with normally distributed hydraulic log-conductivity Y = ln K. The aim of the study is to derive the dependence of the horizontal (Kefh) and vertical (Kefv) effective conductivities on the structural parameters of the hydraulic conductivity, and to investigate the impact of departure from multi-Gaussianity on Kef, by numerical simulations of flow in formations that share the same pdf and covariance of Y but differ in the connectivity of classes of Y. The main result is that, for the extreme models of connected and disconnected high-Y zones, the ratio between the effective conductivities in isotropic media is much smaller than in 2-D. The dependence of Kefh and Kefv upon the log-conductivity variance and the anisotropy ratio is compared with existing approximations (first-order, the Landau-Matheron conjecture, the self-consistent approximation). Besides the theoretical interest, the results offer a basis for empirical relationships to be used in applications.
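For the isotropic 3-D case, two of the approximations named above have simple closed forms (the anisotropic expressions used in the paper are more involved); a few lines make the comparison concrete:

```python
import math

# 3-D isotropic effective conductivity: K_G is the geometric mean of K and
# s2 the variance of Y = ln K. First-order: K_G * (1 + s2/6); Landau-Matheron
# conjecture: K_G * exp(s2/6). They agree to first order in s2.
def kef_first_order(kg, s2):
    return kg * (1.0 + s2 / 6.0)

def kef_landau_matheron(kg, s2):
    return kg * math.exp(s2 / 6.0)

kg, s2 = 1.0, 0.5          # illustrative values, not from the paper
fo = kef_first_order(kg, s2)
lm = kef_landau_matheron(kg, s2)
```

The exponential conjecture always exceeds the first-order result, with the gap growing with the log-conductivity variance, which is why the comparison matters for strongly heterogeneous media.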
Anharmonic vibrational spectra and mode-mode couplings analysis of 2-aminopyridine
NASA Astrophysics Data System (ADS)
Faizan, Mohd; Alam, Mohammad Jane; Afroz, Ziya; Bhat, Sheeraz Ahmad; Ahmad, Shabbir
2018-01-01
Vibrational spectra of 2-aminopyridine (2AP) have been analyzed using vibrational self-consistent field theory (VSCF), correlation-corrected vibrational self-consistent field theory (CC-VSCF) and second-order vibrational perturbation theory (VPT2) at the B3LYP/6-311G(d,p) level. The mode-mode couplings affect the vibrational frequencies and intensities. The coupling integrals between pairs of normal modes have been obtained within the two-mode-representation quartic force field (2MR-QFF) approximation. The overtone and combination bands in the FTIR spectrum are also assigned with the help of anharmonic VPT2 calculations. A statistical analysis of the deviations shows that the estimated anharmonic frequencies are closer to experiment than those from the harmonic approximation. Furthermore, an anharmonic correction has also been carried out for the dimeric structure of 2AP. The fundamental vibrational bands have been assigned on the basis of the potential energy distribution (PED) and visual inspection of the animated modes. Other important molecular properties, such as the frontier molecular orbitals and the molecular electrostatic potential map, have also been analyzed.
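The effect of the mode-mode couplings on the band positions can be seen in the standard VPT2 expression for the fundamentals, ν_i = ω_i + 2x_ii + (1/2) Σ_{j≠i} x_ij, where the off-diagonal anharmonicity constants x_ij carry the pairwise coupling. A minimal sketch with hypothetical harmonic frequencies and constants (not the 2AP values):

```python
def vpt2_fundamentals(omega, x):
    """Fundamental band positions (cm^-1) from harmonic frequencies
    omega[i] and VPT2 anharmonicity constants x[i][j] (symmetric):
        nu_i = omega_i + 2 * x_ii + 0.5 * sum_{j != i} x_ij
    The x_ii term is the diagonal anharmonic shift; the x_ij sum is
    the mode-mode coupling contribution."""
    n = len(omega)
    return [omega[i] + 2.0 * x[i][i]
            + 0.5 * sum(x[i][j] for j in range(n) if j != i)
            for i in range(n)]
```

With all x_ij set to zero the harmonic frequencies are recovered, which is why the statistical comparison against experiment isolates the value of the anharmonic treatment.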
NASA Astrophysics Data System (ADS)
Burguet, M.
2012-04-01
M. Burguet (1), E.V. Taguas (2), J.A. Gómez (1). (1) Institute for Sustainable Agriculture (IAS-CSIC), Av. Menéndez Pidal s/n, Campus Alameda del Obispo, Apartado 4084, 14080 Córdoba. (2) Department of Rural Engineering, University of Córdoba, 14014 Córdoba. Olive groves located in mountainous areas with steep slopes in the south of Spain have been identified as a major source of sediments in the region, contributing to diffuse pollution of surface water and causing major damage to roads and reservoirs. The objective of this study is to evaluate different calibration approaches for a distributed water erosion model in a 6.7 ha olive-grove watershed with soil management based on tillage and herbicide in Setenil (Cádiz). The model chosen was SEDD (Ferro and Porto, 2000), which was calibrated using rainfall, runoff and soil erosion data measured in the same basin over a five-year series, following the original methodology proposed by its creators. It was compared with the modelling approach presented by Taguas et al. (2011), which considers the possibility of a binomial distribution for its main parameter, the coefficient β. In both cases the calibration of the model assumes a constant C factor, which is not the case in olive orchards (Gómez et al., 2003). In a second stage, the calibration of the model was repeated using a C factor that varies with ground cover and soil moisture over the season. The results indicate that the coefficient β, which determines the travel time within each sub-basin, follows a distribution far from the normal distribution suggested by Ferro and Porto (2000). This is similar to the result obtained by Taguas et al. (2011) in another olive-grove basin. In this case, the deviation of the model's key parameter β from a normal distribution cannot be explained by the evolution of the ground cover.
The model also shows little predictive power because of its inability to capture the two major events that caused the greatest soil loss among the 97 events measured. These results suggest that progress must be made in the calibration of the model, based on estimates of β characteristic of the basin that do not depend on approximating its distribution by a normal distribution, and that include the impact of soil management over the season.
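In the SEDD model referred to above, each morphological unit delivers only a fraction SDR_i = exp(−β t_i) of its gross erosion to the outlet, with t_i the travel time; this is why the distribution of β controls the predicted sediment yield. A minimal sketch with hypothetical unit data (not the Setenil measurements):

```python
import math

def sedd_yield(units, beta):
    """Basin sediment yield (t) under the SEDD scheme: each
    morphological unit i delivers a fraction SDR_i = exp(-beta * t_i)
    of its gross erosion, t_i being its travel time to the outlet.

    units: iterable of (travel_time, gross_erosion_t) pairs."""
    return sum(e * math.exp(-beta * t) for t, e in units)
```

For β = 0 the full gross erosion reaches the outlet, and the yield decreases monotonically as β grows, so a mis-specified β distribution translates directly into a biased yield.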
ENDF/B-VII.0 Data Testing Using 1,172 Critical Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechaty, E F; Cullen, D E
2007-10-01
In order to test the ENDF/B-VII.0 neutron data library [1], 1,172 critical assemblies from [2] have been calculated using the Monte Carlo transport code TART [3]. TART's 'best' physics was used for all of these calculations; this included continuous-energy cross sections, delayed neutrons with their own spectrum, which is softer than that of prompt neutrons, unresolved resonance region self-shielding, and thermal scattering (free-atom for all materials, plus thermal scattering law data S(α,β) when available). In this first pass through the assemblies, the objective was to 'quickly' test the validity of the ENDF/B-VII.0 data [1], the assembly models as defined in [2] and coded for use with TART, and TART's physics treatment [3] of these assemblies. With TART we have the option of running criticality problems until K-eff has been calculated to an acceptable input accuracy. In order to 'quickly' calculate all of these assemblies, K-eff was calculated in each case to +/- 0.002. For these calculations the assemblies were divided into ten types based on fuel (mixed, Pu239, U233, U235) and median fission energy (Fast, Midi, Slow). A table is provided that shows a summary of these results. This is followed by details for every assembly, and statistical information about the distribution of K-eff for each type of assembly. After a review of these results to eliminate any obvious errors in ENDF/B data, assembly models, or TART physics, all assemblies will be run again to a higher precision. Only after this second run is finished will we have highly precise results. Until then the results presented here should only be interpreted as approximate values of K-eff with a standard deviation of +/- 0.002; for such a large number of assemblies we expect the results to be approximately normally distributed, with a spread out to several times the standard deviation; see the calculated statistical distributions and their comparisons to a normal distribution.
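One simple way to carry out the normality comparison described above is to standardize the calculated-minus-benchmark K-eff deviations by the stated +/- 0.002 uncertainty and check the usual normal coverage fractions. A minimal sketch with synthetic data (not the TART results):

```python
import random

def keff_zscores(calc, expected, sigma=0.002):
    """Standardized deviations (C - E) / sigma between computed K-eff
    values and their benchmark (expected) values."""
    return [(c - e) / sigma for c, e in zip(calc, expected)]

def fraction_within(z, k=1.0):
    """Fraction of standardized deviations within +/- k. For a normal
    distribution ~68% fall within 1 sigma and ~99.7% within 3 sigma."""
    return sum(1 for v in z if abs(v) <= k) / len(z)
```

Systematic departures from the expected coverage fractions, or a heavy tail beyond a few sigma, flag the obvious errors in data, models, or physics that the first pass is meant to catch.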
NASA Astrophysics Data System (ADS)
Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; König, A.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Strauss, J.; Waltenberger, W.; Wulz, C.-E.; Dvornikov, O.; Makarenko, V.; Mossolov, V.; Suarez Gonzalez, J.; Zykunov, V.; Shumeiko, N.; Alderweireldt, S.; De Wolf, E. A.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Lowette, S.; Moortgat, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Salva, S.; Schöfbeck, R.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Jafari, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Chagas, E. Belchior Batista Das; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; De Oliveira Martins, C.; De Souza, S. Fonseca; Guativa, L. M. 
Huertas; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Torres Da Silva De Araujo, F.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Cheng, T.; Jiang, C. H.; Leggat, D.; Liu, Z.; Romeo, F.; Ruan, M.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Susa, T.; Ather, M. W.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Ellithi Kamel, A.; Mahmoud, M. A.; Radi, A.; Kadastik, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Favaro, C.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Abdulsalam, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Miné, P.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sirois, Y.; Stahl Leiton, A. G.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Zghiche, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Bihan, A.-C. Le; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fay, J.; Finco, L.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Khvedelidze, A.; Lomidze, D.; Autermann, C.; Beranek, S.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Verlage, T.; Albert, A.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Olschewski, M.; Padeken, K.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bin Anuar, A. 
A.; Borras, K.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Lenz, T.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wissing, C.; Zenaiev, O.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hoffmann, M.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Kurz, S.; Lapsien, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sonneveld, J.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baur, S.; Baus, C.; Berger, J.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Fink, S.; Freund, B.; Friese, R.; Giffels, M.; Gilbert, A.; Goldenzweig, P.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Kassel, F.; Katkov, I.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. 
A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Kousouris, K.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Filipovic, N.; Pasztor, G.; Bencze, G.; Hajdu, C.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Komaragiri, J. R.; Bahinipati, S.; Bhowmik, S.; Choudhury, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Kumari, P.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, R.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Kole, G.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sur, N.; Sutar, B.; Banerjee, S.; Dewanjee, R. K.; Ganguly, S.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. 
M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Nardo, G.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Antunes De Oliveira, A. Carvalho; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, U.; Gonella, F.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. 
T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Torassa, E.; Ventura, S.; Zanetti, M.; Zotto, P.; Braghieri, A.; Fallavollita, F.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Mantovani, G.; Mariani, V.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Del Re, D.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, J.; Lee, S.; Lee, S. W.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Brochero Cifuentes, J. A.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Lee, H.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. H.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. 
S.; Choi, Y.; Goh, J.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Lopez-Fernandez, R.; Magaña Villalba, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Carpinteyro, S.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Pyskir, A.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Calpas, B.; Di Francesco, A.; Faccioli, P.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. 
V.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Chtchipounov, L.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Murzin, V.; Oreshkin, V.; Sulimov, V.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Danilov, M.; Popova, E.; Rusinov, V.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Ershov, A.; Klyukhin, V.; Korneeva, N.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Perfilov, M.; Savrin, V.; Volkov, P.; Blinov, V.; Skovpen, Y.; Shtol, D.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. 
R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Suárez Andrés, I.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Curras, E.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Baillon, P.; Ball, A. H.; Barney, D.; Bloch, P.; Bocci, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chen, Y.; Cimmino, A.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Di Marco, E.; Dobson, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fartoukh, S.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Girone, M.; Glege, F.; Gulhan, D.; Gundacker, S.; Guthoff, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Morovic, S.; Mulders, M.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Sauvan, J. B.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Sphicas, P.; Steggemann, J.; Stoye, M.; Takahashi, Y.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Veres, G. I.; Verweij, M.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. 
A.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; De Cosa, A.; Donato, S.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Seitz, C.; Yang, Y.; Zucchetta, A.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Paganis, E.; Psallidas, A.; Tsai, J. F.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Boran, F.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. 
J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Dunne, P.; Elwood, A.; Futyan, D.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Nash, J.; Nikitenko, A.; Pela, J.; Penning, B.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Jesus, O.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Spencer, E.; Syarif, R.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Squires, M.; Stolp, D.; Tos, K.; Tripathi, M.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Weber, M.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wei, H.; Wimpenny, S.; Yates, B. 
R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Holzner, A.; Klein, D.; Krutelyov, V.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mullin, S. D.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bendavid, J.; Bornheim, A.; Bunn, J.; Duarte, J.; Lawhorn, J. M.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Winn, D.; Abdullin, S.; Albrow, M.; Apollinari, G.; Apresyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Cremonesi, M.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hare, D.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. 
J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Wu, Y.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Rank, D.; Shchutska, L.; Sperka, D.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Bein, S.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Perry, T.; Prosper, H.; Santra, A.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Trauger, H.; Varelas, N.; Wang, H.; Wu, Z.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Forthomme, L.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. 
G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Apyan, A.; Azzolini, V.; Barbieri, R.; Baty, A.; Bi, R.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Krajczar, K.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Malta Rodrigues, A.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Rupprecht, N.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. 
W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Lange, D.; Luo, J.; Marlow, D.; Medvedeva, T.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Svyatkovskiy, A.; Tully, C.; Malik, S.; Barker, A.; Barnes, V. E.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Schulte, J. F.; Shi, X.; Sun, J.; Wang, F.; Xie, W.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. T.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Juska, E.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. 
E.; Sturdy, J.; Zaleski, S.; Belknap, D. A.; Buchanan, J.; Caillol, C.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Pierro, G. A.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.
2017-07-01
Normalized double-differential cross sections for top quark pair (t\bar{t}) production are measured in pp collisions at a centre-of-mass energy of 8 TeV with the CMS experiment at the LHC. The analyzed data correspond to an integrated luminosity of 19.7 fb^{-1}. The measurement is performed in the dilepton e^{±}μ^{∓} final state. The t\bar{t} cross section is determined as a function of various pairs of observables characterizing the kinematics of the top quark and the t\bar{t} system. The data are compared to calculations using perturbative quantum chromodynamics at next-to-leading and approximate next-to-next-to-leading orders. They are also compared to predictions of Monte Carlo event generators that complement fixed-order computations with parton showers, hadronization, and multiple-parton interactions. Overall agreement with the predictions is observed, and it improves when the latest global sets of proton parton distribution functions are used. Including the measured t\bar{t} cross sections in a fit of parametrized parton distribution functions is shown to have a significant impact on the gluon distribution.
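The normalization described in this abstract (dividing out the total cross section and the bin widths) can be sketched with toy numbers. Everything below — the function name, the yields, and the bin edges — is an illustrative invention for this sketch, not taken from the measurement:

```python
# Illustrative only: toy yields and bin edges, not CMS data. Shows how a
# normalized double-differential value (1/sigma) d^2 sigma / (dx dy) is
# built from unfolded event counts N[i][j] and the bin widths.
def normalized_double_differential(counts, x_edges, y_edges):
    wx = [x_edges[i + 1] - x_edges[i] for i in range(len(x_edges) - 1)]
    wy = [y_edges[j + 1] - y_edges[j] for j in range(len(y_edges) - 1)]
    total = sum(sum(row) for row in counts)  # plays the role of the total cross section
    # each bin: count normalized to unit integral, divided by the bin area
    return [[counts[i][j] / (total * wx[i] * wy[j])
             for j in range(len(wy))] for i in range(len(wx))]

counts = [[40.0, 25.0], [20.0, 15.0]]  # toy unfolded yields
x_edges = [0.0, 100.0, 300.0]          # e.g. top quark pT bins in GeV (assumed)
y_edges = [0.0, 1.2, 2.4]              # e.g. |rapidity| bins (assumed)
xs_norm = normalized_double_differential(counts, x_edges, y_edges)
```

By construction, summing the returned values weighted by the bin areas recovers exactly 1, which is the defining property of a normalized cross section.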
Vlasov Treatment of Coherent Synchrotron Radiation from Arbitrary Planar Orbits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warnock, R
2004-09-22
We study the influence of coherent synchrotron radiation (CSR) on particle bunches traveling on arbitrary planar orbits between parallel conducting plates. The plates represent shielding due to the vacuum chamber. The vertical distribution of charge is an arbitrary fixed function. Our goal is to follow the time evolution of the phase space distribution by solving the Vlasov-Maxwell equations in the time domain. This provides simulations with lower numerical noise than the macroparticle method, and allows one to study such issues as emittance degradation and microbunching due to CSR in bunch compressors. The fields excited by the bunch are computed in the laboratory frame from a new formula that leads to much simpler computations than the usual retarded potentials or Lienard-Wiechert potentials. The nonlinear Vlasov equation, formulated in the interaction picture, is integrated in the beam frame by approximating the Perron-Frobenius operator. The distribution function is represented by B-splines, in a scheme preserving positivity and normalization of the distribution. For application to a chicane bunch compressor we take steps to deal with energy chirp, an initial near-perfect correlation of energy with position in the bunch.
Choi, Yun Ho; Yoo, Sung Jin
2017-03-28
A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P 500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P 500 index data over the usual normal model. PMID:20730043
ERIC Educational Resources Information Center
Meyer, J. Patrick; Seaman, Michael A.
2013-01-01
The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
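The log-normal-under-Poisson setup described above is easy to sketch numerically. The following Python snippet uses invented parameters (not the paper's measured data) to draw log-normal per-cell activities, observe them through Poisson counting statistics, and fit a log-normal to the resulting track counts:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-cell activity: log-normal with invented parameters.
activity = rng.lognormal(mean=1.5, sigma=0.8, size=5000)  # mean tracks per cell
# Autoradiography counts discrete tracks, so the observed data are
# Poisson draws around each cell's underlying activity.
tracks = rng.poisson(activity)

# Fit a log-normal to the nonzero counts; floc=0 pins the location
# parameter at zero, the usual convention for strictly positive data.
shape, loc, scale = stats.lognorm.fit(tracks[tracks > 0], floc=0)
print(f"fitted sigma = {shape:.2f}, fitted median = {scale:.2f}")
```

Comparing the fitted log-normal against a pure Poisson model (e.g. by likelihood) then indicates whether counting statistics are distorting the underlying activity distribution, which is the concern the abstract raises.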
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Stochastic Models for Laser Propagation in Atmospheric Turbulence.
NASA Astrophysics Data System (ADS)
Leland, Robert Patton
In this dissertation, stochastic models for laser propagation in atmospheric turbulence are considered. A review of the existing literature on laser propagation in the atmosphere and white noise theory is presented, with a view toward relating the white noise integral and Ito integral approaches. The laser beam intensity is considered as the solution to a random Schroedinger equation, or forward scattering equation. This model is formulated in a Hilbert space context as an abstract bilinear system with a multiplicative white noise input, as in the literature. The model is also formulated in the Banach space of Fresnel class functions to allow for the plane wave case and the application of path integrals. Approximate solutions to the Schroedinger equation of the Trotter-Kato product form are shown to converge for each white noise sample path. The product forms are shown to be physical random variables, allowing an Ito integral representation. The corresponding Ito integrals are shown to converge in mean square, providing a white noise basis for the Stratonovich correction term associated with this equation. Product form solutions for Ornstein-Uhlenbeck process inputs were shown to converge in mean square as the input bandwidth was expanded. A digital simulation of laser propagation in strong turbulence was used to study properties of the beam. Empirical distributions for the irradiance function were estimated from simulated data, and the log-normal and Rice-Nakagami distributions predicted by the classical perturbation methods were seen to be inadequate. A gamma distribution fit the simulated irradiance distribution well in the vicinity of the boresight. Statistics of the beam were seen to converge rapidly as the bandwidth of an Ornstein-Uhlenbeck process was expanded to its white noise limit. Individual trajectories of the beam were presented to illustrate the distortion and bending of the beam due to turbulence.
Feynman path integrals were used to calculate an approximate expression for the mean of the beam intensity without using the Markov, or white noise, assumption, and to relate local variations in the turbulence field to the behavior of the beam by means of two approximations.
Financial derivative pricing under probability operator via Esscher transformation
NASA Astrophysics Data System (ADS)
Achi, Godswill U.
2014-10-01
The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, the Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution. It was later studied [6] by comparing standard Black-Scholes contingent pricing with distortion-based contingent pricing. So, in this paper, we aim at using distortion operators by the Cauchy distribution under a simple transformation to price contingent claims. We also show that we can recover the Black-Scholes formula using the distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, the price mechanism based on the characteristic Esscher measure can generate approximate arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u) evaluated at the characteristic exponent φ_X(u) of X_t we recover the Black-Scholes formula for financial derivative prices.
Analysis of life tables with grouping and withdrawals.
Lindley, D V
1979-09-01
A number of individuals is observed at the beginning of a period. At the end of the period the number surviving, the number who have died, and the number who have withdrawn are noted. From these three numbers it is required to estimate the death rate for the period. All relevant quantities are supposed independent and identically distributed for the individuals. The likelihood is calculated and found to depend on two parameters, other than the death rate, and to be unidentifiable, so that no consistent estimators exist. For large numbers, the posterior distribution of the death rate is approximated by a normal distribution whose mean is the root of a quadratic equation and whose variance is the sum of two terms; the first is proportional to the reciprocal of the number of individuals, as usually happens with a consistent estimator; the second does not tend to zero and depends on initial opinions about one of the nuisance parameters. The paper is a simple exercise in the routine use of coherent, Bayesian methodology. Numerical calculations illustrate the results.
Discrete geometric analysis of message passing algorithm on graphs
NASA Astrophysics Data System (ADS)
Watanabe, Yusuke
2010-04-01
We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, exact computation is intractable in general because its cost grows rapidly with the size of the graph. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e., has no cycles, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and possibly exhibits oscillatory and non-convergent behaviors. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems, including the (non)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
Analytic saddlepoint approximation for ionization energy loss distributions
Sjue, Sky K. L.; George, Jr., Richard Neal; Mathews, David Gregory
2017-07-27
Here, we present a saddlepoint approximation for ionization energy loss distributions, valid for arbitrary relativistic velocities of the incident particle 0 < v/c < 1, provided that ionizing collisions are still the dominant energy loss mechanism. We derive a closed form solution closely related to Moyal’s distribution. This distribution is intended for use in simulations with relatively low computational overhead. The approximation generally reproduces the Vavilov most probable energy loss and full width at half maximum to better than 1% and 10%, respectively, with significantly better agreement as Vavilov’s κ approaches 1.
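Since SciPy ships Moyal's distribution as `scipy.stats.moyal`, the two figures of merit quoted above (most probable energy loss and FWHM) can be computed directly for a Moyal model; the location and scale below are invented for illustration, not the paper's closed-form parameters:

```python
from scipy import stats
from scipy.optimize import brentq

loc, scale = 2.0, 0.3            # MeV; hypothetical Moyal parameters
loss = stats.moyal(loc=loc, scale=scale)

# The Moyal pdf peaks at x = loc, so that is the most probable loss.
peak = loss.pdf(loc)

# FWHM: locate the two points where the pdf falls to half its peak value.
left = brentq(lambda x: loss.pdf(x) - peak / 2, loc - 10 * scale, loc)
right = brentq(lambda x: loss.pdf(x) - peak / 2, loc, loc + 10 * scale)
print(f"most probable loss = {loc:.2f} MeV, FWHM = {right - left:.3f} MeV")
```

For the standard Moyal shape the FWHM is a fixed multiple (about 3.59) of the scale parameter, which is what makes the distribution attractive for low-overhead simulation.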
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu Yiping; Bolton, Adam S.; Dawson, Kyle S.
2012-04-15
We present a hierarchical Bayesian determination of the velocity-dispersion function of approximately 430,000 massive luminous red galaxies observed at relatively low spectroscopic signal-to-noise ratio (S/N ≈ 3-5 per 69 km s^-1) by the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey III. We marginalize over spectroscopic redshift errors, and use the full velocity-dispersion likelihood function for each galaxy to make a self-consistent determination of the velocity-dispersion distribution parameters as a function of absolute magnitude and redshift, correcting as well for the effects of broadband magnitude errors on our binning. Parameterizing the distribution at each point in the luminosity-redshift plane with a log-normal form, we detect significant evolution in the width of the distribution toward higher intrinsic scatter at higher redshifts. Using a subset of deep re-observations of BOSS galaxies, we demonstrate that our distribution-parameter estimates are unbiased regardless of spectroscopic S/N. We also show through simulation that our method introduces no systematic parameter bias with redshift. We highlight the advantage of the hierarchical Bayesian method over frequentist 'stacking' of spectra, and illustrate how our measured distribution parameters can be adopted as informative priors for velocity-dispersion measurements from individual noisy spectra.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
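The multiplicative-process origin of log-normality described above is easy to verify numerically. This sketch (the factor distribution is chosen arbitrarily; any positive distribution with finite log-variance would do) multiplies many i.i.d. positive factors and checks that log X is close to Gaussian:

```python
import numpy as np

rng = np.random.default_rng(2)

# Multiplicative process: each sample is the product of many i.i.d.
# positive growth factors.
factors = rng.uniform(0.9, 1.1, size=(10_000, 500))
X = factors.prod(axis=1)

# CLT applied to log X: the sum of i.i.d. logs is approximately normal,
# so X itself is approximately log-normal. Near-zero skewness of log X
# is a quick symptom of that.
logX = np.log(X)
skew = float(((logX - logX.mean()) ** 3).mean() / logX.std() ** 3)
print(f"sample skewness of log X: {skew:.3f}")
```

Replacing the finite-variance factors with heavy-tailed ones is the route by which the Lévy (rather than normal) limit, and hence a log-Lévy distribution, would emerge.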
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions.
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
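The selection rule described in this abstract, scanning the Box-Cox parameter and keeping the λ whose transformed values maximize the Shapiro-Wilk p-value, can be sketched as follows (the SUV values are synthetic stand-ins, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
suv = rng.lognormal(mean=1.0, sigma=0.5, size=60)  # mock skewed SUVmax values

# Grid-search lambda; for each candidate, Box-Cox transform the data and
# score normality with the Shapiro-Wilk test, keeping the best p-value.
lambdas = np.linspace(-2.0, 2.0, 401)
pvals = [stats.shapiro(stats.boxcox(suv, lmbda=lam)).pvalue for lam in lambdas]
best = float(lambdas[int(np.argmax(pvals))])
print(f"optimal lambda = {best:.2f} (Shapiro-Wilk p = {max(pvals):.3f})")
```

For truly log-normal data the optimal λ sits near 0, which corresponds to the log transformation; increasingly skewed post-treatment distributions are what push the optimal λ away from that value, as the study reports.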
Computer-assisted bladder cancer grading: α-shapes for color space decomposition
NASA Astrophysics Data System (ADS)
Niazi, M. K. K.; Parwani, Anil V.; Gurcan, Metin N.
2016-03-01
According to the American Cancer Society, around 74,000 new cases of bladder cancer are expected during 2015 in the US. To facilitate bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases that works on hematoxylin and eosin (H&E)-stained images of the bladder. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of the transitional epithelium are quantified, and sizes of nuclei in the transitional epithelium are measured. We also approximate the "nuclear to cytoplasmic ratio" by computing the ratio of the average shortest distance between transitional epithelium and nuclei to average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.
Modeling intracavitary heating of the uterus by means of a balloon catheter
NASA Astrophysics Data System (ADS)
Olsrud, Johan; Friberg, Britt; Rioseco, Juan; Ahlgren, Mats; Persson, Bertil R. R.
1999-01-01
Balloon thermal endometrial destruction (TED) is a recently developed method to treat heavy menstrual bleeding (menorrhagia). Numerical simulations of this treatment by use of the finite element method were performed. The mechanical deformation and the resulting stress distribution when a balloon catheter is expanded within the uterine cavity were estimated from structural analysis. Thermal analysis was then performed to estimate the depth of tissue coagulation (temperature > 55 °C) in the uterus during TED. The estimated depth of coagulation, after 30 min of heating with an intracavity temperature of 75 °C, was approximately 9 mm when blood flow was disregarded. With uniform normal blood flow, the depth of coagulation decreased to 3-4 mm. Simulations with varying intracavity temperatures and blood flow rates showed that both parameters should be of major importance to the depth of coagulation. The influence of blood flow was less when the pressure due to the balloon was also considered (5-6 mm coagulation depth with normal blood flow).
Tavakoli, Naser; Minaiyan, Mohsen; Tabbakhian, Majid; Pendar, Yaqub
2014-01-01
Repaglinide, an oral antidiabetic agent, has a rapid onset of action and a short half-life of approximately 1 h. Designing a controlled-release dosage form of the drug is required to maintain its therapeutic blood level and to eliminate its adverse effects, particularly hypoglycaemia. Repaglinide sustained-release matrix pellets consisting of Avicel, lactose and different polymers were prepared using an extrusion-spheronisation method. The effects of different formulation components on in vitro drug release were evaluated using the USP paddle apparatus for 12 h in phosphate buffer. The optimised formulation was orally administered to normal and STZ-induced diabetic rats. Most pellet formulations had acceptable physical properties with regard to size distribution, flowability and friability. Repaglinide pellets comprising Avicel 50%, lactose 47% and SLS 1% released 94% of their drug content after 12 h. The optimised formulation was able to decrease the blood glucose level in normal and diabetic rats throughout 8-12 h.
McKone, Elinor; Wan, Lulu; Robbins, Rachel; Crookes, Kate; Liu, Jia
2017-07-01
The Cambridge Face Memory Test (CFMT) is widely accepted as providing a valid and reliable tool in diagnosing prosopagnosia (inability to recognize people's faces). Previously, large-sample norms have been available only for Caucasian-face versions, suitable for diagnosis in Caucasian observers. These are invalid for observers of different races due to potentially severe other-race effects. Here, we provide large-sample norms (N = 306) for East Asian observers on an Asian-face version (CFMT-Chinese). We also demonstrate methodological suitability of the CFMT-Chinese for prosopagnosia diagnosis (high internal reliability, approximately normal distribution, norm-score range sufficiently far above chance). Additional findings were a female advantage on mean performance, plus a difference between participants living in the East (China) or the West (international students, second-generation children of immigrants), which we suggest might reflect personality differences associated with willingness to emigrate. Finally, we demonstrate suitability of the CFMT-Chinese for individual differences studies that use correlations within the normal range.
Normalization and Implementation of Three Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.
2016-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
2009-11-01
is estimated using the Gaussian kernel function:

c'(w, i) = Σ_{j=1}^{N} c(w, j) exp[−(i − j)² / (2σ²)]   (2)

where i and j are absolute positions of the...corresponding terms in the document, and N is the length of the document; c(w, j) is the actual count of term w at position j. The PLM P(·|D, i) needs to...probability of relevance well. The distribution of relevance can be approximated as follows:

p(i|θ_rel) = Σ_j δ(Q_j, i) / (Σ_i Σ_j δ(Q_j, i))   (10)
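The propagated count in Eq. (2) amounts to smearing each occurrence of a term over nearby positions with a Gaussian kernel. This toy Python sketch (the document and σ are invented for illustration) computes c'(w, i) for every position at once:

```python
import numpy as np

doc = "the cat sat on the mat near the cat".split()  # toy document
N, sigma = len(doc), 2.0
positions = np.arange(N)

def propagated_count(word):
    """c'(w, i) = sum_j c(w, j) * exp(-(i - j)^2 / (2 sigma^2))"""
    hits = np.array([1.0 if w == word else 0.0 for w in doc])  # c(w, j)
    kernel = np.exp(-(positions[:, None] - positions[None, :]) ** 2
                    / (2 * sigma ** 2))
    return kernel @ hits

print(np.round(propagated_count("cat"), 2))
```

Positions adjacent to an actual occurrence of the term receive counts close to 1, and the counts decay smoothly with distance, which is what lets the positional language model score relevance at every position in the document.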
Optimal feedback control of turbulent channel flow
NASA Technical Reports Server (NTRS)
Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz
1993-01-01
Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional' which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional at some time shortly in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.
Aquaporin-0 Targets Interlocking Domains to Control the Integrity and Transparency of the Eye Lens
Lo, Woo-Kuen; Biswas, Sondip K.; Brako, Lawrence; Shiels, Alan; Gu, Sumin; Jiang, Jean X.
2014-01-01
Purpose. Lens fiber cell membranes contain aquaporin-0 (AQP0), which constitutes approximately 50% of the total fiber cell membrane proteins and has a dual function as a water channel protein and an adhesion molecule. Fiber cell membranes also develop an elaborate interlocking system that is required for maintaining structural order, stability, and lens transparency. Herein, we used an AQP0-deficient mouse model to investigate an unconventional adhesion role of AQP0 in maintaining a normal structure of lens interlocking protrusions. Methods. The loss of AQP0 in AQP0−/− lens fibers was verified by Western blot and immunofluorescence analyses. Changes in membrane surface structures of wild-type and AQP0−/− lenses at age 3 to 12 weeks were examined with scanning electron microscopy. Preferential distribution of AQP0 in wild-type fiber cell membranes was analyzed with immunofluorescence and immunogold labeling using freeze-fracturing transmission electron microscopy. Results. Interlocking protrusions in young differentiating fiber cells developed normally but showed minor abnormalities at approximately 50 μm deep in the absence of AQP0 in all ages studied. Strikingly, protrusions in maturing fiber cells specifically underwent uncontrolled elongation, deformation, and fragmentation, while cells still retained their overall shape. Later in the process, these changes eventually resulted in fiber cell separation, breakdown, and cataract formation in the lens core. Immunolabeling at the light microscopy and transmission electron microscopy levels demonstrated that AQP0 was particularly enriched in interlocking protrusions in wild-type lenses. Conclusions. This study suggests that AQP0 exerts its primary adhesion or suppression role specifically to maintain the normal structure of interlocking protrusions that is critical to the integrity and transparency of the lens. PMID:24458158
Modeling the Redshift Evolution of the Normal Galaxy X-Ray Luminosity Function
NASA Technical Reports Server (NTRS)
Tremmel, M.; Fragos, T.; Lehmer, B. D.; Tzanavaris, P.; Belczynski, K.; Kalogera, V.; Basu-Zych, A. R.; Farr, W. M.; Hornschemeier, A.; Jenkins, L.;
2013-01-01
Emission from X-ray binaries (XRBs) is a major component of the total X-ray luminosity of normal galaxies, so X-ray studies of high-redshift galaxies allow us to probe the formation and evolution of XRBs on very long timescales (approximately 10 Gyr). In this paper, we present results from large-scale population synthesis models of binary populations in galaxies from z = 0 to approximately 20. We use as input into our modeling the Millennium II Cosmological Simulation and the updated semi-analytic galaxy catalog by Guo et al. to self-consistently account for the star formation history (SFH) and metallicity evolution of each galaxy. We run a grid of 192 models, varying all the parameters known from previous studies to affect the evolution of XRBs. We use our models and observationally derived prescriptions for hot gas emission to create theoretical galaxy X-ray luminosity functions (XLFs) for several redshift bins. Models with low common envelope efficiencies, a 50% twins mass ratio distribution, a steeper initial mass function exponent, and high stellar wind mass-loss rates best match observational results from Tzanavaris & Georgantopoulos, though they significantly underproduce bright early-type and very bright (L_X > 10^41) late-type galaxies. These discrepancies are likely caused by uncertainties in hot gas emission and SFHs, active galactic nucleus contamination, and a lack of dynamically formed low-mass XRBs. In our highest likelihood models, we find that hot gas emission dominates the emission for most bright galaxies. We also find that the evolution of the normal galaxy X-ray luminosity density out to z = 4 is driven largely by XRBs in galaxies with X-ray luminosities between 10^40 and 10^41 erg s^-1.
NASA Technical Reports Server (NTRS)
Cole, G. L.; Willoh, R. G.
1975-01-01
A linearized mathematical analysis is presented for determining the response of normal shock position and subsonic duct pressures to flow-field perturbations upstream of the normal shock in mixed-compression supersonic inlets. The inlet duct cross-sectional area variation is approximated by constant-area sections; this approximation results in one-dimensional wave equations. A movable normal shock separates the supersonic and subsonic flow regions, and a choked exit is assumed for the inlet exit condition. The analysis leads to a closed-form matrix solution for the shock position and pressure transfer functions. Analytical frequency response results are compared with experimental data and a method of characteristics solution.
NASA Technical Reports Server (NTRS)
Runckel, Jack F.; Hieser, Gerald
1961-01-01
An investigation has been conducted at the Langley 16-foot transonic tunnel to determine the loading characteristics of flap-type ailerons located at inboard, midspan, and outboard positions on a 45 deg. sweptback-wing-body combination. Aileron normal-force and hinge-moment data have been obtained at Mach numbers from 0.80 to 1.03, at angles of attack up to about 27 deg., and at aileron deflections between approximately -15 deg. and 15 deg. Results of the investigation indicate that the loading over the ailerons was established by the wing-flow characteristics, and the loading shapes were irregular in the transonic speed range. The spanwise location of the aileron had little effect on the slope of the curves of hinge-moment coefficient against aileron deflection, but the inboard aileron had the greatest slope of the curves of hinge-moment coefficient against angle of attack and the outboard aileron the least. Hinge-moment and aileron normal-force data taken with strain-gage instrumentation are compared with data obtained with pressure measurements.
Time-evolution of uniform momentum zones in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Laskari, Angeliki; Hearst, R. Jason; de Kat, Roeland; Ganapathisubramani, Bharathram
2016-11-01
Time-resolved planar particle image velocimetry (PIV) is used to analyse the organisation and evolution of uniform momentum zones (UMZs) in a turbulent boundary layer. Experiments were performed in a recirculating water tunnel on a streamwise-wall-normal plane extending approximately 0.5δ × 1.8δ, in x and y, respectively. In total 400,000 images were captured, and for each of the resulting velocity fields, local peaks in the probability density distribution of the streamwise velocity were detected, indicating the instantaneous presence of UMZs throughout the boundary layer. The main characteristics of these zones are outlined, specifically their velocity range and wall-normal extent. The variation of these characteristics with wall-normal distance and total number of zones is also discussed. Exploiting the time information available, time-scales of zones that have a substantial coherence in time are analysed, and results show that the zones' lifetime depends on both their momentum deficit level and the total number of zones present. Conditional averaging of the flow statistics seems to further indicate that a large number of zones is the result of a wall-dominant mechanism, while the opposite implies an outer-layer dominance.
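The pdf-peak detection step described in the abstract can be sketched in a few lines. This is a minimal illustration of the idea (histogram of instantaneous streamwise velocity, local maxima read as UMZ modal velocities), not the authors' actual processing chain; the bin count, peak criterion, and synthetic two-zone velocity field are all assumptions.

```python
import numpy as np

def detect_umz_modes(u, bins=50):
    """Local peaks in the histogram of streamwise velocity u; each
    peak is read as the modal velocity of a uniform momentum zone.
    The bin count and peak criterion are assumptions."""
    counts, edges = np.histogram(u, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [centers[i] for i in range(1, len(counts) - 1)
            if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]]

# Synthetic snapshot: two zones of nearly uniform momentum plus noise.
rng = np.random.default_rng(0)
u = np.concatenate([rng.normal(0.6, 0.02, 5000),
                    rng.normal(0.9, 0.02, 5000)])
modes = detect_umz_modes(u)
```

On real PIV fields, one would apply this per snapshot and track the recovered modal velocities in time, as the paper does with its 400,000 images.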
Spectroscopic characterization of collagen cross-links in bone
NASA Technical Reports Server (NTRS)
Paschalis, E. P.; Verdelis, K.; Doty, S. B.; Boskey, A. L.; Mendelsohn, R.; Yamauchi, M.
2001-01-01
Collagen is the most abundant protein of the organic matrix in mineralizing tissues. One of its most critical properties is its cross-linking pattern. The intermolecular cross-linking provides the fibrillar matrices with mechanical properties such as tensile strength and viscoelasticity. In this study, Fourier transform infrared (FTIR) spectroscopy and FTIR imaging (FTIRI) analyses were performed in a series of biochemically characterized samples including purified collagen cross-linked peptides, demineralized bovine bone collagen from animals of different ages, collagen from vitamin B6-deficient chick homogenized bone and their age- and sex-matched controls, and histologically stained thin sections from normal human iliac crest biopsy specimens. One region of the FTIR spectrum of particular interest (the amide I spectral region) was resolved into its underlying components. Of these components, the relative percent area ratio of two subbands at approximately 1660 cm(-1) and approximately 1690 cm(-1) was related to collagen cross-links that are abundant in mineralized tissues (i.e., pyridinoline [Pyr] and dehydrodihydroxylysinonorleucine [deH-DHLNL]). This study shows that it is feasible to monitor Pyr and DHLNL collagen cross-links spatial distribution in mineralized tissues. The spectroscopic parameter established in this study may be used in FTIRI analyses, thus enabling the calculation of relative Pyr/DHLNL amounts in thin (approximately 5 microm) calcified tissue sections with a spatial resolution of approximately 7 microm.
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng, it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without a preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without the pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2)) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
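The two-stage procedure examined in the paper can be simulated directly. The sketch below is a rough illustration under stated assumptions: it uses a hand-rolled Jarque-Bera-style statistic as the normality pretest (the literature discussed here uses GOF pretests such as Shapiro-Wilk) with hardcoded 5% critical values, and it estimates the conditional Type I error of the two-sided one-sample t-test among samples that pass the screen.

```python
import numpy as np

rng = np.random.default_rng(42)

T_CRIT = 2.262   # two-sided 5% critical value of Student's t, 9 df
JB_CRIT = 5.99   # asymptotic 5% critical value of chi-square, 2 df

def passes_normality_pretest(x):
    """Jarque-Bera-style GOF screen (an illustrative stand-in for the
    normality pretests studied in the paper)."""
    n = len(x)
    z = (x - x.mean()) / x.std()
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean()
    jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return jb <= JB_CRIT

def conditional_type1(draw, mu0, n=10, reps=4000):
    """Type I error of the one-sample t-test among samples that first
    pass the normality pretest; we test the true mean, so every
    rejection is a Type I error."""
    rej = passed = 0
    for _ in range(reps):
        x = draw(n)
        if not passes_normality_pretest(x):
            continue  # sample screened out by the GOF pretest
        passed += 1
        t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
        rej += abs(t) > T_CRIT
    return rej / passed

rate_norm = conditional_type1(lambda n: rng.normal(0.0, 1.0, n), mu0=0.0)
rate_exp = conditional_type1(lambda n: rng.exponential(1.0, n), mu0=1.0)
```

For normal data the conditional level stays near the nominal 5%; comparing `rate_exp` against it reproduces the kind of conditional-level distortion the paper dissects.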
Qayum, Naseer; Im, Jaehong; Stratford, Michael R; Bernhard, Eric J; McKenna, W Gillies; Muschel, Ruth J
2012-01-01
Because effective drug delivery is often limited by inadequate vasculature within the tumor, the ability to modulate the tumor microenvironment is one strategy that may achieve better drug distribution. We have previously shown that treatment of tumor-bearing mice with phosphoinositide-3 kinase (PI3K) inhibitors alters vascular structure in a manner analogous to vascular normalization and results in increased perfusion of the tumor. On the basis of that result, we asked whether inhibition of PI3K would improve chemotherapy delivery. Mice with xenografts of the cell line SQ20B bearing a hypoxia marker, or MMTV-neu transgenic mice with spontaneous breast tumors, were treated with the class I PI3K inhibitor GDC-0941. The tumor vasculature was evaluated by Doppler ultrasound and histology. The delivery of doxorubicin was assessed using whole-animal fluorescence, distribution on histologic sections, high-performance liquid chromatography on tumor lysates, and tumor growth delay. Treatment with GDC-0941 led to approximately three-fold increases in perfusion, substantially reduced hypoxia, and vascular normalization by histology. Significantly increased amounts of doxorubicin were delivered to the tumors, correlating with synergistic tumor growth delay. GDC-0941 itself had no effect on tumor growth. Inhibition of PI3K led to vascular normalization and improved delivery of a chemotherapeutic agent. This study highlights the importance of the microvascular effects of some novel oncogenic signaling inhibitors and the need to take those changes into account in the design of clinical trials, many of which use combinations of chemotherapeutic agents. © 2011 AACR.
Soulis, Johannes V; Fytanidis, Dimitrios K; Lampri, Olga P; Giannoglou, George D
2016-04-01
The temporal variation of the hemodynamic mechanical parameters during the cardiac pulse wave is considered an important atherogenic factor. Applying non-Newtonian blood molecular viscosity simulation is crucial for hemodynamic analysis. Understanding low density lipoprotein (LDL) distribution in relation to flow parameters will possibly spot the aorta regions prone to atherosclerosis. The biomechanical parameters tested were averaged wall shear stress (AWSS), oscillatory shear index (OSI) and relative residence time (RRT) in relation to the LDL concentration. Four non-Newtonian molecular viscosity models and the Newtonian one were tested for the normal human aorta under oscillating flow. The analysis was performed via computational fluid dynamics. All tested blood viscosity models yield a consistent pattern of the biomechanical parameters over the aorta. High OSI and low AWSS develop at the concave aorta regions. This is most noticeable in the downstream flow region of the left subclavian artery and at the concave ascending aorta. Concave aorta regions exhibit high RRT and elevated LDL. For the concave aorta site, the peak LDL value is 35.0% higher than its entrance value; for the convex site, it is 18.0% higher. High-LDL endothelium regions located at the concave aorta site are well predicted by high RRT. We are in favor of using the non-Newtonian power law model for analysis. It satisfactorily approximates the molecular viscosity, WSS, OSI, RRT and LDL distribution. Concave regions are mostly prone to atherosclerosis. The flow biomechanical factor RRT is a relatively useful tool for identifying the localization of the atheromatic plaques of the normal human aorta.
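The power-law viscosity model favored in the study has the simple form mu = k * gamma_dot**(n-1). A minimal sketch, with illustrative shear-thinning constants for blood rather than the coefficients used in the paper:

```python
def power_law_viscosity(shear_rate, k=0.035, n=0.6):
    """Apparent viscosity (Pa*s) under the power-law model
    mu = k * shear_rate**(n - 1). The consistency index k and
    exponent n are illustrative values for blood, not the
    study's fitted coefficients."""
    return k * shear_rate ** (n - 1.0)

# Shear-thinning behaviour: viscosity falls as shear rate rises.
mu_low = power_law_viscosity(1.0)     # low shear rate (1/s)
mu_high = power_law_viscosity(100.0)  # high shear rate (1/s)
```

In a CFD setting this function would be evaluated per cell from the local strain rate to update the momentum equations each iteration.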
ALTERED PHALANX FORCE DIRECTION DURING POWER GRIP FOLLOWING STROKE
Enders, Leah R.
2015-01-01
Many stroke survivors with severe impairment can grasp only with a power grip. Yet, little knowledge is available on altered power grip after stroke, other than reduced power grip strength. This study characterized stroke survivors’ static power grip during 100% and 50% maximum grip. Each phalanx force’s angular deviation from the normal direction and its contribution to total normal force were compared for 11 stroke survivors and 11 age-matched controls. Muscle activities and skin coefficient of friction (COF) were additionally compared for another 20 stroke and 13 age-matched control subjects. The main finding was that stroke survivors gripped with a 34% greater phalanx force angular deviation (19±2° compared to 14±1° for controls; p<.05). Stroke survivors’ phalanx force angular deviation was closer to the 23° threshold of slippage between the phalanx and grip surface, which may explain the increased likelihood of object dropping in stroke survivors. In addition, this altered phalanx force direction decreases normal grip force by tilting the force vector, indicating a partial role of phalanx force angular deviation in reduced grip strength post stroke. Greater phalanx force angular deviation may biomechanically result from more severe underactivation of stroke survivors’ first dorsal interosseous (FDI) and extensor digitorum communis (EDC) muscles compared to their flexor digitorum superficialis (FDS), or from somatosensory deficit. While stroke survivors’ maximum power grip strength was approximately half of the controls’, the distribution of their remaining strength over the fingers and phalanges did not differ, indicating evenly distributed grip force reduction over the entire hand. PMID:25795079
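The slippage interpretation can be made concrete with Coulomb friction: a contact slips once the force direction leaves the friction cone, i.e. once the angle from the surface normal exceeds arctan(COF). A minimal sketch with illustrative numbers (a 23° threshold corresponds to a COF of about tan 23° ≈ 0.42; the study measured COF separately):

```python
import math

def force_angular_deviation(f_normal, f_tangential):
    """Angle (degrees) between the phalanx force vector and the grip
    surface normal."""
    return math.degrees(math.atan2(f_tangential, f_normal))

def slips(f_normal, f_tangential, cof):
    """Coulomb friction: slip occurs once the tangential force
    exceeds COF times the normal force (force leaves the cone)."""
    return f_tangential > cof * f_normal

cof = math.tan(math.radians(23.0))  # COF implied by a 23 deg threshold

# A force tilted 19 deg (the stroke group's mean deviation) stays
# inside the friction cone; one tilted 25 deg would slip.
f_n = 10.0
inside = slips(f_n, f_n * math.tan(math.radians(19.0)), cof)
outside = slips(f_n, f_n * math.tan(math.radians(25.0)), cof)
```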
Failure Time Distributions: Estimates and Asymptotic Results.
1980-01-01
… of the models. A parametric family of distributions is proposed for approximating life distributions whose hazard rate is bathtub shaped … of the limiting distributions of the models … always justified. But, because of this generality, the possible limit laws for the maximum form a very large family …
Beneficial effects of voluntary wheel running on the properties of dystrophic mouse muscle.
Hayes, A; Williams, D A
1996-02-01
Effects of voluntary exercise on the isometric contractile, fatigue, and histochemical properties of hindlimb dystrophic (mdx and 129ReJ dy/dy) skeletal muscles were investigated. Mice were allowed free access to a voluntary running wheel at 4 wk of age for a duration of 16 (mdx) or 5 (dy/dy) wk. Running performance of mdx mice (approximately 4 km/day at 1.6 km/h) was inferior to that of normal mice (approximately 6.5 km/day at 2.1 km/h). However, exercise improved the force output (approximately 15%) and the fatigue resistance of both C57BL/10 and mdx soleus muscles. These changes coincided with increased proportions of smaller type I fibers and decreased proportions of larger type IIa fibers in the mdx soleus. The extensor digitorum longus of mdx, but not of normal, mice also exhibited improved resistance to fatigue and conversion towards oxidative fiber types. The dy/dy animals were capable of exercising, yet ran significantly less than normal animals (approximately 0.5 km/day). Despite this, running increased the force output of the plantaris muscle (approximately 50%). Taken together, the results showed that exercise can have beneficial effects on dystrophic skeletal muscles.
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
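For known discrete distributions, the dissimilarity probability itself (as opposed to the paper's sample-based U-statistic estimator) has a simple closed form: the probability that one draw from p lands in a category absent from n draws from q is the sum over categories of p_i * (1 - q_i)**n. A minimal sketch, with an illustrative pair of urns:

```python
import numpy as np

def dissimilarity(p, q, n):
    """Exact dissimilarity probability for known discrete
    distributions: the chance that a single draw from p lands in a
    category not seen in n i.i.d. draws from q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (1.0 - q) ** n))

# Two urns over three categories (illustrative numbers).
p = [0.5, 0.3, 0.2]
q = [0.1, 0.1, 0.8]
d = dissimilarity(p, q, n=5)
```

As n grows, the dissimilarity decays to zero whenever q has full support, matching the intuition that enough draws from q eventually cover every category that p can produce.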
Cox, Trevor F; Czanner, Gabriela
2016-06-30
This paper introduces a new simple divergence measure between two survival distributions. For two groups of patients, the divergence measure between their associated survival distributions is based on the integral of the absolute difference in probabilities that a patient from one group dies at time t and a patient from the other group survives beyond time t, and vice versa. In the case of non-crossing hazard functions, the divergence measure is closely linked to the Harrell concordance index, C, the Mann-Whitney test statistic and the area under a receiver operating characteristic curve. The measure can be used in a dynamic way where the divergence between two survival distributions from time zero up to time t is calculated, enabling real-time monitoring of treatment differences. The divergence can be found for theoretical survival distributions or can be estimated non-parametrically from survival data using Kaplan-Meier estimates of the survivor functions. The estimator of the divergence is shown to be generally unbiased and approximately normally distributed. For the case of proportional hazards, the constituent parts of the divergence measure can be used to assess the proportional hazards assumption. The use of the divergence measure is illustrated on the survival of pancreatic cancer patients. Copyright © 2016 John Wiley & Sons, Ltd.
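One plausible reading of the divergence described above, for known theoretical survival distributions, is the integral over time of |f1(t)*S2(t) - f2(t)*S1(t)|, where f_i is the density of death and S_i the survivor function of group i. The sketch below evaluates this numerically; the exponential example and its closed form |l1 - l2| / (l1 + l2) are assumptions used as a check, not taken from the paper.

```python
import numpy as np

def divergence(f1, s1, f2, s2, t_max=60.0, steps=400001):
    """Trapezoid-rule estimate of the integral over [0, t_max] of
    |f1(t)*s2(t) - f2(t)*s1(t)| (one reading of the divergence
    between two survival distributions)."""
    t = np.linspace(0.0, t_max, steps)
    g = np.abs(f1(t) * s2(t) - f2(t) * s1(t))
    dt = t[1] - t[0]
    return float((g[0] / 2 + g[1:-1].sum() + g[-1] / 2) * dt)

# Exponential survival in both groups, rates 1 and 2; the integral
# then has the closed form |l1 - l2| / (l1 + l2) = 1/3.
l1, l2 = 1.0, 2.0
d = divergence(lambda t: l1 * np.exp(-l1 * t), lambda t: np.exp(-l1 * t),
               lambda t: l2 * np.exp(-l2 * t), lambda t: np.exp(-l2 * t))
```

For real data, the paper's non-parametric route would replace the theoretical f_i and S_i with Kaplan-Meier-based estimates.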
Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders
2013-09-07
The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by (125)I and (192)Ir brachytherapy radionuclides, and for a (60)Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single track frequency distributions confirm that at doses exceeding 0.08 Gy for (125)I, 0.1 Gy for (192)Ir, and 0.2 Gy for (60)Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented as a convenient form for the use in response modelling or similar contexts.
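The mechanism behind the microdosimetric spread, and its approach to normality at higher dose, can be illustrated with a toy compound-Poisson model: the specific energy in a small volume is the sum of a Poisson number of single-track deposits. The exponential single-track spectrum and all parameters below are illustrative assumptions, not the PENELOPE-derived distributions of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def specific_energy_samples(mean_tracks, size=20000):
    """Toy compound-Poisson model of specific energy: each sample is
    the sum of a Poisson number of single-track deposits drawn from
    an illustrative exponential single-track spectrum."""
    n_tracks = rng.poisson(mean_tracks, size)
    return np.array([rng.exponential(1.0, n).sum() for n in n_tracks])

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

few = specific_energy_samples(2.0)     # low dose: strongly skewed
many = specific_energy_samples(200.0)  # high dose: close to normal
few_skew, many_skew = skewness(few), skewness(many)
```

Raising the mean track count plays the role of raising dose or enlarging the target volume: the summed distribution tightens (relatively) and its skewness shrinks, which is the regime where the paper's normal approximation becomes accurate.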
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotone operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
A theory for modeling ground-water flow in heterogeneous media
Cooley, Richard L.
2004-01-01
Construction of a ground-water model for a field area is not a straightforward process. Data are virtually never complete or detailed enough to allow substitution into the model equations and direct computation of the results of interest. Formal model calibration through optimization, statistical, and geostatistical methods is being applied to an increasing extent to deal with this problem and provide for quantitative evaluation and uncertainty analysis of the model. However, these approaches are hampered by two pervasive problems: 1) nonlinearity of the solution of the model equations with respect to some of the model (or hydrogeologic) input variables (termed in this report system characteristics) and 2) detailed and generally unknown spatial variability (heterogeneity) of some of the system characteristics such as log hydraulic conductivity, specific storage, recharge and discharge, and boundary conditions. A theory is developed in this report to address these problems. The theory allows construction and analysis of a ground-water model of flow (and, by extension, transport) in heterogeneous media using a small number of lumped or smoothed system characteristics (termed parameters). The theory fully addresses both nonlinearity and heterogeneity in such a way that the parameters are not assumed to be effective values. The ground-water flow system is assumed to be adequately characterized by a set of spatially and temporally distributed discrete values, θ, of the system characteristics. This set contains both small-scale variability that cannot be described in a model and large-scale variability that can. The spatial and temporal variability in θ are accounted for by imagining θ to be generated by a stochastic process wherein θ is normally distributed, although normality is not essential. Because θ has too large a dimension to be estimated using the data normally available, for modeling purposes θ is replaced by a smoothed or lumped approximation yβ (where y is a spatial and temporal interpolation matrix). The set yβ has the same form as the expected value of θ, yβ̄, where β̄ is the set of drift parameters of the stochastic process; β is a best-fit vector to θ. A model function f(θ), such as a computed hydraulic head or flux, is assumed to accurately represent an actual field quantity, but the same function written using yβ, f(yβ), contains error from lumping or smoothing of θ using yβ. Thus, the replacement of θ by yβ yields nonzero mean model errors of the form E(f(θ)-f(yβ)) throughout the model and covariances between model errors at points throughout the model. These nonzero means and covariances are evaluated through third- and fifth-order accuracy, respectively, using Taylor series expansions. They can have a significant effect on construction and interpretation of a model that is calibrated by estimating β. Vector β is estimated as β̂ using weighted nonlinear least squares techniques to fit a set of model functions f(yβ̂) to a corresponding set of observations of f(θ), Y. These observations are assumed to be corrupted by zero-mean, normally distributed observation errors, although, as for θ, normality is not essential. An analytical approximation of the nonlinear least squares solution is obtained using Taylor series expansions and perturbation techniques that assume model and observation errors to be small. This solution is used to evaluate biases and other results to second-order accuracy in the errors. The correct weight matrix to use in the analysis is shown to be the inverse of the second-moment matrix E[(Y-f(yβ))(Y-f(yβ))'], but the weight matrix is assumed to be arbitrary in most developments. The best diagonal approximation is the inverse of the matrix of diagonal elements of E[(Y-f(yβ))(Y-f(yβ))'], and a method of estimating this diagonal matrix when it is unknown is developed using a special objective function to compute β̂.
When considered to be an estimate of f
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
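The normal-case distributional estimate underlying the Peacock et al. approach is simply the fitted normal cdf evaluated at the clinical cutpoint. A minimal sketch (normal case only; the skew-normal extension developed in the paper is not reproduced, and the birthweight numbers are illustrative):

```python
from statistics import NormalDist

def distributional_proportion(mean, sd, cutpoint):
    """Distributional estimate of the proportion falling below a
    clinical cutpoint for a normally distributed outcome: the normal
    cdf at the cutpoint, using the sample mean and SD."""
    return NormalDist(mean, sd).cdf(cutpoint)

# Illustrative birthweight example: proportion below 2500 g
# (low birth weight) in two groups differing only in mean.
p_control = distributional_proportion(3400.0, 500.0, 2500.0)  # ~0.036
p_exposed = distributional_proportion(3200.0, 500.0, 2500.0)  # ~0.081
```

The comparison of `p_exposed` and `p_control` then carries the same information as the comparison of means, which is the point of the distributional approach.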
The application of the sinusoidal model to lung cancer patient respiratory motion
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, R.; Vedam, S.S.; Chung, T.D.
2005-09-15
Accurate modeling of the respiratory cycle is important to account for the effect of organ motion on dose calculation for lung cancer patients. The aim of this study is to evaluate the accuracy of a respiratory model for lung cancer patients. Lujan et al. [Med. Phys. 26(5), 715-720 (1999)] proposed a model, which became widely used, to describe organ motion due to respiration. This model assumes that the parameters do not vary between and within breathing cycles. In this study, first, the correlation of respiratory motion traces with the model f(t) as a function of the parameter n (n=1,2,3) was undertaken for each breathing cycle from 331 four-minute respiratory traces acquired from 24 lung cancer patients using three breathing types: free breathing, audio instruction, and audio-visual biofeedback. Because cos² and cos⁴ had similar correlation coefficients, and cos² and cos¹ have a trigonometric relationship, for simplicity the cos¹ form was consequently used for further analysis, in which the variations in mean position (z₀), amplitude of motion (b) and period (τ) with and without biofeedback or instructions were investigated. For all breathing types, the parameter values, mean position (z₀), amplitude of motion (b), and period (τ), exhibited significant cycle-to-cycle variations. Audio-visual biofeedback showed the least variations for all three parameters (z₀, b, and τ). It was found that mean position (z₀) could be approximated with a normal distribution, and the amplitude of motion (b) and period (τ) could be approximated with log-normal distributions. The overall probability density function (pdf) of f(t) for each of the three breathing types was fitted with three models: normal, bimodal, and the pdf of a simple harmonic oscillator. It was found that the normal and the bimodal models represented the overall respiratory motion pdfs with correlation values from 0.95 to 0.99, whereas the range of the simple harmonic oscillator pdf correlation values was 0.71 to 0.81. This study demonstrates that the pdfs of mean position (z₀), amplitude of motion (b), and period (τ) can be used for sampling to obtain more realistic respiratory traces. The overall standard deviations of respiratory motion were 0.48, 0.57, and 0.55 cm for free breathing, audio instruction, and audio-visual biofeedback, respectively.
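The sampling use suggested in the last sentence can be sketched as follows: draw z0 from a normal distribution and b and tau from log-normal distributions once per cycle, then evaluate a Lujan-type trace (here the cos¹ form is read as a single cosine cycle of period tau). All distribution parameters below are illustrative, not the fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def breathing_trace(n_cycles=20, dt=0.1):
    """Synthetic respiratory trace: per cycle, sample mean position
    z0 (normal) and amplitude b and period tau (log-normal), then
    evaluate f(t) = z0 - b*cos(2*pi*t/tau) over one cycle."""
    ts, zs = [], []
    t_start = 0.0
    for _ in range(n_cycles):
        z0 = rng.normal(0.0, 0.05)              # cm, illustrative
        b = rng.lognormal(np.log(0.5), 0.2)     # cm, illustrative
        tau = rng.lognormal(np.log(4.0), 0.15)  # s, illustrative
        t = np.arange(0.0, tau, dt)
        ts.append(t_start + t)
        zs.append(z0 - b * np.cos(2.0 * np.pi * t / tau))
        t_start += tau
    return np.concatenate(ts), np.concatenate(zs)

t, z = breathing_trace()
```

Because each cycle redraws its own parameters, the resulting trace exhibits the cycle-to-cycle variation the paper reports, unlike the original fixed-parameter model.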
Human papillomavirus genotype distribution in Madrid and correlation with cytological data.
Martín, Paloma; Kilany, Linah; García, Diego; López-García, Ana M; Martín-Azaña, Ma José; Abraira, Victor; Bellas, Carmen
2011-11-15
Cervical cancer is the second most common cancer in women worldwide. Infection with certain human papillomavirus (HPV) genotypes is the most important risk factor associated with cervical cancer. This study analysed the distribution of type-specific HPV infection among women with normal and abnormal cytology, to assess the potential benefit of prophylaxis with anti-HPV vaccines. Cervical samples of 2,461 women (median age 34 years; range 15-75) from the centre of Spain were tested for HPV DNA. These included 1,656 samples with normal cytology (NC), 336 with atypical squamous cells of undetermined significance (ASCUS), 387 low-grade squamous intraepithelial lesions (LSILs), and 82 high-grade squamous intraepithelial lesions (HSILs). HPV detection and genotyping were performed by PCR using 5'-biotinylated MY09/11 consensus primers, and reverse dot blot hybridisation. HPV infection was detected in 1,062 women (43.2%). Out of these, 334 (31%) samples had normal cytology and 728 (69%) showed some cytological abnormality: 284 (27%) ASCUS, 365 (34%) LSILs, and 79 (8%) HSILs. The most common genotype found was HPV 16 (28%) with the following distribution: 21% in NC samples, 31% in ASCUS, 26% in LSILs, and 51% in HSILs. HPV 53 was the second most frequent (16%): 16% in NC, 16% in ASCUS, 19% in LSILs, and 5% in HSILs. The third genotype was HPV 31 (12%): 10% in NC, 11% in ASCUS, 14% in LSILs, and 11% in HSILs. Co-infections were found in 366 samples (34%). In 25%, 36%, 45% and 20% of samples with NC, ASCUS, LSIL and HSIL, respectively, more than one genotype was found. HPV 16 was the most frequent genotype in our area, followed by HPV 53 and 31, with a low prevalence of HPV 18 even in HSILs. The frequency of genotypes 16, 52 and 58 increased significantly from ASCUS to HSILs. Although a vaccine against HPV 16 and 18 could theoretically prevent approximately 50% of HSILs, genotypes not covered by the vaccine are frequent in our population. 
Knowledge of the epidemiological distribution is necessary to predict the effect of vaccines on incidence of infection and evaluate cross-protection from current vaccines against infection with other types.
Sleep patterning and behaviour in cats with pontine lesions creating REM without atonia.
Sanford; Morrison; Mann; Harris; Yoo; Ross
1994-12-01
Lesions of the dorsal pontine tegmentum release muscle tone and motor behaviour, much of it similar to orienting during wakefulness, into rapid eye movement (REM) sleep, a state normally characterized by paralysis. Sleep after pontine lesions may be altered, with more episodes of REM without atonia (REM-A) of shorter duration compared to normal REM. We examined behaviour, ponto-geniculo-occipital (PGO) waves (which may be central markers of orienting) and sleep in lesioned cats: (i) to characterize the relationship of PGO waves to behaviour in REM-A; (ii) to determine whether post-lesion changes in the timing and duration of REM-A episodes were due to activity-related awakenings; and (iii) to determine whether alterations in sleep changed the circadian sleep/wake cycle in cats. Behavioural release in REM-A was generally related to episode length, but episode length was not necessarily shorter than normal REM in cats capable of full locomotion in REM-A. PGO wave frequency was reduced overall during REM-A, but was higher during REM-A with behaviour than during quiet REM-A without overt behaviour. Pontine lesions did not significantly alter the circadian sleep/wake cycle: REM-A had approximately the same Light/Dark distribution as normal REM. Differences in the patterning of normal REM and REM-A within sleep involve more than mere movement-induced awakenings. Brainstem lesions that eliminate the atonia of REM may damage neural circuitry involved in REM initiation and maintenance; this circuitry is separate from circadian control mechanisms.
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods are compared: three normal-based methods (arithmetic average, least squares estimation and block kriging) and three p-normal-based methods (LPE, geostatistical LPE and inverse-distance-weighted LPE). They are evaluated in two types of experiments: a synthetic experiment assessing the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment producing upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and the parameter p.
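The least power estimation (LPE) idea above can be sketched simply: the location estimate minimizes the sum of p-th-power absolute deviations, recovering the arithmetic mean at p = 2 and approaching the median as p tends to 1. The function name and data values below are illustrative, not from the paper, and the spatial weighting of the geostatistical and inverse-distance variants is omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe_location(x, p):
    """Least power estimate of location: argmin over m of sum |x_i - m|**p.

    p = 2 recovers the arithmetic mean; p = 1 approaches the median.
    """
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

# Soil-moisture-like point measurements with one disorganized value
x = np.array([0.18, 0.21, 0.19, 0.22, 0.45])
m2 = lpe_location(x, 2.0)   # close to the mean (0.25), pulled by the outlier
m1 = lpe_location(x, 1.0)   # close to the median (0.21), less outlier-sensitive
```

This makes the robustness claim concrete: lowering p downweights extreme points.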
NASA Astrophysics Data System (ADS)
Cummings, Patrick
We consider the approximation of solutions of two complicated physical systems via the nonlinear Schrödinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have of solutions of the nonlinear Schrödinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite, time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the order 1 over epsilon squared time scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Artemyev, A. V., E-mail: ante0226@gmail.com; Mourenas, D.; Krasnoselskikh, V. V.
2015-06-15
In this paper, we study relativistic electron scattering by fast magnetosonic waves. We compare results of test particle simulations and the quasi-linear theory for different spectra of waves to investigate how a fine structure of the wave emission can influence electron resonant scattering. We show that for a realistically wide distribution of wave normal angles θ (i.e., when the dispersion δθ ≥ 0.5°), relativistic electron scattering is similar for a wide wave spectrum and for a spectrum consisting of well-separated ion cyclotron harmonics. Comparisons of test particle simulations with quasi-linear theory show that for δθ > 0.5°, the quasi-linear approximation describes resonant scattering correctly for a large enough plasma frequency. For a very narrow θ distribution (when δθ ∼ 0.05°), however, the effect of a fine structure in the wave spectrum becomes important. In this case, quasi-linear theory clearly fails to describe electron scattering by fast magnetosonic waves accurately. We also study the effect of high wave amplitudes on relativistic electron scattering. For typical conditions in the Earth's radiation belts, the quasi-linear approximation cannot accurately describe electron scattering for waves with averaged amplitudes >300 pT. We discuss various applications of the obtained results for modeling electron dynamics in the radiation belts and in the Earth's magnetotail.
N-body dark matter haloes with simple hierarchical histories
NASA Astrophysics Data System (ADS)
Jiang, Lilian; Helly, John C.; Cole, Shaun; Frenk, Carlos S.
2014-05-01
We present a new algorithm which groups the subhaloes found in cosmological N-body simulations by structure finders such as SUBFIND into dark matter haloes whose formation histories are strictly hierarchical. One advantage of these `Dhaloes' over the commonly used friends-of-friends (FoF) haloes is that they retain their individual identity in the cases when FoF haloes are artificially merged by tenuous bridges of particles or by an overlap of their outer diffuse haloes. Dhaloes are thus well suited for modelling galaxy formation and their merger trees form the basis of the Durham semi-analytic galaxy formation model, GALFORM. Applying the Dhalo construction to the Λ cold dark matter Millennium II Simulation, we find that approximately 90 per cent of Dhaloes have a one-to-one, bijective match with a corresponding FoF halo. The remaining 10 per cent are typically secondary components of large FoF haloes. Although the mass functions of both types of haloes are similar, the mass of Dhaloes correlates much more tightly with the virial mass, M200, than FoF haloes. Approximately 80 per cent of FoF and bijective and non-bijective Dhaloes are relaxed according to standard criteria. For these relaxed haloes, all three types have similar concentration-M200 relations and, at fixed mass, the concentration distributions are described accurately by log-normal distributions.
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
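Coefficient alpha itself, whose sampling behavior under non-normality the simulations above examine, is straightforward to compute from an item-score matrix. A minimal sketch (the function name and the simulated parallel-items setup are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha from an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=1000)                                # normal true scores
X = true_score[:, None] + rng.normal(size=(1000, 6))              # 6 parallel items
alpha = cronbach_alpha(X)   # population value here is 6*0.5/(1+5*0.5) ~ 0.857
```

Replacing the normal true or error scores with skewed or kurtotic draws reproduces the kind of simulation condition the study describes.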
NASA Astrophysics Data System (ADS)
Ren, Jianlin; Cao, Xiaodong; Liu, Junjie
2018-04-01
Passengers usually spend hours in the airport terminal buildings waiting for their departure. During the long waiting period, ambient fine particles (PM2.5) and ultrafine particles (UFP) generated by airliners may penetrate into terminal buildings through open doors and the HVAC system. However, limited data are available on passenger exposure to particulate pollutants in terminal buildings. We conducted on-site measurements on PM2.5 and UFP concentration and the particle size distribution in the terminal building of Tianjin Airport, China during three different seasons. The results showed that the PM2.5 concentrations in the terminal building were considerably larger than the values guided by Chinese standard and WHO on all of the tested seasons, and the conditions were significantly affected by the outdoor air (Spearman test, p < 0.01). The indoor/outdoor PM2.5 ratios (I/O) ranged from 0.67 to 0.84 in the arrival hall and 0.79 to 0.96 in the departure hall. The particle number concentration in the terminal building presented a bi-modal size distribution, with one mode being at 30 nm and another mode at 100 nm. These results were totally different from the size distribution measured in a normal urban environment. The total UFP exposure during the whole waiting period (including in the terminal building and airliner cabin) of a passenger is approximately equivalent to 11 h of exposure to normal urban environments. This study is expected to contribute to the improvement of indoor air quality and health of passengers in airport terminal buildings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu
2015-05-15
Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-onlymore » model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}. 
For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ{sub spec}. Results: For all sources studied, the angular and spatial distributions of φ{sub full} were more complex than the distributions used in φ{sub spec}. Differences between Λ{sub spec} and Λ{sub full} ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ{sub spec}, which caused differences in Λ of up to +5.3% relative to Λ{sub full}. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ{sub spec} caused discrepancies between Λ{sub approx,i} and Λ{sub full} of up to 7.8%. With the exception of the energy distribution, the approximations used in φ{sub spec} contributed to this discrepancy for all source models studied. To improve the accuracy of Λ{sub spec}, the spatial and angular distributions of φ{sub full} could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ{sub full} binned at different spatial and angular resolutions.« less
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2015-05-01
Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ~ N(0, W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
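The fixed-W Wakefield approximation that these proposals generalize can be sketched from its standard asymptotic form, BF01 = exp(-z²r/2)/sqrt(1-r) with r = W/(V+W), where V is the squared standard error of the logOR estimate. The numbers below are illustrative, not from the COGS analysis.

```python
import math

def wakefield_abf(beta_hat, se, W):
    """Wakefield's approximate Bayes factor in favor of H0 for a logOR
    estimate beta_hat with standard error se, under a N(0, W) prior on
    the logOR.  Sketch of the fixed-W baseline; the paper instead places
    a probability distribution on W.
    """
    V = se ** 2
    z = beta_hat / se
    r = W / (V + W)
    return math.exp(-0.5 * z * z * r) / math.sqrt(1.0 - r)

W = 0.21 ** 2   # prior variance: 95% of prior ORs within about exp(+/-0.41)
bf_strong = wakefield_abf(beta_hat=0.40, se=0.08, W=W)  # z = 5: strong evidence vs H0
bf_null = wakefield_abf(beta_hat=0.02, se=0.08, W=W)    # z = 0.25: favors H0
```

Small BF01 values correspond to evidence against the null, mirroring small P-values but on an interpretable odds scale.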
On Nonequivalence of Several Procedures of Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2005-01-01
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…
Forces and moments on a slender, cavitating body
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailey, C.E.; Clark, E.L.; Buffington, R.J.
1988-01-01
Recently a numerical code has been developed at Sandia National Laboratories to predict the pitching moment, normal force, and axial force of a slender, supercavitating shape. The potential flow about the body and cavity is calculated using an axial distribution of source/sink elements. The cavity surface is assumed to be a constant pressure streamline, extending beyond the base of the model. A slender-body approximation is used to model the crossflow for small angles of attack. A significant extension of previous work in cavitation flow is the inclusion of laminar and turbulent boundary layer solutions on the body. Predictions with this code, for axial force at zero angle of attack, show good agreement with experiments. There are virtually no published data available with which to benchmark the pitching moment and normal force predictions. An experiment was designed to measure forces and moments on a supercavitating shape. The primary reason for the test was to obtain much-needed data to benchmark the hydrodynamic force and moment predictions. Since the numerical prediction is for supercavitating shapes at very small cavitation numbers, the experiment was designed to be a ventilated cavity test. This paper describes the experimental procedure used to measure the pitching moment, axial and normal forces, and base pressure on a slender body with a ventilated cavity. Limited results are presented for pitching moment and normal force. 5 refs., 7 figs.
Ren, Huazhong; Yan, Guangjian; Liu, Rongyuan; Li, Zhao-Liang; Qin, Qiming; Nerry, Françoise; Liu, Qiang
2015-03-27
Multi-angular observation of land surface thermal radiation is considered to be a promising method of performing the angular normalization of land surface temperature (LST) retrieved from remote sensing data. This paper focuses on an investigation of the minimum requirements of viewing angles to perform such normalizations on LST. The kernel-driven bi-directional reflectance distribution function (BRDF) model is first extended to the thermal infrared (TIR) domain as the TIR-BRDF model, and its uncertainty is shown to be less than 0.3 K when used to fit the hemispheric directional thermal radiation. A local optimum three-angle combination is found and verified using the TIR-BRDF model based on two patterns: the single-point pattern and the linear-array pattern. The TIR-BRDF model is applied to an airborne multi-angular dataset to retrieve the LST at nadir (Te-nadir) from different viewing directions, and the results show that this model can obtain reliable Te-nadir from 3 to 4 directional observations with large angle intervals, and thus large temperature angular variations. The Te-nadir is generally larger than the temperature in slant directions, with a difference of approximately 0.5-2.0 K for vegetated pixels and up to several Kelvins for non-vegetated pixels. The findings of this paper will facilitate the future development of multi-angular thermal infrared sensors.
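A generic kernel-driven angular-normalization fit reduces to linear least squares: directional temperature is modeled as a constant term plus weighted kernels that vanish at nadir, so the fitted intercept is the nadir estimate. The kernel shapes and numbers below are simple placeholders, not the paper's actual TIR-BRDF kernels or airborne data.

```python
import numpy as np

def K1(theta):
    """Placeholder volumetric-like kernel; zero at nadir by construction."""
    return np.cos(theta) - 1.0

def K2(theta):
    """Placeholder geometric-like kernel; zero at nadir by construction."""
    return np.sin(theta) ** 2

# Four directional observations with large angle intervals (angles in degrees)
theta_obs = np.radians([0.0, 20.0, 40.0, 55.0])
T_obs = np.array([300.0, 299.2, 297.5, 295.8])   # illustrative directional LST, K

# Fit T(theta) = f0 + f1*K1(theta) + f2*K2(theta) by least squares
X = np.column_stack([np.ones_like(theta_obs), K1(theta_obs), K2(theta_obs)])
coef, *_ = np.linalg.lstsq(X, T_obs, rcond=None)

# Both kernels vanish at theta = 0, so the nadir temperature is the intercept
T_nadir = coef[0] + coef[1] * K1(0.0) + coef[2] * K2(0.0)
```

With three unknown coefficients, at least three well-separated angles are needed, which is consistent with the 3-4 direction requirement reported above.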
Hamiltonian Analysis of Subcritical Stochastic Epidemic Dynamics
2017-01-01
We extend a technique of approximation of the long-term behavior of a supercritical stochastic epidemic model, using the WKB approximation and a Hamiltonian phase space, to the subcritical case. The limiting behavior of the model and approximation are qualitatively different in the subcritical case, requiring a novel analysis of the limiting behavior of the Hamiltonian system away from its deterministic subsystem. This yields a novel, general technique of approximation of the quasistationary distribution of stochastic epidemic and birth-death models and may lead to techniques for analysis of these models beyond the quasistationary distribution. For a classic SIS model, the approximation found for the quasistationary distribution is very similar to published approximations but not identical. For a birth-death process without depletion of susceptibles, the approximation is exact. Dynamics on the phase plane similar to those predicted by the Hamiltonian analysis are demonstrated in cross-sectional data from trachoma treatment trials in Ethiopia, in which declining prevalences are consistent with subcritical epidemic dynamics. PMID:28932256
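For comparison with the analytic WKB approximation described above, the quasistationary distribution of a finite-state stochastic SIS model can be computed numerically as the dominant left eigenvector of the generator restricted to the non-absorbed states. This is a sketch; the rates and population size are illustrative, and the subcritical case (R0 < 1) is shown.

```python
import numpy as np

def sis_qsd(N, beta, gamma):
    """Quasistationary distribution of a stochastic SIS model with N
    individuals, infection rate beta and recovery rate gamma.
    State i = number infected; i = 0 is absorbing and is excluded, so the
    restricted generator loses probability through the i = 1 -> 0 transition.
    """
    Q = np.zeros((N, N))                       # generator on states 1..N
    for idx in range(N):
        i = idx + 1
        up = beta * i * (N - i) / N            # infection: i -> i+1
        down = gamma * i                       # recovery:  i -> i-1
        if idx + 1 < N:
            Q[idx, idx + 1] = up
        if idx - 1 >= 0:
            Q[idx, idx - 1] = down
        Q[idx, idx] = -(up + down)             # diagonal includes absorption flow
    vals, vecs = np.linalg.eig(Q.T)            # left eigenvectors of Q
    k = np.argmax(vals.real)                   # slowest-decaying mode
    v = np.abs(vecs[:, k].real)
    return v / v.sum()

qsd = sis_qsd(N=100, beta=0.8, gamma=1.0)      # subcritical: R0 = 0.8 < 1
```

In the subcritical regime the computed distribution is concentrated near one infective and decays roughly geometrically, the behavior the Hamiltonian analysis approximates analytically.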
Robustness of location estimators under t-distributions: a literature review
NASA Astrophysics Data System (ADS)
Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.
2017-03-01
The assumption of normality is commonly used in the estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution because it has heavier tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. As an illustration, we use onion yield data that include outliers as a case study and show that the t model produces a better fit than the normal model.
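A minimal illustration of the robustness being reviewed contrasts the normal-theory location estimate (the sample mean) with the maximum likelihood location under a t model with fixed low degrees of freedom, whose heavier tails downweight outliers. The data below are simulated, not the onion yield data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
clean = rng.normal(loc=10.0, scale=1.0, size=50)
data = np.append(clean, [25.0, 30.0])          # two gross outliers

mean_est = data.mean()                         # normal-theory estimate, pulled upward

# MLE of location under a t model with df fixed at 3 (fdf fixes the shape)
df, t_loc, t_scale = stats.t.fit(data, fdf=3)
```

The t-based location stays near the bulk of the data at 10, while the mean is dragged toward the outliers, which is the qualitative point of the review.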
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
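The maximal point-biserial correlation for a given p can also be approximated numerically: among all binary variables with P(Y = 1) = p, the correlation with the continuous variable is maximized by thresholding that variable at its (1 - p) quantile. A sketch with two of the distributions mentioned above (sample sizes are illustrative):

```python
import numpy as np

def max_point_biserial(x, p):
    """Numerical approximation of the maximal point-biserial correlation when
    the continuous variable x is dichotomized with P(Y = 1) = p.  The maximum
    is attained by Y = 1{x > (1-p) quantile of x}.
    """
    y = (x > np.quantile(x, 1.0 - p)).astype(float)
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
x_norm = rng.normal(size=200_000)
x_unif = rng.uniform(size=200_000)

r_norm = max_point_biserial(x_norm, 0.5)   # normal latent variable: ~0.798
r_unif = max_point_biserial(x_unif, 0.5)   # uniform latent variable: ~0.866
```

This makes the paper's point concrete: the attainable maximum depends on the underlying continuous distribution, and for non-normal variables it can exceed the familiar normal-theory bound.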
Empirical analysis on the runners' velocity distribution in city marathons
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Meng, Fan
2018-01-01
In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility. In this paper, we empirically investigated the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analysis of the finish-time records, we captured some statistical features of human behaviour in marathons: (1) the velocity distributions of all finishers, and of the subset of finishers in the fastest age group, both follow a log-normal distribution; (2) in the New York City marathon, the velocity distribution of all male runners across eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (the first several courses) to a Gaussian distribution at the middle stage (the middle courses), and back to a log-normal distribution at the last stage (the final courses); (3) the intensity of competition, described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when competition intensifies again in the last course of the middle stage, a transition from the Gaussian back to a log-normal distribution follows at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.
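Fitting a log-normal distribution to a velocity sample, as done for the finisher data above, can be sketched with standard tools; the synthetic speeds below stand in for real split records, and the parameter values are illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic "finisher speed" sample in place of real marathon split data
rng = np.random.default_rng(42)
speeds = rng.lognormal(mean=np.log(10.5), sigma=0.15, size=5000)  # km/h-like

# Fit a log-normal with the location pinned to zero, then check the fit
shape, loc, scale = stats.lognorm.fit(speeds, floc=0)
ks = stats.kstest(speeds, "lognorm", args=(shape, loc, scale))
```

The same fit repeated per timing course, compared against a normal fit, is the kind of procedure that reveals the log-normal-to-Gaussian transitions reported above.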
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE (mu,sigma,nu,tau). The parameters, mu, sigma, nu and tau, may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of the centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
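The centile calculation underlying the LMS special case (a BCPE with normal-like kurtosis) can be sketched for nu != 0 as C_alpha = mu*(1 + nu*sigma*z_alpha)**(1/nu), with z_alpha the standard normal quantile; the full LMSP method additionally models the tau kurtosis parameter. The parameter values below are illustrative, not the fitted Dutch BMI values.

```python
from scipy.stats import norm

def lms_centile(alpha, mu, sigma, nu):
    """Centile from the LMS (Box-Cox normal) model, the nu != 0 case:
    C_alpha = mu * (1 + nu*sigma*z_alpha)**(1/nu).
    At alpha = 0.5, z = 0 and the centile equals the median mu.
    """
    z = norm.ppf(alpha)
    return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)

# Illustrative BMI-like parameters at one age
med = lms_centile(0.50, mu=21.0, sigma=0.12, nu=-1.5)   # equals mu exactly
p97 = lms_centile(0.97, mu=21.0, sigma=0.12, nu=-1.5)   # upper reference curve
```

Evaluating this at each age, with mu, sigma and nu varying smoothly, traces the smooth centile curves described above.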
Xu, Jianhua; Morris, Lynsie; Fliesler, Steven J; Sherry, David M; Ding, Xi-Qin
2011-06-01
To investigate the progression of cone dysfunction and degeneration in CNG channel subunit CNGB3 deficiency. Retinal structure and function in CNGB3(-/-) and wild-type (WT) mice were evaluated by electroretinography (ERG), lectin cytochemistry, and correlative Western blot analysis of cone-specific proteins. Cone and rod terminal integrity was assessed by electron microscopy and synaptic protein immunohistochemical distribution. Cone ERG amplitudes (photopic b-wave) in CNGB3(-/-) mice were reduced to approximately 50% of WT levels by postnatal day 15, decreasing further to approximately 30% of WT levels by 1 month and to approximately 20% by 12 months of age. Rod ERG responses (scotopic a-wave) were not affected in CNGB3(-/-) mice. Average CNGB3(-/-) cone densities were approximately 80% of WT levels at 1 month and declined slowly thereafter to only approximately 50% of WT levels by 12 months. Expression levels of M-opsin, cone transducin α-subunit, and cone arrestin in CNGB3(-/-) mice were reduced by 50% to 60% by 1 month and declined to 35% to 45% of WT levels by 9 months. In addition, cone opsin mislocalized to the outer nuclear layer and the outer plexiform layer in the CNGB3(-/-) retina. Cone and rod synaptic marker expression and terminal ultrastructure were normal in the CNGB3(-/-) retina. These findings are consistent with an early-onset, slow progression of cone functional defects and cone loss in CNGB3(-/-) mice, with the cone signaling deficits arising from disrupted phototransduction and cone loss rather than from synaptic defects.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which approximation is sought.
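A generalized-Rayleigh-quotient reanalysis can be sketched as follows: the eigenvalue of a modified design A + dA is approximated from the baseline left and right eigenvectors, avoiding a full re-decomposition. For a non-Hermitian matrix the left eigenvector is required in the quotient. The matrix size and perturbation magnitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = rng.normal(size=(n, n))               # baseline (non-Hermitian) system matrix
dA = 0.001 * rng.normal(size=(n, n))      # small design modification

# Right eigenpair of the baseline: track the rightmost eigenvalue
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
v = vecs[:, k]

# Matching left eigenvector: a right eigenvector of A^T for the same eigenvalue
lvals, lvecs = np.linalg.eig(A.T)
u = lvecs[:, np.argmin(np.abs(lvals - vals[k]))]

# Generalized Rayleigh quotient: lambda(A + dA) ~ u^T (A + dA) v / (u^T v),
# exact to first order in dA (note: no conjugation in u @ v)
approx = (u @ (A + dA) @ v) / (u @ v)

# Compare against the nearest exact eigenvalue of the modified design
exact = np.linalg.eigvals(A + dA)
exact_k = exact[np.argmin(np.abs(exact - approx))]
```

The residual error is second order in the perturbation, which is why such quotients can replace repeated eigensolves during design iteration.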
Rafal Podlaski; Francis .A. Roesch
2013-01-01
The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...
S-Wave Normal Mode Propagation in Aluminum Cylinders
Lee, Myung W.; Waite, William F.
2010-01-01
Large amplitude waveform features have been identified in pulse-transmission shear-wave measurements through cylinders that are long relative to the acoustic wavelength. The arrival times and amplitudes of these features do not follow the predicted behavior of well-known bar waves, but instead they appear to propagate with group velocities that increase as the waveform feature's dominant frequency increases. To identify these anomalous features, the wave equation is solved in a cylindrical coordinate system using an infinitely long cylinder with a free surface boundary condition. The solution indicates that large amplitude normal-mode propagations exist. Using the high-frequency approximation of the Bessel function, an approximate dispersion relation is derived. The predicted amplitude and group velocities using the approximate dispersion relation qualitatively agree with measured values at high frequencies, but the exact dispersion relation should be used to analyze normal modes for full ranges of frequency of interest, particularly at lower frequencies.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.
2015-01-01
Constituent properties, along with volume fraction, have a first order effect on the microscale fields within a composite material and influence the macroscopic response. Therefore, there is a need to assess the significance of stochastic variation in the constituent properties of composites at the higher scales. The effect of variability in the parameters controlling the time-dependent behavior, in a unidirectional SCS-6 SiC fiber-reinforced RBSN matrix composite lamina, on the residual stresses induced during processing is investigated numerically. The generalized method of cells micromechanics theory is utilized to model the ceramic matrix composite lamina using a repeating unit cell. The primary creep phases of the constituents are approximated using a Norton-Bailey, steady state, power law creep model. The effect of residual stresses on the proportional limit stress and strain to failure of the composite is demonstrated. Monte Carlo simulations were conducted using a normal distribution for the power law parameters and the resulting residual stress distributions were predicted.
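The Monte Carlo step described above can be reduced to a minimal sketch: sample normally distributed power-law creep parameters and propagate them through the creep law. All numerical values below are hypothetical, and the Norton-Bailey law is reduced to its simplest primary-creep form eps = A·sigma^n·t^m; this is not the paper's micromechanics model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical nominal Norton-Bailey parameters and a common 5% scatter.
A0, n0, m0 = 1e-12, 4.0, 0.4
cov = 0.05
N = 10_000

# Normally distributed parameter draws, as in the abstract.
A = rng.normal(A0, cov * A0, N)
n = rng.normal(n0, cov * n0, N)
m = rng.normal(m0, cov * m0, N)

sigma, t = 150.0, 1000.0        # illustrative stress (MPa) and time (h)
eps = A * sigma**n * t**m       # creep strain for each parameter draw

print(eps.mean(), eps.std())
```

Because the parameters enter as exponents, modest normal scatter in n and m produces a strongly right-skewed (roughly log-normal) strain distribution, which is why the resulting residual stress distributions need not be normal even though the inputs are.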
Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.
Guo, Tianjiao; Englehardt, James D; Fallon, Howard J
While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
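The low- versus intermediate-Reynolds behavior discussed above can be illustrated with the standard Ergun equation itself. This is a textbook-formula sketch, not the authors' modified relation or their LB model; the fluid and packing values are arbitrary.

```python
def ergun_dpdx(u, d, phi, mu=1e-3, rho=1e3):
    """Pressure gradient (Pa/m) from the Ergun equation for superficial
    velocity u, sphere diameter d, and porosity phi."""
    visc = 150 * mu * (1 - phi)**2 * u / (phi**3 * d**2)   # viscous term
    inert = 1.75 * rho * (1 - phi) * u**2 / (phi**3 * d)   # inertial term
    return visc + inert

def kozeny_carman_k(d, phi):
    """Low-Reynolds (Blake-Kozeny) permeability implied by Ergun's
    viscous term."""
    return phi**3 * d**2 / (150 * (1 - phi)**2)

d, phi, mu = 1e-3, 0.38, 1e-3    # 1 mm spheres, porosity 0.38, water
u = 1e-5                         # slow flow, i.e. the Darcy regime
k = kozeny_carman_k(d, phi)
darcy = mu * u / k               # Darcy's law: dP/dx = mu*u/k
print(darcy, ergun_dpdx(u, d, phi))
```

At this Reynolds number (Re ≈ 0.01) the inertial term is negligible and the Ergun prediction collapses onto Darcy's law; at intermediate Re the quadratic term dominates, which is the regime separation the abstract's modified Ergun form is meant to bridge.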
Acoustic scattering by arbitrary distributions of disjoint, homogeneous cylinders or spheres.
Hesford, Andrew J; Astheimer, Jeffrey P; Waag, Robert C
2010-05-01
A T-matrix formulation is presented to compute acoustic scattering from arbitrary, disjoint distributions of cylinders or spheres, each with arbitrary, uniform acoustic properties. The generalized approach exploits the similarities in these scattering problems to present a single system of equations that is easily specialized to cylindrical or spherical scatterers. By employing field expansions based on orthogonal harmonic functions, continuity of pressure and normal particle velocity are directly enforced at each scatterer using diagonal, analytic expressions to eliminate the need for integral equations. The effect of a cylinder or sphere that encloses all other scatterers is simulated with an outer iterative procedure that decouples the inner-object solution from the effect of the enclosing object to improve computational efficiency when interactions among the interior objects are significant. Numerical results establish the validity and efficiency of the outer iteration procedure for nested objects. Two- and three-dimensional methods that employ this outer iteration are used to measure and characterize the accuracy of two-dimensional approximations to three-dimensional scattering of elevation-focused beams.
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
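The rank product statistic and the permutation approximation it is usually compared against can be sketched as follows. This is a toy illustration, not the paper's exact number-theoretic derivation: the data are synthetic, and the permutation scheme is the classical approximation the paper improves upon.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: g genes measured in k replicated comparisons.
g, k = 200, 4
data = rng.standard_normal((g, k))
data[0] += 2.0                       # one truly up-regulated gene

# Rank within each replicate (1 = most up-regulated), then the rank product.
ranks = np.argsort(np.argsort(-data, axis=0), axis=0) + 1
rp = ranks.prod(axis=1).astype(float)

# Permutation approximation to the null distribution of the rank product:
# shuffle the ranks within each replicate independently.
B = 2000
null = np.array([rng.permuted(ranks, axis=0).prod(axis=1) for _ in range(B)])
p0 = (null <= rp[0]).mean()          # estimated tail probability for gene 0
print(rp[0], p0)
```

The weakness the abstract points to is visible here: the smallest achievable permutation p-value is bounded by 1/(B·g), so very small tail probabilities cannot be estimated reliably, which is what the exact distribution fixes.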
Normalization of Gravitational Acceleration Models
NASA Technical Reports Server (NTRS)
Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.
2011-01-01
Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
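The standard full normalization used for geopotential ALFs can be written down directly; this is the conventional textbook factor, not the Pines, Lear, or Gottlieb recursion schemes the paper analyzes. Note that scipy's `lpmv` includes the Condon-Shortley phase (-1)^m, which geodesy conventions usually omit.

```python
from math import factorial, sqrt
from scipy.special import lpmv

def geodesy_norm(l, m):
    """Full normalization factor for geopotential coefficients:
    N_lm = sqrt((2 - delta_m0) * (2l + 1) * (l - m)! / (l + m)!)."""
    delta = 1 if m == 0 else 0
    return sqrt((2 - delta) * (2 * l + 1)
                * factorial(l - m) / factorial(l + m))

def normalized_alf(l, m, x):
    """Fully normalized associated Legendre function, up to the
    Condon-Shortley phase inherited from scipy's lpmv."""
    return geodesy_norm(l, m) * lpmv(m, l, x)

print(normalized_alf(2, 0, 0.5))     # = sqrt(5) * P_2(0.5) = -0.125*sqrt(5)
```

With this normalization the squared function integrates to 2(2 - delta_m0) over [-1, 1], which keeps the recursions used to evaluate high-degree gravity models numerically stable: the unnormalized (l+m)!/(l-m)! growth that overflows floating point is absorbed into the coefficients.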
Elastic-Tether Suits for Artificial Gravity and Exercise
NASA Technical Reports Server (NTRS)
Torrance, Paul; Biesinger, Paul; Rybicki, Daniel D.
2005-01-01
Body suits harnessed to systems of elastic tethers have been proposed as a means of approximating the effects of normal Earth gravitation on crewmembers of spacecraft in flight, to help preserve the crewmembers' physical fitness. The suits could also be used on Earth to increase effective gravitational loads for purposes of athletic training. The suit according to the proposal would include numerous small tether-attachment fixtures distributed over its outer surface so as to distribute the artificial gravitational force as nearly evenly as possible over the wearer's body. Elastic tethers would be connected between these fixtures and a single attachment fixture on a main elastic tether that would be anchored to a fixture on or under a floor. This fixture might include multiple pulleys to make the effective length of the main tether great enough that normal motions of the wearer cause no more than acceptably small variations in the total artificial gravitational force. Among the problems in designing the suit would be equalizing the load in the shoulder area and keeping tethers out of the way below the knees to prevent tripping. The solution would likely include running tethers through rings on the sides. Body suits with a weight or water ballast system are also proposed for slowly spinning space-station scenarios, in which case the proposed body suits would easily be able to provide the equivalent of a 1-G or even greater load.
Kearse, K P; Smith, N L; Semer, D A; Eagles, L; Finley, J L; Kazmierczak, S; Kovacs, C J; Rodriguez, A A; Kellogg-Wennerberg, A E
2000-12-15
A newly developed murine monoclonal antibody, DS6, immunohistochemically reacts with an antigen, CA6, that is expressed by human serous ovarian carcinomas but not by normal ovarian surface epithelium or mesothelium. CA6 has a limited distribution in normal adult tissues and is most characteristically detected in fallopian tube epithelium, inner urothelium and type 2 pneumocytes. Pre-treatment of tissue sections with either periodic acid or neuraminidase from Vibrio cholerae abolishes immunoreactivity with DS6, indicating that CA6 is a neuraminidase-sensitive and periodic acid-sensitive sialic acid glycoconjugate ("sialoglycotope"). SDS-PAGE of OVCAR5 cell lysates has revealed that the CA6 epitope is expressed on an 80 kDa non-disulfide-linked glycoprotein containing N-linked oligosaccharides. Two-dimensional non-equilibrium pH gradient gel electrophoresis indicates an isoelectric point of approximately 6.2 to 6.5. Comparison of the immunohistochemical distribution of CA6 in human serous ovarian adenocarcinomas has revealed similarities to that of CA125; however, distinct differences and some complementarity of antigen expression were revealed by double-label, 2-color immunohistochemical studies. The DS6-detected CA6 antigen appears to be distinct from other well-characterized tumor-associated antigens, including MUC1, CA125 and the histo-blood group-related antigens sLea, sLex and sTn. Copyright 2000 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Tessler, A.; Annett, M. S.; Gendron, G.
2001-01-01
A {1,2}-order theory for laminated composite and sandwich plates is extended to include thermoelastic effects. The theory incorporates all three-dimensional strains and stresses. Mixed-field assumptions are introduced which include linear in-plane displacements, parabolic transverse displacement and shear strains, and a cubic distribution of the transverse normal stress. Least-squares strain compatibility conditions and exact traction boundary conditions are enforced to yield higher polynomial degree distributions for the transverse shear strains and transverse normal stress through the plate thickness. The principle of virtual work is used to derive a 10th-order system of equilibrium equations and associated Poisson boundary conditions. The predictive capability of the theory is demonstrated using a closed-form analytic solution for a simply supported rectangular plate subjected to a linearly varying temperature field across the thickness. Several thin and moderately thick laminated composite and sandwich plates are analyzed. Numerical comparisons are made with corresponding solutions of the first-order shear deformation theory and three-dimensional elasticity theory. These results, which closely approximate the three-dimensional elasticity solutions, demonstrate that through-the-thickness deformations, even in relatively thin and especially in thick composite and sandwich laminates, can be significant under severe thermal gradients. The {1,2}-order kinematic assumptions ensure an overall accurate theory that is in general superior and, in some cases, equivalent to the first-order theory.
Zhang, Y. -W.; Long, E.; Mihovilovič, M.; ...
2015-10-22
We report the first measurement of the target single-spin asymmetry, Ay, in quasi-elastic scattering from the inclusive reaction 3He↑(e,e') on a 3He gas target polarized normal to the lepton scattering plane. Assuming time-reversal invariance, this asymmetry is strictly zero for one-photon exchange. A non-zero Ay can arise from the interference between the one- and two-photon exchange processes, which is sensitive to the details of the sub-structure of the nucleon. An experiment recently completed at Jefferson Lab yielded asymmetries with high statistical precision at Q2 = 0.13, 0.46 and 0.97 GeV2. These measurements demonstrate, for the first time, that the 3He asymmetry is clearly non-zero and negative, with a statistical significance of (8-10)σ. Using measured proton-to-3He cross-section ratios and the effective polarization approximation, neutron asymmetries of -(1-3)% were obtained. The neutron asymmetry at high Q2 is related to moments of the Generalized Parton Distributions (GPDs). Our measured neutron asymmetry at Q2 = 0.97 GeV2 agrees well with a prediction based on two-photon exchange using a GPD model and, in addition, provides a new independent constraint on these distributions.
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to draw inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
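The basic workflow of testing regression error terms for normality can be sketched with classical tests; the RT class of robust tests introduced in the contribution is not reproduced here, so scipy's Shapiro-Wilk and Jarque-Bera tests stand in, and the data are synthetic with deliberately skewed errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simple linear regression with non-normal (centred chi-square) errors.
n = 500
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + (rng.chisquare(3, n) - 3)   # right-skewed disturbances

# OLS fit, then test the residuals, since the true errors are unobserved.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print("Shapiro-Wilk p:", stats.shapiro(resid).pvalue)
print("Jarque-Bera p:", stats.jarque_bera(resid).pvalue)
```

Both tests reject normality here, signalling that t- and F-based inference on the coefficients should not be trusted at face value; the contribution's point is that robust variants keep this detection power in the presence of outliers.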
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
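The Monte Carlo reference against which such saddlepoint approximations are checked is straightforward to set up; the sketch below covers one of the explicit cases (p = 3, exponential step lengths, fixed number of steps) but is not the article's saddlepoint computation itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def walk_distances(n_steps, n_walks, rng):
    """Distance from the origin after n_steps isotropic steps in R^3
    with Exp(1)-distributed step lengths."""
    # Uniform directions on the sphere: normalized Gaussian vectors.
    dirs = rng.standard_normal((n_walks, n_steps, 3))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    lengths = rng.exponential(1.0, (n_walks, n_steps))
    pos = (dirs * lengths[..., None]).sum(axis=1)
    return np.linalg.norm(pos, axis=1)

r = walk_distances(n_steps=20, n_walks=100_000, rng=rng)
# Sanity check: for i.i.d. zero-mean steps, E[R^2] = n * E[L^2] = 20 * 2 = 40.
print(r.mean(), (r**2).mean())
```

An empirical CDF of `r` would then be compared against the one-dimensional saddlepoint formula; the Monte Carlo side is cheap here, but the saddlepoint approximation wins when very small tail probabilities are needed.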
Modeling Error Distributions of Growth Curve Models through Bayesian Methods
ERIC Educational Resources Information Center
Zhang, Zhiyong
2016-01-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Technical Reports Server (NTRS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-01-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc(exp -1). The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h(exp -1) Mpc. The power spectrum has slope n approximately equal -2.1 on small scales (lambda less than or equal 25 h(exp -1) Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h(exp -1) Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LambdaCDM) model (OMEGA h = 0.24, lambda(sub zero) = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h(exp -1) Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma(sub 8) (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h(exp -1) Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum.
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M(sub lim) greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
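The core of any direct power spectrum computation is a Fourier transform of the density field followed by a binned periodogram. The sketch below is a deliberately simple 1-D toy, not the survey estimator (no selection function, window deconvolution, or shot-noise correction): it builds a field with a known P(k) ~ k^-1 input spectrum and recovers the slope.

```python
import numpy as np

rng = np.random.default_rng(11)

# 1-D toy density field generated in Fourier space with random phases
# and deterministic amplitudes |delta_k| = sqrt(P(k)), P(k) ~ k^-1.
N, L = 4096, 1000.0                      # grid points, box size
k = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -0.5
phases = rng.uniform(0, 2 * np.pi, k.size)
delta_k = amp * np.exp(1j * phases)
delta = np.fft.irfft(delta_k, n=N)       # real-space field

# Periodogram estimate of P(k) from the realized field.
dk = np.fft.rfft(delta)
P_est = np.abs(dk) ** 2

# Recover the input slope with a log-log fit over intermediate scales.
sel = slice(10, 1000)
slope = np.polyfit(np.log(k[sel]), np.log(P_est[sel]), 1)[0]
print(slope)                             # ~ -1 for the P ~ k^-1 input
```

In a real survey analysis the same transform-and-square step is embedded in the machinery that handles the sample geometry and redshift-space distortions described in the abstract.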
Abnormal lignin in a loblolly pine mutant.
Ralph, J; MacKay, J J; Hatfield, R D; O'Malley, D M; Whetten, R W; Sederoff, R R
1997-07-11
Novel lignin is formed in a mutant loblolly pine (Pinus taeda L.) severely depleted in cinnamyl alcohol dehydrogenase (E.C. 1.1.1.195), which converts coniferaldehyde to coniferyl alcohol, the primary lignin precursor in pines. Dihydroconiferyl alcohol, a monomer not normally associated with the lignin biosynthetic pathway, is the major component of the mutant's lignin, accounting for approximately 30 percent (versus approximately 3 percent in normal pine) of the units. The level of aldehydes, including new 2-methoxybenzaldehydes, is also increased. The mutant pines grew normally, indicating that, even within a species, extensive variations in lignin composition need not disrupt the essential functions of lignin.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
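The lower-bound maximization described above can be made concrete in one dimension, where the expectation of the Poisson log-likelihood under a Gaussian is available in closed form. This is a minimal sketch under simplifying assumptions (scalar x, prior N(0,1), generic BFGS instead of the paper's alternating direction algorithm):

```python
import numpy as np
from scipy.optimize import minimize

# Variational Gaussian approximation for one Poisson observation
# y ~ Poisson(exp(x)) with prior x ~ N(0, 1); approximation q(x) = N(m, s^2).
y = 6

def neg_elbo(params):
    m, log_s = params
    s2 = np.exp(2 * log_s)               # parameterize s > 0 via log s
    # E_q[log p(y|x)] up to a constant: y*m - E_q[e^x], with
    # E_q[e^x] = exp(m + s2/2) for a Gaussian q.
    ell = y * m - np.exp(m + s2 / 2)
    # KL(q || N(0,1)) = 0.5 * (m^2 + s2 - 1 - log s2).
    kl = 0.5 * (m**2 + s2 - 1 - np.log(s2))
    return -(ell - kl)                   # minimize the negative lower bound

res = minimize(neg_elbo, x0=[0.0, 0.0], method="BFGS")
m, s = res.x[0], np.exp(res.x[1])
print(m, s)
```

Note how the optimal variance is penalized by the E_q[e^x] term, not just the KL term; this coupling between mean and covariance is the feature the article analyzes, and in the full problem it is what makes low-rank and sparsity structure worth exploiting.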
NASA Astrophysics Data System (ADS)
Kyutt, R. N.
2018-04-01
The three-wave X-ray diffraction in strongly disordered epitaxial layers of GaN and ZnO is experimentally investigated. The charts of the intensity distribution in reciprocal space are plotted in coordinates q_θ and q_ϕ for the most intense three-wave combination (1010)/(1011) by means of successive θ- and ϕ-scanning. A nontrivial shape of the θ-sections of these contours at a distance from the ϕ center of reflection is revealed; it differs from sample to sample. For the θ-curves at the center of reflection, we observed a common peak that may be approximated by the Voigt function with a power-law decrease in the intensity at the wings; the exponent of the decrease (from -4.5 to -5.0) is found to be considerably greater in magnitude than that for the similar curves of two-wave diffraction and does not depend on the dislocation density and distribution in the layers. In some films we observed a coarse-block structure; in addition, it follows from the distribution in reciprocal space that these blocks are rotated with respect to each other around the normal to the surface, which suggests the existence of low-angle boundaries between them, consisting exclusively of edge dislocations.
Sirunyan, A. M.; Tumasyan, A.; Adam, W.; ...
2017-07-11
Normalized double-differential cross sections for top quark pair (tt̄) production are measured in pp collisions at a centre-of-mass energy of 8 TeV with the CMS experiment at the LHC. The analyzed data correspond to an integrated luminosity of 19.7 fb⁻¹. The measurement is performed in the dilepton e±μ∓ final state. The tt̄ cross section is determined as a function of various pairs of observables characterizing the kinematics of the top quark and the tt̄ system. The data are compared to calculations using perturbative quantum chromodynamics at next-to-leading and approximate next-to-next-to-leading orders. They are also compared to predictions of Monte Carlo event generators that complement fixed-order computations with parton showers, hadronization, and multiple-parton interactions. Overall agreement is observed with the predictions, which is improved when the latest global sets of proton parton distribution functions are used. Lastly, the inclusion of the measured tt̄ cross sections in a fit of parametrized parton distribution functions is shown to have significant impact on the gluon distribution.
Nonlinear subdiffusive fractional equations and the aggregation phenomenon.
Fedotov, Sergei
2013-09-01
In this article we address the problem of the nonlinear interaction of subdiffusive particles. We introduce the random walk model in which statistical characteristics of a random walker, such as the escape rate and the jump distribution, depend on the mean density of particles. We derive a set of nonlinear subdiffusive fractional master equations and consider their diffusion approximations. We show that these equations describe the transition from an intermediate subdiffusive regime to an asymptotically normal advection-diffusion transport regime. This transition is governed by a nonlinear tempering parameter that generalizes the standard linear tempering. We illustrate the general results through the use of examples from cell and population biology. We find that a nonuniform anomalous exponent has a strong influence on the aggregation phenomenon.
Computation of leading edge film cooling from a CONSOLE geometry (CONverging Slot hOLE)
NASA Astrophysics Data System (ADS)
Guelailia, A.; Khorsi, A.; Hamidou, M. K.
2016-01-01
The aim of this study is to investigate the effect of mass flow rate on film cooling effectiveness and heat transfer over a gas turbine rotor blade with three staggered rows of shower-head holes which are inclined at 30° to the spanwise direction and are normal to the streamwise direction on the blade. To improve film cooling effectiveness, the standard cylindrical holes, located on the leading edge region, are replaced with converging slot holes (consoles). ANSYS CFX has been used for this computational simulation. The turbulence is approximated by a k-ɛ model. Detailed film effectiveness distributions are presented for different mass flow rates. The numerical results are compared with experimental data.
X-Ray Diffraction Wafer Mapping Method for Rhombohedral Super-Hetero-Epitaxy
NASA Technical Reports Server (NTRS)
Park, Yoonjoon; Choi, Sang Hyouk; King, Glen C.; Elliott, James R.; Dimarcantonio, Albert L.
2010-01-01
A new X-ray diffraction (XRD) method is provided to acquire XY mapping of the distribution of single crystals, poly-crystals, and twin defects across an entire wafer of rhombohedral super-hetero-epitaxial semiconductor material. In one embodiment, the method is performed with a point or line X-ray source with an X-ray incidence angle approximating a normal angle close to 90 deg, and in which the beam mask is preferably replaced with a crossed slit. While the wafer moves in the X and Y direction, a narrowly defined X-ray source illuminates the sample and the diffracted X-ray beam is monitored by the detector at a predefined angle. Preferably, the untilted, asymmetric scans are of {440} peaks, for twin defect characterization.
Ultraviolet photometry of the eclipsing variable CW Cephei
NASA Technical Reports Server (NTRS)
Sobieski, S.
1972-01-01
A series of photometric observations was made of the eclipsing variable CW Cephei on the OAO 2. Approximate elements were derived from the eclipse depths and the shape of the secondary. Persistent asymmetries and anomalous light variations, larger than the expected experimental error, were also found. Subsequent ground-based observations show H alpha entirely in emission, indicating the presence of an extended gaseous system surrounding one or both components. A detailed comparison was made of the flux distribution of the binary relative to that for the nominally unreddened stars delta Pic, B1 III, and eta Aur, B3 V, to investigate the effects of interstellar extinction. The resultant extinction curves, normalized at a wavelength of 3330 A, show a relatively smooth increase with decreasing wavelength.
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase of the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
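The rise-then-fall of power described above can be reproduced with a small simulation of the Sobel test. This is an illustrative sketch, not the paper's analysis: the data-generating model (standardized X → M → Y, no intercepts), the b coefficient, and the sample size are all chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(9)

def sobel_z(a_path, b_path=0.3, n=200, reps=500):
    """Average Sobel z = ab / sqrt(a^2*se_b^2 + b^2*se_a^2) from simulated
    mediation data X -> M -> Y with standardized variables."""
    zs = []
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a_path * x + np.sqrt(1 - a_path**2) * rng.standard_normal(n)
        y = b_path * m + rng.standard_normal(n)
        # OLS for a (M on X) and for b (Y on M, controlling for X).
        a_hat = (x @ m) / (x @ x)
        se_a = np.sqrt(((m - a_hat * x) ** 2).sum() / (n - 2) / (x @ x))
        Xmat = np.column_stack([m, x])
        coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
        resid = y - Xmat @ coef
        cov = resid @ resid / (n - 2) * np.linalg.inv(Xmat.T @ Xmat)
        b_hat, se_b = coef[0], np.sqrt(cov[0, 0])
        zs.append(a_hat * b_hat
                  / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2))
    return np.mean(zs)

# The mean z statistic rises with a, then declines once collinearity
# between X and M inflates se_b faster than ab grows.
print([round(sobel_z(a), 2) for a in (0.1, 0.5, 0.9, 0.99)])
```

The se_b inflation is exactly the variance inflation factor 1/(1 - a²) for the M column once X is also in the outcome model, which is why the decline sets in as a grows large relative to b.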
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). 
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
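The four-step work-flow above can be sketched as a toy pipeline. Everything method-specific is replaced by a labelled stand-in: scipy's skewness test stands in for the DSE test, and re-centring on the unaltered mode stands in for the HMM-assisted normalization; neither is the authors' procedure, and the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

# Toy experiment: 40% of variables truly up-regulated, so the distribution
# of true log-ratios is skewed, the case where standard normalization fails.
g = 5000
true_shift = np.where(rng.random(g) < 0.4, 1.5, 0.0)
logratio = true_shift + rng.normal(0, 0.3, g)

# (1) Standard normalization: subtract the overall median.
standard = logratio - np.median(logratio)

# (2) Detect skewness of the normalized values (stand-in for the DSE test).
skewed = stats.skewtest(standard).pvalue < 0.01

# (3) If skewed, re-centre on the unaltered mode (stand-in for the
#     HMM-assisted step: here, the median of the lower half).
if skewed:
    baseline = np.median(standard[standard < np.percentile(standard, 50)])
    renorm = standard - baseline
else:
    renorm = standard

# (4) Downstream analysis: unaltered genes should now sit closer to zero.
print(skewed, np.median(renorm[true_shift == 0]))
```

Even this crude re-centring reduces the bias that the overall median introduces when a large skewed fraction of variables is altered, which is the effect the HMM-assisted procedure removes far more effectively.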
Reliable Function Approximation and Estimation
2016-08-16
AFRL-AFOSR-VA-TR-2016-0293. Final Report, 08/16/2016. Reliable Function Approximation and Estimation, Rachel Ward, University of Texas at Austin, 101 East 27th Street, Ste 4308, Austin, TX 78712. DISTRIBUTION A: Distribution approved for public release. Air Force Research Laboratory, AF Office of Scientific Research.
Computational Study of Thrombus Formation and Clotting Factor Effects under Venous Flow Conditions
Govindarajan, Vijay; Rakesh, Vineet; Reifman, Jaques; Mitrophanov, Alexander Y.
2016-01-01
A comprehensive understanding of thrombus formation as a physicochemical process that has evolved to protect the integrity of the human vasculature is critical to our ability to predict and control pathological states caused by a malfunctioning blood coagulation system. Despite numerous investigations, the spatial and temporal details of thrombus growth as a multicomponent process are not fully understood. Here, we used computational modeling to investigate the temporal changes in the spatial distributions of the key enzymatic (i.e., thrombin) and structural (i.e., platelets and fibrin) components within a growing thrombus. Moreover, we investigated the interplay between clot structure and its mechanical properties, such as hydraulic resistance to flow. Our model relied on the coupling of computational fluid dynamics and biochemical kinetics, and was validated using flow-chamber data from a previous experimental study. The model allowed us to identify the distinct patterns characterizing the spatial distributions of thrombin, platelets, and fibrin accumulating within a thrombus. Our modeling results suggested that under the simulated conditions, thrombin kinetics was determined predominantly by prothrombinase. Furthermore, our simulations showed that thrombus resistance imparted by fibrin was ∼30-fold higher than that imparted by platelets. Yet, thrombus-mediated blood flow occlusion was driven primarily by the platelet deposition process, because the height of the platelet accumulation domain was approximately twice that of the fibrin accumulation domain. Fibrinogen supplementation in normal blood resulted in a nonlinear increase in thrombus resistance, and for a supplemented fibrinogen level of 48%, the thrombus resistance increased by ∼2.7-fold. 
Finally, our model predicted that restoring the normal levels of clotting factors II, IX, and X while simultaneously restoring fibrinogen (to 88% of its normal level) in diluted blood can restore fibrin generation to ∼78% of its normal level and hence improve clot formation under dilution. PMID:27119646
Distributed plasticity of locomotor pattern generators in spinal cord injured patients.
Grasso, Renato; Ivanenko, Yuri P; Zago, Myrka; Molinari, Marco; Scivoletto, Giorgio; Castellano, Vincenzo; Macellari, Velio; Lacquaniti, Francesco
2004-05-01
Recent progress with spinal cord injured (SCI) patients indicates that with training they can recover some locomotor ability. Here we addressed the question of whether locomotor responses developed with training depend on re-activation of the normal motor patterns or whether they depend on learning new motor patterns. To this end we recorded detailed kinematic and EMG data in SCI patients trained to step on a treadmill with body-weight support (BWST), and in healthy subjects. We found that all patients could be trained to step with BWST under laboratory conditions, but they used new coordinative strategies. Patients with more severe lesions used their arms and body to assist the leg movements via the biomechanical coupling of limb and body segments. In all patients, the phase-relationship of the angular motion of the different lower limb segments was very different from the control, as was the pattern of activity of most recorded muscles. Surprisingly, however, the new motor strategies were quite effective in generating foot motion that closely matched normal motion under laboratory conditions. With training, foot motion recovered the shape, the step-by-step reproducibility, and the two-thirds power relationship between curvature and velocity that characterize normal gait. We mapped the recorded patterns of muscle activity onto the approximate rostrocaudal location of motor neuron pools in the human spinal cord. The reconstructed spatiotemporal maps of motor neuron activity in SCI patients were quite different from those of healthy subjects. At the end of training, the locomotor network reorganized at both supralesional and sublesional levels, from the cervical to the sacral cord segments. We conclude that locomotor responses in SCI patients may not be subserved by changes localized to limited regions of the spinal cord, but may depend on a plastic redistribution of activity across most of the rostrocaudal extent of the spinal cord. 
Distributed plasticity underlies recovery of foot kinematics by generating new patterns of muscle activity that are motor equivalents of the normal ones.
Characterizing the D2 statistic: word matches in biological sequences.
Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J
2009-01-01
Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
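For exact word matches, the D2 statistic described here reduces to a sum, over all shared k-mers, of the product of their occurrence counts in the two sequences. A minimal sketch (sequences and k chosen purely for illustration):

```python
from collections import Counter

def d2_statistic(a: str, b: str, k: int) -> int:
    """Number of exact k-word matches between sequences a and b:
    D2 = sum over all k-mers w of count_a(w) * count_b(w)."""
    count_a = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    count_b = Counter(b[j:j + k] for j in range(len(b) - k + 1))
    return sum(n * count_b[w] for w, n in count_a.items())

# "ACGTACGT" contains ACG and CGT twice each; "ACGT" contains each once.
print(d2_statistic("ACGTACGT", "ACGT", 3))  # → 4
```

Extending this to approximate word matches (the paper's contribution) requires scoring near-identical k-mers as well, which this exact-match sketch does not attempt.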
Short-term preservation of porcine oocytes in ambient temperature: novel approaches.
Yang, Cai-Rong; Miao, De-Qiang; Zhang, Qing-Hua; Guo, Lei; Tong, Jing-Shan; Wei, Yanchang; Huang, Xin; Hou, Yi; Schatten, Heide; Liu, ZhongHua; Sun, Qing-Yuan
2010-12-07
The objective of this study was to evaluate the feasibility of preserving porcine oocytes without freezing. To optimize preservation conditions, porcine cumulus-oocyte complexes (COCs) were preserved in TCM-199, porcine follicular fluid (pFF) and FCS at different temperatures (4°C, 20°C, 25°C, 27.5°C, 30°C and 38.5°C) for 1 day, 2 days or 3 days. After preservation, oocyte morphology, germinal vesicle (GV) rate, actin cytoskeleton organization, cortical granule distribution, mitochondrial translocation and intracellular glutathione level were evaluated. Oocyte maturation was indicated by first polar body emission and spindle morphology after in vitro culture. Strikingly, when COCs were stored at 27.5°C for 3 days in pFF or FCS, more than 60% of oocytes were still arrested at the GV stage and more than 50% of oocytes matured to the MII stage after culture. Almost 80% of oocytes showed normal actin organization and cortical granule relocation to the cortex, and approximately 50% of oocytes showed diffused mitochondria distribution patterns and normal spindle configurations. When stored in TCM-199, all of these measures decreased significantly. Glutathione (GSH) level in the pFF or FCS group was higher than in the TCM-199 group, but lower than in the non-preserved control group. The preserved oocytes could be fertilized and developed to blastocysts (about 10%) with normal cell number, which is clear evidence that they retain developmental potential after 3 days of preservation. Thus, we have developed a simple method for preserving immature pig oocytes at ambient temperature for several days without evident cytoplasmic damage, while maintaining oocyte developmental competence.
Cadena, Carlos Daniel; Zapata, Felipe; Jiménez, Iván
2018-03-01
Progress in the development and use of methods for species delimitation employing phenotypic data lags behind conceptual and practical advances in molecular genetic approaches. The basic evolutionary model underlying the use of phenotypic data to delimit species assumes random mating and quantitative polygenic traits, so that phenotypic distributions within a species should be approximately normal for individuals of the same sex and age. Accordingly, two or more distinct normal distributions of phenotypic traits suggest the existence of multiple species. In light of this model, we show that analytical approaches employed in taxonomic studies using phenotypic data are often compromised by three issues: 1) reliance on graphical analyses that convey little information on phenotype frequencies; 2) exclusion of characters potentially important for species delimitation following reduction of data dimensionality; and 3) use of measures of central tendency to evaluate phenotypic distinctiveness. We outline approaches to overcome these issues based on statistical developments related to normal mixture models (NMMs) and illustrate them empirically with a reanalysis of morphological data recently used to claim that there are no morphologically distinct species of Darwin's ground-finches (Geospiza). We found negligible support for this claim relative to taxonomic hypotheses recognizing multiple species. Although species limits among ground-finches merit further assessments using additional sources of information, our results bear implications for other areas of inquiry including speciation research: because ground-finches have likely speciated and are not trapped in a process of "Sisyphean" evolution as recently argued, they remain useful models to understand the evolutionary forces involved in speciation. Our work underscores the importance of statistical approaches grounded on appropriate evolutionary models for species delimitation. 
We discuss how NMMs offer new perspectives in the kind of inferences available to systematists, with significant repercussions on ideas about the phenotypic structure of biodiversity.
Poyant, Janelle O; Albright, Robert; Clain, Jeremy; Pandompatam, Govind; Barreto, Erin F
2017-11-10
Butalbital is a small molecule (approximately 220 Da), with 26% protein binding, a 0.8 L/kg volume of distribution, and is eliminated nearly 80% unchanged in the urine. Although hemodialysis has been used to treat overdoses of other barbiturates, the extracorporeal clearance of butalbital is unknown. The objective of this case is to describe the use of extracorporeal therapy to augment elimination of butalbital after an overdose of aspirin 325 mg-butalbital 50 mg-caffeine 40 mg with codeine 30 mg (Fiorinal with Codeine). This is a case report of a single patient. A 67-year-old female was admitted to the medical intensive care unit approximately 3 h after ingestion of 40 tablets of Fiorinal with Codeine. Her presentation was notable for a decline in mental status, preserved renal function and a relatively low peak salicylate concentration at 46.4 mg/dL (3.4 mmol/L). Approximately 8 h after ingestion of 2000 mg of butalbital, our patient's serum concentration was 26.9 mg/L (normal <10 mg/L). At the end of a four-hour hemodialysis session, the total body elimination of butalbital was approximately 60% which corresponded to an intradialytic clearance of 233-300 mL/min. The extracorporeal clearance of butalbital observed in this case demonstrates the utility of dialysis to augment drug elimination in a Fiorinal with Codeine overdose.
Bono, Roser; Blanca, María J.; Arnau, Jaume; Gómez-Benito, Juana
2017-01-01
Statistical analysis is crucial for research and the choice of analytical technique should take into account the specific distribution of data. Although the data obtained from health, educational, and social sciences research are often not normally distributed, there are very few studies detailing which distributions are most likely to represent data in these disciplines. The aim of this systematic review was to determine the frequency of appearance of the most common non-normal distributions in the health, educational, and social sciences. The search was carried out in the Web of Science database, from which we retrieved the abstracts of papers published between 2010 and 2015. The selection was made on the basis of the title and the abstract, and was performed independently by two reviewers. The inter-rater reliability for article selection was high (Cohen’s kappa = 0.84), and agreement regarding the type of distribution reached 96.5%. A total of 262 abstracts were included in the final review. The distribution of the response variable was reported in 231 of these abstracts, while in the remaining 31 it was merely stated that the distribution was non-normal. In terms of their frequency of appearance, the most common non-normal distributions can be ranked in descending order as follows: gamma, negative binomial, multinomial, binomial, lognormal, and exponential. In addition to identifying the distributions most commonly used in empirical studies, these results will help researchers to decide which distributions should be included in simulation studies examining statistical procedures. PMID:28959227
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to describe the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples are well described by the three-parameter log-normal distribution, whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Accordingly, we derive directly from the simulated data two quantitative linear relations: one between the skewness and the maximum tree depth, and one between the variance of the void size distribution and the maximum tree depth. In addition to this, we find that the fraction of voids with nonzero central density in the data sets is of critical importance. If the fraction of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
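A three-parameter log-normal (shape, location, scale) can be fitted to a void size sample with `scipy.stats.lognorm`, which uses exactly this parameterization. The parameter values below are illustrative and are not taken from the Cosmic Void Catalog:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated "void radii": three-parameter log-normal with shape parameter s,
# location loc (minimum void size), and scale exp(mu). Illustrative values.
true_s, true_loc, true_scale = 0.5, 5.0, 10.0
radii = stats.lognorm.rvs(true_s, loc=true_loc, scale=true_scale,
                          size=5000, random_state=rng)

# Maximum-likelihood fit of all three parameters.
s, loc, scale = stats.lognorm.fit(radii)
print(f"fitted s={s:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution.
ks = stats.kstest(radii, "lognorm", args=(s, loc, scale))
print(f"KS p-value = {ks.pvalue:.3f}")
```

Comparing such fits across DM, halo, and galaxy samples is how one would then relate the fitted shape parameters to environment.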
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
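The error-calibrated acceptance step derived in the paper is not reproduced here; the sketch below shows only the basic rejection-ABC scheme it builds on, for repeated normal measures of one quantity. As a shortcut, the simulated summary (the sample mean) is drawn directly from its known sampling distribution rather than by simulating full datasets; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" repeated measures of the same quantity with normal error.
true_mu, sigma = 3.0, 0.5
observed = rng.normal(true_mu, sigma, size=20)

def abc_rejection(data, n_draws=100_000, tol=0.05):
    """Plain rejection ABC for the mean with a flat prior on [0, 6]."""
    candidates = rng.uniform(0.0, 6.0, size=n_draws)
    # Simulated summary statistic: the sample mean of a dataset generated
    # at each candidate parameter (drawn from its sampling distribution).
    sims = rng.normal(candidates, sigma / np.sqrt(len(data)))
    return candidates[np.abs(sims - data.mean()) < tol]

posterior = abc_rejection(observed)
print(f"posterior mean ≈ {posterior.mean():.2f} "
      f"(n accepted = {len(posterior)})")
```

The paper's contribution is to replace the hard tolerance `tol` with acceptance probabilities that account for measurement error, and to check calibration with an updated coverage test.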
Pharmacokinetics of isotretinoin and its major blood metabolite following a single oral dose to man.
Colburn, W A; Vane, F M; Shorter, H J
1983-01-01
A pharmacokinetic profile of isotretinoin and its major dermatologically active blood metabolite, 4-oxo-isotretinoin, was developed following a single 80 mg oral suspension dose of isotretinoin to 15 normal male subjects. Blood samples were assayed for isotretinoin and 4-oxo-isotretinoin using a newly developed reverse-phase HPLC method. Following rapid absorption from the suspension formulation, isotretinoin is distributed and eliminated with harmonic mean half-lives of 1.3 and 17.4 h, respectively. Maximum concentrations of isotretinoin in blood were observed at 1 to 4 h after dosing. Maximum concentrations of the major blood metabolite of isotretinoin, 4-oxo-isotretinoin, are approximately one-half those of isotretinoin and occur at 6 to 16 h after isotretinoin dosing. The ratio of areas under the curve for metabolite and parent drug following the single dose suggests that average steady-state ratios of metabolite to parent drug during a dosing interval will be approximately 2.5. Both isotretinoin and its metabolite can be adequately described using a single linear pharmacokinetic model.
Rasheed, Tabish; Ahmad, Shabbir
2010-10-01
Ab initio Hartree-Fock (HF), density functional theory (DFT) and second-order Møller-Plesset (MP2) methods were used to perform harmonic and anharmonic calculations for the biomolecule cytosine and its deuterated derivative. The anharmonic vibrational spectra were computed using the vibrational self-consistent field (VSCF) and correlation-corrected vibrational self-consistent field (CC-VSCF) methods. Calculated anharmonic frequencies have been compared with the argon matrix spectra reported in the literature. The results were analyzed with a focus on the properties of anharmonic couplings between pairs of modes. A simple and easy-to-use formula for calculating mode-mode coupling magnitudes has been derived. The key element in the present approach is the approximation that only interactions between pairs of normal modes are taken into account, while interactions among triples or more are neglected. FTIR and Raman spectra of solid state cytosine have been recorded in the regions 400-4000 cm(-1) and 60-4000 cm(-1), respectively. Vibrational analysis and assignments are based on calculated potential energy distribution (PED) values. Copyright 2010 Elsevier B.V. All rights reserved.
IUE results on the AM Herculis stars CW 1103, E1114, and PG 1550
NASA Technical Reports Server (NTRS)
Szkody, P.; Liebert, J.; Panek, R. J.
1985-01-01
IUE data are presented on three AM Her stars (CW 1103 + 254, E1114 + 182, and PG 1550 + 191) which are used in conjunction with optical and IR fluxes to study the accretion characteristics of these systems in relation to other polars. The time-resolved IUE spectra of CW 1103 show that the column contributes little to the UV, while the white dwarf, with a temperature of approximately 13,000 K and a distance of approximately 140 pc, is the dominant source of light. Thus, CW 1103 in its normal state is basically very similar to VV Pup at its low accretion state, except for increased IR emission that is not connected to the accretion column or the secondary. E1114 also appears to be a low UV emitter, but better data are needed to constrain the observed temperature. On the other hand, PG 1550 has a steeper UV distribution, with the possibility of a hot Rayleigh-Jeans component at wavelengths less than 1600 Å. This source is very similar to E1405-451 and AM Her itself.
Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya
2002-04-01
In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of the element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase at which the samples were obtained is expected to show a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
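The argument can be checked numerically: with normally distributed sampling times and first-order elimination, the log-concentrations are an exact linear function of time, so the concentrations themselves are log-normal. The rate constant and sampling-time parameters below are illustrative, not taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# One-compartment, first-order elimination: C(t) = C0 * exp(-k * t).
C0, k = 100.0, 0.3   # illustrative initial concentration and rate constant

# Stochastic sampling times in the elimination phase, normally distributed.
t = rng.normal(8.0, 2.0, size=5000)
conc = C0 * np.exp(-k * t)

# log C = log C0 - k*t is a linear function of a normal variable, so the
# concentrations are log-normally distributed (positively skewed).
print("skew of concentrations:", round(stats.skew(conc), 2))
stat, p = stats.normaltest(np.log(conc))
print(f"normality test on log-concentrations: p = {p:.2f}")
```

The positive skew of `conc` alongside the normality of `log(conc)` reproduces the mechanism the authors describe.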
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
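The article's closed-form expression is not reproduced in the abstract; the sketch below shows only the general idea on a toy two-gate tree: Monte Carlo sampling of a top event whose basic events are lognormal, compared against a moment-matched lognormal approximation. Gate structure, medians, and error factors are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

def lognormal_from_median_ef(median, ef, size):
    """Sample a lognormal given its median and error factor
    EF = 95th percentile / median (so sigma = ln(EF) / 1.645)."""
    sigma = np.log(ef) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

# Toy tree: TOP = (A AND B) OR C, rare-event approximation
# p_top ≈ pA*pB + pC.
pA = lognormal_from_median_ef(1e-3, 3.0, N)
pB = lognormal_from_median_ef(2e-3, 3.0, N)
pC = lognormal_from_median_ef(1e-6, 5.0, N)
top = pA * pB + pC

# Moment-matched lognormal approximation to the top event distribution.
m, v = top.mean(), top.var()
sigma2 = np.log(1 + v / m**2)
mu = np.log(m) - sigma2 / 2
approx_p95 = np.exp(mu + 1.645 * np.sqrt(sigma2))
mc_p95 = np.quantile(top, 0.95)
print(f"MC 95th percentile      = {mc_p95:.3e}")
print(f"lognormal approximation = {approx_p95:.3e}")
```

As the abstract notes, the appeal of a closed-form approximation is avoiding the cost of the Monte Carlo loop while remaining close to its percentiles.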
Ho, Andrew D; Yu, Carol C
2015-06-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
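The routine distributional descriptives the authors recommend are a one-liner with `scipy.stats`; the sketch below applies them to a simulated, rounded, ceiling-limited score scale of the kind the article describes (all values illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated test scores: rounded to integers and clipped to a 0-100 scale,
# producing the discreteness and ceiling effect the authors discuss.
scores = np.clip(np.round(rng.normal(80, 15, size=10_000)), 0, 100)

# Distributional descriptives to compute before choosing a model.
print(f"mean     = {scores.mean():.1f}")
print(f"sd       = {scores.std(ddof=1):.1f}")
print(f"skewness = {stats.skew(scores):.2f}")      # negative → ceiling effect
print(f"kurtosis = {stats.kurtosis(scores):.2f}")  # excess kurtosis
```

A markedly nonzero skewness or kurtosis here is the signal that normal-theory methods may need a sensitivity check against normalized scales.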
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
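The approximation at the heart of the analysis, equating the expected maximum of independent processor loads with the maximum of their expectations, can be probed directly by simulation. Exponential module execution times and the even two-way split are illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_modules, n_trials = 8, 100_000

# Module execution times, split evenly across two processors.
times = rng.exponential(1.0, size=(n_trials, n_modules))
T1 = times[:, : n_modules // 2].sum(axis=1)   # processor 1 load
T2 = times[:, n_modules // 2 :].sum(axis=1)   # processor 2 load

# The analysis approximates E[max(T1, T2)] by max(E[T1], E[T2]);
# by Jensen-type reasoning the approximation always underestimates.
exp_max = np.maximum(T1, T2).mean()
max_exp = max(T1.mean(), T2.mean())
print(f"E[max(T1, T2)]    = {exp_max:.2f}")
print(f"max(E[T1], E[T2]) = {max_exp:.2f}")
```

The gap between the two quantities is the error the paper's approximation-free two-processor proof is designed to avoid.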
Collective Human Mobility Pattern from Taxi Trips in Urban Area
Peng, Chengbin; Jin, Xiaogang; Wong, Ka-Chun; Shi, Meixia; Liò, Pietro
2012-01-01
We analyze the passengers' traffic pattern for 1.58 million taxi trips of Shanghai, China. By employing non-negative matrix factorization and optimization methods, we find that people travel on workdays mainly for three purposes: commuting between home and workplace, traveling from workplace to workplace, and others such as leisure activities. Therefore, traffic flow in one area or between any pair of locations can be approximated by a linear combination of three basis flows, corresponding to the three purposes respectively. We call the coefficients in the linear combination traffic powers, each of which indicates the strength of one basis flow. The traffic powers on different days are typically different even for the same location, due to the uncertainty of human motion. Therefore, we provide a probability distribution function for the relative deviation of the traffic power. This distribution function is expressed as a series of normalized binomial distribution functions. It is well explained by statistical theory and is verified by empirical data. These findings are applicable in predicting road traffic, tracing traffic patterns and diagnosing traffic-related abnormal events. These results can also be used to infer land uses of urban areas quite parsimoniously. PMID:22529917
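The decomposition into three basis flows can be sketched with scikit-learn's NMF. The toy matrix below is synthetic (locations × days, built from three hypothetical basis flows), so only the mechanics, not the Shanghai result, are illustrated:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

# Toy trip-count matrix: 50 locations observed over 30 days, constructed
# as a nonnegative mix of three hypothetical basis flows plus small noise.
n_locations, n_days = 50, 30
basis = rng.random((n_locations, 3))     # three latent basis flows
powers = rng.random((3, n_days))         # daily traffic powers
trips = basis @ powers + 0.01 * rng.random((n_locations, n_days))

# Factor the observed flows into three components, as in the paper.
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(trips)   # recovered basis flows (locations x 3)
H = model.components_            # recovered traffic powers (3 x days)
print("reconstruction error:", round(model.reconstruction_err_, 3))
```

Day-to-day variation in the columns of `H` is what the paper models with its series of normalized binomial distributions.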
The Node Deployment of Intelligent Sensor Networks Based on the Spatial Difference of Farmland Soil.
Liu, Naisen; Cao, Weixing; Zhu, Yan; Zhang, Jingchao; Pang, Fangrong; Ni, Jun
2015-11-11
Considering that agricultural production is characterized by vast areas, scattered fields and long crop growth cycles, intelligent wireless sensor networks (WSNs) are suitable for monitoring crop growth information. Cost and coverage are the key performance indexes for WSN applications. Differences in crop conditions are influenced by the spatial distribution of soil nutrients. If the nutrients are distributed evenly, the crop conditions are expected to be approximately uniform with little difference; otherwise, there will be great differences in crop conditions. In accordance with the differences in the spatial distribution of soil information in farmland, fuzzy c-means clustering was applied to divide the farmland into several areas, where the soil fertility of each area is nearly uniform. The crop growth information in each area could then be monitored with complete coverage by deploying a single sensor node there, greatly decreasing the number of deployed sensor nodes. Moreover, in order to accurately judge the optimal cluster number for fuzzy c-means clustering, a discriminant function, the Normalized Intra-Cluster Coefficient of Variation (NICCV), was established. The sensitivity analysis indicates that NICCV is insensitive to the fuzzy weighting exponent, but shows a strong sensitivity to the number of clusters.
Financial derivative pricing under probability operator via Esscher transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achi, Godswill U., E-mail: achigods@yahoo.com
2014-10-24
The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, the Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution. It was later studied [6] in a comparison of standard Black-Scholes contingent pricing and distortion-based contingent pricing. So, in this paper, we aim to use distortion operators based on the Cauchy distribution, under a simple transformation, to price contingent claims. We also show that we can recover the Black-Scholes formula using the distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, a price mechanism based on the characteristic Esscher measure can generate approximately arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u), evaluated at the characteristic exponents φ_x(u) of X_t, we recover the Black-Scholes formula for financial derivative prices.
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
Investigation of wall-bounded turbulence over regularly distributed roughness
NASA Astrophysics Data System (ADS)
Placidi, Marco; Ganapathisubramani, Bharathram
2012-11-01
The effects of regularly distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of Planar (high resolution l+ ~ 30) and Stereoscopic Particle Image Velocimetry (PIV) experiments in a wind tunnel. An adequate description of how to best characterise a rough wall, especially one where the density of roughness elements is sparse, is yet to be developed. In this study, rough surfaces consisting of regularly and uniformly distributed LEGO® blocks are used. Twelve different patterns are adopted in order to systematically examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area) and plan solidity (λp, plan area of roughness elements per unit wall-parallel area), on the turbulence structure. The Karman number, Reτ , is approximately 4000 across the different cases. Spanwise 3D vector fields at two different wall-normal locations (top of the canopy and within the log-region) are also compared to examine the spanwise homogeneity of the flow across different surfaces. In the talk, a detailed analysis of mean and rms velocity profiles, Reynolds stresses, and quadrant decomposition for the different patterns will be presented.
A mathematical model relating response durations to amount of subclinical resistant disease.
Gregory, W M; Richards, M A; Slevin, M L; Souhami, R L
1991-02-15
A mathematical model is presented which seeks to determine, from examination of the response durations of a group of patients with malignant disease, the mean and distribution of the resistant tumor volume. The mean tumor-doubling time and distribution of doubling times are also estimated. The model assumes that in a group of patients there is a log-normal distribution both of resistant disease and of tumor-doubling times and implies that the shapes of certain parts of an actuarial response-duration curve are related to these two factors. The model has been applied to data from two reported acute leukemia trials: (a) a recent acute myelogenous leukemia trial was examined. Close fits were obtained for both the first and second remission-duration curves. The model results suggested that patients with long first remissions had less resistant disease and had tumors with slower growth rates following second line treatment; (b) an historical study of maintenance therapy for acute lymphoblastic leukemia was used to estimate the mean cell-kill (approximately 10(4) cells) achieved with single agent, 6-mercaptopurine. Application of the model may have clinical relevance, for example, in identifying groups of patients likely to benefit from further intensification of treatment.
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
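The Box-Cox selection step described above can be sketched as follows, using simulated log-normal values in place of the study's SUV measurements: `scipy.stats.boxcox` returns the parameter maximizing the profile log-likelihood, and a fitted value near zero indicates that a plain log transform is close to optimal.

```python
# Sketch: Box-Cox transformation of skewed, positive SUV-like values, with
# Shapiro-Wilk normality checks before and after. Data are simulated, not
# the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
suv = rng.lognormal(mean=1.0, sigma=0.5, size=40)   # skewed, strictly positive

w_raw, p_raw = stats.shapiro(suv)                   # normality test on raw values

# stats.boxcox picks the lambda that maximizes the profile log-likelihood;
# for truly log-normal data the fitted lambda is near zero.
transformed, lam = stats.boxcox(suv)
w_tr, p_tr = stats.shapiro(transformed)

print(f"lambda ~ {lam:.2f}; Shapiro-Wilk p: raw {p_raw:.3f} -> transformed {p_tr:.3f}")
```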
A Method for Approximating the Bivariate Normal Correlation Coefficient.
ERIC Educational Resources Information Center
Kirk, David B.
Improvements of the Gaussian quadrature in conjunction with the Newton-Raphson iteration technique (TM 000 789) are discussed as effective methods of calculating the bivariate normal correlation coefficient. (CK)
Assessing the Clinical Impact of Approximations in Analytical Dose Calculations for Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, Jan, E-mail: jschuemann@mgh.harvard.edu; Giantsoudi, Drosoula; Grassberger, Clemens
2015-08-01
Purpose: To assess the impact of approximations in current analytical dose calculation methods (ADCs) on tumor control probability (TCP) in proton therapy. Methods: Dose distributions planned with ADC were compared with delivered dose distributions as determined by Monte Carlo simulations. A total of 50 patients were investigated in this analysis, with 10 patients per site for 5 treatment sites (head and neck, lung, breast, prostate, liver). Differences were evaluated using dosimetric indices based on a dose-volume histogram analysis, a γ-index analysis, and estimations of TCP. Results: We found that ADC overestimated the target doses on average by 1% to 2% for all patients considered. The mean dose, D95, D50, and D02 (the dose values covering 95%, 50% and 2% of the target volume, respectively) were predicted within 5% of the delivered dose. The γ-index passing rate for target volumes was above 96% for a 3%/3 mm criterion. Differences in TCP were up to 2%, 2.5%, 6%, 6.5%, and 11% for liver and breast, prostate, head and neck, and lung patients, respectively. Differences in normal tissue complication probabilities for bladder and anterior rectum of prostate patients were less than 3%. Conclusion: Our results indicate that current dose calculation algorithms lead to underdosage of the target by as much as 5%, resulting in differences in TCP of up to 11%. To ensure full target coverage, advanced dose calculation methods like Monte Carlo simulations may be necessary in proton therapy. Monte Carlo simulations may also be required to avoid biases resulting from systematic discrepancies in calculated dose distributions for clinical trials comparing proton therapy with conventional radiation therapy.
Luttinger theorem and imbalanced Fermi systems
NASA Astrophysics Data System (ADS)
Pieri, Pierbiagio; Strinati, Giancarlo Calvanese
2017-04-01
The proof of the Luttinger theorem, which was originally given for a normal Fermi liquid with equal spin populations formally described by the exact many-body theory at zero temperature, is here extended to an approximate theory given in terms of a "conserving" approximation also with spin imbalanced populations. The need for this extended proof, whose underlying assumptions are here spelled out in detail, stems from the recent interest in superfluid trapped Fermi atoms with attractive inter-particle interaction, for which the difference between two spin populations can be made large enough that superfluidity is destroyed and the system remains normal even at zero temperature. In this context, we will demonstrate the validity of the Luttinger theorem separately for the two spin populations for any "Φ-derivable" approximation, and illustrate it in particular for the self-consistent t-matrix approximation.
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
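The LOD score described above can be sketched numerically: it is the log10 likelihood ratio of a two-component normal mixture (QTL present) against a single fitted normal (no QTL). The phenotypes and mixture parameters below are simulated for illustration; in real interval mapping the mixture weights come from genotype probabilities at the locus and the parameters from EM.

```python
# Sketch: LOD score as log10 likelihood ratio, mixture model vs single normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Phenotypes from two genotype classes with a clear QTL effect.
pheno = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

# No-QTL model: a single normal at its sample MLE (mean and sd).
ll_single = stats.norm.logpdf(pheno, pheno.mean(), pheno.std()).sum()

# QTL model: equal mixture with the generating component parameters.
mix_pdf = 0.5 * stats.norm.pdf(pheno, 0.0, 1.0) + 0.5 * stats.norm.pdf(pheno, 3.0, 1.0)
ll_mix = np.log(mix_pdf).sum()

lod = (ll_mix - ll_single) / np.log(10)
print(f"LOD = {lod:.1f}")  # positive: the mixture fits better than one normal
```

Because a mixture of normals always fits at least as well as a single normal, the LOD is non-negative even without a QTL, which is exactly the mechanism behind the spurious peaks the paper addresses.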
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit the sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution, and specifically its truncated version, is well suited for this purpose. We demonstrate that sales proxies derived from a calibrated truncated log-normal distribution function can produce realistic estimates of market-average product prices and product attributes. We show that market averages calculated with sales proxies derived from the calibrated, truncated log-normal distribution provide better market-average estimates than sales proxies estimated with simpler distribution functions.
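A minimal sketch of the rank-to-volume idea: assign each rank a sales proxy from a log-normal distribution truncated to an assumed volume range, then use the proxies as weights for market averages. All parameters below (mu, sigma, the truncation bounds, the number of products) are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch: truncated log-normal sales proxies indexed by sales rank.
import numpy as np
from scipy import stats

mu, sigma = 3.0, 1.2          # assumed log-normal parameters (calibrated in practice)
ranks = np.arange(1, 101)     # products ordered by sales rank (1 = best seller)

# Truncate the distribution to an assumed sales-volume range [1, 500], then
# map rank to a quantile of the truncated distribution: top rank -> high quantile.
lo, hi = stats.lognorm.cdf([1.0, 500.0], s=sigma, scale=np.exp(mu))
q = lo + (hi - lo) * (1 - (ranks - 0.5) / len(ranks))
sales_proxy = stats.lognorm.ppf(q, s=sigma, scale=np.exp(mu))

# Example use: market share of the top 10 products under this proxy.
share_top10 = sales_proxy[:10].sum() / sales_proxy.sum()
print(f"top-10 share ~ {share_top10:.2f}")
```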
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Somers, Jeffrey T.; Feiveson, Alan H.; Leigh, R. John; Wood, Scott J.; Paloski, William H.; Kornilova, Ludmila
2006-01-01
We studied the ability to hold the eyes at eccentric horizontal or vertical gaze angles in 68 normal humans, age range 19-56. Subjects attempted to sustain visual fixation of a briefly flashed target located 30° in the horizontal plane and 15° in the vertical plane in a dark environment. Conventionally, the ability to hold eccentric gaze is estimated by fitting centripetal eye drifts with exponential curves and calculating the time constant (t_c) of these slow phases of gaze-evoked nystagmus. Although the distribution of time-constant measurements (t_c) in our normal subjects was extremely skewed due to occasional test runs that exhibited near-perfect stability (large t_c values), we found that log10(t_c) was approximately normally distributed within classes of target direction. Therefore, statistical estimation and inference on the effect of target direction was performed on values of z identical with log10(t_c). Subjects showed considerable variation in their eye-drift performance over repeated trials; nonetheless, statistically significant differences emerged: values of t_c were significantly higher for gaze elicited to targets in the horizontal plane than in the vertical plane (P < 10^-5), suggesting eccentric gaze-holding is more stable in the horizontal than in the vertical plane. Furthermore, centrifugal eye drifts were observed in 13.3%, 16.0% and 55.6% of cases for horizontal, upgaze and downgaze tests, respectively. Fifth-percentile values of the time constant were estimated to be 10.2 sec, 3.3 sec and 3.8 sec for horizontal, upward and downward gaze, respectively. The difference between horizontal and vertical gaze-holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements, and to differences in orbital mechanics.
Our statistical method for representing the range of normal eccentric gaze stability can be readily applied in a clinical setting to patients who have been exposed to environments that may have modified their central integrators and thus require monitoring. Patients with gaze-evoked nystagmus can be flagged by comparison with the normative criteria established above.
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1977-01-01
Wind vector change with respect to time at Cape Kennedy, Florida, is examined according to the theory of multivariate normality. The joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from fifteen years of twice-daily Rawinsonde data, are presented by monthly reference periods for each month from 0 to 27 km. The hypotheses that wind component changes with respect to time are univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of vector wind change is Rayleigh have been tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from one to five hours, calculated from Jimsphere data, are presented.
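The conditional distribution derived above follows the standard bivariate-normal result: given the initial component u1, the future component is normal with mean and standard deviation as below. The numeric statistics here are made up for illustration, not Cape Kennedy values.

```python
# Sketch: conditional normal distribution of a future wind component given
# the initial one, from an assumed bivariate normal model.
import numpy as np

mu1, mu2 = 5.0, 6.0         # illustrative mean components now / after the lag (m/s)
s1, s2, rho = 3.0, 3.5, 0.8 # illustrative standard deviations and correlation

def conditional(u1):
    """Mean and sd of the future component given an observed initial value u1."""
    mean = mu2 + rho * (s2 / s1) * (u1 - mu1)
    sd = s2 * np.sqrt(1 - rho**2)
    return mean, sd

m, sd = conditional(8.0)
print(f"given u1 = 8.0: mean = {m:.2f}, sd = {sd:.2f}")  # mean = 8.80, sd = 2.10
```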
Normal mode analysis on the relaxation of an excited nitromethane molecule in argon bath
NASA Astrophysics Data System (ADS)
Rivera-Rivera, Luis; Wagner, Albert
In our previous work [J. Chem. Phys. 142, 014303 (2015)], classical molecular dynamics simulations followed, in an Ar bath, the relaxation of nitromethane (CH3NO2) instantaneously excited by statistically distributing 50 kcal/mol among all its internal degrees of freedom. The 300 K Ar bath was at pressures of 10 to 400 atm, a range spanning the breakdown of the isolated binary collision approximation. Both rotational and vibrational energies exhibit multi-exponential decay. This study explores mode-specific mechanisms at work in the decay process. With the separation of rotation and vibration developed by Rhee and Kim [J. Chem. Phys. 107, 1394 (1997)], one can show that the vibrational kinetic energy decomposes only into vibrational normal modes, while the rotational and Coriolis energies decompose into both vibrational and rotational normal modes. Then the saved CH3NO2 positions and momenta can be converted into mode-specific energies whose decay over 1000 ps can be monitored. The results identify vibrational and rotational modes that promote or resist energy loss and drive the multi-exponential behavior. Increasing pressure can be shown to increasingly interfere with post-collision IVR. The work was supported by the U.S. Department of Energy, Office of Science, Chemical Sciences, Geosciences, and Biosciences Division.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldstein, E.A.; Cao, E.H.; Miller, M.E.
Extracts of peripheral lymphocytes from six individuals with chronic lymphocytic leukemia (CLL) were assayed for the ability to remove O⁶-methylguanine (O⁶MeGua) from exogenous DNA. The O⁶MeGua-removing activity in CLL lymphocytes, predominantly B cells, was approximately 7-fold higher than in B lymphocytes of normal individuals and about 2-fold higher than in the unstimulated T type cells of normal persons. The activity measured in extracts of lymphocytes from three blood relatives was in the upper range of the normal distribution. Over 80% of the removal of O⁶MeGua was accomplished by the transfer of the methyl group to cysteine moieties of acceptor proteins in a stoichiometric reaction. If one assumes one acceptor group per acceptor protein, the calculated number of acceptor molecules per CLL lymphocyte falls between 91,000 and 220,000. Thus CLL lymphocytes do not show lower O⁶MeGua-removing activity, in contrast to many tumor cell strains or transformed cell lines, which are reported to have a deficient methyl excision repair phenotype (Mer⁻). Instead, the CLL lymphocytes act as if they have a super-Mer⁺ phenotype.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability in many materials, particularly brittle and quasibrittle materials; the coefficient of variation can reach 100% or even more. The lifetime distribution is highly skewed toward zero, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.
Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models
Burr, Tom
2013-01-01
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
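The ABC sensitivity to summary-statistic choice can be sketched with a toy rejection sampler: keep prior draws whose simulated summary lands within a tolerance of the observed summary. The model, prior, and tolerance below are illustrative assumptions, not the paper's mitochondrial-DNA population model.

```python
# Sketch: rejection ABC for a toy location model, comparing two summaries.
import numpy as np

rng = np.random.default_rng(2)
theta_true = 2.0
observed = rng.normal(theta_true, 1.0, size=50)

def simulate(theta):
    return rng.normal(theta, 1.0, size=50)

def abc_rejection(summary, eps, n_draws=20000):
    """Keep prior draws whose simulated summary is within eps of the data's."""
    s_obs = summary(observed)
    kept = [th for th in rng.uniform(-5, 5, n_draws)   # uniform prior on theta
            if abs(summary(simulate(th)) - s_obs) < eps]
    return np.array(kept)

post_mean = abc_rejection(np.mean, eps=0.2)  # informative summary for location
post_max = abc_rejection(np.max, eps=0.2)    # weaker summary of location

print(f"mean-summary posterior: n={len(post_mean)}, sd={post_mean.std():.2f}")
print(f"max-summary posterior:  n={len(post_max)}, sd={post_max.std():.2f}")
```

With an informative summary the accepted draws concentrate near the true parameter; a weaker summary yields a more diffuse approximate posterior, which is the paper's central point.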
NASA Technical Reports Server (NTRS)
Divinskiy, M. L.; Kolchinskiy, I. G.
1974-01-01
The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.
NASA Astrophysics Data System (ADS)
Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua
2017-03-01
The study of internal stress in quenched AISI 4140 medium carbon steel is of importance in engineering. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters based on an exponent-modified (Ex-Modified) normalized function. The results indicate that the FES based on the proposed Ex-Modified normalized function agrees more closely with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos and Leblond, respectively, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
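A minimal sketch of full quantile normalization, the classical method that qsmooth generalizes: every sample's sorted values are replaced by the across-sample mean of sorted values, forcing identical empirical distributions. The simulated gamma data with different scales stand in for samples with global technical differences; this is not the qsmooth algorithm itself, which relaxes this step within biological groups.

```python
# Sketch: full quantile normalization of 3 samples x 1000 features.
import numpy as np

rng = np.random.default_rng(3)
scales = np.array([[1.0], [2.0], [3.0]])                # per-sample scale differences
X = rng.gamma(shape=2.0, scale=scales, size=(3, 1000))  # simulated expression matrix

order = np.argsort(X, axis=1)
ranks = np.argsort(order, axis=1)        # rank of each value within its sample
ref = np.sort(X, axis=1).mean(axis=0)    # reference quantiles: mean of sorted rows
X_qn = ref[ranks]                        # replace each value by its reference quantile

# After normalization every sample has exactly the same empirical distribution.
print(np.allclose(np.sort(X_qn, axis=1), ref))
```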
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
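The compensation idea above can be sketched for a scalar (single-input, single-output) system: find FIR coefficients c so that conv(g, c) best matches a desired response d, solving the least-squares problem that the normal equations describe (here via numpy's `lstsq`). The impulse response and desired response are illustrative, and the scalar case omits the multivariable structure of the paper.

```python
# Sketch: least-squares polynomial (FIR) compensation of a discrete-time system.
import numpy as np

g = np.array([1.0, 0.5, 0.25])   # impulse response of the given system (assumed)
d = np.zeros(8); d[0] = 1.0      # desired response: a delta, i.e. approximate inversion
n_c = 6                          # length of the compensating polynomial

# Convolution (Toeplitz-like) matrix T with T @ c = conv(g, c), truncated to len(d).
T = np.zeros((len(d), n_c))
for j in range(n_c):
    for i in range(len(g)):
        if i + j < len(d):
            T[i + j, j] = g[i]

# Least-squares solution of the normal equations T^T T c = T^T d.
c, *_ = np.linalg.lstsq(T, d, rcond=None)
residual = np.linalg.norm(T @ c - d)
print(f"residual = {residual:.4f}")  # small: the FIR compensator nearly inverts g
```

Since g here is minimum phase, the FIR approximation of its inverse converges quickly as the polynomial order grows, so a short compensator already achieves a small residual.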
Strategies for Optimal Control Design of Normal Acceleration Command Following on the F-16
1992-12-01
Padé approximation. This approximation has a pole at -40, and introduces a nonminimum-phase zero at +40. In deriving the equation for normal acceleration... input signal. The mean not being exactly zero will surface in some simulation plots, but does not alter the point of showing general trends. Also... closer to reality, I will know that my goal has been accomplished. My honest belief is that general mixed H2/H∞ optimization is the methodology of
Use of computed tomography and radiolabeled leukocytes in a cat with pancreatitis.
Head, Laurie L; Daniel, Gregory B; Becker, Timothy J; Lidbetter, David A
2005-01-01
The normal feline pancreas has been evaluated using radiolabeled leukocytes (99mTc-HMPAO) and computed tomography. The purpose of this report is to describe a clinical case where both modalities were utilized to assess the inflamed feline pancreas. A nine-year-old female cat presented with anorexia, depression and some vomiting. Blood values were unremarkable. Radiographs and ultrasound were suggestive of pancreatitis. The cat's leukocytes were separated and labeled according to an established protocol. Whole body images were acquired immediately, at 5 and 30 min, and at 1, 2, 4, and 17 hours post injection. Approximately 48 h later, the animal was anesthetized and computed tomography of the abdomen was performed both pre- and post-contrast. Surgical biopsies were taken. The distribution of the WBCs was similar to that documented in normal animals; however, at 2 h there was faint uptake seen in the region of the pancreas. This uptake became more intense at 4 h and persisted at 17 h. Computed tomography showed irregular margination of the pancreas, which was larger than normal and inhomogeneous. Contrast enhancement was inhomogeneous and its peak enhancement was not reached until 10 min post injection; normal feline pancreas enhances homogeneously and peaks immediately. Histopathology confirmed pancreatitis with lymphocytic, plasmacytic, neutrophilic and eosinophilic inflammation and fibrosis. Radiolabeled leukocytes can be used to document pancreatic inflammation, and this is best seen 4 h after injection. Computed tomography allows superior visualization of the pancreas. Both the appearance and contrast enhancement pattern of the inflamed pancreas differ from normal.
Status and distribution of mangrove forests of the world using earth observation satellite data
Giri, C.; Ochieng, E.; Tieszen, L.L.; Zhu, Z.; Singh, A.; Loveland, T.; Masek, J.; Duke, N.
2011-01-01
Aim Our scientific understanding of the extent and distribution of mangrove forests of the world is inadequate. The available global mangrove databases, compiled using disparate geospatial data sources and national statistics, need to be improved. Here, we mapped the status and distributions of global mangroves using recently available Global Land Survey (GLS) data and the Landsat archive. Methods We interpreted approximately 1000 Landsat scenes using hybrid supervised and unsupervised digital image classification techniques. Each image was normalized for variation in solar angle and earth-sun distance by converting the digital number values to the top-of-the-atmosphere reflectance. Ground truth data and existing maps and databases were used to select training samples and also for iterative labelling. Results were validated using existing GIS data and the published literature to map 'true mangroves'. Results The total area of mangroves in the year 2000 was 137,760 km2 in 118 countries and territories in the tropical and subtropical regions of the world. Approximately 75% of world's mangroves are found in just 15 countries, and only 6.9% are protected under the existing protected areas network (IUCN I-IV). Our study confirms earlier findings that the biogeographic distribution of mangroves is generally confined to the tropical and subtropical regions and the largest percentage of mangroves is found between 5° N and 5° S latitude. Main conclusions We report that the remaining area of mangrove forest in the world is less than previously thought. Our estimate is 12.3% smaller than the most recent estimate by the Food and Agriculture Organization (FAO) of the United Nations. We present the most comprehensive, globally consistent and highest resolution (30 m) global mangrove database ever created. We developed and used better mapping techniques and data sources and mapped mangroves with better spatial and thematic details than previous studies. © 2010 Blackwell Publishing Ltd.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. 
In an example it also improved the power of tests to identify differential expression. PMID:12659637
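The quadratic variance structure mentioned in the abstract can be sketched numerically. The following is an illustrative Python sketch (not the authors' implementation): simulated replicate intensities with variance V(mu) = a + b*mu^2 are generated, and (a, b) are recovered by minimizing a normal-approximation extended quasi-likelihood with the per-gene means plugged in as sample means. All parameter values and names are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate 50 "genes", each with 100 replicate readings whose variance
# follows the quadratic structure V(mu) = a + b * mu**2 (assumed values).
a_true, b_true = 9.0, 0.01
mus = rng.uniform(20.0, 200.0, size=50)
data = [rng.normal(mu, np.sqrt(a_true + b_true * mu**2), size=100) for mu in mus]

def neg_eql(params):
    """Negative (normal-approximation) extended quasi-likelihood,
    with each gene's mean plugged in as its sample mean."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    total = 0.0
    for y in data:
        mu_hat = y.mean()
        v = a + b * mu_hat**2
        total += np.sum(np.log(v) + (y - mu_hat) ** 2 / v)
    return 0.5 * total

fit = minimize(neg_eql, x0=[1.0, 0.001], method="Nelder-Mead",
               options={"maxiter": 2000})
a_hat, b_hat = fit.x
print(a_hat, b_hat)
```

With enough replicates, both variance-function parameters are recovered well, illustrating why only the postulated variance structure (not the full distribution) is needed.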
Link, W.A.; Barker, R.J.
2008-01-01
Judicious choice of candidate generating distributions improves efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions.
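The idea of using an approximate posterior as the candidate generator can be sketched on a toy target (a Beta posterior, not the Cormack-Jolly-Seber model itself): an independence Metropolis-Hastings sampler whose proposal is a normal approximation to the posterior. All numbers below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Target: Beta(8, 4) posterior (e.g. 7 successes in 10 binomial trials,
# uniform prior).  Candidate generator: a rough normal approximation to
# that posterior, used as an independence proposal.
target = stats.beta(8, 4)
proposal = stats.norm(loc=0.7, scale=0.15)

n_iter = 20000
chain = np.empty(n_iter)
x = 0.7
for i in range(n_iter):
    cand = proposal.rvs(random_state=rng)
    if 0.0 < cand < 1.0:
        # Independence M-H acceptance ratio: [pi(cand) q(x)] / [pi(x) q(cand)]
        log_ratio = (target.logpdf(cand) + proposal.logpdf(x)
                     - target.logpdf(x) - proposal.logpdf(cand))
        if np.log(rng.uniform()) < log_ratio:
            x = cand
    chain[i] = x

print(chain.mean())  # true posterior mean is 8/12
```

Because the proposal closely matches the target, acceptance rates are high and the chain mixes quickly, which is precisely the efficiency gain the abstract describes.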
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1989-01-01
The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas immediately follows this general observation.
The 5 Alpha-Reductase Isozyme Family: A Review of Basic Biology and Their Role in Human Diseases
Azzouni, Faris; Godoy, Alejandro; Li, Yun; Mohler, James
2012-01-01
Despite the discovery of 5 alpha-reduction as an enzymatic step in steroid metabolism in 1951, and the discovery that dihydrotestosterone is more potent than testosterone in 1968, the significance of 5 alpha-reduced steroids in human diseases was not appreciated until the discovery of 5 alpha-reductase type 2 deficiency in 1974. Affected males are born with ambiguous external genitalia, despite normal internal genitalia. The prostate is hypoplastic, nonpalpable on rectal examination and approximately 1/10th the size of age-matched normal glands. Benign prostate hyperplasia or prostate cancer does not develop in these patients. At puberty, the external genitalia virilize partially; however, secondary sexual hair remains sparse, and male pattern baldness and acne rarely develop. Several compounds have been developed to inhibit the 5 alpha-reductase isozymes, and they play an important role in the prevention and treatment of many common diseases. This review describes the basic biochemical properties, functions, tissue distribution, chromosomal location, and clinical significance of the 5 alpha-reductase isozyme family. PMID:22235201
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating direction method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
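The observation model underlying the abstract (counts that are Poisson given correlated log-normal latent rates) can be sketched as follows; this illustrates only the generative layer, not the paper's penalized-likelihood fitting, and the mean/covariance values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent Gaussian layer: two "genes" whose log-rates are correlated.
mean = np.array([1.0, 1.5])
cov = np.array([[0.30, 0.15],
                [0.15, 0.30]])
z = rng.multivariate_normal(mean, cov, size=50000)

# Observation layer: counts are Poisson given the latent log-rates.
counts = rng.poisson(np.exp(z))

# Marginally, a Poisson log-normal variable is overdispersed (variance
# exceeds mean, unlike a pure Poisson), and the latent correlation
# induces correlation between the observed counts.
m = counts.mean(axis=0)
v = counts.var(axis=0)
print(m, v, np.corrcoef(counts.T)[0, 1])
```

The overdispersion and induced count correlation are exactly the features that make a plain Gaussian graphical model a poor fit for RNA-seq counts.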
[Characteristics of fugitive dust emission from paved road near construction activities].
Tian, Gang; Fan, Shou-Bin; Li, Gang; Qin, Jian-Ping
2007-11-01
Because of mud/dirt carryout from construction activities, the silt loading of nearby paved roads is higher and the fugitive dust emission is stronger. By sampling and laboratory analysis of road surface dust samples, we obtained the silt loading (mass of material equal to or less than 75 micrometers in physical diameter per unit area of travel surface) of paved roads near construction activities. The results show that the silt loading of roads near construction activities is higher than that of "normal roads", and silt loading is negatively correlated with distance from the construction site entrance. According to the AP-42 emission factor model of fugitive dust from roads, the emission factor of the influenced road is 2-10 times that of a "normal road", and the amount of fugitive dust emission influenced by one construction activity is equivalent to an additional road length of approximately 422-3800 m at the baseline silt loading. Based on the spatial and temporal distribution of construction activities, in 2002 the amount of PM10 emission influenced by construction activities in Beijing city areas accounted for 59% of fugitive dust from roads.
Körbahti, Bahadır K; Taşyürek, Selin
2015-03-01
Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm(2) current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.
NASA Astrophysics Data System (ADS)
Kostensalo, Joel; Suhonen, Jouni; Zuber, K.
2018-03-01
Charged-current (anti)neutrino-40Ar cross sections for astrophysical neutrinos have been calculated. The initial and final nuclear states were calculated using the nuclear shell model. The folded solar-neutrino scattering cross section was found to be 1.78(23) × 10^{-42} cm^{2}, which is higher than what previous papers have reported. The contributions from the 1^{-} and 2^{-} multipoles were found to be significant at supernova-neutrino energies, confirming the random-phase approximation (RPA) result of a previous study. The effects of neutrino flavor conversions in dense stellar matter (matter oscillations) were found to enhance the neutrino-scattering cross sections significantly for both the normal and inverted mass hierarchies. For antineutrino scattering, only a small difference between the nonoscillating and inverted-hierarchy cross sections was found, while the normal-hierarchy cross section was 2-3 times larger than the nonoscillating cross section, depending on the adopted parametrization of the Fermi-Dirac distribution. This property of the supernova-antineutrino signal could probably be used to distinguish between the two hierarchies in megaton LAr detectors.
Frequency distributions from birth, death, and creation processes.
Bartley, David L; Ogden, Trevor; Song, Ruiguang
2002-01-01
The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
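A discrete analogue of the growth/death/creation model in the abstract can be simulated directly. The sketch below is our own toy event simulation (not the paper's continuum treatment): individuals give birth at per-capita rate b, die at per-capita rate d, and new one-member groups are created at rate c; the resulting group-size frequencies decrease with size, power-law-like over a limited range. All rate values are illustrative assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Per-capita birth rate, per-capita death rate, group-creation rate (assumed).
b, d, c = 0.9, 1.0, 5.0
sizes = [1] * 10           # start with ten one-member groups
freq = Counter()           # accumulated group-size frequencies

for step in range(200000):
    n = sum(sizes)
    total = b * n + d * n + c
    u = rng.uniform() * total
    if u < c:
        sizes.append(1)                              # create a new group
    else:
        # Pick an individual uniformly, i.e. a group w.p. proportional to size.
        g = rng.choice(len(sizes), p=np.array(sizes) / n)
        if u < c + b * n:
            sizes[g] += 1                            # birth within the group
        else:
            sizes[g] -= 1                            # death
            if sizes[g] == 0:
                sizes.pop(g)                         # the group goes extinct
    if step % 100 == 0:
        freq.update(sizes)                           # periodic snapshot

print(sorted(freq.items())[:5])
```

The snapshot frequencies fall off steadily with group size, echoing the surname-distribution behaviour the abstract reports.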
LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu
2017-01-20
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
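Fitting a three-parameter (shifted) log-normal of the kind used above is straightforward with SciPy's parameterization (shape s, location shift loc, scale). This is an illustrative sketch on synthetic "void radii" with assumed parameter values, not the catalog analysis itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic "void radii": three-parameter log-normal with shape s,
# location (shift) loc, and scale (assumed values).
s_true, loc_true, scale_true = 0.5, 2.0, 3.0
radii = stats.lognorm.rvs(s_true, loc=loc_true, scale=scale_true,
                          size=5000, random_state=rng)

# Maximum-likelihood fit of all three parameters.
s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)

# Goodness of fit via the Kolmogorov-Smirnov statistic.
ks = stats.kstest(radii, "lognorm", args=(s_hat, loc_hat, scale_hat))
print(s_hat, loc_hat, scale_hat, ks.statistic)
```

Note that the three-parameter log-normal likelihood can be ill-conditioned (the shift trades off against the shape), so in practice the fit should always be checked against the data, as the KS statistic does here.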
Heterotrophic plate count and consumer's health under special consideration of water softeners.
Hambsch, Beate; Sacré, Clara; Wagner, Ivo
2004-05-01
The phenomenon of bacterial growth in water softeners has been well known for years. To improve the hygienic safety of water softeners, the German DIN Standard 19636 was developed to ensure that the distribution system cannot be contaminated by these devices and that the drinking water used in the household still meets the microbiological standards of the German drinking water guidelines, i.e., among others, a heterotrophic plate count (HPC) below 100 CFU/ml. Moreover, the standard for water softeners includes a test for contamination with Pseudomonas aeruginosa, which must be eliminated by disinfection during the regeneration phase. This is possible by sanitizing the resin bed during regeneration by producing chlorine. The results of the last 10 years of tests of water softeners according to DIN 19636 showed that it is possible to produce water softeners that comply with the standard. Approximately 60% of the tested models were accepted. P. aeruginosa is used as an indicator for potentially pathogenic bacteria able to grow in the low-nutrient conditions that normally prevail in drinking water. Like other heterotrophs, the numbers of P. aeruginosa increase rapidly when stagnation occurs. Normally, P. aeruginosa is not present in the distributed drinking water. However, under certain conditions, P. aeruginosa can be introduced into the drinking water distribution system, for instance during construction work. The occurrence of P. aeruginosa is shown in different cases in treatment plants, public drinking water systems and in-house installations. Compliance with DIN 19636 provides assurance that a water softener will not be a constant source of contamination, even if it is once inoculated with a potentially pathogenic bacterium like P. aeruginosa. Copyright 2003 Elsevier B.V.
Minimax rational approximation of the Fermi-Dirac distribution.
Moussa, Jonathan E
2016-10-28
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ^{-1})) poles to achieve an error tolerance ϵ at temperature β^{-1} over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_{occ}, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_{occ}, such as in electronic structure calculations that use a large basis set.
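The pole-sum form of such approximations can be illustrated with the textbook Matsubara expansion of the Fermi-Dirac function; this is NOT the minimax construction of the paper (which achieves the same accuracy with far fewer poles), only a sketch of evaluating f via a finite sum of simple poles.

```python
import numpy as np

def fermi_poles(x, n):
    """Truncated Matsubara pole expansion of the Fermi-Dirac function:
    f(x) = 1/(e^x + 1) = 1/2 - sum_{k>=1} 2x / (x^2 + (2k-1)^2 pi^2),
    using the first n poles."""
    k = np.arange(1, n + 1)
    poles = ((2 * k - 1) * np.pi) ** 2
    return 0.5 - np.sum(2 * x / (x**2 + poles))

x = 1.0
exact = 1.0 / (np.exp(x) + 1.0)
approx = fermi_poles(x, 1000)
print(exact, approx)
```

The truncation error of this naive expansion decays only like 1/n, which is exactly why compressed rational (e.g. minimax) pole sets matter in practice.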
Time-independent models of asset returns revisited
NASA Astrophysics Data System (ADS)
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student's t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look more Lévy-type distributed than it actually is. This phenomenon is especially evident in the artificial data, which were generated by an inflated random walk process.
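The observation about differences versus logarithmic returns can be checked on artificial data of our own construction (a plain geometric random walk, one simple stand-in for the abstract's artificial data): the log-returns are exactly normal, but the price differences inherit a scale that varies with the price level, inflating the apparent tail weight (kurtosis).

```python
import numpy as np

rng = np.random.default_rng(5)

# Artificial price series: geometric random walk with i.i.d. normal log-returns.
log_ret = rng.normal(0.0, 0.02, size=50000)
price = 100.0 * np.exp(np.cumsum(log_ret))

diffs = np.diff(price)   # absolute price differences

def kurtosis(x):
    """Plain (non-excess) sample kurtosis; 3 for a normal sample."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

print(kurtosis(log_ret), kurtosis(diffs))
```

The differences behave like a scale mixture of normals, so their kurtosis exceeds that of the underlying log-returns even though no heavy-tailed mechanism was simulated.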
Assessment of the hygienic performances of hamburger patty production processes.
Gill, C O; Rahn, K; Sloan, K; McMullen, L M
1997-05-20
The hygienic conditions of hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant, a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two, or one retail outlet, respectively, 25 samples from frozen, chilled, or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean (x) and standard deviation (s) of the log10 values were calculated for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively.
The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
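The log A calculation described above follows from the standard log-normal mean identity (assuming this is the formula the authors used): if the log10 counts are N(x, s^2), then log10 of the arithmetic mean A is x + (ln 10 / 2) s^2. A minimal sketch, with illustrative values of our choosing:

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def log_A(x_bar, s):
    """log10 of the arithmetic mean of a log-normal variable whose
    log10 values have mean x_bar and standard deviation s."""
    return x_bar + 0.5 * math.log(10.0) * s**2

# Check against simulated counts: log10(count) ~ N(4.7, 0.6^2)
# (values chosen to resemble the chilled-patty range reported above).
logs = rng.normal(4.7, 0.6, size=200000)
counts = 10.0 ** logs
print(log_A(4.7, 0.6), math.log10(counts.mean()))
```

The formula and the brute-force simulated mean agree closely, which is why x and s alone suffice to report log A.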
The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.
Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels
2013-07-21
Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory, such that inference is easily made using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. We show how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This is done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions, using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations can be performed on a normal computer. The method is implemented in the freely available open source software R, which is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
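For readers unfamiliar with the distribution, one common parameterization of the discrete Laplace pmf (assumed here; the paper's exact parameterization may differ) is P(Z = z) = (1 - p)/(1 + p) * p^|z| for integer z and 0 < p < 1 — a two-sided geometric, which is what makes the family an easily handled exponential family. A minimal sketch:

```python
import numpy as np

def discrete_laplace_pmf(z, p):
    """P(Z = z) = (1 - p)/(1 + p) * p**|z|, z any integer, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** np.abs(z)

p = 0.35
z = np.arange(-50, 51)
pmf = discrete_laplace_pmf(z, p)
print(pmf.sum())  # geometric tails decay fast, so this is essentially 1
```

The normalization follows from summing the two geometric tails, and the symmetry about zero matches the single-step mutation picture (equally likely up/down repeat-count changes).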
Survival time of the susceptible-infected-susceptible infection process on a graph.
van de Bovenkamp, Ruud; Van Mieghem, Piet
2015-09-01
The survival time T is the longest time that a virus, a meme, or a failure can propagate in a network. Using the hitting time of the absorbing state in a uniformized embedded Markov chain of the continuous-time susceptible-infected-susceptible (SIS) Markov process, we derive an exact expression for the average survival time E[T] of a virus in the complete graph K_{N} and the star graph K_{1,N-1}. By using the survival time, instead of the average fraction of infected nodes, we propose a new method to approximate the SIS epidemic threshold τ_{c} that, at least for K_{N} and K_{1,N-1}, correctly scales with the number of nodes N and that is superior to the epidemic threshold τ_{c}^{(1)}=1/λ_{1} of the N-intertwined mean-field approximation, where λ_{1} is the spectral radius of the adjacency matrix of the graph G. Although this new approximation of the epidemic threshold offers a more intuitive understanding of the SIS process, it remains difficult to compare outbreaks in different graph types. For example, the survival time in an arbitrary graph seems upper bounded by that in the complete graph and lower bounded by that in the star graph as a function of the normalized effective infection rate τ/τ_{c}^{(1)}. However, when the average fraction of infected nodes is used as a basis for comparison, the virus will survive in the star graph longer than in any other graph, making the star graph the worst-case graph instead of the complete graph. Finally, in non-Markovian SIS, the distribution of the spreading attempts over the infectious period of a node influences the survival time, even if the expected number of spreading attempts during an infectious period (the non-Markovian equivalent of the effective infection rate) is kept constant. Both early and late infection attempts lead to shorter survival times. Interestingly, just as in Markovian SIS, the survival times appear to be exponentially distributed, regardless of the infection and curing time distributions.
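On K_{N} the SIS process reduces to a birth-death chain in the number of infected nodes, so survival times can be simulated directly. The sketch below is our own Gillespie-style illustration (with a simulation-time cap, since supercritical survival times explode exponentially in N), not the paper's exact hitting-time derivation; all rate values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sis_survival_kn(n, beta, delta, t_max=200.0):
    """Survival time of SIS on the complete graph K_n from one infected node.
    On K_n only the number k of infected nodes matters: infections occur at
    rate beta*k*(n-k), curings at rate delta*k; state 0 is absorbing."""
    k, t = 1, 0.0
    while k > 0 and t < t_max:
        up, down = beta * k * (n - k), delta * k
        t += rng.exponential(1.0 / (up + down))
        k += 1 if rng.uniform() < up / (up + down) else -1
    return min(t, t_max)

n, delta = 12, 1.0
tau_c = 1.0 / (n - 1)   # mean-field threshold 1/lambda_1 for K_n
above = np.mean([sis_survival_kn(n, 3.0 * tau_c, delta) for _ in range(200)])
below = np.mean([sis_survival_kn(n, 0.3 * tau_c, delta) for _ in range(200)])
print(above, below)
```

Mean survival above the threshold is orders of magnitude longer than below it, which is the behaviour the survival-time-based threshold estimate exploits.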
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
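The normal-density weight construction compared above can be sketched on simulated data. This is an illustrative implementation of the stabilized-weight idea under a homoscedastic normal exposure model (the simplest of the six methods compared), with simulated values of our own choosing, not the paper's empirical cohort.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 20000

# Confounder L, continuous exposure A, outcome Y with a true marginal
# exposure effect of 1.0 (illustrative data-generating values).
L = rng.normal(0.0, 1.0, n)
A = 0.5 * L + rng.normal(0.0, 1.0, n)
Y = 1.0 * A + 1.0 * L + rng.normal(0.0, 1.0, n)

# Stabilized weights with normal densities: marginal density of A over
# the conditional density of A given L, both fitted from the data.
slope = np.cov(A, L)[0, 1] / L.var()
fitted = (A.mean() - slope * L.mean()) + slope * L
resid = A - fitted
sw = (stats.norm.pdf(A, A.mean(), A.std())
      / stats.norm.pdf(A, fitted, resid.std()))

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x."""
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

naive = wls_slope(A, Y, np.ones(n))
weighted = wls_slope(A, Y, sw)
print(naive, weighted)
```

The unweighted regression is confounded upward, while the weighted regression recovers the marginal effect, illustrating why the weight-model's distributional assumptions (the focus of the abstract's comparison) matter.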
Synthetic Jets in Cross-flow. Part 1; Round Jet
NASA Technical Reports Server (NTRS)
Zaman, K. B. M. Q.; Milanovic, Ivana M.
2003-01-01
Results of an experimental investigation of synthetic jets from round orifices, with and without cross-flow, are presented. Jet Reynolds numbers up to 46,000, with a fully turbulent approach boundary layer, and Stokes numbers up to 400 are covered. The threshold stroke length for synthetic jet formation, in the absence of the cross-flow, is found to be Lo/D approximately 0.5. Above Lo/D approximately 10, the profiles of normalized centerline mean velocity appear to become invariant. It is reasoned that the latter threshold may be related to the phenomenon of saturation of impulsively generated vortices. In the presence of the cross-flow, the penetration height of a synthetic jet is found to depend on the momentum-flux ratio. When this ratio is defined in terms of the maximum jet velocity and the cross-flow velocity, not only do all data collapse but the jet trajectory is also predicted well by the correlation equation available for steady jets-in-cross-flow. Distributions of mean velocity and streamwise vorticity, as well as turbulence intensity, for a synthetic jet in cross-flow are found to be similar to those of a steady jet-in-cross-flow. A pair of counter-rotating streamwise vortices, corresponding to the bound vortex pair of the steady case, is clearly observed. The mean velocity distribution exhibits a dome of low-momentum fluid pulled up from the boundary layer, and the entire domain is characterized by high turbulence.
In Vivo Two-Photon Fluorescence Kinetics of Primate Rods and Cones
Sharma, Robin; Schwarz, Christina; Williams, David R.; Palczewska, Grazyna; Palczewski, Krzysztof; Hunter, Jennifer J.
2016-01-01
Purpose The retinoid cycle maintains vision by regenerating bleached visual pigment through metabolic events, the kinetics of which have been difficult to characterize in vivo. Two-photon fluorescence excitation has been used previously to track autofluorescence directly from retinoids and pyridines in the visual cycle in mouse and frog retinas, but the mechanisms of the retinoid cycle are not well understood in primates. Methods We developed a two-photon fluorescence adaptive optics scanning light ophthalmoscope dedicated to in vivo imaging in anesthetized macaques. Using pulsed light at 730 nm, two-photon fluorescence was captured from rods and cones during light and dark adaptation through the eye's pupil. Results The fluorescence from rods and cones increased with light exposure but at different rates. During dark adaptation, autofluorescence declined, with cone autofluorescence decreasing approximately 4 times faster than from rods. Rates of autofluorescence decrease in rods and cones were approximately 4 times faster than their respective rates of photopigment regeneration. Also, subsets of sparsely distributed cones were less fluorescent than their neighbors immediately following bleach at 565 nm and they were comparable with the S cone mosaic in density and distribution. Conclusions Although other molecules could be contributing, we posit that these fluorescence changes are mediated by products of the retinoid cycle. In vivo two-photon ophthalmoscopy provides a way to monitor noninvasively stages of the retinoid cycle that were previously inaccessible in the living primate eye. This can be used to assess objectively photoreceptor function in normal and diseased retinas. PMID:26903225
Xu, Jianhua; Morris, Lynsie; Fliesler, Steven J.; Sherry, David M.
2011-01-01
Purpose. To investigate the progression of cone dysfunction and degeneration in CNG channel subunit CNGB3 deficiency. Methods. Retinal structure and function in CNGB3−/− and wild-type (WT) mice were evaluated by electroretinography (ERG), lectin cytochemistry, and correlative Western blot analysis of cone-specific proteins. Cone and rod terminal integrity was assessed by electron microscopy and synaptic protein immunohistochemical distribution. Results. Cone ERG amplitudes (photopic b-wave) in CNGB3−/− mice were reduced to approximately 50% of WT levels by postnatal day 15, decreasing further to approximately 30% of WT levels by 1 month and to approximately 20% by 12 months of age. Rod ERG responses (scotopic a-wave) were not affected in CNGB3−/− mice. Average CNGB3−/− cone densities were approximately 80% of WT levels at 1 month and declined slowly thereafter to only approximately 50% of WT levels by 12 months. Expression levels of M-opsin, cone transducin α-subunit, and cone arrestin in CNGB3−/− mice were reduced by 50% to 60% by 1 month and declined to 35% to 45% of WT levels by 9 months. In addition, cone opsin mislocalized to the outer nuclear layer and the outer plexiform layer in the CNGB3−/− retina. Cone and rod synaptic marker expression and terminal ultrastructure were normal in the CNGB3−/− retina. Conclusions. These findings are consistent with an early-onset, slow progression of cone functional defects and cone loss in CNGB3−/− mice, with the cone signaling deficits arising from disrupted phototransduction and cone loss rather than from synaptic defects. PMID:21273547
A general approach to double-moment normalization of drop size distributions
NASA Astrophysics Data System (ADS)
Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.
2003-04-01
Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization using two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of the question of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
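The flavor of a two-moment normalization can be illustrated with the widely used (M3, M4) pair, in the spirit of Testud et al. (2001); the exponential DSD and its parameter values below are assumptions chosen only to make the check exact.

```python
# Illustrative sketch of a two-moment DSD normalization using the 3rd and
# 4th moments of an exponential DSD N(D) = N0*exp(-Lambda*D).
import numpy as np

N0, Lam = 8000.0, 2.0                      # assumed intercept and slope
D = np.linspace(1e-4, 12.0, 40000)         # drop diameter grid (mm)
N = N0 * np.exp(-Lam * D)

def trapz(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

M3 = trapz(D**3 * N, D)                    # proportional to liquid water content
M4 = trapz(D**4 * N, D)
Dm = M4 / M3                               # mass-weighted mean diameter (= 4/Lambda here)
Nw = (4.0**4 / 6.0) * M3**5 / M4**4        # normalized intercept parameter

# The normalized DSD h(x) = N(D)/Nw versus x = D/Dm collapses DSDs with
# different (N0, Lambda) onto a single curve.
x, h = D / Dm, N / Nw
```

The general framework in the paper allows any pair of moments as the two normalization parameters; (M3, M4) is just the most familiar special case.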
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
[Performance evaluation of CT automatic exposure control on fast dual spiral scan].
Niwa, Shinji; Hara, Takanori; Kato, Hideki; Wada, Yoichi
2014-11-01
The performance of individual computed tomography automatic exposure control (CT-AEC) is very important for radiation dose reduction and image quality equalization in CT examinations. The purpose of this study was to evaluate the performance of CT-AEC in conventional pitch mode (Normal spiral) and fast dual spiral scan (Flash spiral) in a 128-slice dual-source CT scanner. To evaluate the response properties of CT-AEC in the 128-slice DSCT scanner, a chest phantom was placed on the patient table and fixed at the center of the field of view (FOV). The phantom scan was performed using Normal spiral and Flash spiral scanning. We measured the effective tube current time product (Eff. mAs) of simulated organs in the chest phantom along the longitudinal (z) direction, and the dose dependence (distribution) of in-plane locations for the respective scan modes was also evaluated using a 100-mm-long pencil-type ionization chamber. The dose length product (DLP) was evaluated using the value displayed on the console after scanning. It was revealed that the response properties of CT-AEC in Normal spiral scanning depend on pitch, whereas Flash spiral scanning is independent of pitch. The in-plane radiation dose of Flash spiral was lower than that of Normal spiral. The DLP values differed by a factor of up to approximately 1.7. The results of our experiments provide information for adjusting scanning parameters appropriately when using CT-AEC in a 128-slice DSCT scanner.
Phosphate metabolite concentrations and ATP hydrolysis potential in normal and ischaemic hearts
Wu, Fan; Zhang, Eric Y; Zhang, Jianyi; Bache, Robert J; Beard, Daniel A
2008-01-01
To understand how cardiac ATP and CrP remain stable with changes in work rate – a phenomenon that has eluded mechanistic explanation for decades – data from phosphorus-31 magnetic resonance spectroscopy (31P-MRS) are analysed to estimate cytoplasmic and mitochondrial phosphate metabolite concentrations in the normal state, during high cardiac work states, during acute ischaemia, and during reactive hyperaemic recovery. The analysis is based on simulating distributed heterogeneous oxygen transport in the myocardium integrated with a detailed model of cardiac energy metabolism. The model predicts that baseline myocardial free inorganic phosphate (Pi) concentration in the canine myocyte cytoplasm – a variable not accessible to direct non-invasive measurement – is approximately 0.29 mM and increases to 2.3 mM near maximal cardiac oxygen consumption. During acute ischaemia (from ligation of the left anterior descending artery) Pi increases to approximately 3.1 mM, and ATP consumption in the ischaemic tissue is reduced quickly to less than half its baseline value before the creatine phosphate (CrP) pool is 18% depleted. It is determined from these experiments that the maximal rate of oxygen consumption of the heart is an emergent property and is limited not simply by the maximal rate of ATP synthesis, but by the maximal rate at which ATP can be synthesized at a potential at which it can be utilized. The critical free energy of ATP hydrolysis for cardiac contraction that is consistent with these findings is approximately −63.5 kJ mol−1. Based on these theoretical findings, we hypothesize that inorganic phosphate is both the primary feedback signal for stimulating oxidative phosphorylation in vivo and the most significant product of ATP hydrolysis in limiting the capacity of the heart to hydrolyse ATP in vivo. Due to the lack of precise quantification of Pi in vivo, these hypotheses and the associated model predictions remain to be carefully tested experimentally. PMID:18617566
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2016-12-01
It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity on the variable of interest. In particular, this work looks at kriging with external drift (KED), because it is an efficient, widely used, and well-performing merging method. Rainfall, especially at finer temporal scales, does not have a normal distribution; it presents a bi-modal, skewed distribution. In some applications a Gaussianity assumption is made without any correction. In other cases, variables are transformed in order to obtain a distribution closer to Gaussian. This work has two objectives: 1) to compare different transformation methods in merging applications; 2) to evaluate the uncertainty arising when untransformed rainfall data are used in KED. The comparison of transformation methods is addressed from two points of view. On the one hand, the ability to reproduce the original probability distribution after back-transformation of merged products is evaluated with Q-Q plots; on the other hand, the rainfall estimates are compared with an independent set of rain gauge measurements. The tested methods are 1) no transformation, 2) Box-Cox transformation with parameter λ=0.5 (square root), 3) λ=0.25 (double square root), 4) λ=0.1 (almost logarithmic), 5) normal quantile transformation, and 6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated in comparison with the best performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping bucket rain gauges from the Environment Agency and radar rainfall data at 1 km/5-min resolution from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
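Two of the transformation families tested above are easy to sketch on synthetic skewed data; the gamma "rainfall" sample and its parameters below are assumptions for illustration only, not the study's data.

```python
# Sketch: fixed-lambda Box-Cox transforms and the normal quantile (rank)
# transformation applied to a skewed synthetic rainfall-like sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r = rng.gamma(shape=0.7, scale=2.0, size=5000)   # skewed, positive proxy data

def boxcox_fixed(x, lam):
    """Box-Cox with a fixed lambda (lam = 0 gives the log transform)."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

y_sqrt = boxcox_fixed(r, 0.5)    # square root
y_025 = boxcox_fixed(r, 0.25)    # double square root
y_01 = boxcox_fixed(r, 0.1)      # almost logarithmic

# Normal quantile transformation: map ranks to standard normal quantiles.
ranks = stats.rankdata(r)
y_nqt = stats.norm.ppf(ranks / (len(r) + 1.0))
```

The normal quantile transformation enforces near-perfect Gaussianity by construction, which is why the back-transformation behavior (checked in the study with Q-Q plots) becomes the deciding criterion between methods.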
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
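One of the transformations tested above, the rank-based inverse normal transformation, is simple to state; this sketch uses the common Blom offset c = 3/8, which is an assumption (the study does not fix the offset in this abstract), on an assumed gamma-distributed trait.

```python
# Sketch: rank-based inverse normal transformation of a skewed trait.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trait = rng.gamma(shape=1.0, scale=2.0, size=1000)   # skewed trait under the null

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Map ranks to normal quantiles: z_i = Phi^{-1}((r_i - c)/(n - 2c + 1))."""
    r = stats.rankdata(x)
    return stats.norm.ppf((r - c) / (len(x) - 2.0 * c + 1.0))

z = rank_inverse_normal(trait)   # symmetric, approximately standard normal
```

After this transformation the trait is symmetric by construction, which is why it is a natural candidate for restoring type I error control in rare-variant association tests.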
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
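The building block of the derivation, the distribution of the ratio of adjacent peak heights under a log-normal height law, can be checked by Monte Carlo; this is a sketch under assumed parameters, not the paper's equations for the resolution distribution itself.

```python
# Sketch: distribution of the peak-height ratio for i.i.d. log-normal
# peak heights; sigma is an assumed scale parameter.
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0
h1 = rng.lognormal(0.0, sigma, 100000)
h2 = rng.lognormal(0.0, sigma, 100000)

# Ratio of smaller to larger height, the quantity the minimum-resolution
# distribution is built from; it lies in (0, 1].
rho = np.minimum(h1, h2) / np.maximum(h1, h2)

# log(h1/h2) is normal with variance 2*sigma**2, so log(rho) = -|log(h1/h2)|
# follows a half-normal law reflected to negative values.
log_ratio = np.log(h1 / h2)
```

The strong dependence of rho's distribution on sigma mirrors the paper's finding that the minimum-resolution distribution is sensitive to the log-normal scale parameter.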
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
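The binormal closed form can be verified directly: with the explanatory variable normal in each group, the c-statistic equals Phi of the standardized difference, and it can be compared against the empirical c-statistic (the Mann-Whitney AUC). The parameter values below are assumptions for illustration.

```python
# Sketch: analytic binormal c-statistic vs. empirical c-statistic.
import numpy as np
from scipy import stats

mu0, s0 = 0.0, 1.0      # explanatory variable in those without the condition
mu1, s1 = 1.0, 1.5      # ... and in those with the condition

# Closed form: c = Phi((mu1 - mu0) / sqrt(s0**2 + s1**2)).
c_analytic = stats.norm.cdf((mu1 - mu0) / np.hypot(s0, s1))

rng = np.random.default_rng(5)
x0 = rng.normal(mu0, s0, 20000)
x1 = rng.normal(mu1, s1, 20000)

# Empirical c-statistic via the rank-sum (Mann-Whitney) identity:
# c = (R1 - n1*(n1+1)/2) / (n0*n1), with R1 the sum of case ranks.
n0, n1 = len(x0), len(x1)
ranks = stats.rankdata(np.concatenate([x0, x1]))
c_empirical = (ranks[n0:].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

Note that c depends on the group separation relative to the pooled spread, not on the odds ratio alone, which is the abstract's closing point.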
NASA Technical Reports Server (NTRS)
Mariani, F.; Berdichevsky, D.; Szabo, A.; Lepping, R. P.; Vinas, A. F.
1999-01-01
A list of the interplanetary (IP) shocks observed by WIND from its launch (in November 1994) to May 1997 is presented. Forty-two shocks were identified. The magnetohydrodynamic nature of the shocks is investigated, and the associated shock parameters and their uncertainties are accurately computed using a practical scheme which combines two techniques. These techniques are a combination of the "pre-averaged" magnetic-coplanarity, velocity-coplanarity, and Abraham-Schrauner mixed methods, on the one hand, and the Vinas and Scudder [1986] technique for solving the non-linear least-squares Rankine-Hugoniot shock equations, on the other. Within acceptable limits these two techniques generally gave the same results, with some exceptions. The reasons for the exceptions are discussed. It is found that the mean strength and rate of occurrence of the shocks appear to be correlated with the solar cycle. Both showed a decrease in 1996, coincident with the time of the lowest ultraviolet solar radiance, indicative of solar minimum and the start of solar cycle 23, which began around June 1996. Eighteen shocks appeared to be associated with corotating interaction regions (CIRs). The distribution of their shock normals showed a mean direction peaking in the ecliptic plane, with a longitude (phi(sub n)) in that plane between perpendicular to the Parker spiral and radial from the Sun. When grouped according to the sense of the direction of propagation of the shocks, the mean azimuthal (longitude) angle in GSE coordinates was approximately 194 deg for the fast-forward and approximately 20 deg for the fast-reverse shocks. Another 16 shocks were determined to be driven by solar transients, including magnetic clouds. These shocks had a broader distribution of normal directions than those of the CIR cases, with a mean direction close to the Sun-Earth line. Eight shocks of unknown origin had normal orientations well off the ecliptic plane.
No shock propagated with longitude phi(sub n) >= 220 +/- 10 deg; this suggests strong hindrance to the propagation of shocks against a rather tightly wound Parker spiral. Examination of the obliquity angle theta(sub Bn) (that between the shock normal and the upstream interplanetary magnetic field) for the full set of shocks revealed that about 58% were quasi-perpendicular, and some were very nearly perpendicular. About 32% of the shocks were oblique, and the rest (only 10%) were quasi-parallel, with one, on Dec. 9, 1996, that showed field pulsations. Small uncertainties in the estimated angle theta(sub Bn) were obtained for about 10 shocks with magnetosonic Mach numbers between 1 and 2, which should contribute significantly to studies of particle acceleration mechanisms at IP shocks and to investigations where accurate values of theta(sub Bn) are crucial.
NASA Astrophysics Data System (ADS)
George, Rohini
Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal-tissue toxicity. To alleviate these effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that cosine and fourth-power cosine models had the best correlation with individual respiratory cycles.
The overall respiratory motion probability distribution function could be approximated by a normal distribution function. A statistical analysis was also performed to investigate whether a patient's physical, tumor, or general characteristics played a role in identifying whether he/she responded positively to the coaching type, signified by a reduction in the variability of respiratory motion. The analysis demonstrated that, although some characteristics such as disease type and dose per fraction were significant in the time-independent analysis, there were no significant time trends observed in the inter-session or intra-session analysis. Based on patient feedback on the existing audio-visual biofeedback system used for the study, and on research performed on other feedback systems, an improved audio-visual biofeedback system was designed. It is hoped that widespread clinical implementation of audio-visual biofeedback for radiotherapy will improve the accuracy of lung cancer radiotherapy.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
The truncated normal distribution is applied as a time-to-failure distribution function in equipment reliability estimation. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
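A minimal sketch of the idea: a normal time-to-failure law truncated at t = 0 (failure times cannot be negative) and the reliability (survival) function it implies. The mean life and spread below are assumed values, not from the report.

```python
# Sketch: truncated-normal time-to-failure and its reliability function.
import numpy as np
from scipy import stats

mu, sigma = 1000.0, 300.0              # assumed mean life and spread (hours)
a = (0.0 - mu) / sigma                 # standardized lower truncation at t = 0
ttf = stats.truncnorm(a, np.inf, loc=mu, scale=sigma)

def reliability(t):
    """Probability that the unit survives beyond time t."""
    return ttf.sf(t)

R_500 = reliability(500.0)             # reliability at 500 hours
```

Unlike the exponential law, this distribution has an increasing hazard with age, which is the "age-dependent characteristic" the abstract leans on for high-reliability test planning.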
High-Frequency Normal Mode Propagation in Aluminum Cylinders
Lee, Myung W.; Waite, William F.
2009-01-01
Acoustic measurements made using compressional-wave (P-wave) and shear-wave (S-wave) transducers in aluminum cylinders reveal waveform features with high amplitudes and with velocities that depend on the feature's dominant frequency. In a given waveform, high-frequency features generally arrive earlier than low-frequency features, typical for normal mode propagation. To analyze these waveforms, the elastic equation is solved in a cylindrical coordinate system for the high-frequency case in which the acoustic wavelength is small compared to the cylinder geometry, and the surrounding medium is air. Dispersive P- and S-wave normal mode propagations are predicted to exist, but owing to complex interference patterns inside a cylinder, the phase and group velocities are not smooth functions of frequency. To assess the normal mode group velocities and relative amplitudes, approximate dispersion relations are derived using Bessel functions. The utility of the normal mode theory and approximations from a theoretical and experimental standpoint are demonstrated by showing how the sequence of P- and S-wave normal mode arrivals can vary between samples of different size, and how fundamental normal modes can be mistaken for the faster, but significantly smaller amplitude, P- and S-body waves from which P- and S-wave speeds are calculated.
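The Bessel-function machinery behind such approximate dispersion relations can be illustrated numerically; the mode-cutoff scaling, wave speed, and radius below are generic cylindrical-geometry assumptions, not the paper's derivation.

```python
# Sketch: zeros of the Bessel function J0 set the radial mode structure in
# cylindrical problems; cutoff-like frequencies scale with these zeros.
import numpy as np
from scipy.special import jn_zeros, j0

zeros_J0 = jn_zeros(0, 3)          # first three positive zeros of J0
# Assumed P-wave speed in aluminum (m/s) and cylinder radius (m):
c_p, R = 6420.0, 0.02
# Generic cutoff-frequency scaling for radial modes, f_m ~ alpha_m*c/(2*pi*R):
f_cut = zeros_J0 * c_p / (2.0 * np.pi * R)
```

The point of the sketch is only that successive Bessel zeros space the modes in frequency, which is why different sample sizes reorder the sequence of normal-mode arrivals.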
The Local Supercluster as a test of cosmological models
NASA Technical Reports Server (NTRS)
Cen, Renyue
1994-01-01
The Local Supercluster kinematic properties (the Local Group infall toward the Virgo Cluster and the galaxy density distribution about the Virgo Cluster) in various cosmological models are examined utilizing large-scale N-body (PM) simulations (500(exp 3) cells, 250(exp 3) particles, and a box size of 400 h(exp -1) Mpc) and are compared to observations. Five models are investigated: (1) the standard, Cosmic Background Explorer Satellite (COBE)-normalized cold dark matter (CDM) model with omega = 1, h = 0.5, and sigma(sub 8) = 1.05; (2) the standard hot dark matter (HDM) model with omega = 1, h = 0.75, and sigma(sub 8) = 1; (3) the tilted CDM model with omega = 1, h = 0.5, n = 0.7, and sigma(sub 8) = 0.5; (4) a CDM + lambda model with omega = 0.3, lambda = 0.7, h = 2/3, and sigma(sub 8) = 2/3; (5) the PBI model with omega = 0.2, h = 0.8, x = 0.1, m = -0.5, and sigma(sub 8) = 0.9. Comparison of the five models with the presently available observational measurements (v(sub LG) = 85 - 305 km/s, with a mean of 250 km/s; delta(n(sub g))/(n(sub g)-bar) = 1.40 + or - 0.35) suggests that an open universe with omega approximately 0.5 (with or without lambda) and sigma(sub 8) approximately 0.8 is preferred, with omega = 0.3-1.0 (with or without lambda) and sigma(sub 8) = 0.7-1.0 being the acceptable range. At variance with some previous claims based on either direct N-body or spherical nonlinear approaches, we find that a flat model with sigma(sub 8) approximately 0.7-1.0 seems to be reasonably consistent with observations. However, if one favors the low limit of v(sub LG) = 85 km/s, then an omega approximately 0.2-0.3 universe seems to provide a better fit, and flat (omega = 1) models are ruled out at the approximately 95% confidence level. On the other hand, if the high limit of v(sub LG) = 350 km/s is closer to the truth, then it appears that omega approximately 0.7-0.8 is more consistent.
This test is insensitive to the shape of the power spectrum, but rather sensitive to the normalization of the perturbation amplitude on the relevant scale (e.g., sigma(sub 8)) and omega. We find that neither linear nor nonlinear relations (with spherical symmetry) are good approximations for the relation between radial peculiar velocity and density perturbation, i.e., nonspherical effects and gravitational tidal field are important. The derived omega using either of the two relations is underestimated. In some cases, this error is as large as a factor of 2-4.
2016-10-01
The discriminability of benign and malignant nodules was analyzed using a t-test and the normal distribution of the individual metric values. Surround distribution: the distribution of the 7 parenchymal exemplars (normal, honeycomb, reticular, ground glass, mild low-attenuation area) surrounding the nodule.
29 CFR 4044.73 - Lump sums and other alternative forms of distribution in lieu of annuities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... distribution is the present value of the normal form of benefit provided by the plan payable at normal ... (29 CFR § 4044.73, Benefits and Assets, Non-Trusteed Plans: lump sums and other alternative forms of distribution in lieu of annuities.)
Detection and Parameter Estimation of Chirped Radar Signals.
2000-01-10
Wigner-Ville distribution (WVD): the WVD belongs to Cohen's class of energy distributions ... Pseudo Wigner-Ville distribution (PWVD): the PWVD introduces a time window into the WVD definition, thereby reducing the interferences ... [Figure captions: frequency normalized to sampling frequency; Wigner-Ville distribution, time normalized to the pulse length.]
The structure of high-temperature solar flare plasma in non-thermal flare models
NASA Technical Reports Server (NTRS)
Emslie, A. G.
1985-01-01
Analytic differential emission measure distributions have been derived for coronal plasma in flare loops heated both by collisions of high-energy suprathermal electrons with background plasma, and by ohmic heating by the beam-normalizing return current. For low densities, reverse current heating predominates, while for higher densities collisional heating predominates. There is thus a minimum peak temperature in an electron-heated loop. In contrast to previous approximate analyses, it is found that a stable reverse current can dominate the heating rate in a flare loop, especially in the low corona. Two 'scaling laws' are found which relate the peak temperature in the loop to the suprathermal electron flux. These laws are testable observationally and constitute a new diagnostic procedure for examining modes of energy transport in flaring loops.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest-ascent (deflected-gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
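The role of the (0, 2) step-size window can be seen in the simplest possible case, which is an illustration and not the paper's setting: for the normal-mean likelihood the natural successive-approximation step is theta <- theta + eps*(xbar - theta), and the iteration converges exactly when eps lies in (0, 2).

```python
# Illustration: convergence of a scaled-score successive-approximation
# iteration as a function of step size eps.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(3.0, 1.0, 500)
xbar = x.mean()                    # the maximum-likelihood estimate

def iterate(eps, theta0=0.0, n_steps=200):
    theta = theta0
    for _ in range(n_steps):
        theta = theta + eps * (xbar - theta)   # error shrinks by |1 - eps| per step
    return theta

theta_ok = iterate(0.5)                  # eps in (0, 2): converges to xbar
theta_fast = iterate(1.0)                # eps = 1 reaches the MLE in one step
theta_bad = iterate(2.5, n_steps=50)     # eps outside (0, 2): diverges
```

The error contracts by the factor |1 - eps| per step, so eps in (0, 2) converges, eps near 1 converges fastest, and eps outside the window diverges, mirroring the local convergence condition in the abstract.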
Exploring the statistics of magnetic reconnection X-points in kinetic particle-in-cell turbulence
NASA Astrophysics Data System (ADS)
Haggerty, C. C.; Parashar, T. N.; Matthaeus, W. H.; Shay, M. A.; Yang, Y.; Wan, M.; Wu, P.; Servidio, S.
2017-10-01
Magnetic reconnection is a ubiquitous phenomenon in turbulent plasmas. It is an important part of the turbulent dynamics and heating of space and astrophysical plasmas. We examine the statistics of magnetic reconnection using a quantitative local analysis of the magnetic vector potential, previously used in magnetohydrodynamics (MHD) simulations and now applied to fully kinetic particle-in-cell (PIC) simulations. Different ways of reducing the particle noise for analysis purposes, including multiple smoothing techniques, are explored. We find that a Fourier filter applied at the Debye scale is an optimal choice for analyzing PIC data. Finally, we find a broader distribution of normalized reconnection rates compared to the MHD limit, with rates as large as 0.5 but with an average of approximately 0.1.
NASA Astrophysics Data System (ADS)
Vargas, William E.; Amador, Alvaro; Niklasson, Gunnar A.
2006-05-01
Diffuse reflectance spectra of paint coatings with different pigment concentrations, normally illuminated with unpolarized radiation, have been measured. A four-flux radiative transfer approach is used to model the diffuse reflectance of TiO2 (rutile) pigmented coatings through the solar spectral range. The spectral dependence of the average pathlength parameter and of the forward scattering ratio for diffuse radiation is explicitly incorporated into this four-flux model through two novel approximations. The size distribution of the pigments has been taken into account to obtain the averages of the four-flux parameters: scattering and absorption cross sections, forward scattering ratios for collimated and isotropic diffuse radiation, and coefficients involved in the expansion of the single-particle phase function in terms of Legendre polynomials.
Is isotropic turbulent diffusion symmetry restoring?
NASA Astrophysics Data System (ADS)
Effinger, H.; Grossmann, S.
1984-07-01
The broadening of a cloud of marked particle pairs in longitudinal and transverse directions relative to the initial separation in fully developed isotropic turbulent flow is evaluated on the basis of the unified theory of turbulent relative diffusion of Grossmann and Procaccia (1984). The closure assumption of the theory is refined; its validity is confirmed by comparison with experimental data; approximate analytical expressions for the traces of the variance and asymmetry in the inertial subrange are obtained; and intermittency is treated using a log-normal model. The difference between the longitudinal and transverse components of the variance tensor is shown to tend to a finite nonzero limit dependent on the radial distribution of the cloud. The need for further measurements and the implications for studies of the dispersal of particulate waste in air or water are indicated.
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model or on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
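The random walk picture that the Machta-Zwanzig approximation starts from can be made concrete by estimating a diffusion coefficient from mean-squared displacement. The sketch below uses a plain 1D lattice walk rather than the billiard dynamics of the paper, purely to illustrate the definition D = <x^2>/(2dt).

```python
import random

def diffusion_coefficient(n_walkers=2000, n_steps=1000, seed=1):
    """Estimate D from the mean-squared displacement of a 1D +/-1 random walk.
    Since <x^2> ~ 2*D*t, unit steps at unit time intervals give D ~ 0.5."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        msd += x * x
    msd /= n_walkers          # ensemble-averaged squared displacement
    return msd / (2.0 * n_steps)

D = diffusion_coefficient()
```

For the billiard, the analogous simple-random-walk estimate replaces the step statistics with trap-to-trap hopping rates; the dynamical correlations emphasized in the abstract are exactly what such a memoryless estimate misses.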
Market reaction to a bid-ask spread change: A power-law relaxation dynamics
NASA Astrophysics Data System (ADS)
Ponzi, Adam; Lillo, Fabrizio; Mantegna, Rosario N.
2009-07-01
We study the relaxation dynamics of the bid-ask spread and of the midprice after a sudden variation of the spread in a double auction financial market. We find that the spread decays as a power law to its normal value. We measure the price reversion dynamics and the permanent impact, i.e., the long-time effect on price, of a generic event altering the spread and we find an approximately linear relation between immediate and permanent impact. We hypothesize that the power-law decay of the spread is a consequence of the strategic limit order placement of liquidity providers. We support this hypothesis by investigating several quantities, such as order placement rates and distribution of prices and times of submitted orders, which affect the decay of the spread.
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine the return volatility in the spin financial market, we use the GARCH model, which is often used for volatility estimation in empirical finance. We apply Bayesian inference, performed by the Markov chain Monte Carlo method, to the parameter estimation of the GARCH model. It is found that the volatility determined by the GARCH model exhibits the "volatility clustering" also observed in real financial markets. Using the volatility determined by the GARCH model, we examine the mixture-of-distributions hypothesis (MDH) suggested for asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover, we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
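The standardization step described above can be sketched with a simulated GARCH(1,1) process. This toy version standardizes by the true simulated volatility rather than an MCMC-estimated one, which is enough to show why standardized returns should look standard normal under the MDH; all parameter values are illustrative.

```python
import math
import random
import statistics

def simulate_garch(n=20000, omega=0.05, alpha=0.1, beta=0.85, seed=42):
    """Simulate returns r_t = sigma_t * z_t with GARCH(1,1) volatility:
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns, sigmas = [], []
    r = 0.0
    for _ in range(n):
        var = omega + alpha * r * r + beta * var
        sigma = math.sqrt(var)
        r = sigma * rng.gauss(0.0, 1.0)
        returns.append(r)
        sigmas.append(sigma)
    return returns, sigmas

returns, sigmas = simulate_garch()
# Returns standardized by the (known) volatility should be i.i.d. N(0, 1),
# even though the raw returns show volatility clustering and fat tails.
standardized = [r / s for r, s in zip(returns, sigmas)]
m = statistics.fmean(standardized)
sd = statistics.pstdev(standardized)
```

In the empirical setting, sigma_t would come from the fitted GARCH model instead of the simulator, and the same check (mean near 0, standard deviation near 1) applies.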
Suppression of Rabbit VX‐2 Subcutaneous Tumor Growth by Gadolinium Neutron Capture Therapy
Tokita, Nobuhiko; Tokuuye, Koichi; Satoh, Michinao; Churei, Hisahiko; Pechoux, Cécile Le; Kobayashi, Tooru; Kanda, Keiji
1993-01-01
VX‐2 tumors growing in hind legs of New Zealand White rabbits (n=4) were exposed to thermal neutrons for 40 min (2.1 × 10^12 neutrons cm^-2) while one of two hind leg tumors of each rabbit was infused continuously with meglumine gadopentetate through a branch of the left femoral artery. The contralateral (uninfused) tumors served as controls. Although no differential distribution of gadolinium was achieved between the tumor and its adjacent normal tissue, the gadolinium concentration in the infused tumor was approximately 5-6-fold higher than that in the contralateral tumor. Growth of gadolinium‐infused tumors was significantly inhibited compared to that of control tumors (P<0.05) between the 16th and 23rd days after treatment. PMID:8407547
NASA Astrophysics Data System (ADS)
Bronstein, Leo; Koeppl, Heinz
2018-01-01
Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
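Distribution-free conservative bounds for chance constraints are commonly built from the one-sided Chebyshev (Cantelli) inequality: P(X - mu >= t) <= sigma^2/(sigma^2 + t^2), so tightening a limit by sigma*sqrt((1-eps)/eps) guarantees violation probability at most eps for any error distribution. The sketch below illustrates that generic idea; it is not the paper's specific convex reformulation, and the names and numbers are illustrative.

```python
import math
import random

def cantelli_margin(sigma, eps):
    """Margin k*sigma such that enforcing mu + k*sigma <= limit guarantees
    P(X > limit) <= eps for ANY distribution with mean mu and std sigma."""
    return sigma * math.sqrt((1.0 - eps) / eps)

# Empirical check with a skewed (exponential) forecast-error distribution.
rng = random.Random(7)
eps = 0.05
samples = [rng.expovariate(1.0) for _ in range(100000)]  # mean 1, std 1
mu, sigma = 1.0, 1.0
limit = mu + cantelli_margin(sigma, eps)
violation_rate = sum(x > limit for x in samples) / len(samples)
```

The conservatism is visible in the experiment: the guaranteed violation probability is eps, but the realized rate for this skewed distribution is far smaller, which is the price of bounds holding for arbitrary distributions.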
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would provide a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, a comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, together with the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this more realistic modeling opens the way for future investigations, in particular into the characteristics of pre-processing strategies.
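The moment structure of the normal-gamma convolution model can be sketched directly: for observed X = S + B with S ~ Gamma(shape, scale) and B ~ N(mu, sigma^2), E[X] = shape*scale + mu and Var[X] = shape*scale^2 + sigma^2, so the signal parameters follow by method of moments once the noise parameters are known from negative controls. This is a sketch of the model, not the NormalGamma package's actual (maximum-likelihood-style) fitting routine; all names are illustrative.

```python
import random
import statistics

def fit_gamma_signal(xs, mu_noise, sigma_noise):
    """Method-of-moments fit of the signal under the normal-gamma model
    X = S + B, S ~ Gamma(shape, scale), B ~ N(mu_noise, sigma_noise^2),
    with the noise parameters assumed known from negative controls."""
    mean_x = statistics.fmean(xs)
    var_x = statistics.pvariance(xs)
    signal_mean = mean_x - mu_noise        # = shape * scale
    signal_var = var_x - sigma_noise ** 2  # = shape * scale^2
    scale = signal_var / signal_mean
    shape = signal_mean / scale
    return shape, scale

# Recover known parameters from simulated intensities.
rng = random.Random(3)
shape_true, scale_true = 2.0, 50.0
mu_b, sigma_b = 100.0, 10.0
xs = [rng.gammavariate(shape_true, scale_true) + rng.gauss(mu_b, sigma_b)
      for _ in range(50000)]
shape_hat, scale_hat = fit_gamma_signal(xs, mu_b, sigma_b)
```

The same decomposition underlies the background correction itself: once the signal distribution is pinned down, corrected intensities are conditional expectations E[S | X].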
Human papillomavirus genotype distribution in Madrid and correlation with cytological data
2011-01-01
Background Cervical cancer is the second most common cancer in women worldwide. Infection with certain human papillomavirus (HPV) genotypes is the most important risk factor associated with cervical cancer. This study analysed the distribution of type-specific HPV infection among women with normal and abnormal cytology, to assess the potential benefit of prophylaxis with anti-HPV vaccines. Methods Cervical samples of 2,461 women (median age 34 years; range 15-75) from the centre of Spain were tested for HPV DNA. These included 1,656 samples with normal cytology (NC), 336 with atypical squamous cells of undetermined significance (ASCUS), 387 low-grade squamous intraepithelial lesions (LSILs), and 82 high-grade squamous intraepithelial lesions (HSILs). HPV detection and genotyping were performed by PCR using 5'-biotinylated MY09/11 consensus primers, and reverse dot blot hybridisation. Results HPV infection was detected in 1,062 women (43.2%). Out of these, 334 (31%) samples had normal cytology and 728 (69%) showed some cytological abnormality: 284 (27%) ASCUS, 365 (34%) LSILs, and 79 (8%) HSILs. The most common genotype found was HPV 16 (28%) with the following distribution: 21% in NC samples, 31% in ASCUS, 26% in LSILs, and 51% in HSILs. HPV 53 was the second most frequent (16%): 16% in NC, 16% in ASCUS, 19% in LSILs, and 5% in HSILs. The third genotype was HPV 31 (12%): 10% in NC, 11% in ASCUS, 14% in LSILs, and 11% in HSILs. Co-infections were found in 366 samples (34%). In 25%, 36%, 45% and 20% of samples with NC, ASCUS, LSIL and HSIL, respectively, more than one genotype was found. Conclusions HPV 16 was the most frequent genotype in our area, followed by HPV 53 and 31, with a low prevalence of HPV 18 even in HSILs. The frequency of genotypes 16, 52 and 58 increased significantly from ASCUS to HSILs. 
Although a vaccine against HPV 16 and 18 could theoretically prevent approximately 50% of HSILs, genotypes not covered by the vaccine are frequent in our population. Knowledge of the epidemiological distribution is necessary to predict the effect of vaccines on incidence of infection and evaluate cross-protection from current vaccines against infection with other types. PMID:22081930
Using Fractal And Morphological Criteria For Automatic Classification Of Lung Diseases
NASA Astrophysics Data System (ADS)
Vehel, Jacques Levy
1989-11-01
Medical images are difficult to analyze by means of classical image processing tools because they are very complex and irregular. Such shapes are obtained, for instance, in nuclear medicine with the spatial distribution of activity for organs such as the lungs, liver, and heart. We have tried to apply two different theories to these signals: - Fractal geometry deals with the analysis of complex irregular shapes which cannot be well described by classical Euclidean geometry. - Integral geometry treats sets globally and allows the introduction of robust measures. We have computed three parameters on three kinds of lung SPECT images: normal, pulmonary embolism, and chronic disease: - The commonly used fractal dimension (FD), which gives a measurement of the irregularity of the 3D shape. - The generalized lacunarity dimension (GLD), defined as the variance of the ratio of the local activity to the mean activity, which is sensitive only to the distribution and size of gaps in the surface. - The Favard length, which gives an approximation of the surface of a 3D shape. The results show that each slice of the lung, considered as a 3D surface, is fractal and that the fractal dimension is the same for each slice and for the three kinds of lungs; the lacunarity and Favard length, in contrast, are clearly different for normal lungs, pulmonary embolisms, and chronic diseases. These results indicate that automatic classification of lung SPECT images can be achieved, and that a quantitative measurement of the evolution of the disease could be made.
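The fractal dimension used here is typically estimated by box counting: count the boxes N(s) of size s needed to cover the set and fit the slope of log N(s) against log(1/s). The following is a minimal, self-contained sketch on synthetic 2D point sets of known dimension, not the authors' exact FD/GLD computation on SPECT activity surfaces.

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension by box counting: count occupied boxes N(s)
    at box size s and fit the slope of log N(s) against log(1/s)."""
    log_inv_s = [math.log(1.0 / s) for s in scales]
    log_n = []
    for s in scales:
        boxes = {(x // s, y // s) for x, y in points}  # occupied s-sized boxes
        log_n.append(math.log(len(boxes)))
    k = len(scales)
    mx = sum(log_inv_s) / k
    my = sum(log_n) / k
    # least-squares slope of log N vs. log(1/s)
    slope = (sum((x - mx) * (y - my) for x, y in zip(log_inv_s, log_n))
             / sum((x - mx) ** 2 for x in log_inv_s))
    return slope

# Sanity checks on sets of known dimension.
line = [(i, 0) for i in range(1024)]                      # a segment: dimension 1
square = [(i, j) for i in range(64) for j in range(64)]   # a filled square: dimension 2
dim_line = box_counting_dimension(line)
dim_square = box_counting_dimension(square)
```

For grayscale activity maps like SPECT slices, the same idea is applied to the intensity surface (treating the image as a 3D graph), which is where the FD of the abstract comes from.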
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is a major concern and remains a difficult problem in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description, exploring the relationship between an object's shape and its boundary energy distribution at multiple scales for shape extraction. MSED captures not only the dominant energy, which represents primary shape information at the lower scales, but also the subordinate energy, which represents local shape information at the higher differential scales. It thus provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelets and classification by a BP neural network. Shape resampling extracts 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function is defined and an effective method to select a start point is given by maximizing an expectation, which overcomes the drawbacks of traditional methods and yields rotation invariance. The method separates clearly normal citrus from seriously abnormal fruit, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
Spin State Equilibria of Asteroids due to YORP Effects
NASA Astrophysics Data System (ADS)
Golubov, Oleksiy; Scheeres, Daniel J.; Lipatova, Veronika
2016-05-01
Spins of small asteroids are controlled by the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. The normal version of this effect has two components: the axial component alters the rotation rate, while the obliquity component alters the obliquity. Under this model, the rotation state of an asteroid can be described in a phase plane with the rotation rate along the polar radius and the obliquity as the polar angle. The YORP effect induces a phase flow in this plane, which determines the distribution of asteroid rotation rates and obliquities. We study the properties of this phase flow for several typical cases. Some phase flows have stable attractors, while in others all trajectories go to very small or large rotation rates. In the simplest case of zero thermal inertia, approximate analytical solutions of the dynamical equations are possible. Including thermal inertia and the tangential YORP effect makes the possible evolutionary scenarios much more diverse. We study possible evolution paths and classify the most general trends. We also discuss possible implications for the distribution of asteroid rotation rates and obliquities. A special emphasis is put on asteroid (25143) Itokawa, whose shape model is well determined, but whose measured YORP acceleration does not agree with the predictions of normal YORP. We show that Itokawa's rotational state can be explained by the presence of tangential YORP and that it may be in, or close to, a stable spin-state equilibrium. The implications of such states will be discussed.
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as the sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.
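The basic object here, a simultaneous one-sided band around an empirical CDF, can be sketched with the classical constant-width band obtained from the DKW inequality. Note this is a stand-in for illustration: the paper's generalized K-S regions instead narrow in the tails, and their critical values come from the Poisson process approximation rather than the DKW constant.

```python
import math
import random

def ecdf(sample):
    """Return the empirical CDF of a sample as a callable."""
    xs = sorted(sample)
    n = len(xs)
    def f_hat(t):
        lo, hi = 0, n
        while lo < hi:            # binary search: number of observations <= t
            mid = (lo + hi) // 2
            if xs[mid] <= t:
                lo = mid + 1
            else:
                hi = mid
        return lo / n
    return f_hat

def one_sided_lower_band(sample, alpha=0.05):
    """Simultaneous lower confidence bound F(t) >= F_hat(t) - d_n for all t,
    with d_n = sqrt(log(1/alpha) / (2n)) from the DKW inequality."""
    n = len(sample)
    d_n = math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    f_hat = ecdf(sample)
    return (lambda t: max(f_hat(t) - d_n, 0.0)), d_n

rng = random.Random(11)
sample = [rng.random() for _ in range(400)]  # true CDF on [0, 1]: F(t) = t
lower, d_n = one_sided_lower_band(sample)
```

With probability at least 1 - alpha, the true CDF lies above `lower` everywhere at once, which is the "simultaneous in t" property that distinguishes these bands from pointwise intervals.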
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.
1980-01-01
Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed-circuit television experiment are included.
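A rectangular bivariate normal probability reduces to a one-dimensional integral, since conditional on X = x, Y is normal with mean rho*x and variance 1 - rho^2. The sketch below evaluates P(a1 < X < b1, a2 < Y < b2) for the standard bivariate normal by composite Simpson quadrature; it is a minimal stand-in for the report's routines, with illustrative names throughout.

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def bvn_rect(a1, b1, a2, b2, rho, n=2000):
    """P(a1 < X < b1, a2 < Y < b2) for a standard bivariate normal (X, Y)
    with correlation rho, by integrating the conditional normal over x.
    n (even) is the number of Simpson subintervals."""
    s = math.sqrt(1.0 - rho * rho)
    def inner(x):
        # density of X times the conditional probability that Y is in (a2, b2)
        return phi(x) * (Phi((b2 - rho * x) / s) - Phi((a2 - rho * x) / s))
    h = (b1 - a1) / n
    total = inner(a1) + inner(b1)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * inner(a1 + i * h)
    return total * h / 3.0

p = bvn_rect(-1.0, 1.0, -1.0, 1.0, 0.0)
# With rho = 0 the rectangle probability factorizes, giving an exact check.
expected = (Phi(1.0) - Phi(-1.0)) ** 2
```

Conditional and marginal probabilities, the report's other outputs, fall out of the same ingredients: the marginal is Phi, and the conditional probability is the bracketed factor inside `inner`.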
ERIC Educational Resources Information Center
Ho, Andrew D.; Yu, Carol C.
2015-01-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Minimax rational approximation of the Fermi-Dirac distribution
Moussa, Jonathan E.
2016-10-27
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ε^-1)) poles to achieve an error tolerance ε at temperature β^-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and to replace Δ with Δ_occ, the occupied energy interval. This reduction is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.
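For context, the role of the pole count can be seen in the classical Matsubara pole expansion of the Fermi-Dirac function, f(x) = 1/2 - sum_n 2x/(x^2 + ((2n-1)π)^2), whose slow O(1/N) convergence is exactly what minimax-optimized poles improve upon. The sketch below is this textbook expansion, not the paper's minimax construction.

```python
import math

def fermi_exact(x):
    """Fermi-Dirac occupation f(x) = 1 / (exp(x) + 1), with x = (E - mu)/kT."""
    return 1.0 / (math.exp(x) + 1.0)

def fermi_poles(x, n_poles=2000):
    """Truncated Matsubara pole expansion of the Fermi-Dirac function:
    f(x) = 1/2 - sum_{n=1..N} 2x / (x^2 + ((2n-1)*pi)^2).
    The truncation error decays only like 1/N, illustrating why optimized
    (e.g., minimax) pole placements need far fewer poles."""
    acc = 0.0
    for n in range(1, n_poles + 1):
        w = (2 * n - 1) * math.pi   # fermionic Matsubara frequency (units of kT)
        acc += 2.0 * x / (x * x + w * w)
    return 0.5 - acc

# Worst-case error over a few sample energies.
err = max(abs(fermi_exact(x) - fermi_poles(x)) for x in [-5.0, -1.0, 0.0, 1.0, 5.0])
```

Even with 2000 poles the plain expansion only reaches roughly 1e-4 accuracy at moderate |x|; minimax-type constructions reach comparable tolerances with orders of magnitude fewer poles.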
Mean and Fluctuating Force Distribution in a Random Array of Spheres
NASA Astrophysics Data System (ADS)
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-11-01
This work presents a numerical study of the force distribution within a cluster of mono-disperse spherical particles. A direct-forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. The fluctuation of the hydrodynamic streamwise force among individual particles is shown to have a normal distribution with a standard deviation that varies with the volume fraction only; the standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of the two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear force contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow yields a mapping of those contributions. The combination of this mapping and the number of nearest neighbors leads to a first-order correction of the force distribution within a cluster, which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.