Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of its type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
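As an editorial illustration of the point above (not part of the abstract), the following sketch compares a Gaussian and a log-normal distribution constrained to the same mean and CV, using made-up values for a "sodium-like" small CV and an "AST-like" large CV; the numbers are assumptions, not the authors' data.

```python
import numpy as np
from scipy import stats

def lognormal_from_mean_cv(mean, cv):
    """Frozen scipy log-normal with the requested mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv**2)       # variance of log(X)
    mu = np.log(mean) - 0.5 * sigma2   # mean of log(X)
    return stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

for mean, cv, label in [(140.0, 0.02, "sodium-like, CV ~2%"),
                        (30.0, 0.40, "AST-like, CV ~40%")]:
    ln = lognormal_from_mean_cv(mean, cv)
    g = stats.norm(loc=mean, scale=cv * mean)
    x = np.linspace(0.2 * mean, 3.0 * mean, 2000)
    dmax = np.max(np.abs(ln.cdf(x) - g.cdf(x)))   # maximum CDF discrepancy
    print(f"{label}: max |Gaussian CDF - log-normal CDF| = {dmax:.4f}")
```

The small-CV case yields a negligible discrepancy, while the large-CV case does not, mirroring the sodium versus aspartate aminotransferase contrast described above.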
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions of parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is widely acknowledged that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested for the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing it. Specifically, the transformation with parameter lambda = -1 leads to the best results.
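A minimal sketch of the kind of comparison described (not the authors' simulation): Ex-Gaussian samples are drawn with scipy's exponnorm, and skewness and the Shapiro-Wilk p-value are compared before and after a log and a lambda = -1 (reciprocal-type) transformation. The mu, sigma, and tau values are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ex-Gaussian RTs: Normal(mu, sigma) + Exponential(tau); scipy's exponnorm uses K = tau / sigma.
mu, sigma, tau = 0.40, 0.05, 0.15   # seconds, illustrative values only
rt = stats.exponnorm.rvs(K=tau / sigma, loc=mu, scale=sigma, size=500, random_state=rng)

def report(x, label):
    w, p = stats.shapiro(x)
    print(f"{label:>12}: skew = {stats.skew(x):+.2f}, Shapiro-Wilk p = {p:.3g}")

report(rt, "raw RT")
report(np.log(rt), "log")
report(-1.0 / rt, "lambda = -1")   # reciprocal, sign kept to preserve ordering (cf. Box-Cox with lambda = -1)
```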
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
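A quick numerical sketch of the claim (an editorial addition with arbitrary summand parameters, not the paper's example): the sum of many heavily skewed positive random variables is compared, via the Kolmogorov-Smirnov distance, against fitted Gaussian and log-normal laws.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Sum of N positive, heavily right-skewed summands (log-normal with large sigma).
N, n_samples = 100, 20000
summands = rng.lognormal(mean=0.0, sigma=2.0, size=(n_samples, N))
s = summands.sum(axis=1)

# Compare KS distances of the sum against fitted Gaussian and log-normal laws.
d_gauss, _ = stats.kstest(s, "norm", args=(s.mean(), s.std()))
shape, loc, scale = stats.lognorm.fit(s, floc=0.0)
d_logn, _ = stats.kstest(s, "lognorm", args=(shape, 0.0, scale))
print(f"KS distance vs Gaussian:   {d_gauss:.3f}")
print(f"KS distance vs log-normal: {d_logn:.3f}")
```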
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate normal inverse Gaussian parameters (by maximum likelihood; a computational problem) for JPY/CHF, but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
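The following hedged sketch shows the general fit-then-test workflow described above using scipy's norminvgauss; the price series is synthetic (the real analysis used R and actual FX closing prices), and because the NIG parameters are estimated from the same data, the KS p-value is only indicative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in data: a synthetic "closing price" series; with real data this would be the observed series.
prices = 100.0 + np.cumsum(rng.standard_t(df=5, size=1000)) * 0.1

# Fit the normal inverse Gaussian distribution by generic maximum likelihood and check the fit with a KS test.
a, b, loc, scale = stats.norminvgauss.fit(prices)
d, p = stats.kstest(prices, "norminvgauss", args=(a, b, loc, scale))
print(f"NIG fit: a={a:.3f}, b={b:.3f}, loc={loc:.3f}, scale={scale:.3f}")
print(f"Kolmogorov-Smirnov: D={d:.3f}, p={p:.3f}")
```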
Empirical analysis on the runners' velocity distribution in city marathons
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Meng, Fan
2018-01-01
In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers and of the subset of finishers in the fastest age group both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer interval timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition strengthens again in the last course of the middle stage, there is a transition from the Gaussian distribution back to the log-normal one at the last stage. This study may enrich research on human mobility patterns and attract attention to the velocity features of human mobility.
On the cause of the non-Gaussian distribution of residuals in geomagnetism
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2017-12-01
To describe errors in the data, Gaussian distributions naturally come to mind, and in many practical instances Gaussian distributions are indeed appropriate. In the broad field of geomagnetism, however, it has repeatedly been noted that residuals between data and models often display much sharper distributions, sometimes better described by a Laplace distribution. In the present study, we make the case that such non-Gaussian behaviors are very likely the result of what is known in the statistical literature as a mixture of distributions. Mixtures arise as soon as the data do not follow a common distribution or are not properly normalized, the resulting global distribution being a mix of the various distributions followed by subsets of the data, or even individual data points. We provide examples of the way such mixtures can lead to distributions that are much sharper than Gaussian distributions and discuss the reasons why such mixtures are likely the cause of the non-Gaussian distributions observed in geomagnetism. We also show that when properly selecting sub-datasets based on geophysical criteria, statistical mixture can sometimes be avoided and much more Gaussian behavior recovered. We conclude with some general recommendations and point out that although statistical mixture always tends to sharpen the resulting distribution, it does not necessarily lead to a Laplacian distribution. This needs to be taken into account when dealing with such non-Gaussian distributions.
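An illustrative sketch (not from the paper) of how a scale mixture of Gaussians sharpens the residual distribution: each hypothetical data subset gets its own error standard deviation, and the pooled residuals become strongly leptokurtic even though every subset is exactly Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Residuals drawn from a scale mixture of zero-mean Gaussians: each of 50 hypothetical
# data subsets has its own error standard deviation, as in an unnormalized mixed dataset.
sigmas = rng.uniform(0.2, 3.0, size=50)
residuals = np.concatenate([rng.normal(0.0, s, size=400) for s in sigmas])

print(f"excess kurtosis of the mixture:   {stats.kurtosis(residuals):+.2f}  (0 for a Gaussian, 3 for a Laplace)")
print(f"excess kurtosis of a pure Gaussian: {stats.kurtosis(rng.normal(0, 1, 20000)):+.2f}")
```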
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI.
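A rough simulation in the spirit of the study (an editorial sketch, not the authors' code): it estimates how often the Shapiro-Wilk test rejects normality at alpha = 0.05 for Gaussian and mildly skewed log-normal parents at n = 30 and n = 60; the log-normal shape parameter is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def rejection_rate(sampler, n, alpha=0.05, reps=2000):
    """Fraction of samples of size n for which Shapiro-Wilk rejects normality at level alpha."""
    return np.mean([stats.shapiro(sampler(n))[1] < alpha for _ in range(reps)])

for n in (30, 60):
    fp = rejection_rate(lambda k: rng.normal(size=k), n)                   # Gaussian parent: false alarms
    power = rejection_rate(lambda k: rng.lognormal(0.0, 0.4, size=k), n)   # mildly skewed log-normal parent
    print(f"n={n}: rejects Gaussian parent {fp:.2f} of the time, rejects log-normal parent {power:.2f} of the time")
```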
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco
2014-01-01
Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without a family history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants completed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
On the application of Rice's exceedance statistics to atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Chen, W. Y.
1972-01-01
Discrepancies produced by the application of Rice's exceedance statistics to atmospheric turbulence are examined. First- and second-order densities from several data sources have been measured for this purpose. Particular care was taken to select segments of turbulence with stationary mean and variance over the entire segment. Results show that even for a stationary segment of turbulence, the process is still highly non-Gaussian, in spite of the Gaussian appearance of its first-order distribution. The data also indicate strongly non-Gaussian second-order distributions. It is therefore concluded that even stationary atmospheric turbulence with a normal first-order distribution cannot be considered a Gaussian process, and consequently the application of Rice's exceedance statistics should be approached with caution.
van Albada, S J; Robinson, P A
2007-04-15
Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
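A minimal sketch of a transformation of this general kind (not necessarily the authors' exact construction): the empirical version of Phi^{-1}(F(x)), i.e., a rank-based inverse normal transform, which maps a continuous distribution onto the Gaussian.

```python
import numpy as np
from scipy import stats

def to_normal(x):
    """Rank-based inverse normal transform: map a sample to (approximately) N(0, 1) scores.

    For a continuous parent distribution this is the empirical analogue of
    Phi^{-1}(F(x)), which maps the distribution exactly onto the Gaussian.
    """
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)                      # 1 .. n, ties get average ranks
    return stats.norm.ppf((ranks - 0.5) / len(x))  # mid-rank plotting positions

rng = np.random.default_rng(5)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=2000)   # strongly non-normal stand-in for a qEEG-like measure
print("skew before:", round(stats.skew(raw), 2), " after:", round(stats.skew(to_normal(raw)), 2))
```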
Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej
2013-08-01
Gompertz-related distributions have dominated mortality studies for 187 years. However, unrelated distributions also fit mortality data well. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The study aims were therefore to compare skew-t fits to Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
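A small modern analogue of that procedure (an editorial sketch with arbitrary distribution parameters, not the original flight-load data): random load histories are drawn from several non-Gaussian marginals and simple peak statistics are tabulated.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000

# Discrete random load histories with different marginal distributions (illustrative parameters).
histories = {
    "log-normal":  rng.lognormal(mean=0.0, sigma=0.5, size=n),
    "Weibull":     rng.weibull(a=1.5, size=n),
    "exponential": rng.exponential(scale=1.0, size=n),
    "Poisson":     rng.poisson(lam=4.0, size=n).astype(float),
}

def peaks(x):
    """Values of local maxima (samples larger than both neighbours)."""
    return x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]

for name, x in histories.items():
    p = peaks(x)
    thresh = x.mean() + 2.0 * x.std()   # count peaks exceeding mean + 2 standard deviations
    print(f"{name:>12}: {p.size} peaks, {np.sum(p > thresh)} exceed mean + 2*std")
```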
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
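A one-dimensional sketch of the core idea, assuming a skewed, unimodal, positive MCMC marginal as a stand-in (the paper treats the multivariate case and generalized Box-Cox families): scipy's boxcox picks the Gaussianizing lambda by maximum likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for a 1-D marginal of an MCMC chain: skewed, unimodal, strictly positive.
chain = rng.gamma(shape=3.0, scale=2.0, size=5000)

# Maximum-likelihood Box-Cox transform: scipy chooses lambda by maximising the Box-Cox log-likelihood.
transformed, lam = stats.boxcox(chain)
print(f"optimal lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(chain):+.3f}, after = {stats.skew(transformed):+.3f}")
```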
Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.
da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L
2014-01-01
In multicellular organisms, cell motility is central in all morphogenetic processes, tissue maintenance, wound healing and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes in random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories were analyzed by building velocity and turn angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from normal to super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst those animal cells supposedly migrating through Lévy walks, some can instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
Computations of Eisenstein series on Fuchsian groups
NASA Astrophysics Data System (ADS)
Avelin, Helen
2008-09-01
We present numerical investigations of the value distribution and distribution of Fourier coefficients of the Eisenstein series E(z;s) on arithmetic and non-arithmetic Fuchsian groups. Our numerics indicate a Gaussian limit value distribution for a real-valued rotation of E(z;s) as Re(s) = 1/2, Im(s) → ∞ and also, on non-arithmetic groups, a complex Gaussian limit distribution for E(z;s) when Re(s) > 1/2 near 1/2 and Im(s) → ∞, at least if we allow Re(s) → 1/2 at some rate. Furthermore, on non-arithmetic groups and for fixed s with Re(s) ≥ 1/2 near 1/2, our numerics indicate a Gaussian limit distribution for the appropriately normalized Fourier coefficients.
Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong
2016-12-09
Water molecular diffusion in tissue in vivo is much more complicated than a Gaussian process. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating water diffusion properties and characterization of rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values on a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) in the primary tumor and adjacent normal rectal tissue. Parameters of the standard apparent diffusion coefficient (ADC), slow and fast ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fit to the actual DWI signal in rectal cancer and the normal rectal wall (R² = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC derived from the Gaussian diffusion model. The SEM may be used as a potential optimal model for characterization of rectal cancer.
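For orientation, a hedged sketch of fitting the mono-exponential (Gaussian) and stretched-exponential (SEM) signal models to a synthetic multi-b DWI curve; the b-values, noise level, and ground-truth DDC and alpha are assumptions, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative b-values (s/mm^2) and a noisy synthetic signal generated from a stretched-exponential decay.
b = np.array([0, 10, 20, 40, 60, 80, 100, 150, 200, 400, 600, 800, 1000, 1500, 2000, 3000], float)
rng = np.random.default_rng(8)
true_ddc, true_alpha = 1.1e-3, 0.75
signal = np.exp(-(b * true_ddc) ** true_alpha) + rng.normal(0, 0.01, b.size)

def mono(b, adc):                 # Gaussian (mono-exponential) model: S/S0 = exp(-b * ADC)
    return np.exp(-b * adc)

def stretched(b, ddc, alpha):     # stretched-exponential model: S/S0 = exp(-(b * DDC)^alpha)
    return np.exp(-(b * ddc) ** alpha)

(adc,), _ = curve_fit(mono, b, signal, p0=[1e-3])
(ddc, alpha), _ = curve_fit(stretched, b, signal, p0=[1e-3, 0.8], bounds=([1e-5, 0.1], [1e-2, 1.0]))
for name, model, pars in [("mono-exponential", mono, (adc,)), ("stretched-exponential", stretched, (ddc, alpha))]:
    resid = signal - model(b, *pars)
    r2 = 1 - np.sum(resid**2) / np.sum((signal - signal.mean())**2)
    print(f"{name:>22}: R^2 = {r2:.4f}, parameters = {np.round(pars, 5)}")
```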
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine whether they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian-distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
NASA Astrophysics Data System (ADS)
Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin
2018-01-01
The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution (i.e., the Liu distribution using the obtained peakedness coefficient estimate), the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution changes with incidence angle and with wind speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shortis, M.; Johnston, G.
1997-11-01
In a previous paper, the results of photogrammetric measurements of a number of paraboloidal reflecting surfaces were presented. These results showed that photogrammetry can provide three-dimensional surface characterizations of such solar concentrators. The present paper describes the assessment of the quality of these surfaces as derived from the photogrammetrically produced surface coordinates. Statistical analysis of the z-coordinate distribution of errors indicates that these generally conform to a univariate Gaussian distribution, while the numerical assessment of the surface normal vectors on these surfaces indicates that the surface normal deviations appear to follow an approximately bivariate Gaussian distribution. Ray tracing of the measured surfaces to predict the expected flux distribution at the focal point of the 400 m² dish shows a close correlation with the videographically measured flux distribution at the focal point of the dish.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
Log-amplitude statistics for Beck-Cohen superstatistics
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Konno, Hidetoshi
2013-05-01
As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrated that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
Wedemeyer, Gary A.; Nelson, Nancy C.
1975-01-01
Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH for juvenile rainbow (Salmo gairdneri, Shasta strain) trout held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
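A compact sketch of the three approaches on made-up hematocrit-like values (not the study's fish data); the tolerance-interval factor uses the standard normal-theory approximation, which may differ in detail from the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
hematocrit = rng.normal(loc=35.0, scale=3.0, size=120)   # illustrative values only

# Gaussian method: mean +/- 1.96 SD covers the central 95%.
lo_g, hi_g = hematocrit.mean() + np.array([-1.96, 1.96]) * hematocrit.std(ddof=1)

# Percentile-estimate method: empirical 2.5th and 97.5th percentiles.
lo_p, hi_p = np.percentile(hematocrit, [2.5, 97.5])

# Tolerance interval (approximate normal-theory factor for 95% coverage with 95% confidence).
n = hematocrit.size
k = stats.norm.ppf(0.975) * np.sqrt((n - 1) * (1 + 1 / n) / stats.chi2.ppf(0.05, n - 1))
lo_t, hi_t = hematocrit.mean() + np.array([-k, k]) * hematocrit.std(ddof=1)

print(f"Gaussian:           {lo_g:5.1f} - {hi_g:5.1f}")
print(f"Percentile:         {lo_p:5.1f} - {hi_p:5.1f}")
print(f"Tolerance interval: {lo_t:5.1f} - {hi_t:5.1f}")
```

As the abstract reports, the tolerance interval is the widest of the three for the same data.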
NASA Astrophysics Data System (ADS)
Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.
2016-07-01
Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of backgrounds - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The differences between backgrounds calculated assuming a Gaussian distribution and those from a log-normal distribution were most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
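A toy version of the background logic (an editorial sketch with simulated blank values, not FRAPPÉ data): the background inferred from the Gaussian mean of log-normally distributed blanks is compared with the background taken at the peak (mode) of a fitted log-normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Illustrative background filter samples (arbitrary units); the real data were IC blank analyses.
blanks = rng.lognormal(mean=-1.0, sigma=0.8, size=137)

gaussian_background = blanks.mean()                      # conventional approach: Gaussian assumption

shape, loc, scale = stats.lognorm.fit(blanks, floc=0.0)  # fit a log-normal to the blanks
lognormal_peak = scale * np.exp(-shape**2)               # mode of a log-normal = exp(mu - sigma^2)

print(f"background from Gaussian mean:   {gaussian_background:.3f}")
print(f"background from log-normal peak: {lognormal_peak:.3f}  (lower, hence higher inferred sample concentrations)")
```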
Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procacci, Piero
2015-04-21
In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-01-01
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
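A hedged sketch of the decision logic described above, assuming scikit-learn is available for the two-component Gaussian fit; the RSS values, the 0.05 threshold, and the use of a generic Gaussian mixture (rather than the paper's specific double-peak Gaussian) are all assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)

# Hypothetical RSS readings (dBm) at one reference point: a mixture of two propagation states.
rss = np.concatenate([rng.normal(-62, 2, 300), rng.normal(-70, 2, 200)])

# Kurtosis test: a clearly bimodal sample is platykurtic (negative excess kurtosis),
# which is one simple way to flag that a single Gaussian is inadequate.
stat, p = stats.kurtosistest(rss)
print(f"excess kurtosis = {stats.kurtosis(rss):+.2f}, kurtosis-test p = {p:.3g}")

if p < 0.05:
    gm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
    mus = np.sort(gm.means_.ravel())
    print(f"double-peak Gaussian selected, peaks near {mus[0]:.1f} and {mus[1]:.1f} dBm")
else:
    print(f"single Gaussian selected, mean {rss.mean():.1f} dBm")
```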
powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks
NASA Astrophysics Data System (ADS)
Murray, Steven G.
2018-05-01
powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
Gaussian fluctuation of the diffusion exponent of virus capsid in a living cell nucleus
NASA Astrophysics Data System (ADS)
Itto, Yuichi
2018-05-01
In their work [4], Bosse et al. experimentally showed that virus capsid exhibits not only normal diffusion but also anomalous diffusion in nucleus of a living cell. There, it was found that the distribution of fluctuations of the diffusion exponent characterizing them takes the Gaussian form, which is, quite remarkably, the same form for two different types of the virus. This suggests high robustness of such fluctuations. Here, the statistical property of local fluctuations of the diffusion exponent of the virus capsid in the nucleus is studied. A maximum-entropy-principle approach (originally proposed for a different virus in a different cell) is applied for obtaining the fluctuation distribution of the exponent. Largeness of the number of blocks identified with local areas of interchromatin corrals is also examined based on the experimental data. It is shown that the Gaussian distribution of the local fluctuations can be derived, in accordance with the above form. In addition, it is quantified how the fluctuation distribution on a long time scale is different from the Gaussian distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
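A single-input sketch of a ZMNL transform of this general type (not the report's exact construction): a correlated Gaussian series is pushed through the normal CDF and then the inverse CDF of a target marginal (a Weibull, chosen only for illustration), and the autocorrelation is checked before and after.

```python
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(12)

# A correlated Gaussian time history (low-pass filtered white noise), standardized.
b_coef, a_coef = signal.butter(4, 0.1)
g = signal.lfilter(b_coef, a_coef, rng.normal(size=20000))
g = (g - g.mean()) / g.std()

# Zero-memory nonlinear (ZMNL) map: Gaussian marginal -> uniform via Phi -> target marginal via its inverse CDF.
target = stats.weibull_min(c=1.5, scale=1.0)
x = target.ppf(stats.norm.cdf(g))

def acf(y, lag):
    y = y - y.mean()
    return np.dot(y[:-lag], y[lag:]) / np.dot(y, y)

print("marginal skew of transformed series:", round(stats.skew(x), 2))
for lag in (1, 5, 20):
    print(f"lag {lag:2d}: Gaussian ACF = {acf(g, lag):.3f}, transformed ACF = {acf(x, lag):.3f}")
```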
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction: Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods: We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results: Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions: Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that "small" departures from normality in the expression data distributions are analytically-insignificant and that "robust" gene-calling algorithms can fully compensate for these effects. PMID:23118863
Stable Lévy motion with inverse Gaussian subordinator
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Gajda, J.
2017-09-01
In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with inverse Gaussian process. The NIG process found many interesting applications, especially in financial data description. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional order moments and asymptotic tail behavior. We show the connection of the process with continuous time random walk. Further, the governing fractional partial differential equations for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of sample mean square displacement, the main tool in detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by inverse Gaussian subordinator we propose a step-by-step procedure of parameters estimation. At the end, we show how the examined process can be useful to model financial time series.
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative distribution to the gamma-gamma (GG) distribution, called inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit among the IGG, log-normal, and GG distributions with the experimental probability density functions in moderate-to-strong turbulence are compared, and results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. As the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
Kollins, Scott H; McClernon, F Joseph; Epstein, Jeff N
2009-02-01
Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n=12) or without (n=14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions--a 'Satiated' condition wherein participants smoked up to and during the session; and an 'Abstinent' condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD < Control) and a Group x Session interaction (ADHD group RTs decreased when abstinent). RT standard deviation for the normal component of the distribution (sigma) showed no effects. The ex-Gaussian parameter tau, which describes the mean and standard deviation of the non-normal component of the distribution, showed significant effects of Session (Abstinent > Satiated), a Group x Session interaction (ADHD increased significantly under the Abstinent condition compared to Control), and a trend toward a main effect of Group (ADHD > Control). Alternative approaches to analyzing RT data provide a more detailed description of the effects of smoking abstinence in ADHD and non-ADHD smokers, and results differ from analyses using more traditional approaches. These findings have implications for understanding the neuropsychopharmacology of nicotine and nicotine withdrawal.
Evidence for criticality in financial data
NASA Astrophysics Data System (ADS)
Ruiz, G.; de Marcos, A. F.
2018-01-01
We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization, uncover a critical behavior for different time scales Δt. Such cumulative distributions, in accordance with a variety of complex - and financial - systems, can be modeled by the cumulative distribution functions of q-Gaussians, the distribution function that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely ( q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
NASA Technical Reports Server (NTRS)
Falls, L. W.
1975-01-01
Vandenberg Air Force Base (AFB), California, wind component statistics are presented for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.
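For illustration only: given a monthly mean and standard deviation for one wind component (both values made up here), the tabulated percentile values follow directly from the Gaussian quantile function. The end-point percentiles 0.135 and 99.865 are stated above; the intermediate percentiles listed in the code are guesses, not the report's actual table.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly statistics for one altitude and flight azimuth (m/s); not Vandenberg data.
mean_headwind, std_headwind = 4.0, 9.0

percentiles = [0.135, 2.28, 5.0, 10.0, 30.0, 50.0, 70.0, 90.0, 95.0, 97.72, 99.865]
values = stats.norm.ppf(np.array(percentiles) / 100.0, loc=mean_headwind, scale=std_headwind)
for p, v in zip(percentiles, values):
    print(f"{p:7.3f}%  {v:7.2f} m/s")
```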
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces the Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
Reply to: Are There More Gifted People than Would Be Expected on a Normal Distribution?
ERIC Educational Resources Information Center
Gallagher, James J.
2014-01-01
The author responds to the article by Warne, Godwin, and Smith (2013) on the question of whether there are more gifted people than would be expected in a Gaussian normal distribution. He asserts that the answer to this question is yes, based on (a) data that he and his colleagues have collected, (b) data that are already available and quoted by…
The Best and the Rest: Revisiting the Norm of Normality of Individual Performance
ERIC Educational Resources Information Center
O'Boyle, Ernest, Jr.; Aguinis, Herman
2012-01-01
We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes.…
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data following a log-normal or gamma distribution. The results show that the transformed methods have greater detection power than the corresponding original (untransformed) methods, and the transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
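As a rough illustration of the transformation step described in the abstract above (and not the authors' implementation), the following Python sketch applies a Box-Cox transform to a synthetic, gamma-distributed precipitation-like series and then scans for a mean shift on the transformed values; the two-sample t statistic stands in for the two-phase regression tests, and all data and parameters are assumptions.

    # Sketch: Box-Cox transform of a skewed series prior to a Gaussian-based changepoint scan.
    # Synthetic gamma-distributed "precipitation" with an artificial shift at index 500.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.gamma(2.0, 1.0, 500), rng.gamma(2.0, 1.6, 500)])

    y, lam = stats.boxcox(x)            # Box-Cox needs strictly positive data; gamma draws are
    print(f"estimated lambda = {lam:.2f}")

    # Scan candidate changepoints with a simple mean-shift statistic on the transformed series.
    t_stats = [abs(stats.ttest_ind(y[:k], y[k:]).statistic) for k in range(50, len(y) - 50)]
    print("most likely changepoint near index", 50 + int(np.argmax(t_stats)))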
Gaussian model for emission rate measurement of heated plumes using hyperspectral data
NASA Astrophysics Data System (ADS)
Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.
2018-02-01
This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
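The column-density step above reduces to a one-dimensional path integral over a presumed Gaussian profile. The minimal Python sketch below (illustrative values only, not the authors' retrieval code) compares the numerical integral with the closed-form value sqrt(2*pi)*sigma*n_peak.

    # Sketch: column density from a presumed Gaussian line-of-sight profile,
    # n(s) = n_peak * exp(-(s - s0)^2 / (2 sigma^2)); column density N = integral of n(s) ds.
    # All numbers are illustrative (arbitrary units), not instrument values.
    import numpy as np

    n_peak, s0, sigma = 1.0, 0.0, 0.3
    s = np.linspace(-5.0, 5.0, 20001)                 # path coordinate along the line of sight
    n = n_peak * np.exp(-(s - s0) ** 2 / (2.0 * sigma ** 2))

    N_numeric = np.sum(n) * (s[1] - s[0])             # simple Riemann-sum path integral
    N_closed = np.sqrt(2.0 * np.pi) * sigma * n_peak  # closed form for a Gaussian profile
    print(N_numeric, N_closed)                        # the two agree closely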
1982-06-01
...observation in our framework is the pair (y, x) with x considered given. The influence function for σ² at the Gaussian distribution with mean xβ and variance ... This influence function is bounded in the residual y − xβ, and redescends to an asymptote greater than ... version of the influence function for β at the Gaussian distribution, given the x_j and x, is defined as the normalized difference (see Barnett and ...
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
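DDHFm itself is distributed as an R package; as a point of comparison, the short Python sketch below illustrates one of the model-based alternatives mentioned above, the generalized-log (glog) transform, on synthetic intensities with mixed additive and multiplicative noise. The noise model and the tuning constant c are assumptions.

    # Sketch: generalized-log (glog) variance stabilization, one of the model-based alternatives
    # mentioned in the abstract (this is NOT the Haar-Fisz transform itself).
    # glog(x; c) = log((x + sqrt(x**2 + c**2)) / 2); for c = 0 it reduces to log(x).
    import numpy as np

    rng = np.random.default_rng(1)
    mu = rng.uniform(10, 10_000, size=5000)                # "true" intensities
    x = rng.normal(mu, 50.0) + rng.normal(0, 0.2 * mu)     # additive + multiplicative noise

    def glog(x, c):
        return np.log((x + np.sqrt(x ** 2 + c ** 2)) / 2.0)

    y = glog(x, c=250.0)   # c is a tuning constant; in practice it is estimated from replicates
    # After the transform, the spread of y depends far less on mu than the spread of x does.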
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.
Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution, using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice, assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and the prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue. With a half-Gaussian distribution of cancer cells, which were most populous at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs ranged from (5.0)^4 to (13.4)^4 for radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose as well. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were found to be between (1.4)^4 and (1.7)^4. For uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume with a cancer cell distribution that follows its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. But for a uniform cancer cell distribution and radio-resistant normal tissue, the radiobiology outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
Estimating division and death rates from CFSE data
NASA Astrophysics Data System (ADS)
de Boer, Rob J.; Perelson, Alan S.
2005-12-01
The division tracking dye carboxyfluorescein diacetate succinimidyl ester (CFSE) is currently the most informative labeling technique for characterizing the division history of cells in the immune system. Gett and Hodgkin (Nat. Immunol. 1 (2000) 239-244) have proposed to normalize CFSE data by the 2-fold expansion that is associated with each division, and have argued that the mean of the normalized data increases linearly with time, t, with a slope reflecting the division rate p. We develop a number of mathematical models for the clonal expansion of quiescent cells after stimulation and show, within the context of these models, under which conditions this approach is valid. We compare three means of the distribution of cells over the CFSE profile at time t: the mean, μ(t), the mean of the normalized distribution, μ2(t), and the mean of the normalized distribution excluding nondivided cells. In the simplest models, which deal with homogeneous populations of cells with constant division and death rates, the normalized frequency distribution of the cells over the respective division numbers is a Poisson distribution with mean μ2(t) = pt, where p is the division rate. The fact that in the data these distributions seem Gaussian is therefore insufficient to establish that the times at which cells are recruited into the first division have a Gaussian variation, because the Poisson distribution approaches the Gaussian distribution for large pt. Excluding nondivided cells complicates the data analysis, because the corresponding mean only approaches a slope p after an initial transient. In models where the first division of the quiescent cells takes longer than later divisions, all three means have an initial transient before they approach an asymptotic regime, which is the expected μ(t) = 2pt behavior. Such a transient markedly complicates the data analysis. After the same initial transients, the normalized cell numbers tend to decrease at a rate e^(-dt), where d is the death rate. Nonlinear parameter fitting of CFSE data obtained from Gett and Hodgkin to ordinary differential equation (ODE) models with first-order terms for cell proliferation and death gave poor fits to the data. The Smith-Martin model with an explicit time delay for the deterministic phase of the cell cycle performed much better. Nevertheless, the insights gained from analysis of the ODEs proved useful, as we showed by generating virtual CFSE data with a simulation model, where cell cycle times were drawn from various distributions, and then computing the various mean division numbers.
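A minimal numerical sketch of the simplest model quoted above, assuming constant division rate p and death rate d (values chosen for illustration only): the normalized division-number distribution is Poisson with mean p*t, which is already close to Gaussian once p*t is large.

    # Sketch of the simplest homogeneous model: after normalizing CFSE data by the 2-fold
    # expansion per division, cells are spread over division numbers as Poisson(p*t).
    import numpy as np
    from scipy import stats

    p, d = 0.5, 0.1                # division and death rate per day (illustrative values)
    for t in (1.0, 4.0, 10.0):
        n = np.arange(0, 40)
        pmf = stats.poisson.pmf(n, p * t)     # normalized frequency over division number
        mean_div = np.sum(n * pmf)            # equals p*t
        expansion = np.exp((p - d) * t)       # un-normalized clonal expansion in the simple ODE model
        print(t, mean_div, expansion)
    # For large p*t the Poisson pmf is close to a Gaussian, which is why near-Gaussian CFSE
    # profiles do not by themselves imply Gaussian recruitment times.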
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates, and in particular Kriging with External Drift (KED) is a very effective radar-rain gauge rainfall merging technique. However, kriging interpolations assume Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
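For concreteness, the following Python sketch shows a normal score (normal quantile) transformation of skewed, synthetic residuals, the kind of pre-processing compared above; the rank-based plotting positions and the synthetic residual model are assumptions, not the authors' code.

    # Sketch: normal score (normal quantile) transformation of skewed rainfall residuals
    # before an interpolation step that assumes Gaussianity.
    import numpy as np
    from scipy import stats

    def normal_score(x):
        """Map data to standard normal scores via empirical ranks (ties get average rank)."""
        ranks = stats.rankdata(x, method="average")
        u = ranks / (len(x) + 1.0)            # plotting positions strictly inside (0, 1)
        return stats.norm.ppf(u)

    rng = np.random.default_rng(2)
    residuals = rng.gamma(0.7, 2.0, size=1000) - 1.0    # skewed, intermittent-looking residuals
    z = normal_score(residuals)
    print(stats.skew(residuals), stats.skew(z))         # skewness should drop to roughly 0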
A non-Gaussian approach to risk measures
NASA Astrophysics Data System (ADS)
Bormetti, Giacomo; Cisana, Enrica; Montagna, Guido; Nicrosini, Oreste
2007-03-01
Reliable calculations of financial risk require that the fat-tailed nature of price changes is included in risk measures. To this end, a non-Gaussian approach to financial risk management is presented, modelling the power-law tails of the returns distribution in terms of a Student-t distribution. Non-Gaussian closed-form solutions for value-at-risk and expected shortfall are obtained, and standard formulae known in the literature under the normality assumption are recovered as a special case. The implications of the approach for risk management are demonstrated through an empirical analysis of financial time series from the Italian stock market and in comparison with the results of the most widely used procedures of quantitative finance. Particular attention is paid to quantifying the size of the errors affecting the market risk measures obtained according to different methodologies, by employing a bootstrap technique.
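A hedged Python sketch of the kind of parametric calculation described above: fit a location-scale Student-t to simulated returns and evaluate value-at-risk and expected shortfall from the standard closed-form tail expressions, with the Gaussian case as a benchmark. The sign convention (losses positive), the tail probability, and the simulated data are assumptions.

    # Sketch: Student-t value-at-risk and expected shortfall versus the Gaussian benchmark.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    returns = stats.t.rvs(df=4, loc=0.0005, scale=0.01, size=2500, random_state=rng)

    nu, mu, s = stats.t.fit(returns)              # fit a location-scale Student-t
    alpha = 0.01
    q = stats.t.ppf(alpha, nu)                    # standardized alpha-quantile (negative)

    var_t = -(mu + s * q)
    # Standard tail-expectation formula for the Student-t:
    es_t = -mu + s * stats.t.pdf(q, nu) * (nu + q ** 2) / ((nu - 1) * alpha)

    m, sd = returns.mean(), returns.std(ddof=1)   # Gaussian (normality-assumption) special case
    var_g = -(m + sd * stats.norm.ppf(alpha))
    print(var_t, var_g, es_t)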
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses mean and standard deviation to estimate the percentage of negative attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10-HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
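Under the Gaussian assumption confirmed above, the percentage of negative-attenuation pixels follows directly from the nodule's mean and standard deviation via the normal CDF. The sketch below illustrates that calculation; the decision threshold in it is a placeholder for illustration, not the study's validated cutoff.

    # Sketch: fraction of sub-zero-HU pixels in an adrenal nodule from its mean and SD,
    # assuming the pixel attenuation values are Gaussian.
    from scipy.stats import norm

    def negative_pixel_percentage(mean_hu: float, sd_hu: float) -> float:
        """P(X < 0 HU) * 100 for X ~ N(mean_hu, sd_hu**2)."""
        return 100.0 * norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

    # Illustrative use; the 40% threshold below is a placeholder, not the published criterion.
    for mean_hu, sd_hu in [(5.0, 20.0), (25.0, 15.0)]:
        pct = negative_pixel_percentage(mean_hu, sd_hu)
        print(mean_hu, sd_hu, round(pct, 1), "adenoma-like" if pct > 40.0 else "indeterminate")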
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
The semantic Stroop effect: An ex-Gaussian analysis.
White, Darcy; Risko, Evan F; Besner, Derek
2016-10-01
Previous analyses of the standard Stroop effect (which typically uses color words that form part of the response set) have documented effects on mean reaction times in hundreds of experiments in the literature. Less well known is the fact that ex-Gaussian analyses reveal that such effects are seen in (a) the mean of the normal distribution (mu), as well as in (b) the standard deviation of the normal distribution (sigma) and (c) the tail (tau). No ex-Gaussian analysis exists in the literature with respect to the semantically based Stroop effect (which contrasts incongruent color-associated words with, e.g., neutral controls). In the present experiments, we investigated whether the semantically based Stroop effect is also seen in the three ex-Gaussian parameters. Replicating previous reports, color naming was slower when the color was carried by an irrelevant (but incongruent) color-associated word (e.g., sky, tomato) than when the control items consisted of neutral words (e.g., keg, palace) in each of four experiments. An ex-Gaussian analysis revealed that this semantically based Stroop effect was restricted to the arithmetic mean and mu; no semantic Stroop effect was observed in tau. These data are consistent with the views (1) that there is a clear difference in the source of the semantic Stroop effect, as compared to the standard Stroop effect (evidenced by the presence vs. absence of an effect on tau), and (2) that interference associated with response competition on incongruent trials in tau is absent in the semantic Stroop effect.
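A minimal sketch of an ex-Gaussian fit in Python, assuming simulated reaction times and using scipy's exponnorm distribution (parameterized by K = tau/sigma, loc = mu, scale = sigma); this is not the authors' analysis pipeline, and the simulated shift in mu only mirrors the pattern reported above.

    # Sketch: estimating ex-Gaussian parameters (mu, sigma, tau) from reaction times.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    def simulate_rt(n, mu, sigma, tau):
        return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

    neutral = simulate_rt(400, 550.0, 60.0, 120.0)     # ms
    related = simulate_rt(400, 575.0, 60.0, 120.0)     # shift in mu only, as in the abstract

    for label, rt in (("neutral", neutral), ("color-associated", related)):
        K, loc, scale = stats.exponnorm.fit(rt)
        mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
        print(label, round(mu_hat, 1), round(sigma_hat, 1), round(tau_hat, 1))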
Laser Raman detection for oral cancer based on a Gaussian process classification method
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-06-01
Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. The incidence rate accounts for 80% of total oral cancer and shows an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is always delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data which consisted of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal people were collected by utilizing an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed and the experimental result showed that the data obeyed a normal distribution. Moreover, the distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal people and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal people could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed and the remaining ones would be diagnosed as having oral gland carcinoma. For the classification process of oral gland carcinoma and oral squamous cell carcinoma, the correct ratio was 66.67% and the erroneously diagnosed percentage was 33.33%. The total sensitivity was 80% and the specificity was 100% with the Matthews correlation coefficient (MCC) set to 0.447213595. Considering the numerical results above, the application prospects and clinical value of this technique are significantly impressive.
Effect of polarization on the evolution of electromagnetic hollow Gaussian Schell-model beam
NASA Astrophysics Data System (ADS)
Long, Xuewen; Lu, Keqing; Zhang, Yuhong; Guo, Jianbang; Li, Kehao
2011-02-01
Based on the theory of coherence, an analytical propagation formula for partially polarized and partially coherent hollow Gaussian Schell-model beams (HGSMBs) passing through a paraxial optical system is derived. Furthermore, we show that the degree of polarization of the source may affect the evolution of HGSMBs and that a tunable dark region may exist. For two special cases, fully coherent beams and partially coherent beams with δxx = δyy, the normalized intensity distributions are independent of the polarization of the source.
Some Modified Integrated Squared Error Procedures for Multivariate Normal Data.
1982-06-01
...p-dimensional Gaussian. There are a number of measures of qualitative robustness, but the most important is the influence function. Most of the other ... measures are derived from the influence function. The influence function is simply proportional to the score function (Huber, 1981, p. 45). The ... influence function at the p-variate Gaussian distribution N_p(μ, V) is of the form IC(x; μ, N_p) ∝ (p+2)(x − μ) exp(−(1/2)(x − μ)ᵀ V⁻¹ (x − μ)) (3.6
Gaussian statistics for palaeomagnetic vectors
Love, J.J.; Constable, C.G.
2003-01-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
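The two limiting cases quoted in this abstract can be checked numerically. The sketch below, with arbitrary field values, draws isotropic Gaussian vectors about a non-zero mean and inspects the skewness of the resulting intensities: nearly normal (skewness about zero) for small dispersion, Maxwell-like (positive skewness near 0.49) for large dispersion.

    # Sketch: intensity distribution of isotropic Gaussian palaeomagnetic vectors,
    # B = m + e with e ~ N(0, s^2 I3), examined through |B|. Values are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    m = np.array([0.0, 0.0, 45.0])                 # mean field (arbitrary units)

    for s in (2.0, 500.0):                         # small vs large relative dispersion
        B = m + rng.normal(0.0, s, size=(100_000, 3))
        F = np.linalg.norm(B, axis=1)              # palaeointensity
        print(f"s={s}: skewness of |B| = {stats.skew(F):.2f}")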
Two kinds of Airy-related beams
NASA Astrophysics Data System (ADS)
Xu, Yiqing; Zhou, Guoquan; Zhang, Lijun; Ru, Guoyun
2015-08-01
Two kinds of Airy-related beams are introduced in this manuscript. The normalized intensity distribution in the x-direction of the two kinds of Airy-related beams is close to that of the Gaussian beam. The normalized intensity distribution in the y-direction of the two kinds of Airy-related beams is close to that of the second-order and the third-order elegant Hermite-Gaussian beams, respectively. Analytical expressions for the two kinds of Airy-related beams passing through an ABCD paraxial optical system are derived. The beam propagation factors of the two kinds of Airy-related beams are 1.933 and 2.125, respectively. Analytical expressions for the beam half widths and the kurtosis parameters of the two kinds of Airy-related beams passing through an ABCD paraxial optical system are also presented. As a numerical example, the propagation properties of the two kinds of Airy-related beams are demonstrated in free space. Moreover, the comparison between the two kinds of Airy-related beams and their corresponding elegant Hermite-Gaussian beams along the two transverse directions is performed in detail. Upon propagation, the former kind of Airy-related beam will evolve from a central bright beam into a dark hollow beam. Conversely, the latter kind of Airy-related beam will evolve from a dark hollow beam into a central bright beam. These two kinds of Airy-related beams can be used to describe specially distributed beams.
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1976-01-01
A study was made of the field size distributions for LACIE test sites 5029, 5033, and 5039, People's Republic of China. Field lengths and widths were measured from LANDSAT imagery, and field area was statistically modeled. Field size parameters have log-normal or Poisson frequency distributions. These were normalized to the Gaussian distribution and theoretical population curves were made. When compared to fields in other areas of the same country measured in the previous study, field lengths and widths in the three LACIE test sites were 2 to 3 times smaller and areas were smaller by an order of magnitude.
A note on `Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions'
NASA Astrophysics Data System (ADS)
Kwong, Hok Shing; Nadarajah, Saralees
2018-01-01
Tarnopolski [Monthly Notices of the Royal Astronomical Society, 458 (2016) 2024-2031] analysed data sets on gamma-ray burst durations using skew distributions. He showed that the best fits are provided by mixtures of two skew-normal or three Gaussian distributions. Here, we suggest other distributions, including some that are heavy tailed. At least one of these distributions is shown to provide better fits than those considered in Tarnopolski. Five criteria are used to assess the best fits.
Langevin equation with fluctuating diffusivity: A two-state model
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji
2016-07-01
Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
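The model structure (not its power-law results) can be sketched in a few lines. The simulation below uses exponential sojourn times purely for brevity, so it reproduces the two-state switching of the diffusivity but not the anomalous-diffusion and aging behaviour analysed above; the step size, rates, and lag are assumptions.

    # Minimal sketch of a Langevin equation with dichotomous (fast/slow) diffusivity.
    # Exponential sojourn times are used here, NOT the power-law sojourn times of the paper.
    import numpy as np

    rng = np.random.default_rng(6)
    dt, n_steps = 1e-3, 50_000
    D_fast, D_slow = 1.0, 0.01
    tau_fast, tau_slow = 0.5, 2.0                 # mean sojourn times (assumed values)

    x, D, t = 0.0, D_fast, 0.0
    t_switch = rng.exponential(tau_fast)
    traj = np.empty(n_steps)
    for i in range(n_steps):
        if t >= t_switch:                         # toggle the diffusivity state
            D = D_slow if D == D_fast else D_fast
            t_switch = t + rng.exponential(tau_slow if D == D_slow else tau_fast)
        x += np.sqrt(2.0 * D * dt) * rng.normal()
        traj[i] = x
        t += dt

    # Time-averaged MSD at one lag, the quantity whose relative standard deviation is used
    # above as a signature of the fluctuating diffusivity.
    lag = 500
    tamsd = np.mean((traj[lag:] - traj[:-lag]) ** 2)
    print(tamsd)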
Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing; Evans, Philip G.; Grice, Warren P.
In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution
Qi, Bing; Evans, Philip G.; Grice, Warren P.
2018-01-01
In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
The Non-Gaussian Nature of Prostate Motion Based on Real-Time Intrafraction Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yuting; Liu, Tian; Yang, Wells
2013-10-01
Purpose: The objective of this work is to test the validity of the Gaussian approximation for prostate motion through characterization of its spatial distribution. Methods and Materials: Real-time intrafraction prostate motion was observed using the Calypso 4-dimensional (4D) nonradioactive electromagnetic tracking system. We report the results from a total of 1024 fractions from 31 prostate cancer patients. First, the correlations of prostate motion in the right/left (RL), anteroposterior (AP), and superoinferior (SI) directions were determined using Pearson's correlation coefficient. Then the spatial distribution of prostate motion was analyzed for individual fraction, individual patient including all fractions, and all patients including all fractions. The displacement in RL, AP, SI, oblique, or total direction is fitted into a Gaussian distribution, and a Lilliefors test was used to evaluate the validity of the hypothesis that the displacement is normally distributed. Results: There is high correlation in AP/SI direction (61% of fractions with medium or strong correlation). This is consistent with the longitudinal oblique motion of the prostate, and likely the effect from respiration on an organ confined within the genitourinary diaphragm with the rectum sitting posteriorly and bladder sitting superiorly. In all directions, the non-Gaussian distribution is more common for individual fraction, individual patient including all fractions, and all patients including all fractions. The spatial distribution of prostate motion shows an elongated shape in oblique direction, indicating a higher range of motion in the AP and SI directions. Conclusions: Our results showed that the prostate motion is highly correlated in AP and SI direction, indicating an oblique motion preference. In addition, the spatial distribution of prostate motion is elongated in an oblique direction, indicating that the organ motion dosimetric modeling using Gaussian kernel may need to be modified to account for the particular organ motion character of prostate.
The non-Gaussian nature of prostate motion based on real-time intrafraction tracking.
Lin, Yuting; Liu, Tian; Yang, Wells; Yang, Xiaofeng; Khan, Mohammad K
2013-10-01
The objective of this work is to test the validity of the Gaussian approximation for prostate motion through characterization of its spatial distribution. Real-time intrafraction prostate motion was observed using the Calypso 4-dimensional (4D) nonradioactive electromagnetic tracking system. We report the results from a total of 1024 fractions from 31 prostate cancer patients. First, the correlations of prostate motion in the right/left (RL), anteroposterior (AP), and superoinferior (SI) directions were determined using Pearson's correlation coefficient. Then the spatial distribution of prostate motion was analyzed for individual fraction, individual patient including all fractions, and all patients including all fractions. The displacement in RL, AP, SI, oblique, or total direction is fitted into a Gaussian distribution, and a Lilliefors test was used to evaluate the validity of the hypothesis that the displacement is normally distributed. There is high correlation in AP/SI direction (61% of fractions with medium or strong correlation). This is consistent with the longitudinal oblique motion of the prostate, and likely the effect from respiration on an organ confined within the genitourinary diaphragm with the rectum sitting posteriorly and bladder sitting superiorly. In all directions, the non-Gaussian distribution is more common for individual fraction, individual patient including all fractions, and all patients including all fractions. The spatial distribution of prostate motion shows an elongated shape in oblique direction, indicating a higher range of motion in the AP and SI directions. Our results showed that the prostate motion is highly correlated in AP and SI direction, indicating an oblique motion preference. In addition, the spatial distribution of prostate motion is elongated in an oblique direction, indicating that the organ motion dosimetric modeling using Gaussian kernel may need to be modified to account for the particular organ motion character of prostate. Copyright © 2013 Elsevier Inc. All rights reserved.
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma, and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
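A schematic version of the Z-scoring step described above, assuming random placeholder arrays in place of the actual cross-spectral *.lor values: log10-transform, check approximate Gaussianity, then express a new subject relative to the normative mean and standard deviation.

    # Sketch: normative Z-scoring after a log10 transform; placeholder data, not EEG values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    norm_db = rng.lognormal(mean=2.0, sigma=0.4, size=(106, 2394, 30))   # subjects x pixels x Hz

    log_db = np.log10(norm_db)
    mu, sd = log_db.mean(axis=0), log_db.std(axis=0, ddof=1)

    # Gaussianity check for one pixel/frequency bin (the paper sweeps all of them).
    stat, p = stats.shapiro(log_db[:, 0, 0])
    print("Shapiro-Wilk p =", round(p, 3))

    new_subject = np.log10(rng.lognormal(2.0, 0.4, size=(2394, 30)))
    z = (new_subject - mu) / sd
    print("fraction of |Z| > 2:", np.mean(np.abs(z) > 2))   # about 4.6% expected under normality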
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2016-12-01
It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity on the variable of interest. In particular, this work looks at kriging with external drift (KED), because it is an efficient, widely used, and well performing merging method. Rainfall, especially at finer temporal scale, does not have a normal distribution and presents a bi-modal skewed distribution. In some applications a Gaussianity assumption is made, without any correction. In other cases, variables are transformed in order to obtain a distribution closer to Gaussian. This work has two objectives: 1) compare different transformation methods in merging applications; 2) evaluate the uncertainty arising when untransformed rainfall data is used in KED. The comparison of transformation methods is addressed under two points of view. On the one hand, the ability to reproduce the original probability distribution after back-transformation of merged products is evaluated with qq-plots, on the other hand the rainfall estimates are compared with an independent set of rain gauge measurements. The tested methods are 1) no transformation, 2) Box-Cox transformations with parameter equal to λ=0.5 (square root), 3) λ=0.25 (square root - square root), and 4) λ=0.1 (almost logarithmic), 5) normal quantile transformation, and 6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated in comparison with the best performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping bucket rain gauges from the Environment Agency and radar rainfall data at 1 km/5-min resolutions from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
Kollins, Scott H.; McClernon, F. Joseph; Epstein, Jeff N.
2009-01-01
Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n = 12) or without (n = 14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions—a ‘Satiated’ condition wherein participants smoked up to and during the session; and an ‘Abstinent’ condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD
Liu, Chengyu; Zheng, Dingchang; Zhao, Lina; Liu, Changchun
2014-01-01
It has been reported that Gaussian functions could accurately and reliably model both carotid and radial artery pressure waveforms (CAPW and RAPW). However, the physiological relevance of the characteristic features from the modeled Gaussian functions has been little investigated. This study thus aimed to determine characteristic features from the Gaussian functions and to make comparisons of them between normal subjects and heart failure patients. Fifty-six normal subjects and 51 patients with heart failure were studied with the CAPW and RAPW signals recorded simultaneously. The two signals were normalized first and then modeled by three positive Gaussian functions, with their peak amplitude, peak time, and half-width determined. Comparisons of these features were finally made between the two groups. Results indicated that the peak amplitude of the first Gaussian curve was significantly decreased in heart failure patients compared with normal subjects (P<0.001). Significantly increased peak amplitude of the second Gaussian curves (P<0.001) and significantly shortened peak times of the second and third Gaussian curves (both P<0.001) were also presented in heart failure patients. These results were true for both CAPW and RAPW signals, indicating the clinical significance of the Gaussian modeling, which should provide essential tools for further understanding the underlying physiological mechanisms of the artery pressure waveform.
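A minimal sketch of the three-Gaussian modeling step, assuming a synthetic normalized waveform and rough initial guesses; the feature read-out (peak amplitude, peak time, half-width) follows the abstract, but the optimizer settings and the reading of "half-width" as the FWHM are not the authors'.

    # Sketch: fit a normalized pulse waveform with the sum of three positive Gaussians and
    # read off peak amplitude, peak time, and half-width for each component.
    import numpy as np
    from scipy.optimize import curve_fit

    def three_gaussians(t, a1, t1, w1, a2, t2, w2, a3, t3, w3):
        g = lambda a, t0, w: a * np.exp(-((t - t0) ** 2) / (2.0 * w ** 2))
        return g(a1, t1, w1) + g(a2, t2, w2) + g(a3, t3, w3)

    t = np.linspace(0.0, 1.0, 200)                        # one normalized cardiac cycle
    true = (1.0, 0.15, 0.05, 0.55, 0.40, 0.08, 0.25, 0.65, 0.10)
    y = three_gaussians(t, *true) + np.random.default_rng(8).normal(0, 0.01, t.size)

    p0 = (0.8, 0.2, 0.05, 0.5, 0.4, 0.1, 0.2, 0.6, 0.1)   # rough initial guesses
    popt, _ = curve_fit(three_gaussians, t, y, p0=p0, bounds=(0, np.inf))
    amp, peak_time, width = popt[0::3], popt[1::3], popt[2::3]
    half_width = 2.355 * width                            # FWHM of a Gaussian = 2.355 * sigma
    print(amp, peak_time, half_width)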
NASA Astrophysics Data System (ADS)
Xu, Xue-Xiang; Yuan, Hong-Chun; Wang, Yan
2014-07-01
We investigate the nonclassical properties of applying arbitrary-number photon annihilation-then-creation (AC) and creation-then-annihilation (CA) operations to the thermal state (TS), whose normalization factors are related to the polylogarithm function. We then compare the quantum characters of the resulting states (the ACTS and the CATS), such as the photon number distribution, average photon number, Mandel Q-parameter, purity, and the Wigner function. Because of the noncommutativity between the annihilation operator and the creation operator, the ACTS and the CATS have different nonclassical properties. It is found that nonclassical properties are exhibited more strongly after AC than after CA. In addition, we also examine their non-Gaussianity. The result shows that the ACTS can present a slightly bigger non-Gaussianity than the CATS.
Muthu Rama Krishnan, M; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K
2012-04-01
The objective of this paper is to provide an improved technique that can assist oncopathologists in the correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme is composed of collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation, and finally classification. In this study, collagen fibres are segmented on R, G, B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification for reducing the stain intensity variation. Next, textural features of the collagen area are extracted using fractal approaches, viz. differential box counting and the Brownian motion curve. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated based on various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers, viz. Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that SVM with a linear kernel function provides better classification accuracy (91.64%) as compared to the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. Results are studied and discussed here.
Photon-number statistics in resonance fluorescence
NASA Astrophysics Data System (ADS)
Lenstra, D.
1982-12-01
The theory of photon-number statistics in resonance fluorescence is treated, starting with the general formula for the emission probability of n photons during a given time interval T. The results fully confirm formerly obtained results by Cook that were based on the theory of atomic motion in a traveling wave. General expressions for the factorial moments are derived and explicit results for the mean and the variance are given. It is explicitly shown that the distribution function tends to a Gaussian when T becomes much larger than the natural lifetime of the excited atom. The speed of convergence towards the Gaussian is found to be typically slow, that is, the third normalized central moment (or the skewness) is proportional to T^(-1/2). However, numerical results illustrate that the overall features of the distribution function are already well represented by a Gaussian when T is larger than a few natural lifetimes only, at least if the intensity of the exciting field is not too small and its detuning is not too large.
Statistical dynamics of regional populations and economies
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Hao, Rui; Wang, Peng
Quantitative analysis of human behavior and social development is becoming a hot spot in interdisciplinary studies. A statistical analysis of the population and GDP of 150 cities in China from 1990 to 2013 is conducted. The results indicate that the cumulative probability distributions of the populations and of the GDPs each obey a shifted power law. In order to understand these characteristics, a generalized Langevin equation describing the variation of population is proposed, which is based on the correlations between population and GDP as well as the random fluctuations of the related factors. The equation is transformed into the Fokker-Planck equation to express the evolution of the population distribution. The general solution demonstrates a transition of the distribution from the normal Gaussian distribution to a shifted power law, which suggests a critical point in time at which the transition takes place. The shifted power law distribution in the supercritical situation is qualitatively in accordance with the empirical result. The distribution of the GDPs is derived from the well-known Cobb-Douglas production function. The result presents a change, in the supercritical situation, from a shifted power law to the Gaussian distribution. This is a surprising result: the regional GDP distribution of our world will one day follow a Gaussian distribution. The discussions based on the changing trend of economic growth suggest it will be true. Therefore, these theoretical attempts may draw a historical picture of our society in the aspects of population and economy.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
Distribution of scholarly publications among academic radiology departments.
Morelli, John N; Bokhari, Danial
2013-03-01
The aim of this study was to determine whether the distribution of publications among academic radiology departments in the United States is Gaussian (ie, the bell curve) or Paretian. The search affiliation feature of the PubMed database was used to search for publications in 3 general radiology journals with high Impact Factors, originating at radiology departments in the United States affiliated with residency training programs. The distribution of the number of publications among departments was examined using χ(2) test statistics to determine whether it followed a Pareto or a Gaussian distribution more closely. A total of 14,219 publications contributed since 1987 by faculty members in 163 departments with residency programs were available for assessment. The data acquired were more consistent with a Pareto (χ(2) = 80.4) than a Gaussian (χ(2) = 659.5) distribution. The mean number of publications for departments was 79.9 ± 146 (range, 0-943). The median number of publications was 16.5. The majority (>50%) of major radiology publications from academic departments with residency programs originated in <10% (n = 15 of 178) of such departments. Fifteen programs likewise produced no publications in the surveyed journals. The number of publications in journals with high Impact Factors published by academic radiology departments more closely fits a Pareto rather than a normal distribution. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
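The comparison itself is easy to prototype. The sketch below bins synthetic per-department counts and computes chi-square statistics for fitted Pareto and Gaussian models; the counts, bins, and fitting choices are assumptions for illustration, not the study's data.

    # Sketch: chi-square comparison of Pareto vs Gaussian fits to publication counts.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    counts = stats.pareto.rvs(b=1.2, scale=5, size=163, random_state=rng).round()

    bins = np.array([0, 10, 20, 40, 80, 160, 320, 1000])
    observed, _ = np.histogram(counts, bins=bins)

    def chi2_for(dist, params):
        cdf = dist.cdf(bins, *params)
        expected = len(counts) * np.diff(cdf)
        expected = np.clip(expected, 1e-9, None)        # avoid division by zero in empty bins
        return np.sum((observed - expected) ** 2 / expected)

    print("Pareto   chi2:", round(chi2_for(stats.pareto, stats.pareto.fit(counts, floc=0)), 1))
    print("Gaussian chi2:", round(chi2_for(stats.norm, stats.norm.fit(counts)), 1))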
Combining Mixture Components for Clustering*
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
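A minimal sketch of the first stage of the procedure, selecting the number of Gaussian mixture components by BIC, using scikit-learn on toy data (an assumed tool, not necessarily the software used by the authors). The hierarchical entropy-based merging of components is not reproduced; only the classification entropy that drives it is computed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# toy data: one Gaussian cluster plus one skewed (non-Gaussian) cluster
x = np.vstack([rng.normal(0, 1, (300, 2)),
               np.column_stack([rng.gamma(2, 1, 300) + 4, rng.normal(0, 1, 300)])])

fits = [GaussianMixture(n_components=k, random_state=0).fit(x) for k in range(1, 7)]
bics = [m.bic(x) for m in fits]
best = fits[int(np.argmin(bics))]
print("K selected by BIC:", best.n_components)

# entropy of the soft assignment, the quantity underlying the merging criterion
resp = best.predict_proba(x)
entropy = -np.sum(resp * np.log(np.clip(resp, 1e-12, None)))
print("classification entropy:", entropy)
```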
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2016-12-01
Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s. A smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the weighted mean-square deviations based on the sums of the total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity at nighttime to detect these signals effectively. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. The last conversion is done so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtracting the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
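An illustrative sketch of the steps just described, run on a synthetic grayscale frame rather than an actual sky-camera image: the 2D Gaussian model of the moon is centred on the brightest pixel, its widths come from intensity-weighted second moments, the model is subtracted, and cloud cover is the fraction of remaining pixels above the 0.07 threshold. The local moment window is an implementation choice not spelled out in the abstract.

```python
import numpy as np

def gaussian2d(shape, x0, y0, sx, sy, amp):
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2)))

rng = np.random.default_rng(0)
img = 0.05 * rng.random((480, 640))                    # faint background "clouds"
img += gaussian2d(img.shape, 320, 240, 12, 10, 0.9)    # synthetic moon, away from the border

y0, x0 = np.unravel_index(np.argmax(img), img.shape)   # (x0, y0) at the maximum signal

# second moments computed in a local window around the peak (assumed window size)
w = 60
patch = img[y0 - w:y0 + w, x0 - w:x0 + w]
py, px = np.indices(patch.shape)
sx = np.sqrt(np.sum(patch * (px - w) ** 2) / patch.sum())
sy = np.sqrt(np.sum(patch * (py - w) ** 2) / patch.sum())

moonless = img - gaussian2d(img.shape, x0, y0, sx, sy, img[y0, x0])
cloud_cover = np.mean(moonless > 0.07)                 # fraction of pixels above the threshold
print(f"sx={sx:.1f}, sy={sy:.1f}, cloud cover={cloud_cover:.2f}")
```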
φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations
NASA Astrophysics Data System (ADS)
Sornette, D.; Simonetti, P.; Andersen, J. V.
2000-08-01
Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem in which one desires to diversify on a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a non-linear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, which we illustrate in detail for multivariate Weibull distributions. The interaction (non-mean field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive empirical tests are presented on the foreign exchange market that validate satisfactorily the theory. For “fat tail” distributions, we show that an adequate prediction of the risks of a portfolio relies much more on the correct description of the tail structure than on their correlations. For the case of asymmetric return distributions, our theory allows us to generalize the return-risk efficient frontier concept to incorporate the dimensions of large risks embedded in the tail of the asset distributions. We demonstrate that it is often possible to increase the portfolio return while decreasing the large risks as quantified by the fourth and higher-order cumulants. Exact theoretical formulas are validated by empirical tests.
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
Rockfall travel distances theoretical distributions
NASA Astrophysics Data System (ADS)
Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea
2017-04-01
The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the propagation probability either from partial data or purely theoretically. The propagation can be assumed to be frictional, which allows the average propagation to be described by a kinetic energy line corresponding to the loss of energy along the path. But the loss of energy can also be treated as a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fits to data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
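A hedged sketch of how the candidate stop-point distributions named above could be compared on data: fit each with scipy and rank them by AIC. The runout distances here are synthetic; the paper's comparisons use observed rockfall inventories.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
runout = stats.lognorm.rvs(s=0.5, scale=80.0, size=300, random_state=rng)  # synthetic distances [m]

candidates = {
    "Gaussian":             stats.norm,
    "inverse-Gaussian":     stats.invgauss,
    "log-normal":           stats.lognorm,
    "negative exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(runout)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(runout, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:22s} AIC = {aic:8.1f}")
```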
Manual choice reaction times in the rate-domain
Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean
2014-01-01
Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that “rate” (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that 2-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
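A quick sketch of the central claim, on simulated data: the reciprocal of the response time (the rate) is tested for normality alongside the RT itself. The RTs below are generated from a reciprocal-Normal, so the outcome is by construction; with real data the same two tests would be applied to the recorded RTs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rate = rng.normal(loc=3.0, scale=0.6, size=800)   # promptness in 1/s
rate = rate[rate > 0.5]                           # keep physically plausible rates
rt = 1.0 / rate                                   # reaction times in s

for label, sample in [("RT", rt), ("rate = 1/RT", rate)]:
    stat, p = stats.normaltest(sample)            # D'Agostino-Pearson normality test
    print(f"{label:12s} skew={stats.skew(sample):+.2f}  normality p={p:.3g}")
```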
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
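A sketch of the statistical comparison described above, on simulated electrolyte-like samples rather than the study's patient data: the sample skewness is tested against the zero skewness of a Gaussian with scipy.stats.skewtest. The sample sizes and the shapes chosen below are assumptions made only to mimic symmetric versus right-skewed analytes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = {
    "Na+ (mmol/L)":   rng.normal(140, 2.0, 237),          # roughly symmetric
    "K+ (mmol/L)":    4.0 + rng.gamma(4.0, 0.1, 237),      # right-skewed
    "HCO3- (mmol/L)": 24.0 + rng.gamma(3.0, 0.7, 237),     # right-skewed
}
for name, x in samples.items():
    z, p = stats.skewtest(x)      # H0: skewness equals that of a normal distribution
    print(f"{name:15s} skew={stats.skew(x):+.2f}  z={z:+.2f}  p={p:.3g}")
```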
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar
NASA Technical Reports Server (NTRS)
Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow a log-Poisson statistics, the passive scalar increments follow a statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.
Zheng, Xiliang; Wang, Jin
2015-01-01
We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453
Dekkers, A L M; Slob, W
2012-10-01
In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
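A hedged sketch of the back-transformation step being proposed: if the transformed daily intakes follow a Box-Cox model with individual mean m_i and within-person standard deviation sigma_w on the transformed scale, the usual intake on the original scale is E[g^{-1}(m_i + eps)] with eps ~ N(0, sigma_w^2), which Gauss-Hermite quadrature evaluates with a handful of nodes. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

lam = 0.3                      # Box-Cox parameter (assumed)
m_i, sigma_w = 2.0, 0.6        # transformed-scale individual mean and within-person SD (assumed)

def boxcox_inverse(y, lam):
    return (lam * y + 1.0) ** (1.0 / lam) if lam != 0 else np.exp(y)

# E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite:
# E[f(X)] ~ (1/sqrt(pi)) * sum_k w_k f(mu + sqrt(2)*sigma*x_k)
nodes, weights = np.polynomial.hermite.hermgauss(9)
usual_intake = np.sum(weights * boxcox_inverse(m_i + np.sqrt(2) * sigma_w * nodes, lam)) / np.sqrt(np.pi)

# brute-force Monte Carlo check of the same expectation
rng = np.random.default_rng(0)
mc = boxcox_inverse(m_i + sigma_w * rng.normal(size=200_000), lam).mean()
print(f"Gauss-Hermite: {usual_intake:.4f}   Monte Carlo: {mc:.4f}")
```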
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01 parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
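A simplified sketch of the cross-validation logic described above, for a single "pixel"/frequency measure: log10-transform, then classify each left-out normal subject as abnormal if the absolute Z score exceeds the two-tailed threshold computed from the remaining subjects. Real LORETA current-density files are replaced by simulated log-normal values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
values = rng.lognormal(mean=1.0, sigma=0.4, size=43)   # 43 normal subjects, one measure
x = np.log10(values)                                   # transform toward Gaussianity

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)                 # two-tailed cut-off

false_pos = 0
for i in range(len(x)):
    rest = np.delete(x, i)                             # leave one subject out
    z = (x[i] - rest.mean()) / rest.std(ddof=1)
    false_pos += abs(z) > z_crit
print(f"false positive rate: {false_pos / len(x):.3f} (expected about {alpha})")
```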
Nicolás, R O
1987-09-15
Different optical analyses of cylindrical-parabolic concentrators have been made utilizing four models of the intensity distribution of the solar disk, i.e., square, uniform, real, and Gaussian. In this paper, the validity conditions for using such distributions are determined by calculating, for each model, the intensity distribution on the receiver plane of perfect and nonperfect cylindrical-parabolic concentrators. We call nonperfect concentrators those in which the normal to each differential element of the specular surface departs from its correct position by an angle whose possible values follow a Gaussian distribution of mean value ε and standard deviation σ(ε). In particular, the results obtained with the models considered for a concentrator with an aperture half-angle of 45 degrees are shown and compared. An important conclusion is that for σ(ε) ≳ 4 mrad, and in some cases for σ(ε) ≳ 2 mrad, the results obtained are practically independent of the model used.
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
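A small sketch of the robustness argument only (not the variational mixture of experts itself): a Gaussian and a Student-t are fitted to the same data after a few gross outliers are added. The heavy-tailed t largely absorbs the outliers, whereas the Gaussian inflates its standard deviation. Data and outlier values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500), [12.0, -15.0, 20.0]])  # outliers added

mu_g, sd_g = stats.norm.fit(data)
df_t, loc_t, scale_t = stats.t.fit(data)

print(f"Gaussian fit : mean={mu_g:+.2f}, sd={sd_g:.2f}")
print(f"Student-t fit: dof={df_t:.1f}, loc={loc_t:+.2f}, scale={scale_t:.2f}")
```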
Statistical Modeling of Retinal Optical Coherence Tomography.
Amini, Zahra; Rabbani, Hossein
2016-06-01
In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure and in OCT each of these layers has a specific pdf which is corrupted by speckle noise, therefore a mixture model for statistical modeling of OCT images is proposed. A Normal-Laplace distribution, which is a convolution of a Laplace pdf and Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting a mixture model to the data, each component is gaussianized and all of them are combined by Averaged Maximum A Posterior (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCTs taken by the Topcon 3D OCT and five 3D OCTs from Age-related Macular Degeneration (AMD) patients, taken by Zeiss Cirrus HD-OCT. Comparing the results with two contending techniques, the prominence of the proposed method is demonstrated both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers using the proposed contrast enhancement method as a preprocessing step, is demonstrated.
Non-Gaussian noise-weakened stability in a foraging colony system with time delay
NASA Astrophysics Data System (ADS)
Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong
2018-02-01
In this paper, the dynamical properties in a foraging colony system with time delay and non-Gaussian noise were investigated. Using delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and normalization correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves. Numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity, this maximum for ART is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum for ART is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it, namely, the time delay and departure from Gaussian noise play opposite roles in ART or NCF.
Semi-nonparametric VaR forecasts for hedge funds during the recent crisis
NASA Astrophysics Data System (ADS)
Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier
2014-05-01
The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although such accurate VaR models and methodologies are particularly in demand among hedge fund managers, few articles are specifically devoted to implementing new techniques for forecasting the VaR of hedge fund returns. This article addresses these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student’s t and skewed-t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series and the extreme value theory (EVT) approach. Our results show that normal-, Student’s t- and skewed t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches succeed in doing so. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student’s t distributions.
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither the parameter selection and thresholds estimation nor the process of cost function selection and complex gradient computation. Computer simulations have been carried out to suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and then the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
Fiori, Aldo; Volpi, Elena; Zarlenga, Antonio; Bohling, Geoffrey C
2015-08-01
The impact of the logconductivity (Y=ln K) distribution fY on transport at the MADE site is analyzed. Our principal interest is in non-Gaussian fY characterized by heavier tails than the Gaussian. Both the logconductivity moments and fY itself are inferred, taking advantage of the detailed measurements of Bohling et al. (2012). The resulting logconductivity distribution displays heavier tails than the Gaussian, although the departure from Gaussianity is not significant. The effect of the logconductivity distribution on the breakthrough curve (BTC) is studied through an analytical, physically based model. It is found that the non-Gaussianity of the MADE logconductivity distribution does not strongly affect the BTC. Counterintuitively, assuming heavier tailed distributions for Y, with same variance, leads to BTCs which are more symmetrical than those for the Gaussian fY, with less pronounced preferential flow. Results indicate that the impact of strongly non-Gaussian, heavy tailed distributions on solute transport in heterogeneous porous formations can be significant, especially in the presence of high heterogeneity, resulting in reduced preferential flow and retarded peak arrivals. Copyright © 2015 Elsevier B.V. All rights reserved.
Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.
Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark
2015-09-01
"Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capabilities of breaking this degeneracy by weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
Weak atmospheric turbulence condition in an optical wireless communication (OWC) is captured by log-normal distribution. The analytical evaluation of average bit error rate (BER) of an OWC system under weak turbulence is intractable as it involves the statistical averaging of Gaussian Q-function over log-normal distribution. In this paper, a simple closed form approximation for BER of OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using proposed expression. The results obtained using proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo Simulations.
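An illustrative sketch of the averaging problem described above (the specific model, normalization and parameter values are assumptions, and the paper's closed-form approximation is not reproduced): the average BER is E[Q(k*I)] with ln I ~ N(-sigma_x^2/2, sigma_x^2) so that the mean irradiance is one, evaluated by Gauss-Hermite quadrature and checked against Monte Carlo simulation.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

sigma_x = 0.3              # log-irradiance standard deviation (weak turbulence, assumed)
k = 3.0                    # effective SNR factor (assumed)
mu_x = -sigma_x**2 / 2     # normalizes the mean irradiance to one

nodes, weights = np.polynomial.hermite.hermgauss(20)
ber_gh = np.sum(weights * qfunc(k * np.exp(mu_x + np.sqrt(2) * sigma_x * nodes))) / np.sqrt(np.pi)

rng = np.random.default_rng(0)
I = np.exp(mu_x + sigma_x * rng.normal(size=500_000))
ber_mc = qfunc(k * I).mean()
print(f"Gauss-Hermite BER: {ber_gh:.4e}   Monte Carlo BER: {ber_mc:.4e}")
```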
NASA Astrophysics Data System (ADS)
Tang, Bin; Jiang, ShengBao; Jiang, Chun; Zhu, Haibin
2014-07-01
A hollow sinh-Gaussian (HsG) beam is an appropriate model for describing dark-hollow beams. Based on the Collins integral formula and the fact that a hard-edged-aperture function can be expanded into a finite sum of complex Gaussian functions, the propagation properties of an HsG beam passing through fractional Fourier transform (FRFT) optical systems with and without apertures have been studied in detail through typical numerical examples. The results obtained using the approximate analytical formula are in good agreement with those obtained using numerical integration. Further, the studies indicate that the normalized intensity distribution of the HsG beam in the FRFT plane is closely related not only to the fractional order but also to the beam order and the truncation parameter. FRFT optical systems provide a convenient way to shape laser beams.
Experimental study on infrared radiation temperature field of concrete under uniaxial compression
NASA Astrophysics Data System (ADS)
Lou, Quan; He, Xueqiu
2018-05-01
Infrared thermography, as a nondestructive, non-contact and real-time monitoring method, is of great significance in assessing the stability of concrete structures and monitoring their failure. An in-depth study of the mechanism and application of infrared radiation (IR) during concrete failure under loading is therefore needed. In this paper, concrete specimens with a size of 100 × 100 × 100 mm were subjected to uniaxial compression for the IR tests. The distribution of IR temperatures (IRTs), the surface topography of the IRT field and the reconstructed IR images were studied. The results show that the IRT distribution follows a Gaussian distribution, and the R2 of the Gaussian fit changes with loading time. The anomalies of R2 and of the AE counts display opposite trends. The surface topography of the IRT field is similar to a hyperbolic paraboloid, which is related to the stress distribution in the sample. The R2 of the hyperbolic paraboloid fit shows an upward trend prior to fractures that change the IRT field significantly, and drops sharply in response to such large-scale failure. Normalization images of the IRT field, including row and column normalization images, are proposed as auxiliary means of analyzing the IRT field. The row and column normalization images show the transverse and longitudinal distributions of the IRT field, respectively, and respond clearly to failure occurring on the sample surface. The new methods and quantitative indices proposed in this paper are of theoretical and practical significance for analyzing the characteristics of the IRT field, as well as for monitoring instability and failure of concrete structures.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
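The Excel function NORMINV(p, mean, sd) used in Method 2 corresponds to scipy.stats.norm.ppf. The sketch below is only a hedged illustration of the general kind of calculation involved: the fraction of results falling outside Gaussian reference limits for a given normalized bias and analytical imprecision. The paper's exact formulae and its 4.4% criterion are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

lower, upper = norm.ppf(0.025), norm.ppf(0.975)   # NORMINV equivalents of the reference limits

def fraction_outside(bias, s_analytical):
    # combined spread of biological (unit) and analytical variation, both normalized
    s_total = np.sqrt(1.0 + s_analytical**2)
    return (norm.cdf(lower, loc=bias, scale=s_total)
            + norm.sf(upper, loc=bias, scale=s_total))

for b, s in [(0.0, 0.0), (0.25, 0.0), (0.0, 0.5), (0.25, 0.5)]:
    print(f"bias={b:.2f}, imprecision={s:.2f}: {100 * fraction_outside(b, s):.2f}% outside")
```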
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distributions of the variables that result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
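The abstract does not spell out the spherical-coordinate change of variables, so the following is only a minimal sketch of one standard construction under that assumption: the squared spherical coordinates of a point on the unit sphere sum to one, so unconstrained angles can stand in for the sum-to-one grazing preferences inside the filter.

```python
import numpy as np

def angles_to_preferences(angles):
    """Map n-1 unconstrained angles to n non-negative weights summing to 1."""
    prefs, remaining = [], 1.0
    for a in angles:
        c2, s2 = np.cos(a) ** 2, np.sin(a) ** 2
        prefs.append(remaining * c2)   # peel off one component
        remaining *= s2
    prefs.append(remaining)            # whatever mass is left goes to the last component
    return np.array(prefs)

angles = np.array([0.7, 1.1, 0.4])     # three angles -> four preferences (illustrative values)
p = angles_to_preferences(angles)
print(p, p.sum())                      # non-negative, sums to 1.0
```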
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was analyzed by computer for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
Simple reaction time in 8-9-year old children environmentally exposed to PCBs.
Šovčíková, Eva; Wimmerová, Soňa; Strémy, Maximilián; Kotianová, Janette; Loffredo, Christopher A; Murínová, Ľubica Palkovičová; Chovancová, Jana; Čonka, Kamil; Lancz, Kinga; Trnovec, Tomáš
2015-12-01
Simple reaction time (SRT) has been studied in children exposed to polychlorinated biphenyls (PCBs), with variable results. In the current work we examined SRT in 146 boys and 161 girls, aged 8.53 ± 0.65 years (mean ± SD), exposed to PCBs in the environment of eastern Slovakia. We divided the children into tertiles with regard to increasing PCB serum concentration. The mean ± SEM serum concentration of the sum of 15 PCB congeners was 191.15 ± 5.39, 419.23 ± 8.47, and 1315.12 ± 92.57 ng/g lipids in children of the first, second, and third tertiles, respectively. We created probability distribution plots for each child from their multiple trials of the SRT testing. We fitted response time distributions from all valid trials with the ex-Gaussian function, a convolution of a normal and an additional exponential function, providing estimates of three independent parameters μ, σ, and τ. μ is the mean of the normal component, σ is the standard deviation of the normal component, and τ is the mean of the exponential component. Group response time distributions were calculated using the Vincent averaging technique. A Q-Q plot comparing probability distribution of the first vs. third tertile indicated that deviation of the quantiles of the latter tertile from those of the former begins at the 40th percentile and does not show a positive acceleration. This was confirmed in comparison of the ex-Gaussian parameters of these two tertiles adjusted for sex, age, Raven IQ of the child, mother's and father's education, behavior at home and school, and BMI: the results showed that the parameters μ and τ significantly (p ≤ 0.05) increased with PCB exposure. Similar increases of the ex-Gaussian parameter τ in children suffering from ADHD have been previously reported and interpreted as intermittent attentional lapses, but were not seen in our cohort. Our study has confirmed that environmental exposure of children to PCBs is associated with prolongation of simple reaction time reflecting impairment of cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Statistical studies of animal response data from USF toxicity screening test method
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Machado, A. M.
1978-01-01
Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.
A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.
2014-12-01
The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, our choice of minimizing the difference between instrument counts not phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)2 required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
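A worked sketch of why the expectation remains analytic (with illustrative numbers, not clinical data): if the depth dose is a weighted sum of Gaussians, d(z) = sum_k w_k N(z; mu_k, delta_k^2), and the radiological depth is uncertain, z ~ N(mu_p, sigma_p^2), then E[d(z)] = sum_k w_k N(mu_p; mu_k, sigma_p^2 + delta_k^2), i.e. each component only needs its width inflated.

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

w     = np.array([0.2, 0.5, 0.3])        # component weights (assumed)
mu_k  = np.array([80.0, 95.0, 100.0])    # component means [mm] (assumed)
dlt_k = np.array([15.0, 6.0, 2.0])       # component widths [mm] (assumed)
mu_p, sigma_p = 97.0, 3.0                # nominal depth and range uncertainty [mm] (assumed)

# closed-form expected dose under the range uncertainty
expected_dose = np.sum(w * gauss(mu_p, mu_k, sigma_p**2 + dlt_k**2))

# numerical check of the same integral on a fine grid
z = np.linspace(0.0, 200.0, 20001)
pz = gauss(z, mu_p, sigma_p**2)
dz = np.sum(w[:, None] * gauss(z[None, :], mu_k[:, None], (dlt_k**2)[:, None]), axis=0)
numeric = np.sum(pz * dz) * (z[1] - z[0])
print(expected_dose, numeric)
```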
NASA Astrophysics Data System (ADS)
Csillik, O.; Evans, I. S.; Drăguţ, L.
2015-03-01
Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
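A sketch of the two transformations described above, on synthetic morphometric-like data. Note that scipy.stats.boxcox by default picks lambda by maximum likelihood; here lambda is instead chosen to minimize the absolute skewness, closer to the stated criterion. The data and the arctangent scale factor are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
slope = rng.gamma(shape=2.0, scale=4.0, size=5000)       # positively skewed slope gradients
curvature = rng.standard_t(df=3, size=5000) * 0.02       # heavy-tailed curvature-like values

# Box-Cox lambda minimizing |skewness| of the transformed slope gradients
lam = optimize.minimize_scalar(
    lambda l: abs(stats.skew(stats.boxcox(slope, lmbda=l))),
    bounds=(-2, 2), method="bounded").x
slope_t = stats.boxcox(slope, lmbda=lam)

# arctangent transform for curvature; the scale factor pulls kurtosis toward the Gaussian value of 3
c = 50.0
curv_t = np.arctan(c * curvature)

print(f"lambda = {lam:.2f}, slope skew: {stats.skew(slope):+.2f} -> {stats.skew(slope_t):+.2f}")
print(f"curvature kurtosis: {stats.kurtosis(curvature, fisher=False):.1f} -> "
      f"{stats.kurtosis(curv_t, fisher=False):.1f}")
```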
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
Bayesian soft X-ray tomography using non-stationary Gaussian Processes.
Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
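A heavily simplified sketch of the Bayesian machinery above, using a stationary GP, a toy 1D emissivity profile and random chord geometry (unlike the non-stationary 2D setting of the paper): with a Gaussian process prior, Gaussian noise and a linear line-integral operator A, the posterior over the emissivity is Gaussian with closed-form mean and covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)                          # discretized emissivity profile
true_emiss = np.exp(-0.5 * ((x - 0.5) / 0.12) ** 2)

# toy "line integrals": random binary chords through the profile
A = (rng.random((20, 100)) < 0.3).astype(float) / 100.0
y = A @ true_emiss + rng.normal(0, 1e-3, 20)        # noisy measurements

# squared-exponential GP prior (stationary here; the paper adapts the length scale)
ell, amp, noise = 0.08, 1.0, 1e-3
K = amp * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

S = A @ K @ A.T + noise**2 * np.eye(len(y))
post_mean = K @ A.T @ np.linalg.solve(S, y)                  # analytic posterior mean
post_cov  = K - K @ A.T @ np.linalg.solve(S, A @ K)          # analytic posterior covariance
print("max posterior std:", np.sqrt(np.diag(post_cov)).max())
```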
Evaluation of non-Gaussian diffusion in cardiac MRI.
McClymont, Darryl; Teh, Irvin; Carruth, Eric; Omens, Jeffrey; McCulloch, Andrew; Whittington, Hannah J; Kohl, Peter; Grau, Vicente; Schneider, Jürgen E
2017-09-01
The diffusion tensor model assumes Gaussian diffusion and is widely applied in cardiac diffusion MRI. However, diffusion in biological tissue deviates from a Gaussian profile as a result of hindrance and restriction from cell and tissue microstructure, and may be quantified better by non-Gaussian modeling. The aim of this study was to investigate non-Gaussian diffusion in healthy and hypertrophic hearts. Thirteen rat hearts (five healthy, four sham, four hypertrophic) were imaged ex vivo. Diffusion-weighted images were acquired at b-values up to 10,000 s/mm 2 . Models of diffusion were fit to the data and ranked based on the Akaike information criterion. The diffusion tensor was ranked best at b-values up to 2000 s/mm 2 but reflected the signal poorly in the high b-value regime, in which the best model was a non-Gaussian "beta distribution" model. Although there was considerable overlap in apparent diffusivities between the healthy, sham, and hypertrophic hearts, diffusion kurtosis and skewness in the hypertrophic hearts were more than 20% higher in the sheetlet and sheetlet-normal directions. Non-Gaussian diffusion models have a higher sensitivity for the detection of hypertrophy compared with the Gaussian model. In particular, diffusion kurtosis may serve as a useful biomarker for characterization of disease and remodeling in the heart. Magn Reson Med 78:1174-1186, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Klimenko, V. V.
2017-12-01
We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.
High-order space charge effects using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, Michael F.; Bruhwiler, David L.; Computer Accelerator Physics Conference Williamsburg, Virginia 1996
1997-02-01
The Northrop Grumman Topkark code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used to either track an array of particles or construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past and we present a new method for modeling space charge forces to high-order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor Series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
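For readers who want the same functionality today, most of these distributions and auxiliary functions have direct counterparts in SciPy; the mapping below is only an illustration, not part of the USGS Fortran library.

```python
import numpy as np
from scipy import stats, special

x = 2.5
print(stats.norm.cdf(x))                 # Gaussian (normal)
print(stats.chi2.cdf(x, df=4))           # chi-square
print(stats.gamma.cdf(x, a=2.0))         # gamma
print(stats.beta.cdf(0.3, 2, 5))         # beta
print(stats.weibull_min.cdf(x, c=1.5))   # Weibull
print(stats.t.cdf(x, df=10))             # Student's t
print(stats.f.cdf(x, 3, 12))             # Snedecor F
print(stats.kstwobign.cdf(x))            # asymptotic Kolmogorov-Smirnov D
print(special.i0(x))                     # Bessel function I0
print(special.gammaln(x))                # log-gamma
print(special.erf(x))                    # error function
print(special.exp1(x))                   # exponential integral E1

# Uniform and normal generators, plus the inverse-CDF trick for other laws
rng = np.random.default_rng(0)
u = rng.uniform(size=1000)
z = rng.standard_normal(1000)
g = stats.gamma.ppf(u, a=2.0)            # gamma variates from uniform numbers
```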
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Non-Gaussian statistics of soliton timing jitter induced by amplifier noise.
Ho, Keang-Po
2003-11-15
Based on first-order perturbation theory of the soliton, the Gordon-Haus timing jitter induced by amplifier noise is found to be non-Gaussian distributed. Both frequency and timing jitter have larger tail probabilities than Gaussian distribution given by the linearized perturbation theory. The timing jitter has a larger discrepancy from Gaussian distribution than does the frequency jitter.
NASA Astrophysics Data System (ADS)
Wang, Haiyan; Li, Xiangyin
2010-01-01
Normalized intensity distribution, the complex degree of coherence and power in the bucket for partially coherent controllable dark hollow beams (DHBs) with various symmetries propagating in atmospheric turbulence are derived using the tensor method and investigated in detail. Analytical results show that, after a sufficient propagation distance, partially coherent DHBs with various symmetries eventually become a circular Gaussian beam (without the dark hollow) in turbulent atmosphere, which differs from their propagation properties in free space. The partially coherent DHBs return to a circular Gaussian beam more rapidly for stronger turbulence, higher coherence, lower beam order, smaller p or smaller beam waist width. Another interesting observation is that the profile of the complex degree of coherence attains a profile similar to that of the average intensity of the related beam propagating in a turbulent atmosphere. In addition, the laser power focusability of DHBs is better than that of a Gaussian beam propagating in turbulent atmosphere.
ERIC Educational Resources Information Center
Ivancheva, Ludmila E.
2001-01-01
Discusses the concept of the hyperbolic or skew distribution as a universal statistical law in information science and socioeconomic studies. Topics include Zipf's law; Stankov's universal law; non-Gaussian distributions; and why most bibliometric and scientometric laws reveal characters of non-Gaussian distribution. (Author/LRW)
Fundamentals of Research Data and Variables: The Devil Is in the Details.
Vetter, Thomas R
2017-10-01
Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is comprised of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess if a set of data have a normal distribution (display "normality"). The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
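As a brief illustration of the quantitative normality checks and remedy (1) mentioned above, the sketch below applies the Shapiro-Wilk and Kolmogorov-Smirnov tests and a log transformation to a hypothetical right-skewed sample; the data, seed and 0.05 cut-off are illustrative choices only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # hypothetical skewed sample

# Quantitative normality tests named in the abstract
w_stat, w_p = stats.shapiro(data)
# KS test against a standard normal after z-scoring (a common, if slightly
# liberal, way to apply it when mean/SD are estimated from the sample)
ks_stat, ks_p = stats.kstest(stats.zscore(data), "norm")

if w_p < 0.05:
    # Remedy (1): transform all values; a log works for positive, right-skewed data
    transformed = np.log(data)
    print("after log transform, Shapiro-Wilk p =", stats.shapiro(transformed)[1])
```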
Automated Weather Observing System (AWOS) Demonstration Program.
1984-09-01
month "bur:-in" r "debugging" period and a 10-month ’usefu I life " period. Fhe butrn- in pr i ,J was i sed to establish the Data Acquisition System...Histograms. Histograms provide a graphical means of showing how well the probability distribution of residu : , approaches a normal or Gaussian distribution...Organization Report No. 7- Author’s) Paul .J. O t Brien et al. DOT/FAA/CT-84/20 9. Performing Organlzation Name and Address 10. Work Unit No. (TRAIS
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity. (ABSTRACT TRUNCATED AT 250 WORDS)
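A minimal version of the simulation idea, drift plus noise accumulated to a threshold with reset, is sketched below; the drift, threshold and noise scales are illustrative choices, not those of the original study. The three noise samplers are matched in mean and variance so that only their shape differs, mirroring the comparison described above.

```python
import numpy as np

def spike_intervals(noise_sampler, n_steps=200_000, drift=0.02, threshold=1.0, seed=0):
    """Non-leaky integrate-and-fire: accumulate drift + noise, fire and reset
    when the sum crosses threshold; return the inter-impulse intervals."""
    rng = np.random.default_rng(seed)
    noise = noise_sampler(rng, n_steps)
    v, last, intervals = 0.0, 0, []
    for t in range(n_steps):
        v += drift + noise[t]
        if v >= threshold:
            intervals.append(t - last)
            last, v = t, 0.0
    return np.array(intervals)

# Three zero-mean noise distributions with equal standard deviation (0.05)
gauss   = lambda rng, n: rng.normal(0.0, 0.05, n)                 # no skew, normokurtic
gamma1  = lambda rng, n: rng.gamma(1.0, 0.05, n) - 0.05           # positive skew, leptokurtic
uniform = lambda rng, n: rng.uniform(-0.0866, 0.0866, n)          # no skew, platykurtic

for name, sampler in [("normal", gauss), ("gamma", gamma1), ("uniform", uniform)]:
    iv = spike_intervals(sampler)
    print(name, "mean interval:", iv.mean(), "CV:", iv.std() / iv.mean())
```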
Deformation structure analysis of material at fatigue on the basis of the vector field
NASA Astrophysics Data System (ADS)
Kibitkin, Vladimir V.; Solodushkin, Andrey I.; Pleshanov, Vasily S.
2017-12-01
In the paper, spatial distributions of deformation, circulation, and shear amplitudes and shear angles are obtained from the displacement vector field measured by the DIC technique. This vector field and its characteristics of shears and vortices are given as an example of such an approach. The basic formulae are also given. The experiment shows that honeycomb deformation structures can arise in the center of a macrovortex at developed plastic flow. The spatial distribution of local circulation and shears is discovered, which coincides with the deformation structure, but their amplitudes are different. The analysis proves that the spatial distribution of shear angles is a result of maximum tangential and normal stresses. The anticlockwise circulation of most local vortices obeys the normal Gaussian law in the area of interest.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the bases is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
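The exact CI formulae are given in the report; purely as a hedged illustration of the normalized-likelihood-ratio idea for the Gaussian case, one might compute something like the following (this is a sketch under assumed definitions, not the 'Sequitor' code).

```python
import numpy as np

def change_index_gaussian(base, new):
    """Illustrative 'change index' for Gaussian mean/variance: the likelihood
    ratio comparing 'base and new share one parameter set' against 'each
    segment has its own parameters', normalized per new observation.
    Values near 1 mean the new data remain compatible with the base."""
    def loglik(x):
        mu, var = x.mean(), x.var(ddof=0)
        return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1.0)
    both = np.concatenate([base, new])
    llr = loglik(both) - (loglik(base) + loglik(new))   # always <= 0
    return np.exp(llr / len(new))

rng = np.random.default_rng(2)
base = rng.normal(0.0, 1.0, 60)
print(change_index_gaussian(base, rng.normal(0.0, 1.0, 10)))  # compatible -> near 1
print(change_index_gaussian(base, rng.normal(3.0, 1.0, 10)))  # shifted mean -> drops
```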
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1998-01-01
A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian but, as cloud fraction decreased, they changed to exponential distributions that peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It requires only basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.
NASA Astrophysics Data System (ADS)
Zhu, Kaicheng; Tang, Huiqin; Tang, Ying; Xia, Hui
2014-12-01
We propose a scheme that converts a sine-Gaussian beam with an edge dislocation into a dark hollow beam with a vortex. Based on the gyrator transform (GT) relation, the closed-form field distribution of generalized sine-Gaussian beams passing through a GT system is derived; the intensity distribution and the corresponding phase distribution associated with the transformed generalized sine-Gaussian beams are analyzed. The distributions are demonstrated graphically by numerical methods, and it is found that, for appropriate beam parameters and GT angle, dark hollow vortex beams with topological charge 1 can be achieved using sine-Gaussian beams carrying an edge dislocation. Moreover, the orbital angular momentum content of a GT sine-Gaussian beam is analyzed. It is proved that the GT retains the odd- or even-order spiral harmonic structures of generalized sine-Gaussian beams in the transform process. In particular, it is wholly possible to convert an edge dislocation embedded in sine-Gaussian beams into a vortex with the GT. The study also reveals that a dark hollow beam cannot be obtained by applying the GT to cos-Gaussian beams.
Non-Gaussian structure of B-mode polarization after delensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya; Nagata, Ryo, E-mail: namikawa@slac.stanford.edu, E-mail: rnagata@post.kek.jp
2015-10-01
The B-mode polarization of the cosmic microwave background on large scales has been considered as a probe of gravitational waves from the cosmic inflation. Ongoing and future experiments will, however, suffer from contamination due to the B-modes of non-primordial origins, one of which is the lensing induced B-mode polarization. Subtraction of the lensing B-modes, usually referred to as delensing, will be required for further improvement of detection sensitivity of the gravitational waves. In such experiments, knowledge of statistical properties of the B-modes after delensing is indispensable to likelihood analysis particularly because the lensing B-modes are known to be non-Gaussian. In this paper, we study non-Gaussian structure of the delensed B-modes on large scales, comparing it with that of the lensing B-modes. In particular, we investigate the power spectrum correlation matrix and the probability distribution function (PDF) of the power spectrum amplitude. Assuming an experiment in which the quadratic delensing is an almost optimal method, we find that delensing reduces correlations of the lensing B-mode power spectra between different multipoles, and that the PDF of the power spectrum amplitude is well described as a normal distribution function with a variance larger than that in the case of a Gaussian field. These features are well captured by an analytic model based on the 4th order Edgeworth expansion. As a consequence of the non-Gaussianity, the constraint on the tensor-to-scalar ratio after delensing is degraded within approximately a few percent, which depends on the multipole range included in the analysis.
Non-Gaussian structure of B-mode polarization after delensing
NASA Astrophysics Data System (ADS)
Namikawa, Toshiya; Nagata, Ryo
2015-10-01
The B-mode polarization of the cosmic microwave background on large scales has been considered as a probe of gravitational waves from the cosmic inflation. Ongoing and future experiments will, however, suffer from contamination due to the B-modes of non-primordial origins, one of which is the lensing induced B-mode polarization. Subtraction of the lensing B-modes, usually referred to as delensing, will be required for further improvement of detection sensitivity of the gravitational waves. In such experiments, knowledge of statistical properties of the B-modes after delensing is indispensable to likelihood analysis particularly because the lensing B-modes are known to be non-Gaussian. In this paper, we study non-Gaussian structure of the delensed B-modes on large scales, comparing it with that of the lensing B-modes. In particular, we investigate the power spectrum correlation matrix and the probability distribution function (PDF) of the power spectrum amplitude. Assuming an experiment in which the quadratic delensing is an almost optimal method, we find that delensing reduces correlations of the lensing B-mode power spectra between different multipoles, and that the PDF of the power spectrum amplitude is well described as a normal distribution function with a variance larger than that in the case of a Gaussian field. These features are well captured by an analytic model based on the 4th order Edgeworth expansion. As a consequence of the non-Gaussianity, the constraint on the tensor-to-scalar ratio after delensing is degraded within approximately a few percent, which depends on the multipole range included in the analysis.
Non-Gaussian structure of B-mode polarization after delensing
Namikawa, Toshiya; Nagata, Ryo
2015-10-01
The B-mode polarization of the cosmic microwave background on large scales has been considered as a probe of gravitational waves from the cosmic inflation. Ongoing and future experiments will, however, suffer from contamination due to the B-modes of non-primordial origins, one of which is the lensing induced B-mode polarization. Subtraction of the lensing B-modes, usually referred to as delensing, will be required for further improvement of detection sensitivity of the gravitational waves. In such experiments, knowledge of statistical properties of the B-modes after delensing is indispensable to likelihood analysis particularly because the lensing B-modes are known to be non-Gaussian. In this paper, we study non-Gaussian structure of the delensed B-modes on large scales, comparing it with that of the lensing B-modes. In particular, we investigate the power spectrum correlation matrix and the probability distribution function (PDF) of the power spectrum amplitude. Assuming an experiment in which the quadratic delensing is an almost optimal method, we find that delensing reduces correlations of the lensing B-mode power spectra between different multipoles, and that the PDF of the power spectrum amplitude is well described as a normal distribution function with a variance larger than that in the case of a Gaussian field. These features are well captured by an analytic model based on the 4th order Edgeworth expansion. Furthermore, as a consequence of the non-Gaussianity, the constraint on the tensor-to-scalar ratio after delensing is degraded within approximately a few percent, which depends on the multipole range included in the analysis.
Irradiation direction from texture
NASA Astrophysics Data System (ADS)
Koenderink, Jan J.; Pont, Sylvia C.
2003-10-01
We present a theory of image texture resulting from the shading of corrugated (three-dimensional textured) surfaces, Lambertian on the micro scale, in the domain of geometrical optics. The derivation applies to isotropic Gaussian random surfaces, under collimated illumination, in normal view. The theory predicts the structure tensors from either the gradient or the Hessian of the image intensity and allows inferences of the direction of irradiation of the surface. Although the assumptions appear prima facie rather restrictive, even for surfaces that are not at all Gaussian, with the bidirectional reflectance distribution function far from Lambertian and vignetting and multiple scattering present, we empirically recover the direction of irradiation with an accuracy of a few degrees.
Super-resolving random-Gaussian apodized photon sieve.
Sabatyan, Arash; Roshaninejad, Parisa
2012-09-10
A novel apodized photon sieve is presented in which random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The random distribution in dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities on the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the central maxima width is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
In vitro tympanic membrane position identification with a co-axial fiber-optic otoscope
NASA Astrophysics Data System (ADS)
Sundberg, Mikael; Peebo, Markus; Strömberg, Tomas
2011-09-01
Otitis media diagnosis can be assisted by measuring the shape of the tympanic membrane. We have developed an ear speculum for an otoscope, including spatially distributed source and detector optical fibers, to generate source-detector intensity matrices (SDIMs), representing the curvature of surfaces. The surfaces measured were a model ear with a latex membrane and harvested temporal bones including intact tympanic membranes. The position of the tympanic membrane was shifted from retracted to bulging by air pressure and that of the latex membrane by water displacement. The SDIM was normalized utilizing both external (a sheared flat plastic cylinder) and internal references (neutral position of the membrane). Data was fitted to a two-dimensional Gaussian surface representing the shape by its amplitude and offset. Retracted and bulging surfaces were discriminated for the model ear by the sign of the Gaussian amplitude for both internal and external reference normalization. Tympanic membranes were separated after a two-step normalization: first to an external reference, adjusted for the distance between speculum and the surfaces, and second by comparison with an average normally positioned SDIM from tympanic membranes. In conclusion, we have shown that the modified otoscope can discriminate between bulging and retracted tympanic membranes in a single measurement, given a two-step normalization.
Symmetric co-movement between Malaysia and Japan stock markets
NASA Astrophysics Data System (ADS)
Razak, Ruzanna Ab; Ismail, Noriszura
2017-04-01
The copula approach is a flexible tool known to capture linear, nonlinear, symmetric and asymmetric dependence between two or more random variables. It is often used as a co-movement measure between stock market returns. The information obtained from copulas, such as the level of association of financial markets during normal, bullish and bearish market phases, is useful for investment strategies and risk management. However, studies of the co-movement between the Malaysia and Japan markets are limited, especially those using copulas. Hence, we aim to investigate the dependence structure between the Malaysia and Japan capital markets for the period spanning from 2000 to 2012. In this study, we showed that the bivariate normal distribution is not suitable for representing the dependence between the Malaysia and Japan markets. Instead, the Gaussian (normal) copula was found to be a good fit to represent the dependence. From our findings, it can be concluded that simple distribution fitting such as the bivariate normal distribution does not suit financial time series data, which are often leptokurtic. The nature of the data is treated by ARMA-GARCH models with heavy-tailed distributions, and these can be associated with copula functions. Regarding the dependence structure between the Malaysia and Japan markets, the findings suggest that both markets co-move concurrently during normal periods.
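As an illustration of the copula step only (the ARMA-GARCH filtering is omitted), a Gaussian copula can be fitted to two return series by mapping each margin to normal scores and estimating their correlation. The series below are synthetic heavy-tailed stand-ins, not the KLSE or Nikkei data used in the study.

```python
import numpy as np
from scipy import stats

def gaussian_copula_rho(x, y):
    """Fit a bivariate Gaussian (normal) copula: transform each margin to
    uniform pseudo-observations via ranks, map to standard-normal scores,
    and estimate their correlation. In practice x and y would be the
    standardized ARMA-GARCH residuals of the two markets' returns."""
    u = stats.rankdata(x) / (len(x) + 1.0)
    v = stats.rankdata(y) / (len(y) + 1.0)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.corrcoef(z1, z2)[0, 1]

rng = np.random.default_rng(3)
common = rng.standard_t(df=5, size=1000)            # shared heavy-tailed factor
market_a = 0.6 * common + 0.8 * rng.standard_t(df=5, size=1000)
market_b = 0.6 * common + 0.8 * rng.standard_t(df=5, size=1000)
print("Gaussian copula correlation:", gaussian_copula_rho(market_a, market_b))
```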
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
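The one-dimensional building block of such a Gaussianization, a maximum-likelihood Box-Cox transformation, can be sketched as follows; the sample is synthetic, and the paper's multivariate generalization and evidence computation are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical skewed one-dimensional posterior sample (e.g. a thinned MCMC chain)
sample = rng.gamma(shape=2.0, scale=1.5, size=5000)

# One-parameter Box-Cox map with lambda chosen by maximum likelihood, the
# 1-D analogue of the transformation search described above.
transformed, lam = stats.boxcox(sample)
print("optimal lambda:", lam)
print("skewness before/after:", stats.skew(sample), stats.skew(transformed))

# The Gaussianized chain can now be summarized analytically by mean and variance
mu, var = transformed.mean(), transformed.var()
```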
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
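The kernel-density ingredient can be illustrated with SciPy's Gaussian KDE on a synthetic non-Gaussian sample; this is only a sketch of the density-estimation step, not the beamforming pipeline, and the mixture sample is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical non-Gaussian "source amplitude" sample: a two-component mixture
s = np.concatenate([rng.normal(-2.0, 0.5, 800), rng.normal(1.5, 1.0, 1200)])

kde = stats.gaussian_kde(s)        # multivariate data also works: pass a (d, N) array
grid = np.linspace(-5.0, 5.0, 200)
pdf = kde(grid)                    # estimated source pdf, no Gaussian assumption

# Functionals of the full estimated pdf (not just its variance) can then enter
# the source estimate, which is what distinguishes this from the second-order
# LCMV beamformer; differential entropy is shown as a generic example.
entropy = -np.trapz(pdf * np.log(pdf + 1e-12), grid)
```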
Wake loss and energy spread factor of the LEReC Booster cavity caused by short range wake field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Binping; Blaskiewicz, Michael; Fedotov, Alexei
The LEReC project uses a DC photoemission gun with a multi-alkali (CsK₂Sb or NaK₂Sb) cathode [1]. To get a 24 mm “flat-top” distribution, 32 Gaussian laser bunches with 0.6 mm rms length are stacked together with 0.75 mm spacing [2]. In this case one cannot simply use a 1 cm rms length Gaussian/step/delta bunch for the short range wake field simulation, since a 0.6 mm bunch contains frequencies much higher than the 1 cm bunch. A short range wake field simulation was done using CST Particle Studio™ with a 0.6 mm rms Gaussian bunch at the speed of light, and this result was compared with the result for a 1 cm rms Gaussian bunch in Figure 1, from which one notices that the wake potential for the 0.6 mm bunch is ~10 times higher than that of the 1 cm bunch. The wake potential of the 0.6 mm bunch, as well as the charge distribution, was then shifted and stacked every 0.75 mm; the normalized results are shown in Figure 2. The wake loss factor (WLF) is the integration of the product of the wake potential and the normalized bunch charge, and the energy spread factor (ESF) is the rms deviation from the average energy loss. It is calculated by summing the weighted squares of the differences and taking the square root of the sum. These two factors were then divided by β² for 1.6 MV beam energy. The wake loss factor is 0.86 V/pC and the energy spread factor is 0.54 V/pC rms. With a 100 pC electron bunch, the inter-bunch energy spread is 54 V rms.
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
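The generalized Gaussian family used above to span Gaussian-to-Laplacian (and heavier-tailed) source models is available as scipy.stats.gennorm; the snippet below merely evaluates its kurtosis and entropy for a few shape parameters and draws speech-shaped samples. The shape values are illustrative, and the Gamma-shaped member of the family discussed in the abstract is not reproduced exactly by gennorm.

```python
import numpy as np
from scipy import stats

# Generalized Gaussian (generalized normal): shape beta = 2 is Gaussian,
# beta = 1 is Laplacian; smaller beta gives heavier, more speech-like tails.
for name, beta in [("Gaussian", 2.0), ("Laplacian", 1.0), ("heavy-tailed", 0.5)]:
    dist = stats.gennorm(beta)
    print(name, "excess kurtosis:", dist.stats(moments="k"), "entropy:", dist.entropy())

# Speech-shaped samples for a TDE simulation could be drawn as:
rng = np.random.default_rng(6)
x = stats.gennorm.rvs(0.8, size=10_000, random_state=rng)
```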
Higher order derivatives of R-Jacobi polynomials
NASA Astrophysics Data System (ADS)
Das, Sourav; Swaminathan, A.
2016-06-01
In this work, the R-Jacobi polynomials defined on the nonnegative real axis and related to the F-distribution are considered. Using their Sturm-Liouville system, higher order derivatives are constructed. The orthogonality property of these higher-order R-Jacobi polynomials is obtained, along with their normal form, self-adjoint form and hypergeometric representation. Interesting results on the interpolation formula and Gaussian quadrature formulae are obtained with numerical examples.
Full-wave generalizations of the fundamental Gaussian beam.
Seshadri, S R
2009-12-01
The basic full wave corresponding to the fundamental Gaussian beam was discovered for the outwardly propagating wave in a half-space by the introduction of a source in the complex space. There is a class of extended full waves all of which reduce to the same fundamental Gaussian beam in the appropriate limit. For the extended full Gaussian waves that include the basic full Gaussian wave as a special case, the sources are in the complex space on different planes transverse to the propagation direction. The sources are cylindrically symmetric Gaussian distributions centered at the origin of the transverse planes, the axis of symmetry being the propagation direction. For the special case of the basic full Gaussian wave, the source is a point source. The radiation intensity of the extended full Gaussian waves is determined and their characteristics are discussed and compared with those of the fundamental Gaussian beam. The extended full Gaussian waves are also obtained for the oppositely propagating outwardly directed waves in the second half-space. The radiation intensity distributions in the two half-spaces have reflection symmetry about the midplane. The radiation intensity distributions of the various extended full Gaussian waves are not significantly different. The power carried by the extended full Gaussian waves is evaluated and compared with that of the fundamental Gaussian beam.
NASA Astrophysics Data System (ADS)
Ovreas, L.; Quince, C.; Sloan, W.; Lanzen, A.; Davenport, R.; Green, J.; Coulson, S.; Curtis, T.
2012-12-01
Arctic microbial soil communities are intrinsically interesting and poorly characterised. We have inferred the diversity and species abundance distribution of 6 Arctic soils: new and mature soil at the foot of a receding glacier, Arctic Semi Desert, the foot of bird cliffs and soil underlying Arctic Tundra Heath, all near Ny-Ålesund, Spitsbergen. Diversity, distribution and sample sizes were estimated using the rational method of Quince et al. (ISME Journal 2, 2008: 997-1006) to determine the most plausible underlying species abundance distribution. A log-normal species abundance curve was found to give a slightly better fit than an inverse Gaussian curve if, and only if, sequencing error was removed. The median estimates of diversity of operational taxonomic units (at the 3% level) were 3600-5600 (lognormal assumed) and 2825-4100 (inverse Gaussian assumed). The nature and origins of species abundance distributions are poorly understood but may yet be grasped by observing and analysing such distributions in the microbial world. The sample size required to observe the distribution (by sequencing 90% of the taxa) varied between ~10⁶ and ~10⁵ for the lognormal and inverse Gaussian respectively. We infer that between 5 and 50 GB of sequencing would be required to capture 90% of the metagenome. Though a principal components analysis clearly divided the sites into three groups, there was a high (20-45%) degree of overlap between locations irrespective of geographical proximity. Interestingly, the nearest relatives of the most abundant taxa at a number of sites were of alpine or polar origin. (Figure: samples plotted on the first two principal components together with arbitrary discriminatory OTUs.)
Financial market dynamics: superdiffusive or not?
NASA Astrophysics Data System (ADS)
Devi, Sandhya
2017-08-01
The behavior of stock market returns over a period of 1-60 d has been investigated for the S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails, indicating that the stock returns do not follow a random walk model. In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from 1 d returns, lie in the range 1.4-1.65. The estimated inverse mean square deviations (β) show a power law behavior in time with exponent values between -0.91 and -1.1, indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Planck (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from the currently estimated parameters are compared with the solutions of the FP equations. For a negligible drift term, the inverse mean square deviations (β_FP) from the FP model follow a power law with exponent values between -1.25 and -1.48, indicating superdiffusion. When the drift term is non-negligible, the corresponding β_FP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit.
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
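For orientation, the classical Gaussian reference change value and one common log-normal (asymmetric) variant can be computed as below; the exact combination of Δbias with CVA used in the paper's model is not reproduced, and the numerical values are illustrative rather than the prolactin or glucose estimates.

```python
import numpy as np

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classical symmetric reference change value (as a fraction),
    with CVs given as fractions."""
    return np.sqrt(2.0) * z * np.sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV assuming log-Gaussian analytical and within-subject
    variation (one common formulation in the literature; the paper's model
    works on the natural-log scale in a similar spirit)."""
    sigma = np.sqrt(np.log(cv_a ** 2 + 1.0) + np.log(cv_i ** 2 + 1.0))
    up = np.exp(np.sqrt(2.0) * z * sigma) - 1.0
    down = np.exp(-np.sqrt(2.0) * z * sigma) - 1.0
    return up, down

print(rcv_gaussian(0.05, 0.10))    # ~0.31, i.e. a 31% symmetric change limit
print(rcv_lognormal(0.05, 0.10))   # asymmetric limits, roughly +36% / -27%
```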
ERIC Educational Resources Information Center
Kistner, Emily O.; Muller, Keith E.
2004-01-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact…
Lin, Guoxing
2016-11-21
Anomalous diffusion exists widely in polymer and biological systems. Pulsed-field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in nuclear magnetic resonance and magnetic resonance imaging. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, the exact signal attenuation expression including the finite gradient pulse width effect has not been obtained based on fractional derivatives for PFG anomalous diffusion. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional derivative model, both the probability density function (PDF) of a spin in real space and the PDF of the spin's accumulating phase shift in virtual phase space are MLP distributions. The MLP phase distribution leads to a Mittag-Leffler function based PFG signal attenuation, which differs significantly from the exponential attenuation for normal diffusion and from the stretched exponential attenuation for fractional diffusion based on the fractal derivative model. A complete signal attenuation expression E_α(-D_f b*_{α,β}) including the finite gradient pulse width effect was obtained and it can handle all three types of PFG fractional diffusions. The result was also extended in a straightforward way to give a signal attenuation expression of fractional diffusion in PFG intramolecular multiple quantum coherence experiments, which has an n^β dependence upon the order of coherence, different from the familiar n² dependence in normal diffusion. The results obtained in this study are in agreement with the results from the literature. The results in this paper provide a set of new, convenient approximation formalisms to interpret complex PFG fractional diffusion experiments.
Fisher information and asymptotic normality in system identification for quantum Markov chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guta, Madalin
2011-06-15
This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ = 0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
NASA Astrophysics Data System (ADS)
Lylova, A. N.; Sheldakova, Yu. V.; Kudryashov, A. V.; Samarkin, V. V.
2018-01-01
We consider methods for modelling doughnut and super-Gaussian intensity distributions in the far field by means of deformable bimorph mirrors. A method for the rapid formation of a specified intensity distribution using a Shack-Hartmann sensor is proposed, and the results of the modelling of doughnut and super-Gaussian intensity distributions are presented.
White Gaussian Noise - Models for Engineers
NASA Astrophysics Data System (ADS)
Jondral, Friedrich K.
2018-04-01
This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i. e. the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectrum densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio as the most important quality measure used in communications, control or measurement technology is accurately introduced. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
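The closing application, QAM transmission over an additive WGN channel with ML detection, can be simulated in a few lines. QPSK, a 10 dB signal-to-noise ratio and the symbol count below are arbitrary choices for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sym, snr_db = 100_000, 10.0

# 4-QAM (QPSK) constellation normalized to unit average symbol energy
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx_idx = rng.integers(0, 4, n_sym)
tx = const[tx_idx]

# Complex additive WGN: total noise variance set by the signal-to-noise ratio
noise_var = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sym)
                                  + 1j * rng.standard_normal(n_sym))
rx = tx + noise

# ML detection for equiprobable symbols in WGN = minimum Euclidean distance
rx_idx = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
print("symbol error rate at", snr_db, "dB:", np.mean(rx_idx != tx_idx))
```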
High-order space charge effects using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, M.F.; Bruhwiler, D.L.
1997-02-01
The Northrop Grumman Topkark code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used to either track an array of particles or construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past and we present a new method for modeling space charge forces to high-order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor Series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic
YOKOYAMA, Jun’ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case. PMID:25504231
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2015-03-01
Aperture-averaged scintillation requires the evaluation of a rather complicated irradiance covariance function. Here we develop a much simpler numerical method based on our earlier introduced semi-analytic approach. Using this method, we calculate the aperture averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped and dark hollow beams. For comparison, the principles of equal source beam power and normalizing the aperture averaged scintillation with respect to received power are applied. Our results indicate that for fully coherent beams, upon adjusting the aperture sizes to capture 10 and 20% of the equal source power, the Gaussian beam needs the largest aperture opening, yielding the lowest aperture averaged scintillation, whilst the opposite occurs for annular Gaussian and dark hollow beams. When assessed on the basis of received-power-normalized aperture averaged scintillation at fixed propagation distance and aperture size, annular Gaussian and dark hollow beams seem to have the lowest scintillation. Just like the case of point-like scintillation, partially coherent beams will offer less aperture averaged scintillation in comparison to fully coherent beams. But this performance improvement relies on larger aperture openings. Upon normalizing the aperture averaged scintillation with respect to received power, fully coherent beams become more advantageous than partially coherent ones.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
Non-gaussianity versus nonlinearity of cosmological perturbations.
Verde, L
2001-06-01
Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics lead to different predictions for the statistical properties of the primordial fluctuations, which can be divided into two classes: gaussian and non-gaussian. Convincing evidence against or for gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the gaussian distribution in several different ways. Even if perturbations start off gaussian, nonlinear gravitational evolution can introduce non-gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxy distribution (bias) is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as the third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-gaussianity even in an initially gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle these different effects, and the observational datasets we require to do so in practice.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as the stationary probability distribution of a multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using this stochastic differential equation, a method is proposed to evaluate a default probability under a given risk buffer.
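A minimal numerical illustration of the mechanism described above, assuming a one-dimensional Euler-Maruyama discretization with illustrative parameter values (not the paper's M-dimensional model or its default-probability application), is:

```python
import numpy as np

rng = np.random.default_rng(1)

gamma_, D_mult, D_add = 1.0, 0.2, 0.5      # drift and noise strengths (illustrative values only)
dt, n_steps, n_paths = 1e-3, 20_000, 2_000

x = np.zeros(n_paths)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)     # multiplicative noise increment
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)     # additive noise increment
    x += -gamma_ * x * dt + np.sqrt(2 * D_mult) * x * dW1 + np.sqrt(2 * D_add) * dW2

# A clearly positive excess kurtosis signals the fat-tailed (q-Gaussian-like) stationary state.
z = (x - x.mean()) / x.std()
print(f"sample excess kurtosis: {np.mean(z ** 4) - 3.0:.2f} (0 for a Gaussian)")
```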
Statistics and topology of the COBE differential microwave radiometer first-year sky maps
NASA Technical Reports Server (NTRS)
Smoot, G. F.; Tenorio, L.; Banday, A. J.; Kogut, A.; Wright, E. L.; Hinshaw, G.; Bennett, C. L.
1994-01-01
We use statistical and topological quantities to test the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) first-year sky maps against the hypothesis that the observed temperature fluctuations reflect Gaussian initial density perturbations with random phases. Recent papers discuss specific quantities as discriminators between Gaussian and non-Gaussian behavior, but the treatment of instrumental noise on the data is largely ignored. The presence of noise in the data biases many statistical quantities in a manner dependent on both the noise properties and the unknown cosmic microwave background temperature field. Appropriate weighting schemes can minimize this effect, but it cannot be completely eliminated. Analytic expressions are presented for these biases, and Monte Carlo simulations are used to assess the best strategy for determining cosmologically interesting information from noisy data. The genus is a robust discriminator that can be used to estimate the power-law quadrupole-normalized amplitude, Q_rms-PS, independently of the two-point correlation function. The genus of the DMR data is consistent with Gaussian initial fluctuations with Q_rms-PS = (15.7 +/- 2.2) - (6.6 +/- 0.3)(n - 1) μK, where n is the power-law index. Fitting the rms temperature variations at various smoothing angles gives Q_rms-PS = 13.2 +/- 2.5 μK and n = 1.7 (+0.3/-0.6). While consistent with Gaussian fluctuations, the first-year data are only sufficient to rule out strongly non-Gaussian distributions of fluctuations.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
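For readers unfamiliar with EMG peak shapes, a minimal sketch of fitting a single exponentially modified Gaussian peak to a noisy synthetic chromatographic trace is shown below; it uses scipy's exponnorm parameterization and is not the NEB/mixture algorithm of the paper, and the peak parameters are invented.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def emg_peak(t, area, K, loc, scale):
    """Area-scaled exponentially modified Gaussian (EMG) peak; scipy's exponnorm shape K = tau/sigma."""
    return area * stats.exponnorm.pdf(t, K, loc=loc, scale=scale)

# Synthetic tailed chromatographic peak with additive noise.
t = np.linspace(0, 60, 600)
signal = emg_peak(t, 100.0, 3.0, 20.0, 1.5) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(emg_peak, t, signal, p0=[80.0, 2.0, 18.0, 2.0],
                    bounds=([0, 0.1, 0, 0.1], [1e4, 20, 60, 10]))
area, K, loc, scale = popt
print(f"fitted area={area:.1f}, mu={loc:.2f}, sigma={scale:.2f}, tau={K * scale:.2f}")
```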
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
Park, Subok; Gallas, Bradon D; Badano, Aldo; Petrick, Nicholas A; Myers, Kyle J
2007-04-01
A previous study [J. Opt. Soc. Am. A22, 3 (2005)] has shown that human efficiency for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds is approximately 4%. This human efficiency is much less than the reported 40% efficiency that has been documented for Gaussian-distributed lumpy backgrounds [J. Opt. Soc. Am. A16, 694 (1999) and J. Opt. Soc. Am. A18, 473 (2001)]. We conducted a psychophysical study with a number of changes, specifically in display-device calibration and data scaling, from the design of the aforementioned study. Human efficiency relative to the ideal observer was found again to be approximately 5%. Our variance analysis indicates that neither scaling nor display made a statistically significant difference in human performance for the task. We conclude that the non-Gaussian distributed lumpy background is a major factor in our low human-efficiency results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2010-06-15
In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.
Capacity of PPM on Gaussian and Webb Channels
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.; Hamkins, J.
2000-01-01
This paper computes and compares the capacities of M-ary PPM on various idealized channels that approximate the optical communication channel: (1) the standard additive white Gaussian noise (AWGN) channel; (2) a more general AWGN channel (AWGN2) allowing different variances in signal and noise slots; (3) a Webb-distributed channel (Webb2); (4) a Webb+Gaussian channel, modeling Gaussian thermal noise added to Webb-distributed channel outputs.
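A hedged Monte Carlo sketch of the first of these capacities, soft-decision M-ary PPM on the standard AWGN channel, is given below. The AWGN2 and Webb channel models are not implemented, and the SNR definition and symbol counts are assumptions made purely for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def ppm_awgn_capacity(M, snr_db, n_sym=200_000, rng=None):
    """Monte Carlo estimate of M-ary PPM capacity (bits/PPM symbol) on the standard AWGN channel.
    snr_db is the pulsed-slot SNR A^2/sigma^2 in dB, with sigma = 1 here."""
    if rng is None:
        rng = np.random.default_rng(0)
    A = 10.0 ** (snr_db / 20.0)          # slot amplitude for unit noise variance
    y = rng.normal(0.0, 1.0, (n_sym, M))
    y[:, 0] += A                         # by symmetry, assume the pulse is in slot 1
    z = A * (y - y[:, [0]])              # log-likelihood differences between candidate slots
    return np.log2(M) - np.mean(logsumexp(z, axis=1) / np.log(2))

for snr in (-3, 0, 3, 6):
    print(f"M=16, SNR={snr:+d} dB: C ~ {ppm_awgn_capacity(16, snr):.2f} bits/symbol")
```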
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components, and Fast Fourier transform (FFT) analyses that allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that, for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
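A minimal sketch of the ex-Gaussian decomposition itself (mu, sigma, tau), fitted to synthetic reaction times with scipy's exponnorm (shape K = tau/sigma), is shown below; the parameter values are invented and this is not the analysis pipeline of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic reaction times (ms): Gaussian component plus an exponential tail of slow responses.
mu, sigma, tau = 450.0, 50.0, 150.0
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm is the ex-Gaussian with shape parameter K = tau / sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rt)
print(f"estimated mu={loc_hat:.0f}, sigma={scale_hat:.0f}, tau={K_hat * scale_hat:.0f}")
```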
Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong
2017-08-15
Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge is less likely to exist between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states; and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA.
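A generic sketch of the nonparanormal idea, rank-based Gaussianization followed by sparse inverse-covariance (graphical lasso) estimation, is given below. This is not the pDNA method with its pathway/regulator priors; the toy data and regularization value are assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)

# Toy expression matrix (n samples x p genes) with deliberately non-normal margins.
n, p = 200, 10
latent = rng.multivariate_normal(np.zeros(p), 0.5 * np.eye(p) + 0.5, size=n)
X = np.exp(latent)                                   # skewed, positive "expression" values

def nonparanormal_scores(X):
    """Column-wise rank-based normal-score (van der Waerden) transform."""
    n = X.shape[0]
    ranks = np.apply_along_axis(stats.rankdata, 0, X)
    return stats.norm.ppf(ranks / (n + 1))

Z = nonparanormal_scores(X)
model = GraphicalLasso(alpha=0.1).fit(Z)             # sparse precision matrix on Gaussianized data
edges = np.abs(model.precision_) > 1e-6
np.fill_diagonal(edges, False)
print("estimated number of edges:", int(edges.sum()) // 2)
```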
Hyperbolic and semi-parametric models in finance
NASA Astrophysics Data System (ADS)
Bingham, N. H.; Kiesel, Rüdiger
2001-02-01
The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
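A hedged sketch of why a Gaussian background model mis-sets the CFAR threshold on heavy-tailed clutter is given below; it draws symmetric alpha-stable samples as a stand-in for sea clutter and compares a robust Gaussian-fit threshold with a distribution-free empirical quantile. Parameter values are illustrative, and this is not the paper's local two-parameter CFAR implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Heavy-tailed sea-clutter amplitude model: symmetric alpha-stable with alpha < 2.
clutter = stats.levy_stable.rvs(1.5, 0.0, loc=0.0, scale=1.0, size=50_000, random_state=rng)

pfa_design = 1e-3                                    # desired false-alarm probability
# "Gaussian CFAR": threshold from a robust Gaussian fit to the typical clutter level.
mu = np.median(clutter)
sig = (np.quantile(clutter, 0.75) - np.quantile(clutter, 0.25)) / 1.349
thr_gauss = stats.norm.ppf(1 - pfa_design, loc=mu, scale=sig)
# Distribution-free alternative: empirical quantile of the clutter itself.
thr_emp = np.quantile(clutter, 1 - pfa_design)

print(f"Gaussian-model threshold {thr_gauss:.1f}: realized Pfa = {np.mean(clutter > thr_gauss):.4f}")
print(f"Empirical threshold      {thr_emp:.1f}: realized Pfa = {np.mean(clutter > thr_emp):.4f}")
```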
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three-body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
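A scalar toy version of the variational Gaussian approximation for a Poisson observation with a Gaussian prior is sketched below; the closed-form ELBO uses E_q[exp(x)] = exp(m + s^2/2), and the full linear forward operator, covariance structure and hierarchical extensions of the paper are omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Observation y ~ Poisson(exp(x)) with prior x ~ N(mu0, s0^2); approximate the posterior by
# q(x) = N(m, s^2) by maximizing the evidence lower bound (ELBO).
y, mu0, s0 = 7, 0.0, 1.0

def neg_elbo(params):
    m, log_s = params
    s2 = np.exp(2.0 * log_s)
    # E_q[y*x - exp(x)] uses E_q[exp(x)] = exp(m + s^2/2); the log(y!) term is constant in (m, s).
    expected_loglik = y * m - np.exp(m + 0.5 * s2)
    kl = 0.5 * ((s2 + (m - mu0) ** 2) / s0 ** 2 - 1.0 - np.log(s2 / s0 ** 2))
    return -(expected_loglik - kl)

res = minimize(neg_elbo, x0=[0.0, 0.0])
print(f"variational Gaussian: mean {res.x[0]:.3f}, std {np.exp(res.x[1]):.3f}")
```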
Mulkern, Robert V; Balasubramanian, Mukund; Mitsouras, Dimitrios
2014-07-30
The purpose of this study was to determine whether Lorentzian or Gaussian intra-voxel frequency distributions are better suited for modeling data acquired with gradient-echo sampling of single spin-echoes for the simultaneous characterization of irreversible and reversible relaxation rates. Clinical studies (e.g., of brain iron deposition) using such acquisition schemes have typically assumed Lorentzian distributions. Theoretical expressions of the time-domain spin-echo signal for intra-voxel Lorentzian and Gaussian distributions were used to fit data from a human brain scanned at both 1.5 Tesla (T) and 3T, resulting in maps of irreversible and reversible relaxation rates for each model. The relative merits of the Lorentzian versus Gaussian model were compared by means of quality of fit considerations. Lorentzian fits were equivalent to Gaussian fits primarily in regions of the brain where irreversible relaxation dominated. In the multiple brain regions where reversible relaxation effects become prominent, however, Gaussian fits were clearly superior. The widespread assumption that a Lorentzian distribution is suitable for quantitative transverse relaxation studies of the brain should be reconsidered, particularly at 3T and higher field strengths as reversible relaxation effects become more prominent. Gaussian distributions offer alternate fits of experimental data that should prove quite useful in general.
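A minimal sketch of the two competing reversible-dephasing models, exponential decay about the echo for a Lorentzian intra-voxel frequency distribution versus Gaussian decay for a Gaussian distribution, fitted to synthetic gradient-echo-sampled spin-echo data, is shown below. The signal forms, parameter values and noise level are my assumptions for illustration, not the paper's fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def lorentzian_model(t, s0, r2, r2p):
    """Irreversible decay exp(-R2*t) times exponential reversible decay (Lorentzian frequency dist.)."""
    return s0 * np.exp(-r2 * t) * np.exp(-r2p * np.abs(t))

def gaussian_model(t, s0, r2, sig):
    """Irreversible decay times Gaussian reversible decay (Gaussian frequency distribution)."""
    return s0 * np.exp(-r2 * t) * np.exp(-0.5 * (sig * t) ** 2)

# Gradient-echo samples at offsets t (s) around the spin echo; synthetic "Gaussian-type" voxel.
t = np.linspace(-0.03, 0.03, 31)
data = gaussian_model(t, 1.0, 12.0, 60.0) + rng.normal(0, 0.005, t.size)

for name, model in [("Lorentzian", lorentzian_model), ("Gaussian  ", gaussian_model)]:
    popt, _ = curve_fit(model, t, data, p0=(1.0, 10.0, 30.0))
    rss = np.sum((data - model(t, *popt)) ** 2)
    print(f"{name} fit: residual sum of squares = {rss:.2e}")
```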
NASA Astrophysics Data System (ADS)
Xiang, Shao-Hua; Wen, Wei; Zhao, Yu-Jing; Song, Ke-Hui
2018-04-01
We study the properties of the cumulants of multimode boson operators and introduce the phase-averaged quadrature cumulants as the measure of the non-Gaussianity of multimode quantum states. Using this measure, we investigate the non-Gaussianity of two classes of two-mode non-Gaussian states: photon-number entangled states and entangled coherent states traveling in a bosonic memory quantum channel. We show that such a channel can skew the distribution of two-mode quadrature variables, giving rise to a strongly non-Gaussian correlation. In addition, we provide a criterion to determine whether the distributions of these states are super- or sub-Gaussian.
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
Truncated Gaussians as tolerance sets
NASA Technical Reports Server (NTRS)
Cozman, Fabio; Krotkov, Eric
1994-01-01
This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in Statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
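A short example of the basic quantities involved, using scipy's truncnorm to obtain the mean, variance and samples of a Gaussian truncated to a fixed measurement window; the bounds and nominal moments here are illustrative, not taken from the paper.

```python
from scipy import stats

# A reading physically constrained to [0, 10], with nominal mean 8 and standard deviation 2.
lo, hi, mu, sigma = 0.0, 10.0, 8.0, 2.0
a, b = (lo - mu) / sigma, (hi - mu) / sigma          # standardized truncation limits

tg = stats.truncnorm(a, b, loc=mu, scale=sigma)
print(f"truncated mean {tg.mean():.3f}, truncated std {tg.std():.3f}")   # both shrink toward the window
print(tg.rvs(size=5, random_state=0))                # bounded draws for simulation or filtering tests
```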
Stochastic space interval as a link between quantum randomness and macroscopic randomness?
NASA Astrophysics Data System (ADS)
Haug, Espen Gaarder; Hoff, Harald
2018-03-01
For many stochastic phenomena, we observe statistical distributions that have fat-tails and high-peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian - or very close to Gaussian-distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three and a half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat-tails and high-peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential network delays, giving more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a …
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
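A one-dimensional miniature of the two-step recipe described above (the 2D case works the same way with 2D FFTs) is sketched below. The target spectrum and the exponential target marginal are arbitrary choices, and, as the abstract notes, the marginal transform only approximately preserves the desired correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 4096

# Step 1: color white Gaussian noise to a target power spectral density (here ~ 1/(1 + (f/f0)^2)).
white = rng.normal(0, 1, n)
f = np.fft.rfftfreq(n, d=1.0)
H = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)             # amplitude filter = sqrt of the target PSD shape
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                             # back to unit variance

# Step 2: map the Gaussian marginal into the desired one (here exponential, as for speckle intensity).
u = stats.norm.cdf(colored)                          # uniform marginal; correlation largely preserved
target = stats.expon.ppf(u, scale=2.0)               # desired amplitude probability density

print(f"target mean {target.mean():.2f} (expected 2.0), skewness {stats.skew(target):.2f}")
```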
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
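A hedged simulation of the standard elephant random walk (not the paper's variant with N fixed initial steps) is sketched below, showing the change in variance growth around the memory parameter p = 3/4.

```python
import numpy as np

rng = np.random.default_rng(8)

def elephant_walk(p, n_steps, n_walkers):
    """Standard elephant random walk: with prob. p repeat a uniformly chosen past step, else reverse it."""
    steps = np.empty((n_walkers, n_steps), dtype=np.int64)
    steps[:, 0] = rng.choice([-1, 1], n_walkers)               # the first step is random
    for t in range(1, n_steps):
        recalled = steps[np.arange(n_walkers), rng.integers(0, t, n_walkers)]
        repeat = rng.random(n_walkers) < p
        steps[:, t] = np.where(repeat, recalled, -recalled)
    return steps.cumsum(axis=1)

n_steps = 2000
for p in (0.5, 0.9):                                           # p < 3/4: normal diffusion; p > 3/4: anomalous
    x = elephant_walk(p, n_steps, 500)
    ratio = x[:, -1].var() / x[:, n_steps // 2].var()          # ~2 for linear variance growth, larger otherwise
    print(f"p={p}: Var(x_T)/Var(x_T/2) = {ratio:.2f}")
```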
Modified fundamental Airy wave.
Seshadri, S R
2014-01-01
The propagation characteristics of the fundamental Airy wave are obtained; the intensity distribution is the same as that for a point electric dipole situated at the origin and oriented normal to the propagation direction. The propagation characteristics of the modified fundamental Airy wave are determined. These characteristics are the same as those for the fundamental Gaussian wave provided that an equivalent waist is identified for the Airy wave. In general, the waves are localized spatially with the peak in the propagation direction.
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
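A minimal illustration of why a bounded-support model suits such data, comparing a Gaussian fit with a beta fit on synthetic methylation-like values in (0, 1); the beta model is a simple stand-in for the paper's non-Gaussian mixture models, and the data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Methylation-like "beta values": bounded in (0, 1) and strongly skewed toward 0.
x = rng.beta(a=0.8, b=6.0, size=3000)

# Gaussian fit versus beta fit, compared by total log-likelihood.
mu, sd = stats.norm.fit(x)
a_hat, b_hat, _, _ = stats.beta.fit(x, floc=0, fscale=1)

print(f"Gaussian log-likelihood: {stats.norm.logpdf(x, mu, sd).sum():.0f}")
print(f"Beta log-likelihood    : {stats.beta.logpdf(x, a_hat, b_hat).sum():.0f}  (higher is better)")
```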
Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis
Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-01-01
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687
Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid
2016-01-01
Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate posttreatment prediction is especially beneficial for clinicians, patients, care givers, and therapists. This paper introduces a prediction method for postoperative functional outcomes by a novel use of Gaussian process regression. The proposed method specifically considers the restricted value range of the target variables by modeling the Gaussian process based on a truncated Normal distribution, which significantly improves the prediction results. The prediction has been made in assistance with target tracking examinations using a highly portable and inexpensive handgrip device, which greatly contributes to the prediction performance. The proposed method has been validated through a dataset collected from a clinical cohort pilot involving 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, Oswestry disability index and target tracking scores, based on the patient's preoperative information with a mean absolute error of 0.079 and 0.014 (out of 1.0), respectively.
A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering
NASA Astrophysics Data System (ADS)
Milder, A. L.; Froula, D. H.
2017-10-01
A.B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Z v_osc^2 …
Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio
NASA Astrophysics Data System (ADS)
Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.
2017-12-01
Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian distribution and, moreover, a lognormal distribution. Previously, statistical testing procedures which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test which incorporated the results of the former tests were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate the necessity of a Data Assimilation (DA) system being able to properly use the lognormally-distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
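A small sketch of the sliding-window testing idea with scipy's Jarque-Bera and Shapiro-Wilk tests is given below; the synthetic series, window length and significance level are assumptions, not the GFS data or the composite test of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Toy series: a Gaussian "temperature-like" segment followed by a lognormal "mixing-ratio-like" one.
series = np.concatenate([rng.normal(285.0, 2.0, 500), rng.lognormal(1.0, 0.6, 500)])

window, step, alpha = 120, 60, 0.05
for start in range(0, series.size - window + 1, step):
    w = series[start:start + window]
    p_jb = stats.jarque_bera(w).pvalue
    p_sw = stats.shapiro(w).pvalue
    verdict = "non-Gaussian" if min(p_jb, p_sw) < alpha else "Gaussian-compatible"
    print(f"window at {start:4d}: JB p={p_jb:.3f}, SW p={p_sw:.3f} -> {verdict}")
```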
The distribution of cardiac troponin I in a population of healthy children: lessons for adults.
Koerbin, Gus; Potter, Julia M; Abhayaratna, Walter P; Telford, Richard D; Hickman, Peter E
2013-02-18
The aim of this study was to describe the distribution of hs-cTnI in a large cohort of healthy children. As part of the LOOK study, blood was collected from a large cohort of healthy children on 3 separate occasions when the children were aged 8, 10 and 12 years. Samples were stored at -80°C after collection and assayed after 1 freeze-thaw cycle using a pre-commercial release hs-cTnI assay from Abbott Diagnostics. More than 98% of the 12-year-old children had cTnI above the LoD of 1.0 ng/L. For the 212 boys, the central 95% of results was distributed in a Gaussian fashion. For the 237 girls, the initial analysis was non-Gaussian, but after the elimination of 2 results, the pattern for girls was also Gaussian. In healthy children, cTnI is present in a Gaussian distribution. Even minor illnesses can cause some troponin release, distorting this Gaussian distribution.
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as an unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with distribution tails. These risks are quantitatively measured by defining the “noise kernel”, an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
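A numpy sketch of the noise-kernel construction, splitting return vectors by an ellipsoidal (Mahalanobis) threshold and estimating a fluctuation covariance inside the kernel and a drawdown covariance outside it, is given below; the toy returns and threshold value are assumptions, not the MSCI data or calibration of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy daily returns for 3 assets: a Gaussian core plus occasional joint drawdowns.
n, p = 2000, 3
returns = rng.normal(0, 0.01, (n, p))
crash = rng.random(n) < 0.02
returns[crash] -= rng.exponential(0.03, (crash.sum(), p))

mu = returns.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(returns, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', returns - mu, inv_cov, returns - mu)   # squared Mahalanobis distance

threshold = 7.8                          # ellipsoid size (~95th percentile of chi-square with 3 d.o.f.)
inside, outside = returns[d2 <= threshold], returns[d2 > threshold]

fluct_cov = np.cov(inside, rowvar=False)     # Gaussian-like fluctuation risk
draw_cov = np.cov(outside, rowvar=False)     # tail / drawdown risk
print("fraction of days outside the noise kernel:", round(len(outside) / n, 3))
print("fluctuation vol (asset 1):", round(np.sqrt(fluct_cov[0, 0]), 4))
print("drawdown vol    (asset 1):", round(np.sqrt(draw_cov[0, 0]), 4))
```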
NASA Astrophysics Data System (ADS)
Magalhães, S.; Fialho, M.; Peres, M.; Lorenz, K.; Alves, E.
2016-04-01
In this work radial symmetric x-ray diffraction scans of Al0.15Ga0.85N thin films implanted with Tm ions were measured to determine the lattice deformation and crystal quality as functions of depth. The alloys were implanted with 300 keV Tm with 10° off-set to the sample normal to avoid channelling, with fluences varying between 1013 Tm cm-2 and 5 × 1015 Tm cm-2. Simulations of the radial 2θ-ω scans were performed under the frame of the dynamical theory of x-ray diffraction assuming Gaussian distributions of the lattice strain induced by implantation defects. The structure factor of the individual layers is multiplied by a static Debye-Waller factor in order to take into account the effect of lattice disorder due to implantation. For higher fluences two asymmetric Gaussians are required to describe well the experimental diffractograms, although a single asymmetric Gaussian profile for the deformation is found in the sample implanted with 1013 Tm cm-2. After thermal treatment at 1200 °C, the crystal quality partially recovers as seen in a reduction of the amplitude of the deformation maximum as well as the total thickness of the deformed layer. Furthermore, no evidence of changes with respect to the virgin crystal mosaicity is found after implantation and annealing.
NASA Astrophysics Data System (ADS)
Dong, Yijun
Research on measuring the risk of a bond portfolio and on portfolio optimization was previously relatively rare, because the risk factors of bond portfolios are not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show properties of fat-tailedness and asymmetry, like the risk factors of equity portfolios. Therefore, we need to use advanced techniques to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average-generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict the risk factors of U.S. Treasury bonds and statistically demonstrate that the MNTS distribution has the ability to capture the properties of the risk factors, based on goodness-of-fit tests. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming a normal tempered stable distribution are more realistic and reliable than those estimated by assuming a normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize the portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performances than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming a normal tempered stable distribution have improved performances in comparison to the optimized bond portfolios obtained by assuming a normal distribution.
On the robustness of the q-Gaussian family
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio; Rodríguez, Antonio; Tsallis, Constantino
2015-12-01
We introduce three deformations, called α-, β- and γ-deformation respectively, of a N-body probabilistic model, first proposed by Rodríguez et al. (2008), having q-Gaussians as N → ∞ limiting probability distributions. The proposed α- and β-deformations are asymptotically scale-invariant, whereas the γ-deformation is not. We prove that, for both α- and β-deformations, the resulting deformed triangles still have q-Gaussians as limiting distributions, with a value of q independent (dependent) on the deformation parameter in the α-case (β-case). In contrast, the γ-case, where we have used the celebrated Q-numbers and the Gauss binomial coefficients, yields other limiting probability distribution functions, outside the q-Gaussian family. These results suggest that scale-invariance might play an important role regarding the robustness of the q-Gaussian family.
Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo
2011-01-01
Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. The microalbuminuria and macroalbuminuria levels were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or larger UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
ERIC Educational Resources Information Center
Steinhauser, Marco; Hubner, Ronald
2009-01-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…
Hamby, D M
2002-01-01
Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
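For reference, a minimal implementation of the straight-line Gaussian plume concentration with ground reflection is sketched below; the emission rate, wind speed and dispersion parameters are placeholders (in practice σy and σz come from stability-class curves and the joint frequency distribution).

```python
import numpy as np

def gaussian_plume_conc(Q, u, H, y, z, sigma_y, sigma_z):
    """Straight-line Gaussian plume concentration (g/m^3) with ground reflection.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m)."""
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = np.exp(-0.5 * ((z - H) / sigma_z) ** 2) + np.exp(-0.5 * ((z + H) / sigma_z) ** 2)
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration 1 km downwind; sigma values are placeholders.
C = gaussian_plume_conc(Q=10.0, u=4.0, H=50.0, y=0.0, z=0.0, sigma_y=80.0, sigma_z=40.0)
print(f"concentration ~ {C:.2e} g/m^3")
```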
Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data
NASA Astrophysics Data System (ADS)
Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.
2017-12-01
The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with Gaussian) based microwave temperature and mixing ratio retrieval system, as well as the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data are from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in distribution, we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as by educators. However, because the new retrieved values come from a non-Gaussian-based 1DVAR, they display non-Gaussian behaviors that must pass a quality-control measure consistent with this distribution; these new measures are presented here along with initial results for checking the retrievals.
Quantitative genetic methods depending on the nature of the phenotypic trait.
de Villemereuil, Pierre
2018-01-24
A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those interesting for evolutionary biology, might be shaped according to very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits using first the threshold model and then more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, the exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of the coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis makes it possible to draw a conclusion about the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
NASA Astrophysics Data System (ADS)
Zhang, Haili; Jiang, Zhijin; Li, Qingguang; Jiang, Guanxiang
2014-02-01
By using the revised Landau hydrodynamic model and taking into account the effect of leading particles, we discuss the pseudorapidity distributions of the charged particles produced in high-energy heavy-ion collisions. The leading particles are assumed to have the rapidity distributions with Gaussian forms with the normalization constant being equal to the number of participants, which can be figured out in theory. The results from the revised Landau hydrodynamic model, together with the contributions from leading particles, were found to be consistent with the experimental data obtained by the PHOBOS Collaboration on RHIC (Relativistic Heavy Ion Collider) at BNL (Brookhaven National Laboratory) in different centrality Cu+Cu and Au+Au collisions at high energies.
Laloš, Jernej; Babnik, Aleš; Možina, Janez; Požar, Tomaž
2016-03-01
The near-field, surface-displacement waveforms in plates are modeled using interwoven concepts of Green's function formalism and streamlined Huygens' principle. Green's functions resemble the building blocks of the sought displacement waveform, superimposed and weighted according to the simplified distribution. The approach incorporates an arbitrary circular spatial source distribution and an arbitrary circular spatial sensitivity in the area probed by the sensor. The displacement histories for uniform, Gaussian and annular normal-force source distributions and the uniform spatial sensor sensitivity are calculated, and the corresponding weight distributions are compared. To demonstrate the applicability of the developed scheme, measurements of laser ultrasound induced solely by the radiation pressure are compared with the calculated waveforms. The ultrasound is induced by laser pulse reflection from the mirror-surface of a glass plate. The measurements show excellent agreement not only with respect to various wave-arrivals but also in the shape of each arrival. Their shape depends on the beam profile of the excitation laser pulse and its corresponding spatial normal-force distribution.
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wulff, J; Huggins, A
Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). The impact of slight deviations from an ideal Gaussian on resulting dose distributions is typically assumed to be small due to alleviation by multiple Coulomb scattering (MCS) in tissue and superposition of many spots. Quantitative limits are however not clear per se. Methods: A set of 1250 deliberately deformed profiles with sigma=4 mm for a Gaussian fit were constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and full-width at different percentages of maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random) resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test. Results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD<10%, deviation in FWxM at 20% and root of quadratic sum of SK <0.8. As expected, the results improve for larger depths. The trends were well reproduced by the measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given the fact that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and design of future beam-line optics.
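A brute-force 1D sketch of the gamma-test used above to compare reference and evaluated dose profiles is given below (global normalization, 2%/2 mm criteria); the profiles are synthetic placeholders and this is not the clinical analysis code.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit=0.02, dist_crit=2.0):
    """Brute-force 1D gamma index with global dose normalization.
    dose_crit: fractional dose criterion (e.g. 2%), dist_crit: distance to agreement (mm)."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / (dose_crit * d_max)      # normalized dose difference
        dx = (x - xi) / dist_crit                        # normalized spatial distance
        gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gamma

# Reference: ideal Gaussian spot profile; evaluated: slightly shifted and widened profile.
x = np.linspace(-20, 20, 401)                            # mm
ref = np.exp(-0.5 * (x / 4.0) ** 2)
ev = np.exp(-0.5 * ((x - 0.5) / 4.3) ** 2)
g = gamma_1d(x, ref, ev)
print(f"gamma pass rate (2%/2 mm): {np.mean(g <= 1.0) * 100:.1f}%")
```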
ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).
Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew
2016-09-01
This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory, thus fitting the inclusion criteria, and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution) with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
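A minimal Python sketch of the statistical steps described (Shapiro-Wilk test plus a 95% confidence interval of the mean), using simulated values rather than the actual turtle data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fibrinogen = rng.normal(108, 30, size=42)          # illustrative values in mg/dl, not the study data

W, p = stats.shapiro(fibrinogen)                   # p > 0.05: no evidence against normality
mean = fibrinogen.mean()
ci_low, ci_high = stats.t.interval(0.95, df=len(fibrinogen) - 1,
                                   loc=mean, scale=stats.sem(fibrinogen))
# Note: a 95% reference interval (not the CI of the mean) would instead be roughly mean +/- 1.96*SD.
print(f"Shapiro-Wilk p={p:.3f}, mean={mean:.1f}, 95% CI of mean=({ci_low:.1f}, {ci_high:.1f})")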
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
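The bootstrap comparison can be sketched in a few lines; the example below uses Python (the study itself used SAS) and a simplified two-group design with invented log(AUC) values rather than the crossover analysis of the archived datasets.

# Hedged sketch: parametric t-based 90% CI vs bootstrap percentile 90% CI for
# the test/reference ratio on the log(AUC) scale (simplified parallel design).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
log_auc_test = np.log(rng.lognormal(mean=3.0, sigma=0.25, size=24))   # hypothetical test formulation
log_auc_ref  = np.log(rng.lognormal(mean=2.95, sigma=0.25, size=24))  # hypothetical reference

diff = log_auc_test.mean() - log_auc_ref.mean()
se = np.sqrt(log_auc_test.var(ddof=1) / 24 + log_auc_ref.var(ddof=1) / 24)
t90 = stats.t.ppf(0.95, df=46)
parametric_ci = (np.exp(diff - t90 * se), np.exp(diff + t90 * se))    # back-transformed ratio CI

boot = []
for _ in range(1000):
    bt = rng.choice(log_auc_test, size=24, replace=True)
    br = rng.choice(log_auc_ref, size=24, replace=True)
    boot.append(bt.mean() - br.mean())
bootstrap_ci = tuple(np.exp(np.percentile(boot, [5, 95])))            # nonparametric percentile CI

print(parametric_ci, bootstrap_ci)   # bioequivalence typically requires the ratio CI within 0.80-1.25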
Comparison of Parametric and Bootstrap Method in Bioequivalence Test
Ahn, Byung-Jin
2009-01-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests are based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods.
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and a Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, for a given autocorrelation function, the mean and the variance of the number of overshoots, and hence a frequency distribution for overshoots, can be estimated.
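A minimal Python sketch of the simulation side of such an analysis: generate a stationary Gaussian process with an exponential autocorrelation (an AR(1) construction) and count upward crossings of a threshold. The autocorrelation, threshold and length are assumed for illustration.

import numpy as np

rng = np.random.default_rng(42)
n, rho = 100_000, 0.95                      # length and lag-1 autocorrelation (assumed)
mean, std, level = 0.0, 1.0, 2.0            # process mean/std and threshold

x = np.empty(n)
x[0] = rng.normal()
innov = rng.normal(size=n) * np.sqrt(1 - rho**2)
for i in range(1, n):                       # AR(1): stationary, Gaussian, exponential autocorrelation
    x[i] = rho * x[i - 1] + innov[i]
x = mean + std * x

above = x > level
overshoots = np.count_nonzero(~above[:-1] & above[1:])   # upward crossings of the level
print(overshoots, overshoots / n)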
NASA Astrophysics Data System (ADS)
Lasuik, J.; Shalchi, A.
2018-06-01
In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although there have been known cases for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are less significant in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.
Kang; Ih; Kim; Kim
2000-03-01
In this study, a new prediction method is suggested for sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approaches often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy to predict the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations by using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by the Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between the measurements and the predictions show good agreement, which validates the proposed Gaussian function approach.
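The weighting idea can be sketched as follows; this hedged Python example averages an angle-dependent transmission coefficient over a Gaussian directional distribution of incident energy. The single-panel mass law is used only as a placeholder for a real multilayer model, and all parameter values (surface mass, limiting angle, Gaussian width) are assumptions, not values from the paper.

import numpy as np

rho0_c = 415.0            # characteristic impedance of air, Pa*s/m (approx.)
m = 10.0                  # panel surface mass, kg/m^2 (assumed)
f = 1000.0                # frequency, Hz
omega = 2 * np.pi * f

def tau(theta):
    # mass-law transmission coefficient of a single limp panel at incidence theta (placeholder model)
    return 1.0 / (1.0 + (omega * m * np.cos(theta) / (2.0 * rho0_c)) ** 2)

theta = np.linspace(0.0, np.radians(78), 2000)       # limiting angle (assumed)
sigma = np.radians(35)                               # width of the Gaussian angular weighting (assumed)
w = np.exp(-0.5 * (theta / sigma) ** 2) * np.sin(theta)   # Gaussian weight times solid-angle factor

# Energy-weighted average transmission coefficient and the resulting STL
tau_avg = np.sum(tau(theta) * w * np.cos(theta)) / np.sum(w * np.cos(theta))
print("STL =", -10 * np.log10(tau_avg), "dB")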
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite N corrections of the above distributions, which are important for practical applications, given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowie, L.L.; Hu, E.M.
1986-06-01
The velocities of 38 centrally positioned galaxies (r much less than 100 kpc) were measured relative to the velocity of the first-ranked galaxy in 14 rich clusters. Analysis of the velocity distribution function of this sample and of previous data shows that the population cannot be fit by a single Gaussian. An adequate fit is obtained if 60 percent of the objects lie in a Gaussian with sigma = 250 km/s and the remainder in a population with sigma = 1400 km/s. All previous data sets are individually consistent with this conclusion. This suggests that there is a bound population of galaxies in the potential well of the central galaxy in addition to the normal population of the cluster core. This is taken as supporting evidence for the galactic cannibalism model of cD galaxy formation. 14 references.
NASA Technical Reports Server (NTRS)
Cowie, L. L.; Hu, E. M.
1986-01-01
The velocities of 38 centrally positioned galaxies (r much less than 100 kpc) were measured relative to the velocity of the first-ranked galaxy in 14 rich clusters. Analysis of the velocity distribution function of this sample and of previous data shows that the population cannot be fit by a single Gaussian. An adequate fit is obtained if 60 percent of the objects lie in a Gaussian with sigma = 250 km/s and the remainder in a population with sigma = 1400 km/s. All previous data sets are individually consistent with this conclusion. This suggests that there is a bound population of galaxies in the potential well of the central galaxy in addition to the normal population of the cluster core. This is taken as supporting evidence for the galactic cannibalism model of cD galaxy formation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
S., Juan Manuel Franco; Cywiak, Moises; Cywiak, David
2015-06-24
A homodyne profiler is used for recording the intensity distribution of focused non-truncated Gaussian beams. The spatial distributions are obtained at planes in the vicinity of the back-focal plane of a focusing lens placed at different distances from a He–Ne laser beam with a Gaussian intensity profile. Comparisons of the experimental data with those obtained from the analytical equations for an ideal focusing lens allow us to propose formulae to fine-tune the quadratic term in the Fresnel Gaussian shape invariant at each interface of the propagated field. Furthermore, we give analytical expressions to calculate adequately the propagation of the field through an optical system.
NASA Astrophysics Data System (ADS)
Rosales-Zárate, L.; Teh, R. Y.; Opanchuk, B.; Reid, M. D.
2017-08-01
We consider three modes A, B, and C and derive monogamy inequalities that constrain the distribution of bipartite continuous variable Einstein-Podolsky-Rosen entanglement amongst the three modes. The inequalities hold without the assumption of Gaussian states, and are based on measurements of the quadrature phase amplitudes X_i and P_i at each mode i = A, B, C. The first monogamy inequality involves the well-known quantity D_IJ defined by Duan-Giedke-Cirac-Zoller as the sum of the variances of (X_I - X_J)/2 and (P_I + P_J)/2, where [X_I, P_J] = δ_IJ. Entanglement between I and J is certified if D_IJ < 1. A second monogamy inequality involves the more general entanglement certifier Ent_IJ, defined as the normalized product of the variances of X_I - g X_J and P_I + g P_J, where g is a real constant. The monogamy inequalities give a lower bound on the values of D_BC and Ent_BC for one pair, given the values D_BA and Ent_BA for the first pair. This lower bound changes in the absence of two-mode Gaussian steering of B. We illustrate for a range of tripartite entangled states, identifying regimes of saturation of the inequalities. The monogamy relations explain without the assumption of Gaussianity the experimentally observed saturation at D_AB = 0.5, where there is symmetry between modes A and C.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio, the within-source distribution is assumed to be normally distributed and constant among different sources and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
The area of isodensity contours in cosmological models and galaxy surveys
NASA Technical Reports Server (NTRS)
Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.
Stress-induced electric current fluctuations in rocks: a superstatistical model
NASA Astrophysics Data System (ADS)
Cartwright-Taylor, Alexis; Vallianatos, Filippos; Sammonds, Peter
2017-04-01
We recorded spontaneous electric current flow in non-piezoelectric Carrara marble samples during triaxial deformation. Mechanical data, ultrasonic velocities and acoustic emissions were acquired simultaneously with electric current to constrain the relationship between electric current flow, differential stress and damage. Under strain-controlled loading, spontaneous electric current signals (nA) were generated and sustained under all conditions tested. In dry samples, a detectable electric current arises only during dilatancy and the overall signal is correlated with the damage induced by microcracking. Our results show that fracture plays a key role in the generation of electric currents in deforming rocks (Cartwright-Taylor et al., in prep). We also analysed the high-frequency fluctuations of these electric current signals and found that they are not normally distributed - they exhibit power-law tails (Cartwright-Taylor et al., 2014). We modelled these distributions with q-Gaussian statistics, derived by maximising the Tsallis entropy. This definition of entropy is particularly applicable to systems which are strongly correlated and far from equilibrium. Good agreement, at all experimental conditions, between the distributions of electric current fluctuations and the q-Gaussian function with q-values far from one, illustrates the highly correlated, fractal nature of the electric source network within the samples and provides further evidence that the source of the electric signals is the developing fractal network of cracks. It has been shown (Beck, 2001) that q-Gaussian distributions can arise from the superposition of local relaxations in the presence of a slowly varying driving force, thus providing a dynamic reason for the appearance of Tsallis statistics in systems with a fluctuating energy dissipation rate. So, the probability distribution for a dynamic variable, u, under some external slow forcing, β, can be obtained as a superposition of temporary local equilibrium processes whose variance fluctuates over time. The appearance of q-Gaussian statistics is caused by the fluctuating β parameter, which effectively models the fluctuating energy dissipation rate in the system. This concept is known as superstatistics and is physically relevant for modelling driven non-equilibrium systems where the environmental conditions fluctuate on a large scale. The idea is that the environmental variable, such as temperature or pressure, changes so slowly that a rapidly fluctuating variable within that environment has time to relax back to equilibrium between each change in the environment. The application of superstatistical techniques to our experimental electric current fluctuations shows that they can indeed be described, to good approximation, by the superposition of local Gaussian processes with fluctuating variance. We conclude, then, that the measured electric current fluctuates in response to intermittent energy dissipation and is driven to varying temporary local equilibria during deformation by the variations in stress intensity. The advantage of this technique is that, once the model has been established to be a good description of the system in question, the average β parameter (a measure of the average energy dissipation rate) for the system can be obtained simply from the macroscopic q-Gaussian distribution parameters.
NASA Astrophysics Data System (ADS)
Weitzen, J. A.; Bourque, S.; Ostergaard, J. C.; Bench, P. M.; Baily, A. D.
1991-04-01
Analysis of data from recent experiments leads to the observation that distributions of underdense meteor trail peak signal amplitudes differ from classic predictions. In this paper the distribution of trail amplitudes in decibels relative to 1 W (dBW) is considered, and it is shown that Lindeberg's theorem can be used to apply central limit arguments to this problem. It is illustrated that a Gaussian model for the distribution of the logarithm of the peak received signal level of underdense trails provides a better fit to data than classic approaches. Distributions of underdense meteor trail amplitudes at five frequencies are compared to a Gaussian distribution and the classic model. Implications of the Gaussian assumption on the design of communication systems are discussed.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
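The first step of the algorithm described above (modelling the gray-level histogram with a Gaussian mixture and locating the intersection of adjacent weighted components) can be sketched as follows; the synthetic gray levels and the two-component mixture are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
gray = np.concatenate([rng.normal(60, 10, 4000),
                       rng.normal(150, 25, 6000)]).clip(0, 255)   # synthetic gray levels

gmm = GaussianMixture(n_components=2, random_state=0).fit(gray.reshape(-1, 1))
order = np.argsort(gmm.means_.ravel())
w = gmm.weights_[order]
mu = gmm.means_.ravel()[order]
sd = np.sqrt(gmm.covariances_.ravel()[order])

def weighted_pdf(x, k):
    # weight_k * N(x; mu_k, sd_k)
    return w[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / (sd[k] * np.sqrt(2 * np.pi))

# Intersection of the two weighted Gaussians, found numerically between the means;
# such points partition the dynamic range into input gray-level intervals.
xs = np.linspace(mu[0], mu[1], 10_000)
cross = xs[np.argmin(np.abs(weighted_pdf(xs, 0) - weighted_pdf(xs, 1)))]
print("gray-level partition point:", cross)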
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
Gyrator transform of Gaussian beams with phase difference and generation of hollow beam
NASA Astrophysics Data System (ADS)
Xiao, Zhiyu; Xia, Hui; Yu, Tao; Xie, Ding; Xie, Wenke
2018-03-01
The optical expression of Gaussian beams with phase difference, which is caused by gyrator transform (GT), has been obtained. The intensity and phase distribution of transform Gaussian beams are analyzed. It is found that the circular hollow vortex beam can be obtained by overlapping two GT Gaussian beams with π phase difference. The effect of parameters on the intensity and phase distributions of the hollow vortex beam are discussed. The results show that the shape of intensity distribution is significantly influenced by GT angle α and propagation distance z. The size of the hollow vortex beam can be adjusted by waist width ω 0. Compared with previously reported results, the work shows that the hollow vortex beam can be obtained without any model conversion of the light source.
Gyrator transform of Gaussian beams with phase difference and generation of hollow beam
NASA Astrophysics Data System (ADS)
Xiao, Zhiyu; Xia, Hui; Yu, Tao; Xie, Ding; Xie, Wenke
2018-06-01
The optical expression of Gaussian beams with phase difference, which is caused by gyrator transform (GT), has been obtained. The intensity and phase distribution of transform Gaussian beams are analyzed. It is found that the circular hollow vortex beam can be obtained by overlapping two GT Gaussian beams with π phase difference. The effect of parameters on the intensity and phase distributions of the hollow vortex beam are discussed. The results show that the shape of intensity distribution is significantly influenced by GT angle α and propagation distance z. The size of the hollow vortex beam can be adjusted by waist width ω 0. Compared with previously reported results, the work shows that the hollow vortex beam can be obtained without any model conversion of the light source.
New method for calculating the coupling coefficient in graded index optical fibers
NASA Astrophysics Data System (ADS)
Savović, Svetislav; Djordjevich, Alexandar
2018-05-01
A simple method is proposed for determining the mode coupling coefficient D in graded index multimode optical fibers. It only requires observation of the output modal power distribution P(m, z) for one fiber length z as the Gaussian launching modal power distribution changes, with the Gaussian input light distribution centered along the graded index optical fiber axis (θ0 = 0) without radial offset (r0 = 0). We previously proposed a similar method for calculating the coupling coefficient D in step-index multimode optical fibers, in which the output angular power distribution P(θ, z) for one fiber length z, with the Gaussian input light distribution launched centrally along the step-index optical fiber axis (θ0 = 0), needs to be known.
NASA Astrophysics Data System (ADS)
Aygun, M.; Kucuk, Y.; Boztosun, I.; Ibraheem, Awad A.
2010-12-01
The elastic scattering angular distributions of 6He projectile on different medium and heavy mass target nuclei including 12C, 27Al, 58Ni, 64Zn, 65Cu, 197Au, 208Pb and 209Bi have been examined by using the few-body and Gaussian-shaped density distributions at various energies. The microscopic real parts of the complex nuclear optical potential have been obtained by using the double-folding model for each of the density distributions and the phenomenological imaginary potentials have been taken as the Woods-Saxon type. Comparative results of the few-body and Gaussian-shaped density distributions together with the experimental data are presented within the framework of the optical model.
NASA Astrophysics Data System (ADS)
Marrocco, Michele
2007-11-01
Fluorescence correlation spectroscopy is fundamental in many physical, chemical and biological studies of molecular diffusion. However, the concept of fluorescence correlation is founded on the assumption that the analytical description of the correlation decay of diffusion can be achieved if the spatial profile of the detected volume obeys a three-dimensional Gaussian distribution. In the present Letter, the analytical result is instead proven for the fundamental Gaussian-Lorentzian profile.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
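For readers unfamiliar with the technique, the following hedged Python sketch shows a minimal bootstrap particle filter on a one-dimensional toy system (not an orbit model): particles are propagated through the dynamics, weighted by the measurement likelihood, resampled, and the posterior mean is read off the weighted particle cloud. All dynamics and noise levels are invented for illustration.

import numpy as np

rng = np.random.default_rng(7)
T, N = 50, 2000                      # time steps, particles
q, r = 0.5, 1.0                      # process / measurement noise std (assumed)

def f(x):
    # nonlinear state transition (toy example, not an orbit propagator)
    return 0.9 * x + 2.0 * np.sin(0.3 * x)

# Simulate a "true" trajectory and noisy measurements
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + q * rng.normal()
y = x_true + r * rng.normal(size=T)

# Bootstrap particle filter: propagate, weight by the measurement likelihood, resample
particles = rng.normal(0.0, 2.0, N)
estimate = np.zeros(T)
for t in range(T):
    particles = f(particles) + q * rng.normal(size=N)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimate[t] = np.sum(w * particles)                 # posterior mean from the particle PDF
    particles = rng.choice(particles, size=N, p=w)      # multinomial resampling

print("RMS error:", np.sqrt(np.mean((estimate - x_true) ** 2)))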
Kota, V K B; Chavda, N D; Sahu, R
2006-04-01
Interacting many-particle systems with a mean-field one-body part plus a chaos generating random two-body interaction having strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, as valid in the Gaussian domain, with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends below from the Gaussian regime to the BW regime up to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
Elegant Ince-Gaussian beams in a quadratic-index medium
NASA Astrophysics Data System (ADS)
Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi
2011-09-01
Elegant Ince-Gaussian beams, which are the exact solutions of the paraxial wave equation in a quadratic-index medium, are derived in elliptical coordinates. These kinds of beams are the alternative form of standard Ince-Gaussian beams and they display better symmetry between the Ince-polynomials and the Gaussian function in mathematics. The transverse intensity distribution and the phase of the elegant Ince-Gaussian beams are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
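A hedged Python sketch of the transformation described above: synthesize a Gaussian time history with a prescribed ASD, then pass it through the monotonic map target_ppf(norm_cdf(·)) so that the output follows the desired non-Gaussian CDF while the spectrum is largely preserved. The band-limited ASD shape and the log-normal target are illustrative choices, not those of the cited method.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, fs = 2**16, 1024.0
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Prescribed one-sided ASD shape (band-limited, arbitrary units; assumed)
asd = np.where((freqs > 20) & (freqs < 200), 1.0, 0.0)

# Gaussian realization with that spectrum: shape complex white noise in the frequency domain
spec = (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)) * np.sqrt(asd)
gauss = np.fft.irfft(spec, n)
gauss = (gauss - gauss.mean()) / gauss.std()

# Monotonic zero-memory transformation imposing the target (non-Gaussian) CDF
target = stats.lognorm(s=0.5, scale=1.0)
x = target.ppf(stats.norm.cdf(gauss))
x -= x.mean()                                   # zero-mean realization, as in the text

print("skewness:", stats.skew(x), "kurtosis:", stats.kurtosis(x))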
Clark, James E; Osborne, Jason W; Gallagher, Peter; Watson, Stuart
2016-07-01
Neuroendocrine data are typically positively skewed and rarely conform to the expectations of a Gaussian distribution. This can be a problem when attempting to analyse results within the framework of the general linear model, which relies on assumptions that residuals in the data are normally distributed. One frequently used method for handling violations of this assumption is to transform variables to bring residuals into closer alignment with assumptions (as residuals are not directly manipulated). This is often attempted through ad hoc traditional transformations such as square root, log and inverse. However, Box and Cox (Box & Cox, ) observed that these are all special cases of power transformations and proposed a more flexible method of transformation for researchers to optimise alignment with assumptions. The goal of this paper is to demonstrate the benefits of the infinitely flexible Box-Cox transformation on neuroendocrine data using syntax in SPSS. When applied to positively skewed data typical of neuroendocrine data, the majority (~2/3) of cases were brought into strict alignment with a Gaussian distribution (i.e. a non-significant Shapiro-Wilk test). Those unable to meet this challenge showed substantial improvement in distributional properties. The biggest challenge was distributions with a high ratio of kurtosis to skewness. We discuss how these cases might be handled, and we highlight some of the broader issues associated with transformation. Copyright © 2016 John Wiley & Sons, Ltd.
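In Python (rather than the SPSS syntax the paper provides), the Box-Cox step can be sketched as follows, with simulated positively skewed data standing in for real neuroendocrine measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
hormone = rng.lognormal(mean=1.0, sigma=0.6, size=120)       # illustrative skewed, strictly positive data

transformed, lam = stats.boxcox(hormone)                     # lambda chosen by maximum likelihood
W_before, p_before = stats.shapiro(hormone)
W_after, p_after = stats.shapiro(transformed)
print("estimated lambda:", lam)
print("Shapiro-Wilk p before:", p_before, "after:", p_after)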
Normal modes and mode transformation of pure electron vortex beams
Thirunavukkarasu, G.; Mousley, M.; Babiker, M.
2017-01-01
Electron vortex beams constitute the first class of matter vortex beams which are currently routinely produced in the laboratory. Here, we briefly review the progress of this nascent field and put forward a natural quantum basis set which we show is suitable for the description of electron vortex beams. The normal modes are truncated Bessel beams (TBBs) defined in the aperture plane or the Fourier transform of the transverse structure of the TBBs (FT-TBBs) in the focal plane of a lens with the said aperture. As these modes are eigenfunctions of the axial orbital angular momentum operator, they can provide a complete description of the two-dimensional transverse distribution of the wave function of any electron vortex beam in such a system, in analogy with the prominent role Laguerre–Gaussian (LG) beams played in the description of optical vortex beams. The characteristics of the normal modes of TBBs and FT-TBBs are described, including the quantized orbital angular momentum (in terms of the winding number l) and the radial index p>0. We present the experimental realization of such beams using computer-generated holograms. The mode analysis can be carried out using astigmatic transformation optics, demonstrating close analogy with the astigmatic mode transformation between LG and Hermite–Gaussian beams. This article is part of the themed issue ‘Optical orbital angular momentum’. PMID:28069769
Normal modes and mode transformation of pure electron vortex beams.
Thirunavukkarasu, G; Mousley, M; Babiker, M; Yuan, J
2017-02-28
Electron vortex beams constitute the first class of matter vortex beams which are currently routinely produced in the laboratory. Here, we briefly review the progress of this nascent field and put forward a natural quantum basis set which we show is suitable for the description of electron vortex beams. The normal modes are truncated Bessel beams (TBBs) defined in the aperture plane or the Fourier transform of the transverse structure of the TBBs (FT-TBBs) in the focal plane of a lens with the said aperture. As these modes are eigenfunctions of the axial orbital angular momentum operator, they can provide a complete description of the two-dimensional transverse distribution of the wave function of any electron vortex beam in such a system, in analogy with the prominent role Laguerre-Gaussian (LG) beams played in the description of optical vortex beams. The characteristics of the normal modes of TBBs and FT-TBBs are described, including the quantized orbital angular momentum (in terms of the winding number l) and the radial index p>0. We present the experimental realization of such beams using computer-generated holograms. The mode analysis can be carried out using astigmatic transformation optics, demonstrating close analogy with the astigmatic mode transformation between LG and Hermite-Gaussian beams. This article is part of the themed issue 'Optical orbital angular momentum'. © 2017 The Author(s).
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
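As a reminder of how an attributes (binomial) single-sampling plan is evaluated, the short sketch below computes the operating characteristic and the implied producer's and consumer's risks for an assumed plan and assumed quality levels; the numbers are illustrative, not those of the plans tested in the paper.

from scipy.stats import binom

n, c = 125, 3              # sample size and acceptance number (illustrative plan)
AQL, LTPD = 0.01, 0.05     # acceptable / limiting fraction defective (assumed)

def p_accept(p):
    # probability of accepting a lot whose true fraction defective is p
    return binom.cdf(c, n, p)

alpha = 1 - p_accept(AQL)  # producer's risk (Type I)
beta = p_accept(LTPD)      # consumer's risk (Type II)
print(f"P(accept|AQL)={p_accept(AQL):.3f}, alpha={alpha:.3f}, beta={beta:.3f}")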
Lodewyck, Jérôme; Debuisschert, Thierry; García-Patrón, Raúl; Tualle-Brouri, Rosa; Cerf, Nicolas J; Grangier, Philippe
2007-01-19
An intercept-resend attack on a continuous-variable quantum-key-distribution protocol is investigated experimentally. By varying the interception fraction, one can implement a family of attacks where the eavesdropper totally controls the channel parameters. In general, such attacks add excess noise in the channel, and may also result in non-Gaussian output distributions. We implement and characterize the measurements needed to detect these attacks, and evaluate experimentally the information rates available to the legitimate users and the eavesdropper. The results are consistent with the optimality of Gaussian attacks resulting from the security proofs.
The Gaussian Laser Angular Distribution in HYDRA's 3D Laser Ray Trace Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepke, Scott M.
In this note, the angular distribution of rays launched by the 3D LZR ray trace package is derived for Gaussian beams (npower==2) with bm model=±3. Beams with bm model=+3 have a nearly flat distribution, and beams with bm model=-3 have a nearly linear distribution when the spot size is large compared to the wavelength.
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data is right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.
Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane
NASA Astrophysics Data System (ADS)
He, W.; Song, H.; Su, Y.; Geng, L.; Ackerson, B. J.; Peng, H. B.; Tong, P.
2016-05-01
The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows the Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics for normal Brownian diffusion. From a careful analysis of a large volume of the protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow-active remodelling of the underlying cortical actin network.
Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.
2016-06-07
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
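A hedged sketch of the scoring idea (not the patented system itself): fit a simple probability model to historical event features and score new events by their negative log-probability, so that low-probability events receive high anomaly scores. The Gaussian model, the feature and the threshold below are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
history = rng.normal(loc=500.0, scale=80.0, size=10_000)    # e.g. bytes per flow (illustrative)
mu, sd = history.mean(), history.std(ddof=1)

def anomaly_score(x):
    # negative log-probability under the fitted Gaussian: higher = more anomalous
    return -stats.norm.logpdf(x, loc=mu, scale=sd)

new_events = np.array([480.0, 520.0, 1200.0])
scores = anomaly_score(new_events)
threshold = np.quantile(anomaly_score(history), 0.999)      # tune to regulate the false-alert rate
print(scores, scores > threshold)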
Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation
NASA Technical Reports Server (NTRS)
Steinwolf, Alexander; Rizzi, Stephen A.
2003-01-01
The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1992-01-01
In the present growth-rate calculations for a protoplanet that is embedded in a disk of planetesimals with triaxial Gaussian velocity dispersion and uniform surface density, the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
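A quick Monte Carlo check of such results is straightforward; the sketch below estimates the CDF of a product of N standard Gaussian variables empirically, which can then be compared with the analytical expressions discussed above (not reproduced here).

import numpy as np

rng = np.random.default_rng(2)
N, samples = 3, 1_000_000
prod = np.prod(rng.standard_normal((samples, N)), axis=1)   # products of N standard Gaussians

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"P(product <= {z}) ≈ {np.mean(prod <= z):.4f}")  # empirical CDF at a few points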
Lin, H-Y; Gau, S S-F; Huang-Gu, S L; Shang, C-Y; Wu, Y-H; Tseng, W-Y I
2014-06-01
Increased intra-individual variability (IIV) in reaction time (RT) across various tasks is one ubiquitous neuropsychological finding in attention deficit hyperactivity disorder (ADHD). However, neurobiological underpinnings of IIV in individuals with ADHD have not yet been fully delineated. The ex-Gaussian distribution has been proved to capture IIV in RT. The authors explored the three parameters [μ (mu), σ (sigma), τ (tau)] of an ex-Gaussian RT distribution derived from the Conners' continuous performance test (CCPT) and their correlations with the microstructural integrity of the frontostriatal-caudate tracts and the cingulum bundles. We assessed 28 youths with ADHD (8-17 years; 25 males) and 28 age-, sex-, IQ- and handedness-matched typically developing (TD) youths using the CCPT, Wechsler Intelligence Scale for Children, 3rd edition, and magnetic resonance imaging (MRI). Microstructural integrity, indexed by generalized fractional anisotropy (GFA), was measured by diffusion spectrum imaging tractography on a 3-T MRI system. Youths with ADHD had larger σ (s.d. of Gaussian distribution) and τ (mean of exponential distribution) and reduced GFA in four bilateral frontostriatal tracts. With increasing inter-stimulus intervals in the CCPT, the extent to which τ was greater in ADHD than in TD increased. In ADHD youths, the cingulum bundles and frontostriatal integrity were associated with three ex-Gaussian parameters and with μ (mean of Gaussian distribution) and τ, respectively; while only frontostriatal GFA was associated with μ and τ in TD youths. Our findings suggest the crucial role of the integrity of the cingulum bundles in accounting for IIV in ADHD. Involvement of different brain systems in mediating IIV may relate to a distinctive pathophysiological processing and/or adaptive compensatory mechanism.
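The ex-Gaussian fit itself can be sketched with scipy, whose exponnorm distribution uses the shape parameter K = τ/σ; the simulated reaction times below are illustrative, not the CCPT data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma, tau = 400.0, 40.0, 120.0                      # ms, assumed "true" values
rt = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)   # ex-Gaussian = normal + exponential

K, loc, scale = stats.exponnorm.fit(rt)                  # maximum-likelihood fit
print("mu ≈", loc, "sigma ≈", scale, "tau ≈", K * scale)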
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, ×/ (times-divide), and notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, that is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both, recognition of data distributions, and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
Limpert, Eckhard; Stahel, Werner A
2011-01-01
The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, ×/ (times-divide), and notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, that is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both, recognition of data distributions, and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
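A minimal sketch of the x̄*/s* summary described above, assuming only positive data; the simulated sample and its parameters are illustrative.

```python
# Sketch: geometric mean x* and multiplicative standard deviation s* of positive data,
# obtained by back-transforming mean and SD computed on the log scale.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # illustrative skewed, positive data

log_x = np.log(x)
x_star = np.exp(log_x.mean())            # multiplicative (geometric) mean
s_star = np.exp(log_x.std(ddof=1))       # multiplicative standard deviation (dimensionless)

# Roughly 68% of the data fall in [x*/s*, x*.s*], ~95% in [x*/s*^2, x*.s*^2]
print("x* =", x_star, " s* =", s_star)
print("~95% interval:", (x_star / s_star**2, x_star * s_star**2))
```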
NASA Astrophysics Data System (ADS)
Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.
2017-08-01
A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
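A hedged sketch of the underlying idea (EM on a Gaussian mixture initialized by k-means), run with scikit-learn on synthetic two-dimensional pulse features rather than the stilbene measurements; the feature definitions and values are assumptions.

```python
# Sketch (synthetic data, not the stilbene measurements): two-component Gaussian
# mixture fitted by EM, initialized with k-means, then used to assign each pulse
# a posterior probability of belonging to either group.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical 2-D pulse features, e.g. (total integral, tail/total ratio)
group_a = rng.multivariate_normal([1.0, 0.10], [[0.05, 0], [0, 0.0004]], 500)
group_b = rng.multivariate_normal([1.2, 0.25], [[0.05, 0], [0, 0.0009]], 500)
features = np.vstack([group_a, group_b])

gmm = GaussianMixture(n_components=2, init_params="kmeans", random_state=0)
gmm.fit(features)
posterior = gmm.predict_proba(features)   # soft (probabilistic) discrimination
print(posterior[:5])
```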
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
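As a rough sketch of one ingredient described above, and not the authors' implementation, the snippet below maps autocorrelated, skewed model errors to standard-normal scores through their empirical CDF and estimates the Gaussian-copula correlation between consecutive errors; the error-generating process is invented for illustration.

```python
# Sketch: building a Gaussian copula for model errors (illustrative only).
# Errors are mapped to standard-normal scores via their empirical CDF; the correlation of
# consecutive scores then parameterizes a Gaussian copula that captures autocorrelation.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(4)
# Hypothetical autocorrelated, skewed "past" errors from a model
e = np.zeros(1000)
for t in range(1, 1000):
    e[t] = 0.7 * e[t - 1] + rng.gamma(2.0, 1.0) - 2.0

u = rankdata(e) / (len(e) + 1)          # empirical CDF -> uniforms (semiparametric margin)
z = norm.ppf(u)                         # uniforms -> standard normal scores
rho = np.corrcoef(z[:-1], z[1:])[0, 1]  # copula correlation between consecutive errors
print("estimated copula correlation:", rho)
```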
Statistical properties of two sine waves in Gaussian noise.
NASA Technical Reports Server (NTRS)
Esposito, R.; Wilson, L. R.
1973-01-01
A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seem to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the relative cumulative density function (cdf).
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
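A minimal sketch of the contrast discussed above: the sample standard deviation versus the (Gaussian-consistent) median absolute deviation on a contaminated Gaussian sample; the contamination fraction and scales are illustrative.

```python
# Sketch: standard deviation vs. median absolute deviation (scaled to be consistent
# with the Gaussian sigma) on a contaminated Gaussian sample.
import numpy as np

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 1.0, 950)
outliers = rng.normal(0.0, 10.0, 50)          # 5% contamination
x = np.concatenate([clean, outliers])

mad = 1.4826 * np.median(np.abs(x - np.median(x)))  # consistent with sigma for Gaussian data
print("sample SD        :", x.std(ddof=1))          # inflated by the outliers
print("MAD (consistent) :", mad)                    # stays close to 1
```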
Excited-state thermionic emission in III-antimonides: Low emittance ultrafast photocathodes
NASA Astrophysics Data System (ADS)
Berger, Joel A.; Rickman, B. L.; Li, T.; Nicholls, A. W.; Andreas Schroeder, W.
2012-11-01
The normalized rms transverse emittance of an electron source is shown to be proportional to √m* , where m* is the effective mass of the state from which the electron is emitted, by direct observation of the transverse momentum distribution for excited-state thermionic emission from two III-V semiconductor photocathodes, GaSb and InSb, together with a control experiment employing two-photon emission from gold. Simulations of the experiment using an extended analytical Gaussian model of electron pulse propagation are in close agreement with the data.
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Experimental Evidence for a Structural-Dynamical Transition in Trajectory Space.
Pinchaipat, Rattachai; Campo, Matteo; Turci, Francesco; Hallett, James E; Speck, Thomas; Royall, C Patrick
2017-07-14
Among the key insights into the glass transition has been the identification of a nonequilibrium phase transition in trajectory space which reveals phase coexistence between the normal supercooled liquid (active phase) and a glassy state (inactive phase). Here, we present evidence that such a transition occurs in experiments. In colloidal hard spheres, we find a non-Gaussian distribution of trajectories leaning towards those rich in locally favored structures (LFSs), associated with the emergence of slow dynamics. This we interpret as evidence for a nonequilibrium transition to an inactive LFS-rich phase. Reweighting trajectories reveals a first-order phase transition in trajectory space between a normal liquid and a LFS-rich phase. We also find evidence for a purely dynamical transition in trajectory space.
Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.
Sun, Dong; Zhao, Daomu
2005-08-01
By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.
A non-gaussian model of continuous atmospheric turbulence for use in aircraft design
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Joppa, R. G.; Ganzer, V. M.
1976-01-01
A non-Gaussian model of atmospheric turbulence is presented and analyzed. The model is restricted to the regions of the atmosphere where the turbulence is steady or continuous, and the assumptions of homogeneity and stationarity are justified. Also spatial distribution of turbulence is neglected, so the model consists of three independent, stationary stochastic processes which represent the vertical, lateral, and longitudinal gust components. The non-Gaussian and Gaussian models are compared with experimental data, and it is shown that the Gaussian model underestimates the number of high velocity gusts which occur in the atmosphere, while the non-Gaussian model can be adjusted to match the observed high velocity gusts more satisfactorily. Application of the proposed model to aircraft response is investigated, with particular attention to the response power spectral density, the probability distribution, and the level crossing frequency. A numerical example is presented which illustrates the application of the non-Gaussian model to the study of an aircraft autopilot system. Listings and sample results of a number of computer programs used in working with the model are included.
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as Gaussian (normal) produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
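A minimal sketch of the Monte Carlo product-distribution idea with two independent uniform factors; the ranges are illustrative and not taken from the porphyry copper database.

```python
# Sketch: Monte Carlo product distribution for two independent (uncorrelated) factors.
import numpy as np

rng = np.random.default_rng(6)
grade   = rng.uniform(0.2, 1.0, 100_000)     # illustrative factor 1 (e.g., % Cu)
tonnage = rng.uniform(10.0, 500.0, 100_000)  # illustrative factor 2 (e.g., Mt of ore)

product = grade * tonnage                    # e.g., contained metal
print("product of medians :", np.median(grade) * np.median(tonnage))
print("median of product  :", np.median(product))

counts, edges = np.histogram(product, bins=100)
peak = np.argmax(counts)
print("histogram peak near:", 0.5 * (edges[peak] + edges[peak + 1]))
```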
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Souri, S; Gill, G
Purpose: To statistically determine the optimal tolerance level in the verification of delivered dose compared to the planned dose in an in vivo dosimetry system in radiotherapy. Methods: The LANDAUER MicroSTARii dosimetry system with screened nanoDots (optically stimulated luminescence dosimeters) was used for in vivo dose measurements. Ideally, the measured dose should match the planned dose and fall within a normal distribution. Any deviation from the normal distribution may be regarded as a mismatch and therefore a potential sign of dose misadministration. Randomly mis-positioned nanoDots can yield a continuum background distribution. The percentage difference of the measured dose to its corresponding planned dose (ΔD) can be used to analyze combined data sets for different patients. A model of a Gaussian plus a flat function was used to fit the ΔD distribution. Results: A total of 434 nanoDot measurements for breast cancer patients were collected over a period of three months. The fit yields a Gaussian mean of 2.9% and a standard deviation (SD) of 5.3%. The observed shift of the mean from zero is attributed to the machine output bias and calibration of the dosimetry system. A pass interval of −2SD to +2SD was applied and a mismatch background was estimated to be 4.8%. With such a tolerance level, one can expect that 99.99% of patients should pass the verification and at most 0.011% might have a potential dose misadministration that remains undetected after three repeated measurements. After implementation, a number of newly started breast cancer patients were monitored and the measured pass rate was consistent with the model prediction. Conclusion: It is feasible to implement an optimal tolerance level in order to maintain a low limit on potential dose misadministration while still keeping a relatively high pass rate in radiotherapy delivery verification.
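A hedged sketch of the fitting step described above: a Gaussian plus a flat background fitted to a histogram of synthetic ΔD values with scipy.optimize.curve_fit; the sample, binning and starting values are illustrative, not the clinical data.

```python
# Sketch: fitting a Gaussian plus a flat background to a histogram of dose
# differences (synthetic data; the 2.9% / 5.3% values below are starting guesses only).
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_flat(x, amp, mu, sigma, bkg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

rng = np.random.default_rng(7)
delta_d = np.concatenate([rng.normal(2.9, 5.3, 400),      # matched measurements
                          rng.uniform(-30.0, 30.0, 20)])  # mis-positioned dosimeters

counts, edges = np.histogram(delta_d, bins=40, range=(-30, 30))
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(gauss_plus_flat, centers, counts, p0=[40.0, 2.9, 5.3, 1.0])
print("fitted mean, SD, background:", popt[1], popt[2], popt[3])
```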
Steinhauser, Marco; Hübner, Ronald
2009-10-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Revealing nonclassicality beyond Gaussian states via a single marginal distribution
Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul
2017-01-01
A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential—a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis–independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement. PMID:28077456
Revealing nonclassicality beyond Gaussian states via a single marginal distribution.
Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul
2017-01-31
A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential-a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis-independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement.
Model for non-Gaussian intraday stock returns
NASA Astrophysics Data System (ADS)
Gerig, Austin; Vicente, Javier; Fuentes, Miguel A.
2009-12-01
Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
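A minimal sketch of the stated mechanism: Gaussian returns whose inverse-square volatility (precision) is gamma distributed come out Student-t distributed; the tail parameter and sample size are illustrative.

```python
# Sketch of the mechanism described above: Gaussian returns whose inverse-square
# volatility (precision) is gamma distributed yield Student-t distributed returns.
import numpy as np
from scipy.stats import t, kstest

rng = np.random.default_rng(8)
nu = 4.0                                              # illustrative tail parameter
precision = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=200_000)
returns = rng.standard_normal(200_000) / np.sqrt(precision)

# The sample should be consistent with a Student-t distribution with nu d.o.f.
print(kstest(returns, t(df=nu).cdf))
```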
Payne, Brennan R; Stine-Morrow, Elizabeth A L
2014-06-01
We report a secondary data analysis investigating age differences in the effects of clause and sentence wrap-up on reading time distributions during sentence comprehension. Residual word-by-word self-paced reading times were fit to the ex-Gaussian distribution to examine age differences in the effects of clause and sentence wrap-up on both the location and shape of participants' reaction time (RT) distributions. The ex-Gaussian distribution showed good fit to the data in both younger and older adults. Sentence wrap-up increased the central tendency, the variability, and the tail of the distribution, and these effects were exaggerated among the old. In contrast, clause wrap-up influenced the tail of the distribution only, and did so differentially for older adults. Effects were confirmed via nonparametric vincentile plots. Individual differences in visual acuity, working memory, speed of processing, and verbal ability were differentially related to ex-Gaussian parameters reflecting wrap-up effects on underlying reading time distributions. These findings argue against simple pause mechanisms to explain end-of-clause and end-of-sentence reading time patterns; rather, the findings are consistent with a cognitively effortful view of wrap-up and suggest that age and individual differences in attentional allocation to semantic integration during reading, as revealed by RT distribution analyses, play an important role in sentence understanding. PsycINFO Database Record (c) 2014 APA, all rights reserved.
The analysis of ensembles of moderately saturated interstellar lines
NASA Technical Reports Server (NTRS)
Jenkins, E. B.
1986-01-01
It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b and a power law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.
Moutsopoulou, Karolina; Waszak, Florian
2012-04-01
The differential effects of task and response conflict in priming paradigms where associations are strengthened between a stimulus, a task, and a response have been demonstrated in recent years with neuroimaging methods. However, such effects are not easily disentangled with only measurements of behavior, such as reaction times (RTs). Here, we report the application of ex-Gaussian distribution analysis on task-switching RT data and show that conflict related to stimulus-response associations retrieved after a switch of tasks is reflected in the Gaussian component. By contrast, conflict related to the retrieval of stimulus-task associations is reflected in the exponential component. Our data confirm that the retrieval of stimulus-task and -response associations affects behavior differently. Ex-Gaussian distribution analysis is a useful tool for pulling apart these different levels of associative priming that are not distinguishable in analyses of RT means.
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
NASA Astrophysics Data System (ADS)
Yeung, Chuck
2018-06-01
The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗,t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗,t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗,t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length-scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
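A small Monte Carlo sketch in the spirit of the abstract (not the author's examples): the sampling spread of a log-parameterized rate from repeated nonlinear fits is compared with the parametric SE reported by the fit; the model, noise level and sample sizes are assumptions.

```python
# Sketch: Monte Carlo check of a parametric SE from nonlinear least squares.
# The model y = exp(-k x) is fitted with k re-parameterized as k = exp(A), so A is
# a nonlinear parameter; values and noise level are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def model(x, A):
    return np.exp(-np.exp(A) * x)

rng = np.random.default_rng(9)
x = np.linspace(0.0, 5.0, 30)
A_true = np.log(0.8)

estimates, reported_se = [], []
for _ in range(2000):
    y = model(x, A_true) + rng.normal(0.0, 0.02, x.size)
    popt, pcov = curve_fit(model, x, y, p0=[0.0])
    estimates.append(popt[0])
    reported_se.append(np.sqrt(pcov[0, 0]))

print("MC spread of A     :", np.std(estimates))
print("mean parametric SE :", np.mean(reported_se))   # close when SE << |A|
```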
Local Gaussian operations can enhance continuous-variable entanglement distillation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Shengli; Loock, Peter van; Institute of Theoretical Physics I, Universitaet Erlangen-Nuernberg, Staudtstrasse 7/B2, DE-91058 Erlangen
2011-12-15
Entanglement distillation is a fundamental building block in long-distance quantum communication. Though known to be useless on their own for distilling Gaussian entangled states, local Gaussian operations may still help to improve non-Gaussian entanglement distillation schemes. Here we show that by applying local squeezing operations both the performance and the efficiency of existing distillation protocols can be enhanced. We find that such an enhancement through local Gaussian unitaries can be obtained even when the initially shared Gaussian entangled states are mixed, as, for instance, after their distribution through a lossy-fiber communication channel.
Nongaussian distribution curve of heterophorias among children.
Letourneau, J E; Giroux, R
1991-02-01
The purpose of this study was to measure the distribution curve of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness of fit tests showed that these distribution curves were not Gaussian among (N = 2048) 6- to 13-year-old children. The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution curve should take the non-Gaussian distribution curve of heterophorias into account.
An alternative to the breeder's and Lande's equations.
Houchmandzadeh, Bahram
2014-01-10
The breeder's equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotype in the parental, selected-parent, and progeny populations by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) - E(Z0) to the selection differential S = E(ZW) - E(Z0) through a simple proportionality relation R = h^2 S, where the heritability coefficient h^2 is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R' = j^2 S' can be derived, regardless of the parental genotype distribution. Here R' = E(Z1) - μ and S' = E(ZW) - μ stand for the mean phenotypic lag with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j^2 is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring populations and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande's equation Δz̄ = G P^(-1) S. The linearity coefficients of the alternative equation are not changed by Gaussian selection.
Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States.
Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi
2017-06-09
Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.
Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States
NASA Astrophysics Data System (ADS)
Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi
2017-06-01
Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Borland, M.
Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.
Off-normal deposition of PTFE thin films during 157-nm irradiation
NASA Astrophysics Data System (ADS)
George, Sharon R.; Langford, Stephen C.; Dickinson, J. Thomas
2010-03-01
Polytetrafluoroethylene (PTFE) is valued for its chemical stability, low surface energy, and insulating properties. The ablation of PTFE by F2 excimer lasers (157 nm photons) involves photochemical scission of C-C bonds along the polymer chain. Depending on the fluence, the fragment masses can range from 50 to 2000 amu. Gaussian beam profiles allow for the production of spatially non-uniform distributions of fragment masses, with the lighter fragments concentrated in the center of the laser spot. The resulting trajectories for the light fragments can be strongly forward directed, while the heavy fragments are directed more to the side, well away from the surface normal. We present experimental evidence for these angular distributions, and numerically simulate this behavior with a simple, two-component hydrodynamic model. Under the conditions of our work, most of the ablated mass appears as heavier fragments and can be collected on substrates mounted to the sides or above and below the laser spot. This geometry may have advantages in some applications of pulsed laser deposition.
Distilling Gaussian states with Gaussian operations is impossible.
Eisert, J; Scheel, S; Plenio, M B
2002-09-23
We show that no distillation protocol for Gaussian quantum states exists that relies on (i) arbitrary local unitary operations that preserve the Gaussian character of the state and (ii) homodyne detection together with classical communication and postprocessing by means of local Gaussian unitary operations on two symmetric identically prepared copies. This is in contrast to the finite-dimensional case, where entanglement can be distilled in an iterative protocol using two copies at a time. The ramifications for the distribution of Gaussian states over large distances will be outlined. We also comment on the generality of the approach and sketch the most general form of a Gaussian local operation with classical communication in a bipartite setting.
Herd behaviors in the stock and foreign exchange markets
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Yoon, Seong-Min; Kim, Yup
2004-10-01
The herd behavior of returns for the won-dollar exchange rate and the Korean stock price index (KOSPI) is analyzed in Korean financial markets. It is reported that the probability distribution P(R) of returns R for three types of herding parameter satisfies the power-law behavior P(R) ≃ R^(-β) with the exponents β = 2.2 (the won-dollar exchange rate) and 2.4 (the KOSPI). When the herding parameter h satisfies h ≥ 2.33, the crash regime in which P(R) increases with increasing R appears. The active state of transactions is found to decrease for h > 2.33. In particular, we find that the distribution of normalized returns shows a crossover to a Gaussian distribution when the time step Δt = 252 is used. Our results will also be compared to other well-known analyses.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006); doi:10.1103/PhysRevLett.96.080502].
Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2011-04-15
In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
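A minimal sketch of the core idea using scikit-learn's GaussianProcessRegressor on a 1-D intensity profile standing in for an image row; the posterior standard deviation at resampled points plays the role of the interpolation uncertainty. The kernel choices and data are illustrative, not the authors' registration pipeline.

```python
# Sketch: Gaussian-process interpolation of a 1-D intensity profile; the posterior
# standard deviation quantifies how uncertain the interpolation is between grid points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(10)
grid = np.linspace(0.0, 10.0, 20)[:, None]                  # base grid positions
intensity = np.sin(grid).ravel() + rng.normal(0, 0.05, 20)  # observed intensities

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(grid, intensity)

resampled = np.linspace(0.0, 10.0, 200)[:, None]            # off-grid resampling points
mean, std = gp.predict(resampled, return_std=True)
print("largest interpolation uncertainty:", std.max())
```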
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
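A minimal sketch of a discrete multiplicative log-normal cascade (not the GMM estimator itself): each dyadic refinement multiplies every cell by an independent log-normal weight with unit mean; the number of steps and the intermittency parameter λ² are illustrative.

```python
# Sketch: a discrete multiplicative log-normal cascade.  Each of n_steps dyadic
# refinements multiplies every subinterval by an independent log-normal weight;
# lambda2 (the intermittency parameter) is illustrative.
import numpy as np

rng = np.random.default_rng(11)
n_steps, lambda2 = 12, 0.05
measure = np.ones(1)
for _ in range(n_steps):
    measure = np.repeat(measure, 2)                       # dyadic refinement
    weights = rng.lognormal(mean=-lambda2 / 2.0,          # unit-mean log-normal weights
                            sigma=np.sqrt(lambda2), size=measure.size)
    measure *= weights

print("number of cells:", measure.size)                   # 2**12 = 4096
print("intermittency as heavy-tailed cell values (max/mean):",
      measure.max() / measure.mean())
```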
NASA Astrophysics Data System (ADS)
Wolfsteiner, Peter; Breuer, Werner
2013-10-01
The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. Rainflow) based on a time domain signal. Alternatively methods are available (e.g. Dirlik) enabling the estimation of load spectra directly from power spectral densities (PSDs) of the corresponding time signals; the knowledge of the time signal is then not necessary. These PSD based methods have the enormous advantage that if for example the signal to assess results from a finite element method based vibration analysis, the computation time of the simulation of PSDs in the frequency domain outmatches by far the simulation of time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD based simulation of vibrations and also the PSD based load spectra estimation is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation time intensive time domain calculations produce accurate results. This paper presents a method dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal in Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures. Furthermore the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in time domain without losing the necessary accuracy for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian vibrations respond with vibrations having also a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces also a non-Gaussian response with statistical properties different from these excitations. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. Ito calculus [6], which is usually not part of commercial codes!). 
There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on the fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still a subject of ongoing research; e.g., [13] proposed a procedure capable of considering non-Gaussian broad-banded loads. It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As already described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the highly non-Gaussian excitations on railway vehicles caused by the wheel-rail contact, the fast PSD analysis in the frequency domain cannot simply be combined with PSD-based load spectra prediction methods.
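As a small illustration of the two quantities this discussion keeps contrasting, the sketch below estimates a PSD with Welch's method and checks the Gaussianity of the underlying time signal via its excess kurtosis; the synthetic burst-contaminated signal and sampling rate are assumptions, not measured axle-box data.

```python
# Sketch: estimate a PSD with Welch's method and check whether the underlying time
# signal is close to Gaussian (excess kurtosis ~ 0); the signal below is synthetic.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

rng = np.random.default_rng(12)
fs = 1000.0                                   # illustrative sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
gaussian_part = rng.normal(0.0, 1.0, t.size)
bursts = rng.normal(0.0, 5.0, t.size) * (rng.random(t.size) < 0.01)  # rare large shocks
signal = gaussian_part + bursts               # non-Gaussian (leptokurtic) excitation

freqs, psd = welch(signal, fs=fs, nperseg=4096)
print("excess kurtosis:", kurtosis(signal))   # clearly > 0 -> the PSD alone is not enough
```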
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method for fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are fitted less accurately than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
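A minimal sketch of the comparison described above: an ordinary least-squares line fit under Gaussian versus heavy-tailed Lévy-stable noise of comparable scale; the stability index and noise scale are illustrative.

```python
# Sketch: ordinary least squares on a line with Gaussian vs heavy-tailed (Levy-stable)
# noise of comparable scale; the heavy tails degrade the fitted slope far more.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(13)
x = np.linspace(0.0, 10.0, 200)
y_true = 2.0 * x + 1.0

y_gauss = y_true + rng.normal(0.0, 0.5, x.size)
y_levy = y_true + levy_stable.rvs(alpha=1.5, beta=0.0, scale=0.5, size=x.size,
                                  random_state=rng)

slope_g = np.polyfit(x, y_gauss, 1)[0]
slope_l = np.polyfit(x, y_levy, 1)[0]
print("slope with Gaussian noise:", slope_g)   # close to 2
print("slope with Levy noise    :", slope_l)   # often far off because of outliers
```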
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA. PMID:24892059
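A minimal sketch of a plain Gaussian estimation-of-distribution algorithm with elitism, shown on a sphere benchmark; this is not the FEGEDA implementation (no fast learning rule), and the population sizes and bounds are illustrative.

```python
# Sketch of a plain Gaussian EDA with elitism: each generation fits a Gaussian to the
# best individuals and samples the next population from it, always keeping the best
# solution found so far.
import numpy as np

def sphere(x):                       # illustrative benchmark to minimize
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(14)
dim, pop_size, n_elite, n_gen = 10, 100, 20, 50
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))

for _ in range(n_gen):
    fitness = sphere(pop)
    elite = pop[np.argsort(fitness)[:n_elite]]          # best individuals
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    pop = rng.normal(mu, sigma, size=(pop_size, dim))   # sample new population
    pop[0] = elite[0]                                   # elitism: keep the best so far

print("best value found:", sphere(pop).min())
```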
Extended q -Gaussian and q -exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
Chen, Tianju; Zhang, Jinzhi; Wu, Jinhu
2016-07-01
The kinetics and energy production of pyrolysis of a lignocellulosic biomass were investigated using a three-parallel Gaussian distribution method in this work. The pyrolysis experiment on pine sawdust was performed using a thermogravimetric-mass spectroscopy (TG-MS) analyzer. A three-parallel Gaussian distributed activation energy model (DAEM)-reaction model was used to describe the thermal decomposition behaviors of the three components: hemicellulose, cellulose and lignin. The first, second and third pseudocomponents represent the fractions of hemicellulose, cellulose and lignin, respectively. It was found that the model is capable of predicting the pyrolysis behavior of the pine sawdust. The activation energy distribution peaks for the three pseudocomponents were centered at 186.8, 197.5 and 203.9 kJ mol(-1), respectively. The evolution profiles of H2, CH4, CO, and CO2 were well predicted using the three-parallel Gaussian distribution model. In addition, the chemical composition of the bio-oil was obtained by a pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) instrument. Copyright © 2016 Elsevier Ltd. All rights reserved.
Propagation of Ince-Gaussian beams in a thermal lens medium
NASA Astrophysics Data System (ADS)
Xu, Ting; Wang, Shaomin
2006-09-01
The propagation of Ince-Gaussian beams in a thermal lens medium is studied in this paper. Based on the ABCD matrix for Gaussian beams passing through a thermal lens medium, distinct expressions for the beam transverse intensity distributions and the longitudinal phase shift are deduced and discussed. Similar to Laguerre and Hermite-Gaussian beams, Ince-Gaussian beams, which constitute the third complete family of exact and orthogonal solutions of the paraxial wave equation, can also be used in other inhomogeneous media such as lenslike media and saturated absorption media.
Ndagano, Bienvenu; Mphuthi, Nokwazi; Milione, Giovanni; Forbes, Andrew
2017-10-15
There is interest in using orbital angular momentum (OAM) modes to increase the data speed of free-space optical communication. A prevalent challenge is the mitigation of mode crosstalk and mode-dependent loss caused by the modes' lateral displacement at the data receiver. Here, the mode crosstalk and mode-dependent loss of laterally displaced OAM modes (LG_{0,+1}, LG_{0,-1}) are experimentally compared to those of a Hermite-Gaussian (HG) mode subset (HG_{0,1}, HG_{1,0}). It is shown that, for an aperture larger than the modes' waist sizes, some of the HG modes can experience less mode crosstalk and mode-dependent loss when laterally displaced along a symmetry axis. It is also shown that, over a normal distribution of lateral displacements whose standard deviation is 2× the modes' waist sizes, the HG modes on average experience 66% less mode crosstalk and 17% less mode-dependent loss.
NASA Astrophysics Data System (ADS)
Xu, Yonggen; Tian, Huanhuan; Dan, Youquan; Feng, Hao; Wang, Shijian
2017-04-01
Propagation formulae for M2-factor and beam wander of partially coherent electromagnetic hollow Gaussian (PCEHG) beam in non-Kolmogorov turbulence are derived based on the extended Huygens-Fresnel principle and the second-order moments of the Wigner distribution function. Our results indicate that the normalized M2-factors of PCEHG beam with larger beam order, waist width, inner scale of turbulence, the generalized exponent parameter, and smaller transverse coherent widths, outer scale of turbulence, the generalized structure parameter are less affected by the turbulence. The root mean square beam wander and relative beam wander are more obvious for PCEHG beam with smaller beam order, larger inner and outer scales of turbulence, exponent parameter, transverse coherent widths, and the generalized structure parameter. What is more, the beam wander properties of PCEHG beam in non-Kolmogorov turbulence are very different from M2-factor and spreading properties of beam in turbulence.
Comment on "Universal relation between skewness and kurtosis in complex dynamics"
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2015-12-01
In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems, and they identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic data, and we show that in fact it is not a universal relation and originates only due to the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data is increased, and this indicates that the kurtosis will converge to a finite single value if all moments of the distribution up to the fourth are finite. The converged kurtosis value for the finite fourth-moment distributions and the number of data points needed to reach this value depend on the deviation of the original distribution from the Gaussian case.
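A small numerical check in the spirit of the Comment: using the standard Student-t representation of a q-Gaussian (degrees of freedom ν = (3 − q)/(q − 1)), the sample kurtosis of a distribution with a finite fourth moment converges to a single value different from 3 as the sample size grows. The chosen q and sample sizes are arbitrary.

```python
# For a q-Gaussian with a finite fourth moment, sample kurtosis converges to a
# single value != 3 as N grows. Uses the standard Student-t representation of a
# q-Gaussian, nu = (3 - q)/(q - 1); the chosen q and sample sizes are arbitrary.
import numpy as np

def pearson_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

q = 9.0 / 7.0                       # q < 7/5 ensures a finite fourth moment
nu = (3.0 - q) / (q - 1.0)          # nu = 6 here
k_theory = 3.0 + 6.0 / (nu - 4.0)   # Pearson kurtosis of Student-t for nu > 4

rng = np.random.default_rng(2)
for n in (10**3, 10**4, 10**5, 10**6):
    sample = rng.standard_t(nu, size=n)
    print(f"N = {n:7d}   kurtosis = {pearson_kurtosis(sample):6.3f}"
          f"   (theory {k_theory:.3f}, Gaussian 3.000)")
```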
Casault, Sébastien; Groen, Aard J; Linton, Jonathan D
2014-03-25
This paper presents work toward improving the efficacy of financial models that describe the unique nature of biotechnology firms. We show that using a 'thick tailed' power law distribution to describe the behavior of the value of biotechnology R&D used in a Real Options Pricing model is significantly more accurate than the traditionally used Gaussian approach. A study of 287 North-American biotechnology firms gives insights into common problems faced by investors, managers and other stakeholders when using traditional techniques to calculate the commercial value of R&D. This is important because specific quantitative tools to assess the value of high-risk, high-reward R&D do not currently exist. This often leads to an undervaluation of biotechnology R&D and R&D intensive biotechnology firms. For example, the widely used Net Present Value (NPV) method assumes a fixed risk ignoring management flexibility and the changing environment. However, Real Options Pricing models assume that commercial returns from R&D investments are described by a normal random walk. A normal random walk model eliminates the possibility of drastic changes to the marketplace resulting from the introduction of revolutionary products and/or services. It is possible to better understand and manage biotechnology research projects and portfolios using a model that more accurately considers large non-Gaussian price fluctuations with thick tails, which recognize the unusually large risks and opportunities associated with Biotechnology R&D. Our empirical data show that opportunity overcompensates for the downside risk making biotechnology R&D statistically more valuable than other Gaussian options investments, which may otherwise appear to offer a similar combination of risk and return. Copyright © 2013 Elsevier B.V. All rights reserved.
Monogamy inequality for distributed gaussian entanglement.
Hiroshima, Tohya; Adesso, Gerardo; Illuminati, Fabrizio
2007-02-02
We show that for all n-mode Gaussian states of continuous variable systems, the entanglement shared among n parties exhibits the fundamental monogamy property. The monogamy inequality is proven by introducing the Gaussian tangle, an entanglement monotone under Gaussian local operations and classical communication, which is defined in terms of the squared negativity in complete analogy with the case of n-qubit systems. Our results elucidate the structure of quantum correlations in many-body harmonic lattice systems.
Optimality of Gaussian attacks in continuous-variable quantum cryptography.
Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio
2006-11-10
We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variables systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second momenta of the canonical variables involved are known by the honest parties.
Voice-onset time and buzz-onset time identification: A ROC analysis
NASA Astrophysics Data System (ADS)
Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.
2004-05-01
Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
Experimental quantum cryptography with qutrits
NASA Astrophysics Data System (ADS)
Gröblacher, Simon; Jennewein, Thomas; Vaziri, Alipasha; Weihs, Gregor; Zeilinger, Anton
2006-05-01
We produce two identical keys using, for the first time, entangled trinary quantum systems (qutrits) for quantum key distribution. The advantage of qutrits over the normally used binary quantum systems is an increased coding density and a higher security margin. The qutrits are encoded into the orbital angular momentum of photons, namely Laguerre Gaussian modes with azimuthal index l + 1, 0 and -1, respectively. The orbital angular momentum is controlled with phase holograms. In an Ekert-type protocol the violation of a three-dimensional Bell inequality verifies the security of the generated keys. A key is obtained with a qutrit error rate of approximately 10%.
Central Limit Theorems for Linear Statistics of Heavy Tailed Random Matrices
NASA Astrophysics Data System (ADS)
Benaych-Georges, Florent; Guionnet, Alice; Male, Camille
2014-07-01
We show central limit theorems (CLT) for the linear statistics of symmetric matrices with independent heavy tailed entries, including entries in the domain of attraction of α-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdös-Rényi graphs. For the second model, we also prove a central limit theorem of the moments of its empirical eigenvalues distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is the one of the classical CLT for independent random variables.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
Time-frequency distributions for propulsion-system diagnostics
NASA Astrophysics Data System (ADS)
Griffin, Michael E.; Tulpule, Sharayu
1991-12-01
The Wigner distribution and its smoothed versions, i.e., Choi-Williams and Gaussian kernels, are evaluated for propulsion system diagnostics. The approach is intended for off-line kernel design by using the ambiguity domain to select the appropriate Gaussian kernel. The features produced by the Wigner distribution and its smoothed versions correlate remarkably well with documented failure indications. The selection of the kernel on the other hand is very subjective for our unstructured data.
Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield
2014-01-01
Two important wood properties are the modulus of elasticity (MOE) and the modulus of rupture (MOR). In the past, the statistical distribution of the MOE has often been modeled as Gaussian, and that of the MOR as lognormal or as a two- or three-parameter Weibull distribution. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior...
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
NASA Astrophysics Data System (ADS)
Nie, Yongming; Li, Xiujian; Qi, Junli; Ma, Haotong; Liao, Jiali; Yang, Jiankun; Hu, Wenhua
2012-03-01
Based on a refractive beam shaping system, the transformation of a quasi-Gaussian beam into a dark hollow Gaussian beam by a phase-only liquid crystal spatial light modulator (LC-SLM) is proposed. According to energy conservation and the constant optical path principle, the phase distributions of the aspheric lens and the phase-only LC-SLM can modulate the wave-front properly to generate the hollow beam. The numerical simulation results indicate that the dark hollow intensity distribution of the output shaped beam can be maintained well over a certain propagation distance, during which the dark region does not shrink, whereas that of an ideal hollow Gaussian beam does. By carefully designing the phase modulation profile loaded into the LC-SLM, the experiments show that the dark hollow intensity distribution of the output shaped beam is maintained well even at distances of more than 550 mm from the LC-SLM, in agreement with the numerical simulation results.
NASA Astrophysics Data System (ADS)
Malyutin, A. A.
2007-03-01
Modes of a laser with plano-spherical degenerate and nondegenerate resonators are calculated for diode pumping producing a Gaussian gain distribution in the active medium. Axially symmetric and off-axis pumping are considered. It is shown that, in the first case, the lowest Hermite-Gaussian mode is excited with the largest weight in both the degenerate and nondegenerate resonators if the pump level is sufficiently high or the characteristic size w_g of the amplifying region greatly exceeds the mode radius w_0. High-order Ince-Gaussian modes are excited upon weak off-axis pumping in the nondegenerate resonator, both in the absence and in the presence of symmetry of the gain distribution with respect to the resonator axis. It is found that, when the level of off-axis symmetric pumping of the resonator is high enough, modes with the parameters of the TEM00 mode periodically propagating over a closed path in the resonator can exist. An explanation of this effect is given.
Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks.
Chabanon, Morgan; Rangamani, Padmini
2018-03-28
Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.
Remote sensing of earth terrain
NASA Technical Reports Server (NTRS)
Kong, J. A.
1988-01-01
Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean bar N. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite bar N or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
Second order Pseudo-gaussian shaper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beche, Jean-Francois
2002-11-22
The purpose of this document is to provide a calculus spreadsheet for the design of second-order pseudo-Gaussian shapers. A very interesting reference is given by C.H. Mosher, ''Pseudo-Gaussian Transfer Functions with Superlative Recovery'', IEEE TNS Volume 23, p. 226-228 (1976). Fred Goulding and Don Landis have studied the structure of those filters and their implementation, and this document will outline the calculation leading to the relation between the coefficients of the filter. The general equation of the second-order pseudo-Gaussian filter is f(t) = P_0 e^{-3kt} sin^2(kt). The parameter k is a normalization factor.
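A short sketch evaluating the quoted impulse response f(t) = P_0 e^{-3kt} sin^2(kt); the values of k and P_0 are arbitrary, and the analytic peaking condition tan(k t_peak) = 2/3 follows from setting the derivative to zero.

```python
# Evaluate the second-order pseudo-Gaussian impulse response
#   f(t) = P0 * exp(-3*k*t) * sin^2(k*t)
# quoted above. k and P0 are arbitrary illustration values; the peaking time
# satisfies tan(k*t_peak) = 2/3, which the numerical maximum should reproduce.
import numpy as np

k, P0 = 1.0, 1.0
t = np.linspace(0.0, 10.0 / k, 5000)
f = P0 * np.exp(-3.0 * k * t) * np.sin(k * t) ** 2

t_peak_numeric = t[np.argmax(f)]
t_peak_analytic = np.arctan(2.0 / 3.0) / k
print(f"numeric peak at t = {t_peak_numeric:.4f}, analytic {t_peak_analytic:.4f}")
print(f"peak amplitude  = {f.max():.4e}")
```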
Not Normal: the uncertainties of scientific measurements
NASA Astrophysics Data System (ADS)
Bailey, David C.
2017-01-01
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
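For orientation, the sketch below compares the probability of a deviation larger than five reported uncertainties under a Gaussian, a heavy-tailed Student's t and a Cauchy distribution; the chosen degrees of freedom are illustrative and are not fitted to the datasets analysed above.

```python
# Compare tail probabilities beyond |x| = 5 (in units of the reported uncertainty)
# for a Gaussian, heavy-tailed Student-t, and Cauchy distribution, echoing the
# finding that 5-sigma disagreements can be orders of magnitude more frequent
# than naively expected. The degrees of freedom are illustrative choices.
from scipy.stats import norm, t, cauchy

threshold = 5.0
print(f"Gaussian      P(|x| > 5) = {2.0 * norm.sf(threshold):.2e}")
for nu in (10, 4, 2):
    print(f"Student-t({nu:2d}) P(|x| > 5) = {2.0 * t.sf(threshold, nu):.2e}")
print(f"Cauchy        P(|x| > 5) = {2.0 * cauchy.sf(threshold):.2e}")
```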
Not Normal: the uncertainties of scientific measurements
2017-01-01
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student’s t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply. PMID:28280557
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
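For reference, here is a standard EM iteration for a one-dimensional two-component Gaussian mixture; it is only meant to fix notation and does not implement the paper's PCA-based, (N − 1)-dimensional variant. The synthetic data and component count are assumptions.

```python
# Standard EM for a 1-D two-component Gaussian mixture (for orientation only;
# the paper's algorithm works in the (N-1)-dimensional span of the samples,
# which this sketch does not attempt).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities of each component for each sample
    dens = w * norm.pdf(data[:, None], mu, sd)          # shape (n_samples, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations
    nk = resp.sum(axis=0)
    w = nk / data.size
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w.round(3), "means:", mu.round(3), "sds:", sd.round(3))
```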
Non-gaussian statistics of pencil beam surveys
NASA Technical Reports Server (NTRS)
Amendola, Luca
1994-01-01
We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability from the power spectrum peaks by means of Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.
Irradiance tailoring by fractional Fourier transform of a radial Gaussian beam array
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Ma, Yanxing; Ma, Haotong; Liu, Zejin
2011-03-01
The fractional Fourier transform (FRFT) is applied to a radial Gaussian beam array. An analytical formula is derived for the irradiance distribution of coherent and incoherent radial Gaussian beam arrays in the FRFT domain using the Collins integral formula. It is revealed that the irradiance pattern can be tailored into controllable dark-hollow, flat-topped and Gaussian beam patterns by changing the fractional order of the FRFT and the coherence state of the laser array.
Irradiance tailoring by fractional Fourier transform of a radial Gaussian beam array
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Ma, Yanxing; Ma, Haotong; Liu, Zejin
2010-07-01
The fractional Fourier transform (FRFT) is applied to a radial Gaussian beam array. An analytical formula is derived for the irradiance distribution of coherent and incoherent radial Gaussian beam arrays in the FRFT domain using the Collins integral formula. It is revealed that the irradiance pattern can be tailored into controllable dark-hollow, flat-topped and Gaussian beam patterns by changing the fractional order of the FRFT and the coherence state of the laser array.
Liu, Chengyu; Zhao, Lina; Liu, Changchun
2014-01-01
An early return of the reflected component in the arterial pulse has been recognized as an important indicator of cardiovascular risk. This study aimed to determine the effects of blood pressure and sex on the change of wave reflection using a Gaussian fitting method. One hundred and ninety subjects were enrolled. They were classified into four blood pressure categories based on systolic blood pressure (i.e., ≤ 110, 111-120, 121-130 and ≥ 131 mmHg). Each blood pressure category was also stratified by sex. Electrocardiogram (ECG) and radial artery pressure waveform (RAPW) signals were recorded for each subject. Ten consecutive pulse episodes from the RAPW signal were extracted and normalized. Each normalized pulse episode was fitted by three Gaussian functions. Both the peak position and peak height of the first and second Gaussian functions, as well as the peak position interval and peak height ratio, were used as evaluation indices of wave reflection. Two-way ANOVA results showed that, with increasing blood pressure, the peak position of the second Gaussian was significantly shortened (P < 0.01), the peak height of the first Gaussian significantly decreased (P < 0.01) and the peak height of the second Gaussian significantly increased (P < 0.01), inducing a significantly decreased peak position interval and a significantly increased peak height ratio (both P < 0.01). Sex had no significant effect on any of the evaluation indices (all P > 0.05). Moreover, the interaction between sex and blood pressure also had no significant effect on any of the evaluation indices (all P > 0.05). These results showed that blood pressure has a significant effect on the change of wave reflection when using the recently developed Gaussian fitting method, whereas sex has no significant effect. The results also suggest that the Gaussian fitting method could be used as a new approach for assessing arterial wave reflection.
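A sketch of the core fitting step: a normalized pulse-like waveform is fitted with the sum of three Gaussian functions by nonlinear least squares, and the peak position interval and peak height ratio are read off the fitted parameters. The synthetic waveform and initial guesses are placeholders for the study's recorded RAPW pulses.

```python
# Fit a normalized pulse-like waveform with the sum of three Gaussian functions.
# The synthetic waveform and initial guesses are placeholders, not the study's
# recorded radial artery pressure waveforms.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-((t - m) ** 2) / (2.0 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

t = np.linspace(0.0, 1.0, 400)                        # one normalized pulse period
true = (1.0, 0.20, 0.05, 0.55, 0.45, 0.08, 0.25, 0.70, 0.10)
rng = np.random.default_rng(4)
pulse = three_gaussians(t, *true) + rng.normal(0.0, 0.01, t.size)

p0 = (0.8, 0.2, 0.05, 0.5, 0.4, 0.1, 0.2, 0.7, 0.1)   # rough initial guess
popt, _ = curve_fit(three_gaussians, t, pulse, p0=p0)

print("fitted peak positions:", np.round([popt[1], popt[4], popt[7]], 3))
print("peak position interval (2nd - 1st):", round(popt[4] - popt[1], 3))
print("peak height ratio (2nd / 1st):", round(popt[3] / popt[0], 3))
```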
Coherent superposition of propagation-invariant laser beams
NASA Astrophysics Data System (ADS)
Soskind, R.; Soskind, M.; Soskind, Y. G.
2012-10-01
The coherent superposition of propagation-invariant laser beams represents an important beam-shaping technique, and results in new beam shapes which retain the unique property of propagation invariance. Propagation-invariant laser beam shapes depend on the order of the propagating beam, and include Hermite-Gaussian and Laguerre-Gaussian beams, as well as the recently introduced Ince-Gaussian beams which additionally depend on the beam ellipticity parameter. While the superposition of Hermite-Gaussian and Laguerre-Gaussian beams has been discussed in the past, the coherent superposition of Ince-Gaussian laser beams has not received significant attention in literature. In this paper, we present the formation of propagation-invariant laser beams based on the coherent superposition of Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian beams of different orders. We also show the resulting field distributions of the superimposed Ince-Gaussian laser beams as a function of the ellipticity parameter. By changing the beam ellipticity parameter, we compare the various shapes of the superimposed propagation-invariant laser beams transitioning from Laguerre-Gaussian beams at one ellipticity extreme to Hermite-Gaussian beams at the other extreme.
SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livadiotis, G., E-mail: glivadiotis@swri.edu
2016-03-15
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
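A much simplified sketch of Stages 2-4 under stated assumptions: a constant bias correction, an AR(1) update using the previous time step's known error, and a two-component Gaussian mixture fitted to the remaining residuals (here with scikit-learn's GaussianMixture). The synthetic series and the omission of the transformation stage are simplifications of the ERRIS scheme described above.

```python
# Simplified sketch of ERRIS-like Stages 2-4: bias correction, AR(1) error
# updating, then a two-component Gaussian mixture for the remaining residuals.
# The synthetic data and this exact formulation are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n = 1000
obs = 50.0 + np.cumsum(rng.normal(0.0, 0.3, n))      # synthetic streamflow-like series
raw_fc = obs + rng.normal(0.5, 1.0, n)               # biased, noisy "raw" forecast

# Stage 2: constant bias correction
bias = (raw_fc - obs).mean()
err = raw_fc - obs - bias                            # bias-corrected forecast error

# Stage 3: AR(1) updating with the previous step's known error
rho = np.corrcoef(err[:-1], err[1:])[0, 1]
updated_fc = raw_fc[1:] - bias - rho * err[:-1]

# Stage 4: two-component Gaussian mixture for the remaining residuals
resid = (updated_fc - obs[1:]).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(resid)
print("mixture weights:", gm.weights_.round(3))
print("component means:", gm.means_.ravel().round(3))
print("component sds  :", np.sqrt(gm.covariances_.ravel()).round(3))
```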
Superposition of Polytropes in the Inner Heliosheath
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2016-03-01
This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
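For the single-Gaussian-prior case mentioned above, the posterior is available in closed form whenever the forward operator is linear(ized) and the noise is Gaussian. The toy operator and covariances below are arbitrary stand-ins for the linearized convolutional model and well-log background.

```python
# Closed-form posterior for the single-Gaussian-prior case: with a linear(ized)
# forward operator G, Gaussian prior N(m0, Cm) and Gaussian noise N(0, Cd),
# the posterior over the model parameters is Gaussian with the update below.
# The toy operator and covariances are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(6)
n_model, n_data = 5, 8
G = rng.normal(size=(n_data, n_model))        # linearized forward operator
m_true = rng.normal(size=n_model)
Cd = 0.1 * np.eye(n_data)                     # data-noise covariance
Cm = 1.0 * np.eye(n_model)                    # prior covariance
m0 = np.zeros(n_model)                        # prior mean (e.g. background model)

d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), Cd)

K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)      # "Kalman-like" gain
m_post = m0 + K @ (d - G @ m0)                        # posterior mean
C_post = Cm - K @ G @ Cm                              # posterior covariance

print("true model     :", m_true.round(2))
print("posterior mean :", m_post.round(2))
print("posterior sd   :", np.sqrt(np.diag(C_post)).round(2))
```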
Mean intensity of the vortex Bessel-Gaussian beam in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Lukin, Igor P.
2017-11-01
In this work, the stability of vortex Bessel-Gaussian optical beams formed in turbulent atmosphere is considered theoretically. The features of the spatial structure of the mean intensity distribution of vortex Bessel-Gaussian optical beams in turbulent atmosphere are analyzed in detail. A quantitative criterion for the possibility of forming vortex Bessel-Gaussian optical beams in turbulent atmosphere is derived. It is shown that the stability of the form of a vortex Bessel-Gaussian optical beam during propagation in turbulent atmosphere increases with increasing topological charge of the beam.
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent, uncorrelated subspaces (statistical sources) of low dimension that exhibit enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in the form of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables in separating the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
Leading non-Gaussian corrections for diffusion orientation distribution function.
Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali
2014-02-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. Copyright © 2013 John Wiley & Sons, Ltd.
Leading Non-Gaussian Corrections for Diffusion Orientation Distribution Function
Jensen, Jens H.; Helpern, Joseph A.; Tabesh, Ali
2014-01-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed out of the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves upon the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. PMID:24738143
Normal form decomposition for Gaussian-to-Gaussian superoperators
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Palma, Giacomo; INFN, Pisa; Mari, Andrea
2015-05-15
In this paper, we explore the set of linear maps sending the set of quantum Gaussian states into itself. These maps are in general not positive, a feature which can be exploited as a test to check whether a given quantum state belongs to the convex hull of Gaussian states (if one of the considered maps sends it into a non-positive operator, the above state is certified not to belong to the set). Generalizing a result known to be valid under the assumption of complete positivity, we provide a characterization of these Gaussian-to-Gaussian (not necessarily positive) superoperators in terms of their action on the characteristic function of the inputs. For the special case of one-mode mappings, we also show that any Gaussian-to-Gaussian superoperator can be expressed as a concatenation of a phase-space dilatation, followed by the action of a completely positive Gaussian channel, possibly composed with a transposition. While a similar decomposition is shown to fail in the multi-mode scenario, we prove that it still holds at least under the further hypothesis of homogeneous action on the covariance matrix.
NASA Astrophysics Data System (ADS)
Vlasov, M. N.; Kelley, M. C.; Hysell, D. L.
2013-06-01
Enhanced optical emissions observed during HF pumping are induced by electrons accelerated by high-power electromagnetic waves. Using measured emission intensities, the energy distribution of accelerated electrons can be inferred. Energy loss from the excitation of molecular nitrogen vibrational levels (the vibrational barrier) strongly influences the electron energy distribution (EED). In airglow calculations, compensation for electron depletion within the 2-3 eV energy range, induced by the vibrational barrier, can be achieved via electrons with an EED similar to a Gaussian distribution and energies higher than 3 eV. This EED has a peak within the 5-10 eV energy range. We show that the main EED features depend strongly on altitude and solar activity. An EED similar to a power law distribution can occur above 270-300 km altitude. Below 270 km altitude, a Gaussian distribution for energies between 3 eV and 10 eV, together with a power law distribution for energies higher than 10 eV, is indicated. A Gaussian distribution combined with an exponential function is needed below 230 km altitude. The transition altitude from Gaussian to power law distribution depends strongly on solar activity, increasing for high solar activity. Electrons accelerated during the initial collisionless stage can inhibit the depletion of fast electrons within the vibrational barrier range, an effect that strongly depends on altitude and solar activity. The approach, based on the effective root square electric field, enables EED calculation, providing the observed red-line intensities for low and high solar activities.
A combined model for pseudo-rapidity distributions in Cu-Cu collisions at BNL-RHIC energies
NASA Astrophysics Data System (ADS)
Jiang, Z. J.; Wang, J.; Huang, Y.
2016-04-01
The charged particles produced in nucleus-nucleus collisions come from leading particles and from particles frozen out from the hot and dense matter created in the collisions. The leading particles are conventionally supposed to have Gaussian rapidity distributions normalized to the number of participants. The hot and dense matter is assumed to expand according to unified hydrodynamics, a hydro model which unifies the features of the Landau and Hwa-Bjorken models, and to freeze out into charged particles from a time-like hypersurface with a proper time of τ_FO. The rapidity distribution of this part of the charged particles can be derived analytically. The combined contribution from both leading particles and unified hydrodynamics is then compared against the experimental data taken by the BNL-RHIC-PHOBOS Collaboration in different centrality Cu-Cu collisions at √s_NN = 200 and 62.4 GeV, respectively. The model predictions are consistent with the experimental measurements.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
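A simplified sketch of two of the ingredients described above: Tukey's fence for outlier elimination and a parametric 95% reference interval computed after a normalizing transformation. A plain log transform stands in for the modulus-exponential-normal transformation, and the covariate (age) dependence, fractional polynomials and centile confidence intervals are omitted.

```python
# Simplified sketch: Tukey's fence for outlier elimination and a parametric
# 95% reference interval after a normalizing transformation. A plain log
# transform stands in for the modulus-exponential-normal transformation, and
# the covariate modelling and centile confidence intervals are omitted.
import numpy as np

rng = np.random.default_rng(7)
values = rng.lognormal(mean=3.0, sigma=0.4, size=500)          # skewed analyte values
values = np.append(values, [400.0, 650.0])                      # a couple of outliers

# Tukey's fence on the transformed scale
z = np.log(values)
q1, q3 = np.percentile(z, [25, 75])
iqr = q3 - q1
keep = (z > q1 - 1.5 * iqr) & (z < q3 + 1.5 * iqr)
z_clean = z[keep]

# Parametric 2.5th / 97.5th centiles, back-transformed to the original scale
mu, sd = z_clean.mean(), z_clean.std(ddof=1)
lower, upper = np.exp(mu - 1.96 * sd), np.exp(mu + 1.96 * sd)
print(f"kept {keep.sum()} of {values.size} values")
print(f"reference interval: {lower:.1f} - {upper:.1f}")
```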
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
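As a baseline, the sketch below computes the CRLB for a monoexponential decay under the conventional Gaussian-noise assumption via the Fisher matrix F = J^T J / σ^2; the paper's point is that multi-coil magnitude data require the noncentral-χ Fisher matrix instead, which is not reproduced here. The b-values, noise level and parameters are illustrative.

```python
# Numerical Cramer-Rao lower bound for a monoexponential decay S(b) = S0*exp(-b*D)
# under the conventional Gaussian-noise assumption (the simple special case; the
# noncentral-chi Fisher matrix needed for multi-coil data is not reproduced here).
# b-values, noise level and parameters are illustrative.
import numpy as np

S0, D, sigma = 1.0, 1.0e-3, 0.02                    # signal scale, decay rate, noise SD
b = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])   # acquisition b-values

S = S0 * np.exp(-b * D)
J = np.column_stack([S / S0, -b * S])               # dS/dS0 and dS/dD
F = J.T @ J / sigma**2                              # Gaussian-noise Fisher matrix
crlb = np.sqrt(np.diag(np.linalg.inv(F)))           # minimum standard deviations

print(f"min SD of S0 estimate: {crlb[0]:.4f}")
print(f"min SD of D  estimate: {crlb[1]:.2e} (true D = {D:.1e})")
```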
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
Event-by-event fluctuations in the elliptic-flow coefficient v_2 are studied in PbPb collisions at √s_NN = 5.02 TeV using the CMS detector at the CERN LHC. Elliptic-flow probability distributions p(v_2) for charged particles with transverse momentum 0.3 < p_T < 3.0 GeV and pseudorapidity |η| < 1.0 are determined for different collision centrality classes. The moments of the p(v_2) distributions are used to calculate the v_2 coefficients based on cumulant orders 2, 4, 6, and 8. A rank ordering of the higher-order cumulant results and nonzero standardized skewness values obtained for the p(v_2) distributions indicate non-Gaussian initial-state fluctuation behavior. Bessel-Gaussian and elliptic power fits to the flow distributions are studied to characterize the initial-state spatial anisotropy.
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2 / ((1/n) ∑_{j=1}^n x_j^2), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
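A quick Monte Carlo check of the quoted large-n limit: pooling w = x_i^2 / ((1/n) ∑_j x_j^2) over an ensemble of GOE (β = 1) matrices and comparing the histogram with the normalized Marčenko-Pastur density (1/2π)√((4 − w)/w) on [0, 4]. Matrix and ensemble sizes are arbitrary, and the first bin is biased by the integrable divergence at w → 0.

```python
# Monte Carlo check of the quoted large-n limit: pool w = x_i^2 / mean_j(x_j^2)
# over GOE (beta = 1) matrices and compare with the normalized Marcenko-Pastur
# density (1/(2*pi)) * sqrt((4 - w)/w) on [0, 4]. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n, n_matrices = 100, 200
w_all = []
for _ in range(n_matrices):
    M = rng.normal(size=(n, n))
    A = (M + M.T) / np.sqrt(2.0)            # GOE-type real symmetric matrix
    x = np.linalg.eigvalsh(A)               # all eigenvalues pooled over the ensemble
    w_all.append(x**2 / np.mean(x**2))      # scale-invariant Schmidt-like variable
w_all = np.concatenate(w_all)

hist, edges = np.histogram(w_all, bins=40, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mp = np.sqrt((4.0 - centers) / centers) / (2.0 * np.pi)   # Marcenko-Pastur form
# skip the first bin, where the integrable 1/sqrt(w) divergence biases the histogram
print("max |histogram - MP| (w > 0.1):", np.abs(hist[1:] - mp[1:]).max().round(3))
```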
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
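As a concrete illustration of the mixed model (a minimal sketch with assumed parameter names, not the authors' estimator), the conditional variance of Poisson-Gaussian data grows linearly with the signal level, Var(y | x) ≈ gain · x + σ², so the gain and the Gaussian standard deviation can be read off a variance-versus-mean regression:

```python
import numpy as np

rng = np.random.default_rng(2)
gain, sigma = 0.8, 4.0                        # assumed sensor gain and read-noise std
x = rng.uniform(5, 200, size=100_000)         # noiseless "true" intensities
y = gain * rng.poisson(x / gain) + sigma * rng.standard_normal(x.size)

# Bin by true intensity and regress the conditional variance on the conditional mean.
bins = np.linspace(5, 200, 40)
idx = np.digitize(x, bins)
means = np.array([y[idx == k].mean() for k in range(1, len(bins))])
variances = np.array([y[idx == k].var() for k in range(1, len(bins))])
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain ≈ {slope:.3f}, estimated sigma ≈ {np.sqrt(max(intercept, 0)):.3f}")
```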
Poisson–Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-01-01
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson–Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson–Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods. PMID:29596335
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) An ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
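The Box-Cox step mentioned above can be sketched as follows (a minimal illustration on synthetic band data, not ERIM's code; the log-normal stand-in data and variable names are assumptions): estimate the power-law parameter by maximum likelihood and check how much the skewness is reduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
band = rng.lognormal(mean=1.0, sigma=0.6, size=5000)   # assumed non-Gaussian band data

transformed, lmbda = stats.boxcox(band)                # maximum-likelihood Box-Cox lambda
print(f"estimated lambda = {lmbda:.3f}")
print(f"skewness before = {stats.skew(band):+.3f}, after = {stats.skew(transformed):+.3f}")
```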
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes (EPP) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes (GEP) rotation and of EPP rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. Comparison with existing methods: Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP, EPP Promax, GEP, and GEP Promax rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
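The core Procrustes step can be sketched as follows (a minimal illustration in the spirit of the GEP rotation, not the author's implementation; the loading shapes, peak positions and Gaussian width are assumptions): build a target matrix of Gaussian curves centred on each component's peak loading and rotate the loading matrix towards it with an orthogonal Procrustes rotation.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.stats import norm

rng = np.random.default_rng(4)
n_time, n_comp = 100, 3
t = np.arange(n_time)

# Synthetic "Varimax-prerotated" loadings: Gaussian-like components plus noise.
loadings = 0.1 * rng.standard_normal((n_time, n_comp))
for k, peak in enumerate((30, 50, 70)):                 # assumed component peaks
    loadings[:, k] += norm.pdf(t, loc=peak, scale=8)

# Target: a Gaussian around each column's peak loading.
target = np.column_stack([
    norm.pdf(t, loc=t[np.argmax(np.abs(loadings[:, k]))], scale=8) for k in range(n_comp)
])

R, _ = orthogonal_procrustes(loadings, target)          # orthogonal rotation matrix
rotated = loadings @ R
print("R is orthogonal:", np.allclose(R.T @ R, np.eye(n_comp), atol=1e-8))
```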
An Alternative to the Breeder’s and Lande’s Equations
Houchmandzadeh, Bahram
2013-01-01
The breeder’s equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotypes of the parental, selected-parent, and progeny populations by $E(Z_0)$, $E(Z_W)$, and $E(Z_1)$, this equation relates the response to selection $R = E(Z_1) - E(Z_0)$ to the selection differential $S = E(Z_W) - E(Z_0)$ through a simple proportionality relation $R = h^2 S$, where the heritability coefficient $h^2$ is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean $\mu$, an alternative, exact linear equation of the form $R' = j^2 S'$ can be derived, regardless of the parental genotype distribution. Here $R' = E(Z_1) - \mu$ and $S' = E(Z_W) - \mu$ stand for the mean phenotypic lag with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient $j^2$ is a simple function of the selection function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring population and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande’s equation $\Delta\bar{z} = GP^{-1}S$. The linearity coefficients of the alternative equation are not changed by Gaussian selection. PMID:24212080
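The claim that the ratio $R'/S'$ does not depend on the parental genotype distribution can be checked with a toy simulation (a sketch under simplifying assumptions that are not the paper's model: clonal inheritance, additive environmental noise, and viability selection with a Gaussian fitness function; V_E, V_S, mu and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
V_E, V_S, mu, n = 1.0, 2.0, 1.5, 2_000_000

def rprime_over_sprime(genotypes):
    z = genotypes + rng.normal(0.0, np.sqrt(V_E), genotypes.size)       # parental phenotypes
    w = np.exp(-(z - mu) ** 2 / (2 * V_S))                              # Gaussian fitness
    keep = rng.random(genotypes.size) < w / w.max()                     # viability selection
    z1 = genotypes[keep] + rng.normal(0.0, np.sqrt(V_E), keep.sum())    # clonal offspring
    s_prime = z[keep].mean() - mu
    r_prime = z1.mean() - mu
    return r_prime / s_prime

# The two ratios should agree even though one genotype distribution is strongly skewed.
print("Gaussian genotypes:", round(rprime_over_sprime(rng.normal(0.0, 1.0, n)), 3))
print("skewed genotypes  :", round(rprime_over_sprime(rng.gamma(2.0, 0.7, n) - 1.4), 3))
```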
A Method for Approximating the Bivariate Normal Correlation Coefficient.
ERIC Educational Resources Information Center
Kirk, David B.
Improvements of the Gaussian quadrature in conjunction with the Newton-Raphson iteration technique (TM 000 789) are discussed as effective methods of calculating the bivariate normal correlation coefficient. (CK)
Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori
2018-01-01
We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0-2500 s/mm2 with one number of excitations [NEX]) and five b-values (0-2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions.
Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori
2018-01-01
We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0–2500 s/mm2 with one number of excitations [NEX]) and five b-values (0–2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions. PMID:29494639
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N; White, Devin A; Urban, Marie L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
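For flavour, a much simpler moment-matching sketch of the general idea (turning non-statistical answers into Beta parameters) is given below; it is not the ORNL bivariate-Gaussian encoding algorithm, and the question wording, the ±2 SD reading of the plausible range, and all names are assumptions.

```python
def beta_from_elicitation(typical, plausible_low, plausible_high):
    """Match a Beta(alpha, beta) distribution to an elicited typical value (read as the mean)
    and a plausible range (read as roughly +/- 2 standard deviations)."""
    mean = typical
    var = ((plausible_high - plausible_low) / 4.0) ** 2
    common = mean * (1.0 - mean) / var - 1.0        # requires var < mean * (1 - mean)
    return mean * common, (1.0 - mean) * common

alpha, beta = beta_from_elicitation(typical=0.30, plausible_low=0.10, plausible_high=0.60)
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
```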
Discrimination of particulate matter emission sources using stochastic methods
NASA Astrophysics Data System (ADS)
Szczurek, Andrzej; Maciejewska, Monika; Wyłomańska, Agnieszka; Sikora, Grzegorz; Balcerek, Michał; Teuerle, Marek
2016-12-01
Particulate matter (PM) is one of the criteria pollutants which has been determined as harmful to public health and the environment. For this reason the ability to recognize its emission sources is very important. There are a number of measurement methods which allow PM to be characterized in terms of concentration, particle size distribution, and chemical composition. All this information is useful to establish a link between the dust found in the air, its emission sources and its influence on humans as well as the environment. However, the methods are typically quite sophisticated and not applicable outside laboratories. In this work, we considered a PM emission source discrimination method which is based on continuous measurements of PM concentration with a relatively cheap instrument and stochastic analysis of the obtained data. The stochastic analysis is focused on the temporal variation of PM concentration and it involves two steps: (1) recognition of the category of distribution for the data, i.e. stable or the domain of attraction of a stable distribution, and (2) finding the best matching distribution out of the Gaussian, stable and normal-inverse Gaussian (NIG) distributions. We examined six PM emission sources. They were associated with material processing in an industrial environment, namely machining and welding aluminum, forged carbon steel and plastic with various tools. As shown by the obtained results, PM emission sources may be distinguished based on the statistical distribution of PM concentration variations. The major factor responsible for the differences detectable with our method was the type of material processing and the tool applied. When different materials were processed with the same tool, the distinction of emission sources was difficult. For successful discrimination it was crucial to consider size-segregated mass fraction concentrations. In our opinion the presented approach is very promising. It deserves further study and development.
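Step (2) of the analysis can be sketched as follows (a minimal illustration with synthetic stand-in data, not the authors' pipeline; the candidate set is restricted to the Gaussian and NIG families here and all names are assumptions): fit each candidate distribution to the PM concentration increments and compare log-likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Stand-in for measured PM concentration increments from one emission source.
increments = stats.norminvgauss(a=2.0, b=0.5).rvs(size=5000, random_state=rng)

candidates = {
    "Gaussian": stats.norm,
    "NIG": stats.norminvgauss,
}
for name, dist in candidates.items():
    params = dist.fit(increments)
    loglik = np.sum(dist.logpdf(increments, *params))
    print(f"{name:8s} log-likelihood = {loglik:10.1f}")
```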
The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise
NASA Astrophysics Data System (ADS)
Guo, Qin; Sun, Zhongkui; Xu, Wei
2016-05-01
The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumors and that an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of the tumor cell population. The mean first passage time is calculated to quantify the effects of the noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in the degree of cross-correlation will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.
Hermite-Gaussian beams with self-forming spiral phase distribution
NASA Astrophysics Data System (ADS)
Zinchik, Alexander A.; Muzychenko, Yana B.
2014-05-01
Spiral laser beams are a family of laser beams that preserve their structure, up to scaling and rotation, as they propagate. The properties of spiral beams are of practical interest for laser technology, medicine and biotechnology. Researchers use spiral beams for the movement and manipulation of microparticles. Spiral beams have a complicated phase distribution in their cross section. This paper describes the results of analytical and computer simulation of Hermite-Gaussian beams with a self-forming spiral phase distribution. The simulation used a laser beam consisting of the sum of two HG modes, TEMnm and TEMn1m1. The coefficients n1, n, m1, m were varied. An additional phase depending on the coefficients n, m, m1, n1 was imposed on the resulting beam. As a result, a Hermite-Gaussian beam was formed whose phase distribution takes the form of a spiral as the beam propagates. VirtualLab 5.0 (LightTrans GmbH) was used for the modeling.
Effects of beam irregularity on uniform scanning
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Jang, Sea duk; Yang, Tae-Keun
2016-09-01
An active scanning beam delivery method has many advantages in particle beam applications. For the beam to be successfully delivered to the target volume using the active scanning technique, the dose uniformity must be considered and should be at least 2.5% in the case of therapy applications. During beam irradiation, many beam parameters affect the 2-dimensional uniformity at the target layer. A basic assumption in the beam irradiation planning stage is that the shape of the beam is symmetric and follows a Gaussian distribution. In this study, a pure Gaussian-shaped beam distribution was distorted by adding a parasitic Gaussian distribution. An appropriate uniform scanning condition was deduced from a quantitative analysis based on the gamma value of the distorted beam and the 2-dimensional uniformities.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
Hayashi, Norio; Miyati, Tosiaki; Takanaga, Masako; Ohno, Naoki; Hamaguchi, Takashi; Kozaka, Kazuto; Sanada, Shigeru; Yamamoto, Tomoyuki; Matsui, Osamu
2011-01-01
Sensitivity falls significantly in the direction perpendicular to the arrangement of the phased-array coil used in parallel magnetic resonance imaging (MRI). Moreover, in 3.0 tesla (3T) abdominal MRI, image quality is reduced by changes in the relaxation times, a stronger magnetic susceptibility effect, etc. In a 3T MRI system, which has a high resonant frequency, the signal from the deep (central) part of the trunk is reduced. SCIC, an existing sensitivity correction process, corrects inadequately; for example, edges become emphasized when the central part is corrected. Therefore, we considered a correction of the uneven sensitivity of 3T abdominal MR images based on a Gaussian distribution. The correction processing consisted of the following steps. 1) The center of gravity of the human body region in the abdominal MR image was calculated. 2) A correction coefficient map was created from the center of gravity using the Gaussian distribution. 3) The sensitivity-corrected image was created from the correction coefficient map and the original image. When a phantom image was processed with the Gaussian correction, the uniformity calculated using the NEMA method improved significantly compared with the original image. In a visual evaluation by radiologists, the uniformity was also improved significantly by the Gaussian correction processing. Because it homogeneously improves abdominal images acquired with 3T MRI, the Gaussian correction processing is considered to be a very useful technique.
MSEE: Stochastic Cognitive Linguistic Behavior Models for Semantic Sensing
2013-09-01
[Only table-of-contents fragments of this report were recovered: a Gaussian Process Dynamic Model with Social Network Analysis (GPDM-SNA) for small human group action recognition, and an extended GPDM-SNA; Section 3.2, "Small Human Group Activity Modeling Based on Gaussian Process Dynamic Model and Social Network Analysis (SN-GPDM)"; Section 3.2.3, "Gaussian Process Dynamical Model". Approved for public release; distribution unlimited.]
Analysis of low altitude atmospheric turbulence data measured in flight
NASA Technical Reports Server (NTRS)
Ganzer, V. M.; Joppa, R. G.; Vanderwees, G.
1977-01-01
All three components of turbulence were measured simultaneously in flight at each wing tip of a Beech D-18 aircraft. The flights were conducted at low altitude, 30.5 - 61.0 meters (100-200 ft.), over water in the presence of wind-driven turbulence. Statistical properties of the flight-measured turbulence were compared with Gaussian and non-Gaussian turbulence models. Spatial characteristics of the turbulence were analyzed using the data from flight perpendicular and parallel to the wind. The probability density distributions of the vertical gusts show distinctly non-Gaussian characteristics. The distributions of the longitudinal and lateral gusts are generally Gaussian. In the inertial subrange the power spectra compare better at some points with the Dryden spectrum, while at other points the von Karman spectrum is a better approximation. In the low frequency range the data show peaks or dips in the power spectral density. The cross spectra between vertical gusts in the direction of the mean wind were compared with a matched non-Gaussian model. The real component of the cross spectrum is in general close to the non-Gaussian model. The imaginary component, however, indicated a larger phase shift between these two gust components than was found in previous research.
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the possible negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and modified Gaussian probability density functions at different degrees of polarization.
On the identification of Dragon Kings among extreme-valued outliers
NASA Astrophysics Data System (ADS)
Riva, M.; Neuman, S. P.; Guadagnini, A.
2013-07-01
Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first one is a synthetic, isotropic, exhaustive sample following a normal distribution, the second example is also a synthetic but following a non-Gaussian random field, and a third empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for its construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless if the standard error is estimated from a parametric equation or from bootstrap. © 2010 International Association for Mathematical Geosciences.
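A minimal sketch of generating spatially correlated resamples by factorizing a fitted covariance model (an illustration in the spirit of the LU-decomposition approach, not the authors' code; the exponential model, its sill and range, and the 1-D sampling grid are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0.0, 10.0, 60)                         # 1-D sample locations (assumed)
sill, corr_range = 1.0, 3.0                            # assumed semivariogram sill and range
h = np.abs(x[:, None] - x[None, :])
cov = sill * np.exp(-h / corr_range)                   # exponential covariance model

L = np.linalg.cholesky(cov + 1e-10 * np.eye(x.size))   # lower-triangular factor
resamples = L @ rng.standard_normal((x.size, 500))     # 500 correlated Gaussian resamples

# Each column is one resample; empirical semivariograms of the columns give the bootstrap
# distribution used to build percentile confidence intervals.
print("resample matrix shape:", resamples.shape)
```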
Fresnel zone plate with apodized aperture for hard X-ray Gaussian beam optics.
Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio; Itabashi, Seiichi; Oda, Masatoshi
2017-05-01
Fresnel zone plates with apodized apertures [apodization FZPs (A-FZPs)] have been developed to realise Gaussian beam optics in the hard X-ray region. The designed zone depth of A-FZPs gradually decreases from the center to peripheral regions. Such a zone structure forms a Gaussian-like smooth-shouldered aperture function which optically behaves as an apodization filter and produces a Gaussian-like focusing spot profile. Optical properties of two types of A-FZP, i.e. a circular type and a one-dimensional type, have been evaluated by using a microbeam knife-edge scan test, and have been carefully compared with those of normal FZP optics. Advantages of using A-FZPs are introduced.
Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.
Tinkelman, Igor; Melamed, Timor
2005-06-01
The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.
NASA Astrophysics Data System (ADS)
Liu, Pusheng; Lü, Baida
2007-04-01
By using the vectorial Debye diffraction theory, phase singularities of high numerical aperture (NA) dark-hollow Gaussian beams in the focal region are studied. The dependence of phase singularities on the truncation parameter δ and semi-aperture angle α (or equivalently, NA) is illustrated numerically. A comparison of phase singularities of high NA dark-hollow Gaussian beams with those of scalar paraxial Gaussian beams and high NA Gaussian beams is made. For high NA dark-hollow Gaussian beams the beam order n additionally affects the spatial distribution of phase singularities, and there exist phase singularities outside the focal plane, which may be created or annihilated by variation of the semi-aperture angle in a certain region.
Quantification of Gaussian quantum steering.
Kogias, Ioannis; Lee, Antony R; Ragy, Sammy; Adesso, Gerardo
2015-02-13
Einstein-Podolsky-Rosen steering incarnates a useful nonclassical correlation which sits between entanglement and Bell nonlocality. While a number of qualitative steering criteria exist, very little has been achieved as far as quantifying steerability is concerned. We introduce a computable measure of steering for arbitrary bipartite Gaussian states of continuous variable systems. For two-mode Gaussian states, the measure reduces to a form of coherent information, which is proven never to exceed entanglement, and to reduce to it on pure states. We provide an operational connection between our measure and the key rate in one-sided device-independent quantum key distribution. We further prove that Peres' conjecture holds in its stronger form within the fully Gaussian regime: namely, steering bound entangled Gaussian states by Gaussian measurements is impossible.
Multipartite Gaussian steering: Monogamy constraints and quantum cryptography applications
NASA Astrophysics Data System (ADS)
Xiang, Yu; Kogias, Ioannis; Adesso, Gerardo; He, Qiongyi
2017-01-01
We derive laws for the distribution of quantum steering among different parties in multipartite Gaussian states under Gaussian measurements. We prove that a monogamy relation akin to the generalized Coffman-Kundu-Wootters inequality holds quantitatively for a recently introduced measure of Gaussian steering. We then define the residual Gaussian steering, stemming from the monogamy inequality, as an indicator of collective steering-type correlations. For pure three-mode Gaussian states, the residual acts as a quantifier of genuine multipartite steering, and is interpreted operationally in terms of the guaranteed key rate in the task of secure quantum secret sharing. Optimal resource states for the latter protocol are identified, and their possible experimental implementation discussed. Our results pin down the role of multipartite steering for quantum communication.
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
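The image-formation idea can be sketched as follows (a minimal illustration, not the authors' software; the kernel width, grid, and photon positions are assumptions): instead of binning photons into pixels, place a Gaussian probability kernel at each measured photon position and sum the kernels.

```python
import numpy as np

rng = np.random.default_rng(9)
photons = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))   # photon positions (assumed units)

grid = np.linspace(0.0, 10.0, 256)
xx, yy = np.meshgrid(grid, grid)
sigma = 0.15                                                     # assumed localization uncertainty

image = np.zeros_like(xx)
for px, py in photons:
    image += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))

print("peak position (pixel indices):", np.unravel_index(np.argmax(image), image.shape))
```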
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take account of the large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with a spot scanning technique at the Nagoya Proton Therapy Center.
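The lateral part of such a kernel can be sketched as a sum of zero-centred Gaussians fitted to a profile (a minimal illustration on a synthetic profile, not the centre's clinical beam data; all amplitudes, widths, and the lateral grid are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(r, *p):
    """Sum of len(p)//2 zero-centred Gaussians; p = (w1, s1, w2, s2, ...)."""
    out = np.zeros_like(r)
    for w, s in zip(p[0::2], p[1::2]):
        out += w * np.exp(-r ** 2 / (2 * s ** 2))
    return out

r = np.linspace(-30.0, 30.0, 301)                              # lateral position (mm, assumed)
profile = gaussian_sum(r, 1.0, 4.0, 0.05, 10.0, 0.01, 20.0)    # synthetic profile with a wide halo
profile += 0.002 * np.random.default_rng(10).standard_normal(r.size)

p2, _ = curve_fit(gaussian_sum, r, profile, p0=[1.0, 3.0, 0.05, 8.0])
p3, _ = curve_fit(gaussian_sum, r, profile, p0=[1.0, 3.0, 0.05, 8.0, 0.01, 18.0])
for label, p in (("double", p2), ("triple", p3)):
    rms = np.sqrt(np.mean((profile - gaussian_sum(r, *p)) ** 2))
    print(f"{label} Gaussian fit, RMS residual = {rms:.4f}")
```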
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
Scheven, U M
2013-12-01
This paper describes a new variant of established stimulated echo pulse sequences, and an analytical method for determining diffusion or dispersion coefficients for Gaussian or non-Gaussian displacement distributions. The unipolar displacement encoding PFGSTE sequence uses trapezoidal gradient pulses of equal amplitude g and equal ramp rates throughout while sampling positive and negative halves of q-space. Usefully, the equal gradient amplitudes and gradient ramp rates help to reduce the impact of experimental artefacts caused by residual amplifier transients, eddy currents, or ferromagnetic hysteresis in components of the NMR magnet. The pulse sequence was validated with measurements of diffusion in water and of dispersion in flow through a packing of spheres. The analytical method introduced here permits the robust determination of the variance of non-Gaussian, dispersive displacement distributions. The noise sensitivity of the analytical method is shown to be negligible, using a demonstration experiment with a non-Gaussian longitudinal displacement distribution, measured on flow through a packing of mono-sized spheres. Copyright © 2013 Elsevier Inc. All rights reserved.
Following a trend with an exponential moving average: Analytical results for a Gaussian model
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Serror, Jeremy
2014-01-01
We investigate how price variations of a stock are transformed into profits and losses (P&Ls) of a trend following strategy. In the frame of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with often small losses and less frequent but significant profits) is reminiscent to trend following strategies and less dependent on peculiarities of price variations. At short times, trend following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also compute explicitly annualized risk adjusted P&L and strategy turnover to account for transaction costs. We deduce the trend following optimal timescale and its dependence on both auto-correlation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
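A minimal simulation sketch of the mechanism (an illustration only, not the authors' closed-form results; returns here are i.i.d. Gaussian with no real trend, the position is taken proportional to the EMA trend indicator, and the decay and window length are assumptions) shows that windowed P&L sums are already asymmetric in this Gaussian setting:

```python
import numpy as np

rng = np.random.default_rng(11)
n_days, alpha, window = 200_000, 0.05, 50        # sample length, EMA decay, P&L window (assumed)
returns = 0.01 * rng.standard_normal(n_days)     # Gaussian daily price variations

ema = np.zeros(n_days)
for t in range(1, n_days):
    ema[t] = (1 - alpha) * ema[t - 1] + alpha * returns[t - 1]   # uses information up to t-1

pnl = ema * returns                              # position proportional to the trend indicator
cum = pnl[: (n_days // window) * window].reshape(-1, window).sum(axis=1)
skew = ((cum - cum.mean()) ** 3).mean() / cum.std() ** 3
print(f"mean windowed P&L = {cum.mean():+.2e}, skewness = {skew:+.3f}")
```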
Gu, Shoou-Lian Hwang; Gau, Susan Shur-Fen; Tzang, Shyh-Weir; Hsu, Wen-Yau
2013-11-01
We investigated the three parameters (mu, sigma, tau) of the ex-Gaussian distribution of RTs derived from the Conners' continuous performance test (CCPT) and examined the moderating effects of the energetic factors (the inter-stimulus intervals (ISIs) and Blocks) on these three parameters, especially tau, an index describing the positive skew of the RT distribution. We assessed 195 adolescents with DSM-IV ADHD, and 90 typically developing (TD) adolescents, aged 10-16. Participants and their parents received psychiatric interviews to confirm the diagnosis of ADHD and other psychiatric disorders. Participants also received intelligence (WISC-III) and CCPT assessments. We found that participants with ADHD had a smaller mu and larger tau. As the ISI/Block increased, the magnitude of the group difference in tau increased. Among the three ex-Gaussian parameters, tau was positively associated with omission errors, and mu was negatively associated with commission errors. The moderating effects of ISIs and Blocks on the tau parameter suggested that the ex-Gaussian parameters could offer more information about the attention state in a vigilance task, especially in ADHD. Copyright © 2013 Elsevier Ltd. All rights reserved.
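Ex-Gaussian parameters of this kind can be recovered with off-the-shelf tools; a minimal sketch (assumed "true" values in milliseconds, not the study's data) uses SciPy's exponentially modified Gaussian, whose shape parameter is K = tau / sigma:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
mu, sigma, tau = 450.0, 60.0, 150.0                         # assumed "true" values (ms)
rt = rng.normal(mu, sigma, 3000) + rng.exponential(tau, 3000)

K, loc, scale = stats.exponnorm.fit(rt)
print(f"fitted mu ≈ {loc:.1f}, sigma ≈ {scale:.1f}, tau ≈ {K * scale:.1f}")
```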
Modeling of skin cancer dermatoscopy images
NASA Astrophysics Data System (ADS)
Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.
2018-04-01
A cancer identified early is more likely to respond effectively to treatment and is also less expensive to treat. Dermatoscopy is one of the general diagnostic techniques for early skin cancer detection; it allows in vivo evaluation of the colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments in order to compare a sample's features with the data from the instrument. An algorithm for modeling images of skin cancer is proposed in this paper. The steps of the algorithm include setting the shape, generating the texture, adding the texture, and setting a normal-skin background. A Gaussian represents the shape, the texture generation based on a fractal noise algorithm is responsible for the spatial chromophore distributions, and the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as the background. Varying the Asymmetry, Borders, Colors, and Diameter settings is shown to fully match the ABCD clinical recognition algorithm. The asymmetry is specified by setting different standard deviation values of the Gaussian in different parts of the image. The noise amplitude is increased to set the irregular-borders score. The standard deviation is changed to determine the size of the lesion. Colors are set by changing the colormap. An algorithm for simulating different structural elements is required to match other recognition algorithms.
Wasito, Ito; Hashim, Siti Zaiton M; Sukmaningrum, Sri
2007-01-01
Gene expression profiling plays an important role in the identification of biological and clinical properties of human solid tumors such as colorectal carcinoma. Profiling is required to reveal underlying molecular features for diagnostic and therapeutic purposes. A non-parametric density-estimation-based approach called iterative local Gaussian clustering (ILGC), was used to identify clusters of expressed genes. We used experimental data from a previous study by Muro and others consisting of 1,536 genes in 100 colorectal cancer and 11 normal tissues. In this dataset, the ILGC finds three clusters, two large and one small gene clusters, similar to their results which used Gaussian mixture clustering. The correlation of each cluster of genes and clinical properties of malignancy of human colorectal cancer was analysed for the existence of tumor or normal, the existence of distant metastasis and the existence of lymph node metastasis. PMID:18305825
Wasito, Ito; Hashim, Siti Zaiton M; Sukmaningrum, Sri
2007-12-30
Gene expression profiling plays an important role in the identification of biological and clinical properties of human solid tumors such as colorectal carcinoma. Profiling is required to reveal underlying molecular features for diagnostic and therapeutic purposes. A non-parametric density-estimation-based approach called iterative local Gaussian clustering (ILGC), was used to identify clusters of expressed genes. We used experimental data from a previous study by Muro and others consisting of 1,536 genes in 100 colorectal cancer and 11 normal tissues. In this dataset, the ILGC finds three clusters, two large and one small gene clusters, similar to their results which used Gaussian mixture clustering. The correlation of each cluster of genes and clinical properties of malignancy of human colorectal cancer was analysed for the existence of tumor or normal, the existence of distant metastasis and the existence of lymph node metastasis.
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
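As a worked special case (consistent with, but much narrower than, the general formalism described above; assuming a straight-line model and independent Gaussian errors with known $\sigma_{x,i}$ and $\sigma_{y,i}$), marginalizing over the latent true abscissae gives the familiar effective-variance likelihood,

$$\sigma_{\mathrm{eff},i}^2 = \sigma_{y,i}^2 + b^2 \sigma_{x,i}^2, \qquad \mu_i = a + b x_i,$$

and the Fisher matrix for $\theta = (a, b)$ then follows from the standard Gaussian expression

$$F_{\alpha\beta} = \sum_i \left[ \frac{1}{\sigma_{\mathrm{eff},i}^2}\,\frac{\partial \mu_i}{\partial \theta_\alpha}\,\frac{\partial \mu_i}{\partial \theta_\beta} + \frac{1}{2\,\sigma_{\mathrm{eff},i}^4}\,\frac{\partial \sigma_{\mathrm{eff},i}^2}{\partial \theta_\alpha}\,\frac{\partial \sigma_{\mathrm{eff},i}^2}{\partial \theta_\beta} \right],$$

where the second term appears because the effective variance depends on the slope $b$.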
Acoustical tweezers using single spherically focused piston, X-cut, and Gaussian beams.
Mitri, Farid G
2015-10-01
Partial-wave series expansions (PWSEs) satisfying the Helmholtz equation in spherical coordinates are derived for circular spherically focused piston (i.e., apodized by a uniform velocity amplitude normal to its surface), X-cut (i.e., apodized by a velocity amplitude parallel to the axis of wave propagation), and Gaussian (i.e., apodized by a Gaussian distribution of the velocity amplitude) beams. The Rayleigh-Sommerfeld diffraction integral and the addition theorems for the Legendre and spherical wave functions are used to obtain the PWSEs assuming weakly focused beams (with focusing angle α ⩽ 20°) in the Fresnel-Kirchhoff (parabolic) approximation. In contrast with previous analytical models, the derived expressions allow computing the scattering and acoustic radiation force from a sphere of radius a without restriction to either the Rayleigh (a ≪ λ, where λ is the wavelength of the incident radiation) or the ray acoustics (a ≫λ) regimes. The analytical formulations are valid for wavelengths largely exceeding the radius of the focused acoustic radiator, when the viscosity of the surrounding fluid can be neglected, and when the sphere is translated along the axis of wave propagation. Computational results illustrate the analysis with particular emphasis on the sphere's elastic properties and the axial distance to the center of the concave surface, with close connection of the emergence of negative trapping forces. Potential applications are in single-beam acoustical tweezers, acoustic levitation, and particle manipulation.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. For the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
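In ensemble form the constraint handling described above amounts to moving members that violate the bound onto the bound itself, so that a finite probability mass sits exactly at the constraint; a minimal one-variable sketch (illustrative only, with assumed prior mean and spread, not the full filter update) is:

```python
import numpy as np

rng = np.random.default_rng(13)
prior = rng.normal(loc=0.05, scale=0.1, size=10_000)   # unconstrained ensemble (assumed)

constrained = np.where(prior < 0.0, 0.0, prior)        # truncated mass collects as a delta at 0
print(f"P(value == 0) = {np.mean(constrained == 0.0):.3f}")
print(f"prior mean = {prior.mean():.4f}, constrained mean = {constrained.mean():.4f}")
```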
Color-magnitude distribution of face-on nearby galaxies in Sloan digital sky survey DR7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuo-Wen; Feng, Long-Long; Gu, Qiusheng
2014-05-20
We have analyzed the distributions in the color-magnitude diagram (CMD) of a large sample of face-on galaxies to minimize the effect of dust extinction on galaxy color. About 300,000 galaxies with log (a/b) < 0.2 and redshift z < 0.2 are selected from the Sloan Digital Sky Survey DR7 catalog. Two methods are employed to investigate the distributions of galaxies in the CMD, including one-dimensional (1D) Gaussian fitting to the distributions in individual magnitude bins and two-dimensional (2D) Gaussian mixture model (GMM) fitting to the galaxies as a whole. We find that in the 1D fitting, two Gaussians are not enough to fit the galaxies, with an excess present between the blue cloud and the red sequence. The fitting to this excess defines the center of the green valley in the local universe to be $(u - r)_{0.1} = -0.121\,M_{r,0.1} - 0.061$. The fraction of blue cloud and red sequence galaxies turns over around $M_{r,0.1} \sim -20.1$ mag, corresponding to a stellar mass of $3 \times 10^{10}\,M_\odot$. For the 2D GMM fitting, a total of four Gaussians are required: one for the blue cloud, one for the red sequence, and the additional two for the green valley. The fact that two Gaussians are needed to describe the distributions of galaxies in the green valley is consistent with some models that argue for two different evolutionary paths from the blue cloud to the red sequence.
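A minimal sketch of a 2D Gaussian mixture fit of this kind (synthetic colours and magnitudes, not the SDSS sample; blob positions, widths, and sizes are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(14)
# Synthetic stand-ins for (M_r, u - r): a "blue cloud" and a "red sequence" blob.
blue = np.column_stack([rng.normal(-19.0, 0.8, 3000), rng.normal(1.6, 0.25, 3000)])
red = np.column_stack([rng.normal(-21.0, 0.7, 3000), rng.normal(2.6, 0.15, 3000)])
cmd = np.vstack([blue, red])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(cmd)
for k, (weight, mean) in enumerate(zip(gmm.weights_, gmm.means_)):
    print(f"component {k}: weight = {weight:.2f}, mean (M_r, u-r) = ({mean[0]:+.2f}, {mean[1]:.2f})")
```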
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with the simplified channel model, which only considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
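One commonly quoted form of the generalized Anscombe transform for Poisson-Gaussian data is sketched below. The gain, noise level and photon rates are illustrative, and the snippet is not the receiver of the paper; it only demonstrates the variance-stabilizing effect that makes a hard-decision, AWGN-style detector reasonable.

```python
import numpy as np

def gat(z, gain=1.0, sigma=0.1, mu=0.0):
    """One common form of the generalized Anscombe transform for
    Poisson-Gaussian data (variance stabilizing). Parameters here are
    illustrative, not taken from the paper."""
    arg = gain * z + 0.375 * gain**2 + sigma**2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

rng = np.random.default_rng(2)
sigma = 0.5
for photon_rate in (5.0, 20.0, 80.0):
    # SPAD-like photon count corrupted by Gaussian thermal noise
    z = rng.poisson(photon_rate, 100000) + rng.normal(0.0, sigma, 100000)
    print(f"rate {photon_rate:5.1f}: var before = {z.var():6.2f}, "
          f"var after GAT = {gat(z, sigma=sigma).var():5.2f}")
```

After the transform the variance is close to one regardless of the photon rate, which is the property exploited when treating the transformed channel as AWGN.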
Some blood chemistry values for the juvenile coho salmon (Oncorhynchus kisutch)
Wedemeyer, Gary; Chatterton, K.
1971-01-01
Overlapping Gaussian distribution curves were resolved into normal ranges for 1800 clinical test values obtained from caudal arterial blood or plasma of more than 1000 juvenile coho salmon (Oncorhynchus kisutch) held under defined conditions of diet and temperature. Estimated normal blood chemistry ranges were bicarbonate, 9.5–12.6 mEq/liter; blood urea nitrogen (BUN), 0.9–3.4 mg/100 ml; chloride, 122–136 mEq/liter; cholesterol, 88–262 mg/100 ml; pCO2, 2.6–6.1 mm Hg (10 °C); glucose, 41–135 mg/100 ml; hematocrit, 32.5–52.5%; hemoglobin, 6.5–9.9 g/100 ml; total protein, 1.4–4.3 g/100 ml; blood pH (10 °C), 7.51–7.83. The calculated range of normal acid–base balance vs. water temperature is also presented.
Spin-Hall effect in the scattering of structured light from plasmonic nanowire
NASA Astrophysics Data System (ADS)
Sharma, Deepak K.; Kumar, Vijay; Vasista, Adarsh B.; Chaubey, Shailendra K.; Kumar, G. V. Pavan
2018-06-01
Spin-orbit interactions are subwavelength phenomena which can potentially lead to numerous device-related applications in nanophotonics. Here, we report the spin-Hall effect in the forward scattering of Hermite-Gaussian and Gaussian beams from a plasmonic nanowire. An asymmetric scattered radiation distribution was observed for circularly polarized beams. The asymmetry in the scattered radiation distribution changes sign when the polarization handedness inverts. We found a significant enhancement of the spin-Hall effect for the Hermite-Gaussian beam compared to the Gaussian beam at constant input power. The difference between the scattered powers perpendicular to the long axis of the plasmonic nanowire was used to quantify the enhancement. In addition, the nodal line of the HG beam acts as a marker for the spin-Hall shift. Numerical calculations corroborate the experimental observations and suggest that the spin-flow component of the Poynting vector associated with circular polarization is responsible for the spin-Hall effect and its enhancement.
NASA Astrophysics Data System (ADS)
Chu, X. X.; Liu, Z. J.; Wu, Y.
2008-07-01
Based on the Huygens-Fresnel integral, the properties of a circular flattened Gaussian beam through a stigmatic optical system in turbulent atmosphere are investigated. Analytical formulas for the average intensity are derived. As elementary examples, the average intensity distributions of a collimated circular flattened Gaussian beam and a focused circular flattened Gaussian beam through a simple optical system are studied. To see the effects of the optical system on the propagation, the average intensity distributions of the beam for direct propagation are also studied. From the analysis, comparison and numerical calculation we can see that there are many differences between the two propagations. These differences are due to the geometrical magnification of the optical system, different diffraction and different turbulence-induced spreading. Namely, an optical system not only affects the diffraction but also affects the turbulence-induced spreading.
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2018-05-01
We examine mode coupling in vortex beams. Mode coupling, also known as crosstalk, takes place due to the turbulent characteristics of the atmospheric communication medium. In this way, the transmitted intrinsic mode of the vortex beam leaks power to other extrinsic modes, thus preventing the correct detection of the transmitted symbol, which is usually encoded in the mode index or the orbital angular momentum state of the vortex beam. Here we investigate the normalized power mode coupling ratios of several types of vortex beams, namely, the Gaussian vortex beam, Bessel Gaussian beam, hypergeometric Gaussian beam and Laguerre Gaussian beam. It is found that smaller mode numbers lead to less mode coupling. The same is partially observed for increasing source sizes. Comparing the vortex beams amongst themselves, it is seen that the hypergeometric Gaussian beam retains the most power in the intrinsic mode during propagation, but only at the lowest mode index of unity. At higher mode indices this advantage passes to the Gaussian vortex beam.
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
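For readers unfamiliar with the statistic itself, a minimal sketch of Cronbach's alpha on simulated Gaussian, compound-symmetric data follows. The sample sizes and variance components are arbitrary, and the exact and approximate distribution theory of the paper is not reproduced here.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    n, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated Gaussian test data with a compound-symmetric structure:
# a shared subject effect plus independent item noise.
rng = np.random.default_rng(3)
n_subjects, k_items = 200, 6
subject_effect = rng.normal(0.0, 1.0, (n_subjects, 1))
scores = subject_effect + rng.normal(0.0, 1.0, (n_subjects, k_items))
print(f"Cronbach's alpha ≈ {cronbach_alpha(scores):.3f}")
```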
Richardson, Magnus J E; Gerstner, Wulfram
2005-04-01
The subthreshold membrane voltage of a neuron in active cortical tissue is a fluctuating quantity with a distribution that reflects the firing statistics of the presynaptic population. It was recently found that conductance-based synaptic drive can lead to distributions with a significant skew. Here it is demonstrated that the underlying shot noise caused by Poissonian spike arrival also skews the membrane distribution, but in the opposite sense. Using a perturbative method, we analyze the effects of shot noise on the distribution of synaptic conductances and calculate the consequent voltage distribution. To first order in the perturbation theory, the voltage distribution is a gaussian modulated by a prefactor that captures the skew. The gaussian component is identical to distributions derived using current-based models with an effective membrane time constant. The well-known effective-time-constant approximation can therefore be identified as the leading-order solution to the full conductance-based model. The higher-order modulatory prefactor containing the skew comprises terms due to both shot noise and conductance fluctuations. The diffusion approximation misses these shot-noise effects implying that analytical approaches such as the Fokker-Planck equation or simulation with filtered white noise cannot be used to improve on the gaussian approximation. It is further demonstrated that quantities used for fitting theory to experiment, such as the voltage mean and variance, are robust against these non-Gaussian effects. The effective-time-constant approximation is therefore relevant to experiment and provides a simple analytic base on which other pertinent biological details may be added.
Numerical solution of transport equation for applications in environmental hydraulics and hydrology
NASA Astrophysics Data System (ADS)
Rashidul Islam, M.; Hanif Chaudhry, M.
1997-04-01
The advective term in the one-dimensional transport equation, when numerically discretized, produces artificial diffusion. To minimize such artificial diffusion, which vanishes only for Courant number equal to unity, transport owing to advection has been modeled separately. The numerical solution of the advection equation for a Gaussian initial distribution is well established; however, large oscillations are observed when applied to an initial distribution with steep gradients, such as a trapezoidal distribution of a constituent or propagation of mass from a continuous input. In this study, the application of seven finite-difference schemes and one polynomial interpolation scheme is investigated to solve the transport equation for both Gaussian and non-Gaussian (trapezoidal) initial distributions. The results obtained from the numerical schemes are compared with the exact solutions. A constant advective velocity is assumed throughout the transport process. For a Gaussian initial condition, all eight schemes give excellent results, except the Lax scheme which is diffusive. In application to the trapezoidal initial distribution, explicit finite-difference schemes prove to be superior to implicit finite-difference schemes because the latter produce large numerical oscillations near the steep gradients. The Warming-Kutler-Lomax (WKL) explicit scheme is found to perform best within this group. The Hermite polynomial interpolation scheme yields the best result for a trapezoidal distribution among all eight schemes investigated. The second-order accurate schemes are sufficiently accurate for most practical problems, but the solution of unusual problems (concentration with steep gradient) requires the application of higher-order (e.g. third- and fourth-order) accurate schemes.
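A minimal example of the numerical diffusion discussed above: the sketch below advects a Gaussian initial distribution with a first-order upwind scheme at Courant number 0.8 and compares it with the exact translated profile. The upwind scheme and grid are chosen for simplicity and are not meant to reproduce the paper's comparison of schemes.

```python
import numpy as np

# First-order upwind solution of u_t + a u_x = 0 on a periodic domain for a
# Gaussian initial distribution.
nx, a, courant = 400, 1.0, 0.8
dx = 1.0 / nx
dt = courant * dx / a
x = np.arange(nx) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)           # Gaussian initial condition
nsteps = 250

for _ in range(nsteps):
    u = u - courant * (u - np.roll(u, 1))      # upwind difference for a > 0

# Exact solution: the initial Gaussian translated by a*t with periodic wrap.
shift = (x - a * nsteps * dt - 0.3 + 0.5) % 1.0 - 0.5
exact = np.exp(-(shift / 0.05) ** 2)
print("peak of numerical solution:", u.max().round(3),
      "   max error vs exact:", np.abs(u - exact).max().round(3))
```

The damped peak relative to the exact solution is the artificial diffusion that vanishes only at Courant number equal to one.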
A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.
Zhao, Lei; Mi, Dong; Sun, Yeqing
2017-05-07
The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's distribution of type A (N. A.). Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the case of high flux, a multitarget model based on the Gaussian distribution is proposed to describe the cell-inactivation effects of low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and the parameter ratio in the present model can be used as an alternative indicator to reflect the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
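For context, the two reference models named above have simple closed forms; the sketch below evaluates the traditional (Poisson-based) multitarget survival curve and the linear-quadratic curve with illustrative parameter values. The Gaussian-based multitarget model proposed in the paper is not reproduced here.

```python
import numpy as np

def multitarget_survival(dose, d0=1.2, n=3.0):
    """Traditional (Poisson-based) multitarget model: all n targets must be
    hit for inactivation; d0 and n are illustrative."""
    return 1.0 - (1.0 - np.exp(-dose / d0)) ** n

def lq_survival(dose, alpha=0.2, beta=0.05):
    """Linear-quadratic model with illustrative alpha/beta values."""
    return np.exp(-(alpha * dose + beta * dose ** 2))

for d in np.linspace(0.0, 10.0, 6):
    print(f"D = {d:4.1f} Gy  multitarget S = {multitarget_survival(d):.3f}  "
          f"LQ S = {lq_survival(d):.3f}")
```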
Biktashev, Vadim N
2014-04-01
We consider a simple mathematical model of gradual Darwinian evolution in continuous time and continuous trait space, due to intraspecific competition for common resource in an asexually reproducing population in constant environment, while far from evolutionary stable equilibrium. The model admits exact analytical solution. In particular, Gaussian distribution of the trait emerges from generic initial conditions.
Continuous-variable quantum key distribution with Gaussian source noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yujie; Peng Xiang; Yang Jian
2011-05-15
Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history-matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable computing the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
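As a toy illustration of Gibbs sampling for truncated Gaussians (the basic ingredient above, not the coding-set GMRF scheme itself), the following sketch samples a correlated bivariate Gaussian restricted to the positive quadrant using the exact univariate truncated-normal conditionals. The correlation value and iteration counts are arbitrary.

```python
import numpy as np
from scipy.stats import truncnorm

# Gibbs sampling from a bivariate standard Gaussian with correlation rho,
# truncated to the positive quadrant -- a stand-in for latent truncated
# Gaussian fields, not the GMRF coding-set scheme of the paper.
rng = np.random.default_rng(4)
rho, n_iter, burn_in = 0.7, 20000, 1000
x = np.array([1.0, 1.0])
samples = []

def sample_conditional(other, lower=0.0):
    loc = rho * other                       # conditional mean
    scale = np.sqrt(1.0 - rho ** 2)         # conditional std
    a = (lower - loc) / scale               # standardized truncation point
    return truncnorm.rvs(a, np.inf, loc=loc, scale=scale, random_state=rng)

for it in range(n_iter):
    x[0] = sample_conditional(x[1])
    x[1] = sample_conditional(x[0])
    if it >= burn_in:
        samples.append(x.copy())

samples = np.array(samples)
print("sample correlation in the truncated field:",
      np.corrcoef(samples.T)[0, 1].round(3))
```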
A Gaussian method to improve work-of-breathing calculations.
Petrini, M F; Evans, J N; Wall, M A; Norman, J R
1995-01-01
The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case will include not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting the variable P, obtained from the difference between the esophageal pressure change and the airway pressure change during each breath, to a Gaussian curve. In spontaneously breathing adults the normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that are more than two standard deviations from the mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while there was a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 J/L (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp to 0.60 J/L, but GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction. (ABSTRACT TRUNCATED AT 250 WORDS)
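A simplified reading of the breath-elimination rule, with invented numbers: fit a mean and standard deviation to the per-breath variable P and discard breaths more than two standard deviations away before averaging WOBp. The published GM fits a Gaussian curve to the distribution of P, so the sketch below is only an approximation of the method.

```python
import numpy as np

def gaussian_breath_elimination(p_values, wob_values, n_sd=2.0):
    """Drop breaths whose pressure-difference variable P lies more than n_sd
    standard deviations from the mean (a simplified stand-in for the GM fit
    described in the abstract), then average the remaining WOBp values."""
    p = np.asarray(p_values, dtype=float)
    mu, sd = p.mean(), p.std(ddof=1)
    keep = np.abs(p - mu) <= n_sd * sd
    return np.asarray(wob_values)[keep].mean(), keep

# Illustrative data: 60 normal breaths plus a few cough-contaminated ones.
rng = np.random.default_rng(5)
p_normal = rng.normal(8.0, 1.0, 60)
p_cough = rng.normal(16.0, 2.0, 5)
wob = np.concatenate([rng.normal(0.49, 0.05, 60), rng.normal(0.9, 0.1, 5)])
wob_corrected, kept = gaussian_breath_elimination(
    np.concatenate([p_normal, p_cough]), wob)
print(f"WOBp before: {wob.mean():.2f} J/L, after GM-style correction: "
      f"{wob_corrected:.2f} J/L, breaths kept: {kept.sum()}/{kept.size}")
```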
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
The effect of spherical aberration on the phase singularities of focused dark-hollow Gaussian beams
NASA Astrophysics Data System (ADS)
Luo, Yamei; Lü, Baida
2009-06-01
The phase singularities of focused dark-hollow Gaussian beams in the presence of spherical aberration are studied. It is shown that the evolution behavior of phase singularities of focused dark-hollow Gaussian beams in the focal region depends not only on the truncation parameter and beam order, but also on the spherical aberration. The spherical aberration leads to an asymmetric spatial distribution of singularities outside the focal plane and to a shift of singularities near the focal plane. The reorganization process of singularities and spatial distribution of singularities are additionally dependent on the sign of the spherical aberration. The results are illustrated by numerical examples.
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J.
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D, observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄. When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model. PMID:28989561
1985-01-01
a number of problems chosen so that the risk of SHM break-down was minimized. A beautiful example is the absorption coefficient of a...the approximation. We consider here the case of one normalized Gaussian, to isolate the effects of LilA from those of the neglect of the *Interaction
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring, and voltage stability analysis, to cite a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
Efficiency-enhanced photon sieve using Gaussian/overlapping distribution of pinholes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabatyan, A.; Mirzaie, S.
2011-04-10
A class of photon sieve is introduced whose structure is based on the overlapping pinholes in the innermost zones. This kind of distribution is produced by, for example, a particular form of Gaussian function. The focusing property of the proposed model was examined theoretically and experimentally. It is shown that under He-Ne laser and white light illumination, the focal spot size of this novel structure has considerably smaller FWHM than a photon sieve with randomly distributed pinholes and a Fresnel zone plate. In addition, secondary maxima have been suppressed effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosales-Zarate, Laura E. C.; Drummond, P. D.
We calculate the quantum Renyi entropy in a phase-space representation for either fermions or bosons. This can also be used to calculate purity and fidelity, or the entanglement between two systems. We show that it is possible to calculate the entropy from sampled phase-space distributions in normally ordered representations, although this is not possible for all quantum states. We give an example of the use of this method in an exactly soluble thermal case. The quantum entropy cannot be calculated at all using sampling methods in classical symmetric (Wigner) or antinormally ordered (Husimi) phase spaces, due to inner-product divergences. The preferred method is to use generalized Gaussian phase-space methods, which utilize a distribution over stochastic Green's functions. We illustrate this approach by calculating the reduced entropy and entanglement of bosonic or fermionic modes coupled to a time-evolving, non-Markovian reservoir.
Non-Gaussian limit fluctuations in active swimmer suspensions
NASA Astrophysics Data System (ADS)
Kurihara, Takashi; Aridome, Msato; Ayade, Heev; Zaid, Irwin; Mizuno, Daisuke
2017-03-01
We investigate the hydrodynamic fluctuations in suspensions of swimming microorganisms (Chlamydomonas) by observing the probe particles dispersed in the media. Short-term fluctuations of probe particles were superdiffusive and displayed heavily tailed non-Gaussian distributions. The analytical theory that explains the observed distribution was derived by summing the power-law-decaying hydrodynamic interactions from spatially distributed field sources (here, swimming microorganisms). The summing procedure, which we refer to as the physical limit operation, is applicable to a variety of physical fluctuations to which the classical central limiting theory does not apply. Extending the analytical formula to compare to experiments in active swimmer suspensions, we show that the non-Gaussian shape of the observed distribution obeys the analytic theory concomitantly with independently determined parameters such as the strength of force generations and the concentration of Chlamydomonas. Time evolution of the distributions collapsed to a single master curve, except for their extreme tails, for which our theory presents a qualitative explanation. Investigations thereof and the complete agreement with theoretical predictions revealed broad applicability of the formula to dispersions of active sources of fluctuations.
Theoretical study of sum-frequency vibrational spectroscopy on limonene surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ren-Hui, E-mail: zrh@iccas.ac.cn; Liu, Hao; Jing, Yuan-Yuan
2014-03-14
By combining molecular dynamics (MD) simulation and quantum chemistry computation, we calculate the surface sum-frequency vibrational spectroscopy (SFVS) of R-limonene molecules at the gas-liquid interface for SSP, PPP, and SPS polarization combinations. The distributions of the Euler angles are obtained using MD simulation; the ψ-distribution is between isotropic and Gaussian. Instead of the MD distributions, different analytical distributions such as the δ-function, Gaussian and isotropic distributions are applied to simulate surface SFVS. We find that different distributions significantly affect the absolute SFVS intensity and also influence the relative SFVS intensity, and the δ-function distribution should be used with caution when the orientation distribution is broad. Furthermore, the reason that the SPS signal is weak in the reflected arrangement is discussed.
Flat-top beam for laser-stimulated pain
NASA Astrophysics Data System (ADS)
McCaughey, Ryan; Nadeau, Valerie; Dickinson, Mark
2005-04-01
One of the main problems during laser stimulation in human pain research is the risk of tissue damage caused by excessive heating of the skin. This risk has been reduced by using a laser beam with a flat-top (or super-Gaussian) intensity profile instead of the conventional Gaussian beam. A finite-difference approximation to the heat conduction equation has been applied to model the temperature distribution in skin as a result of irradiation by flat-top and Gaussian profile CO2 laser beams. The model predicts that a 15 mm diameter, 15 W, 100 ms CO2 laser pulse with an order-6 super-Gaussian profile produces a maximum temperature 6 °C lower than a Gaussian beam with the same energy density. A super-Gaussian profile was created by passing a Gaussian beam through a pair of zinc selenide aspheric lenses which refract the more intense central region of the beam towards the less intense periphery. The profiles of the lenses were determined by geometrical optics. In human pain trials the super-Gaussian beam required more power than the Gaussian beam to reach sensory and pain thresholds.
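The flat-top profile mentioned above is commonly written as a super-Gaussian, I(r) = I0 exp[-2 (r/w)^(2n)]. The sketch below compares an order-6 super-Gaussian with a Gaussian at equal total power and shows the lower on-axis intensity; the beam radius and power are illustrative rather than the study's exact settings.

```python
import numpy as np

# Radial intensity profiles, normalized to equal total power, comparing a
# conventional Gaussian with an order-6 super-Gaussian ("flat-top") beam.
r = np.linspace(0.0, 15e-3, 4000)              # radius grid, m
dr = r[1] - r[0]
w = 7.5e-3                                     # nominal beam radius, m (assumed)
power = 15.0                                   # W

def normalize(profile):
    total = np.sum(profile * 2.0 * np.pi * r) * dr   # radial power integral
    return profile * power / total

gauss = normalize(np.exp(-2.0 * (r / w) ** 2))
super6 = normalize(np.exp(-2.0 * (r / w) ** 12))     # order-6 super-Gaussian
print(f"peak intensity, Gaussian:       {gauss[0]:.0f} W/m^2")
print(f"peak intensity, super-Gaussian: {super6[0]:.0f} W/m^2")
```

The lower peak intensity at equal pulse energy is the mechanism behind the reduced peak skin temperature reported above.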
NASA Astrophysics Data System (ADS)
Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli
2017-08-01
We explore the use of Gaussian process emulators (GPE) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to give insight into the optimization of the experiment design for selected scenarios: we find that a training set of a few hundred samples is sufficient and that as few as 15 K-L components are adequate. Our work demonstrates that GPE with truncated K-L expansion can be effectively applied to uncertainty analysis associated with modelling of multiphase flow and transport processes in heterogeneous media.
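A minimal sketch of the emulation idea, assuming an exponential covariance for a 1-D log-permeability field and a cheap placeholder in place of the CO2 flow simulator: build truncated K-L modes from the covariance eigendecomposition, train a scikit-learn Gaussian-process regressor on the K-L coefficients, and use it to predict the output distribution for many new fields. Grid size, correlation length and the number of retained modes are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
n_cells, corr_len, n_kl = 200, 0.1, 15
x = np.linspace(0.0, 1.0, n_cells)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1][:n_kl]
phi = eigvec[:, idx] * np.sqrt(eigval[idx])                # truncated K-L modes

def field_from_coeffs(xi):
    return phi @ xi                                        # log-permeability field

def simulator(log_k):
    # placeholder scalar response standing in for e.g. breakthrough time
    return np.exp(-log_k).mean()

# Training set: K-L coefficients -> simulator output
n_train = 200
xi_train = rng.normal(size=(n_train, n_kl))
y_train = np.array([simulator(field_from_coeffs(xi)) for xi in xi_train])

gpe = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(n_kl)),
                               normalize_y=True)
gpe.fit(xi_train, y_train)

# Cheap emulator predictions for many new permeability fields
xi_test = rng.normal(size=(5000, n_kl))
y_emul = gpe.predict(xi_test)
print("emulated 10th/50th/90th percentiles:",
      np.percentile(y_emul, [10, 50, 90]).round(3))
```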
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; in similarity with, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
The topology of galaxy clustering.
NASA Astrophysics Data System (ADS)
Coles, P.; Plionis, M.
The authors discuss an objective method for quantifying the topology of the galaxy distribution using only projected galaxy counts. The method is a useful complement to fully three-dimensional studies of topology based on the genus by virtue of the enormous projected data sets available. Applying the method to the Lick counts they find no evidence for large-scale non-gaussian behaviour, whereas the small-scale distribution is strongly non-gaussian, with a shift in the meatball direction.
A model of non-Gaussian diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2018-04-01
Recent progress in single-particle tracking has shown evidence of the non-Gaussian distribution of displacements in living cells, both near the cellular membrane and inside the cytoskeleton. Similar behavior has also been observed in granular materials, turbulent flows, gels and colloidal suspensions, suggesting that this is a general feature of diffusion in complex media. A possible interpretation of this phenomenon is that a tracer explores a medium with spatio-temporal fluctuations which result in local changes of diffusivity. We propose and investigate an ergodic, easily interpretable model, which implements the concept of diffusing diffusivity. Depending on the parameters, the distribution of displacements can be either flat or peaked at small displacements with an exponential tail at large displacements. We show that the distribution converges slowly to a Gaussian one. We calculate statistical properties, derive the asymptotic behavior and discuss some implications and extensions.
Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics
NASA Astrophysics Data System (ADS)
Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc
2018-02-01
Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity best described as Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.
Terawatt x-ray free-electron-laser optimization by transverse electron distribution shaping
Emma, C.; Wu, J.; Fang, K.; ...
2014-11-03
We study the dependence of the peak power of a 1.5 Å Terawatt (TW), tapered x-ray free-electron laser (FEL) on the transverse electron density distribution. Multidimensional optimization schemes for TW hard x-ray free-electron lasers are applied to the cases of transversely uniform and parabolic electron beam distributions and compared to a Gaussian distribution. The optimizations are performed for a 200 m undulator and a resonant wavelength of λ_r = 1.5 Å using the fully three-dimensional FEL particle code GENESIS. The study shows that the flatter transverse electron distributions enhance optical guiding in the tapered section of the undulator and increase the maximum radiation power from a maximum of 1.56 TW for a transversely Gaussian beam to 2.26 TW for the parabolic case and 2.63 TW for the uniform case. Spectral data also shows a 30%–70% reduction in energy deposited in the sidebands for the uniform and parabolic beams compared with a Gaussian. An analysis of the transverse coherence of the radiation shows the coherence area to be much larger than the beam spotsize for all three distributions, making coherent diffraction imaging experiments possible.
Measuring Gaussian quantum information and correlations using the Rényi entropy of order 2.
Adesso, Gerardo; Girolami, Davide; Serafini, Alessio
2012-11-09
We demonstrate that the Rényi-2 entropy provides a natural measure of information for any multimode Gaussian state of quantum harmonic systems, operationally linked to the phase-space Shannon sampling entropy of the Wigner distribution of the state. We prove that, in the Gaussian scenario, such an entropy satisfies the strong subadditivity inequality, a key requirement for quantum information theory. This allows us to define and analyze measures of Gaussian entanglement and more general quantum correlations based on such an entropy, which are shown to satisfy relevant properties such as monogamy.
Analysis of Realized Volatility for Nikkei Stock Average on the Tokyo Stock Exchange
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya; Watanabe, Toshiaki
2016-04-01
We calculate realized volatility of the Nikkei Stock Average (Nikkei225) Index on the Tokyo Stock Exchange and investigate the return dynamics. To avoid the bias on the realized volatility from the non-trading hours issue we calculate realized volatility separately in the two trading sessions, i.e. morning and afternoon, of the Tokyo Stock Exchange and find that the microstructure noise decreases the realized volatility at small sampling frequency. Using realized volatility as a proxy of the integrated volatility we standardize returns in the morning and afternoon sessions and investigate the normality of the standardized returns by calculating variance, kurtosis and 6th moment. We find that variance, kurtosis and 6th moment are consistent with those of the standard normal distribution, which indicates that the return dynamics of the Nikkei Stock Average are well described by a Gaussian random process with time-varying volatility.
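To make the realized-volatility construction concrete, the simulated sketch below sums squared intraday returns within a session, standardizes the session return by the square root of the realized variance, and compares kurtosis before and after standardization. The data are synthetic, not Nikkei 225 prices.

```python
import numpy as np
from scipy.stats import kurtosis

# Toy illustration: intraday returns with time-varying volatility, realized
# variance as the sum of squared intraday returns, and session returns
# standardized by the square root of the realized variance.
rng = np.random.default_rng(7)
n_days, n_intraday = 1000, 150                 # e.g. 1-minute returns per session
sigma_day = 0.01 * np.exp(rng.normal(0.0, 0.3, n_days))   # stochastic daily vol
intraday = rng.normal(0.0, sigma_day[:, None] / np.sqrt(n_intraday),
                      (n_days, n_intraday))
session_return = intraday.sum(axis=1)
rv = (intraday ** 2).sum(axis=1)               # realized variance per session
standardized = session_return / np.sqrt(rv)
print(f"excess kurtosis of raw session returns:  {kurtosis(session_return):+.2f}")
print(f"excess kurtosis of standardized returns: {kurtosis(standardized):+.2f}")
```

The fat tails of the raw returns disappear after standardization, mirroring the finding that returns scaled by realized volatility look close to Gaussian.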
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools, programmed for Python, developed for numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most literature in the area). The analysis done allows one to identify outliers in the empirical datasets and to determine, on explicit criteria, whether there is a need for data trimming and at which points it should be done.
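The ex-Gaussian itself is easy to simulate and to fit by moments. The generic numpy/scipy sketch below (deliberately not using the ExGUtils API, whose function names are not given in the abstract) draws normal-plus-exponential reaction times and recovers μ, σ and τ from the sample mean, variance and skewness.

```python
import numpy as np
from scipy.stats import skew

# Generate ex-Gaussian "reaction times" (Gaussian plus exponential component)
# and recover the parameters with a simple moment-matching fit.
rng = np.random.default_rng(8)
mu, sigma, tau, n = 400.0, 40.0, 120.0, 20000          # ms, illustrative values
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

m, s2, g1 = rt.mean(), rt.var(ddof=1), skew(rt)
tau_hat = np.sqrt(s2) * (g1 / 2.0) ** (1.0 / 3.0)       # moment-based estimates
mu_hat = m - tau_hat
sigma_hat = np.sqrt(max(s2 - tau_hat ** 2, 0.0))
print(f"mu ≈ {mu_hat:.1f}, sigma ≈ {sigma_hat:.1f}, tau ≈ {tau_hat:.1f}")
```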
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T^2 for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools, programmed for Python, developed for numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most literature in the area). The analysis done allows one to identify outliers in the empirical datasets and to determine, on explicit criteria, whether there is a need for data trimming and at which points it should be done. PMID:29765345
Kinetic energy distribution of multiply charged ions in Coulomb explosion of Xe clusters.
Heidenreich, Andreas; Jortner, Joshua
2011-02-21
We report on the calculations of kinetic energy distribution (KED) functions of multiply charged, high-energy ions in Coulomb explosion (CE) of an assembly of elemental Xe(n) clusters (average size (n) = 200-2171) driven by ultra-intense, near-infrared, Gaussian laser fields (peak intensities 10(15) - 4 × 10(16) W cm(-2), pulse lengths 65-230 fs). In this cluster size and pulse parameter domain, outer ionization is incomplete∕vertical, incomplete∕nonvertical, or complete∕nonvertical, with CE occurring in the presence of nanoplasma electrons. The KEDs were obtained from double averaging of single-trajectory molecular dynamics simulation ion kinetic energies. The KEDs were doubly averaged over a log-normal cluster size distribution and over the laser intensity distribution of a spatial Gaussian beam, which constitutes either a two-dimensional (2D) or a three-dimensional (3D) profile, with the 3D profile (when the cluster beam radius is larger than the Rayleigh length) usually being experimentally realized. The general features of the doubly averaged KEDs manifest the smearing out of the structure corresponding to the distribution of ion charges, a marked increase of the KEDs at very low energies due to the contribution from the persistent nanoplasma, a distortion of the KEDs and of the average energies toward lower energy values, and the appearance of long low-intensity high-energy tails caused by the admixture of contributions from large clusters by size averaging. The doubly averaged simulation results account reasonably well (within 30%) for the experimental data for the cluster-size dependence of the CE energetics and for its dependence on the laser pulse parameters, as well as for the anisotropy in the angular distribution of the energies of the Xe(q+) ions. Possible applications of this computational study include a control of the ion kinetic energies by the choice of the laser intensity profile (2D∕3D) in the laser-cluster interaction volume.
Gaussian Finite Element Method for Description of Underwater Sound Diffraction
NASA Astrophysics Data System (ADS)
Huang, Dehua
A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To technically investigate the non-Gaussian diffusion of head and neck diffusion-weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on the primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.
Dickie, David Alexander; Job, Dominic E.; Gonzalez, David Rodriguez; Shenkin, Susan D.; Wardlaw, Joanna M.
2015-01-01
Introduction Neurodegenerative disease diagnoses may be supported by the comparison of an individual patient’s brain magnetic resonance image (MRI) with a voxel-based atlas of normal brain MRI. Most current brain MRI atlases are of young to middle-aged adults and parametric, e.g., mean ±standard deviation (SD); these atlases require data to be Gaussian. Brain MRI data, e.g., grey matter (GM) proportion images, from normal older subjects are apparently not Gaussian. We created a nonparametric and a parametric atlas of the normal limits of GM proportions in older subjects and compared their classifications of GM proportions in Alzheimer’s disease (AD) patients. Methods Using publicly available brain MRI from 138 normal subjects and 138 subjects diagnosed with AD (all 55–90 years), we created: a mean ±SD atlas to estimate parametrically the percentile ranks and limits of normal ageing GM; and, separately, a nonparametric, rank order-based GM atlas from the same normal ageing subjects. GM images from AD patients were then classified with respect to each atlas to determine the effect statistical distributions had on classifications of proportions of GM in AD patients. Results The parametric atlas often defined the lower normal limit of the proportion of GM to be negative (which does not make sense physiologically as the lowest possible proportion is zero). Because of this, for approximately half of the AD subjects, 25–45% of voxels were classified as normal when compared to the parametric atlas; but were classified as abnormal when compared to the nonparametric atlas. These voxels were mainly concentrated in the frontal and occipital lobes. Discussion To our knowledge, we have presented the first nonparametric brain MRI atlas. In conditions where there is increasing variability in brain structure, such as in old age, nonparametric brain MRI atlases may represent the limits of normal brain structure more accurately than parametric approaches. Therefore, we conclude that the statistical method used for construction of brain MRI atlases should be selected taking into account the population and aim under study. Parametric methods are generally robust for defining central tendencies, e.g., means, of brain structure. Nonparametric methods are advisable when studying the limits of brain structure in ageing and neurodegenerative disease. PMID:26023913
Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-07-01
What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk; k=>(1,…,K>), and help identify which model is most supported by the observed data, Y>˜=>(y˜1,…,y˜n>). Here, we introduce a new and robust estimator of the model evidence, p>(Y>˜|Mk>), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p>(Y>˜|Mk>) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p>(Y>˜|Mk>) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.
Gaussification and entanglement distillation of continuous-variable systems: a unifying picture.
Campbell, Earl T; Eisert, Jens
2012-01-13
Distillation of entanglement using only Gaussian operations is an important primitive in quantum communication, quantum repeater architectures, and distributed quantum computing. Existing distillation protocols for continuous degrees of freedom are only known to converge to a Gaussian state when measurements yield precisely the vacuum outcome. In sharp contrast, non-Gaussian states can be deterministically converted into Gaussian states while preserving their second moments, albeit by usually reducing their degree of entanglement. In this work-based on a novel instance of a noncommutative central limit theorem-we introduce a picture general enough to encompass the known protocols leading to Gaussian states, and new classes of protocols including multipartite distillation. This gives the experimental option of balancing the merits of success probability against entanglement produced.
Mean intensity of the fundamental Bessel-Gaussian beam in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Lukin, Igor P.
2017-11-01
In the given article the mean intensity of a fundamental Bessel-Gaussian optical beam in turbulent atmosphere is studied. The problem analysis is based on the solution of the equation for the transverse second-order mutual coherence function of a fundamental Bessel-Gaussian beam of optical radiation. Distributions of the mean intensity of a fundamental Bessel-Gaussian optical beam, both along and transverse to the direction of propagation of the optical radiation, are investigated in detail. The influence of atmospheric turbulence on the change of the radius of the central part of the Bessel optical beam is estimated. Values of parameters at which it is possible to generate in turbulent atmosphere a nondiffracting pseudo-Bessel optical beam by means of a fundamental Bessel-Gaussian optical beam are established.
Continuous-variable measurement-device-independent quantum key distribution with photon subtraction
NASA Astrophysics Data System (ADS)
Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua
2018-04-01
It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented based on existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, which precisely makes up for the shortcoming of the original CV-MDI-QKD protocol, and the one-photon subtraction operation has the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.
Inflation in random Gaussian landscapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu
2017-05-01
We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
A note about Gaussian statistics on a sphere
NASA Astrophysics Data System (ADS)
Chave, Alan D.
2015-11-01
The statistics of directional data on a sphere can be modelled either using the Fisher distribution that is conditioned on the magnitude being unity, in which case the sample space is confined to the unit sphere, or using the latitude-longitude marginal distribution derived from a trivariate Gaussian model that places no constraint on the magnitude. These two distributions are derived from first principles and compared. The Fisher distribution more closely approximates the uniform distribution on a sphere for a given small value of the concentration parameter, while the latitude-longitude marginal distribution is always slightly larger than the Fisher distribution at small off-axis angles for large values of the concentration parameter. Asymptotic analysis shows that the two distributions only become equivalent in the limit of large concentration parameter and very small off-axis angle.
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Gayen, Bishakhdatta; Alam, Meheboob
2011-08-01
From particle simulations of a sheared frictional granular gas, we show that the Coulomb friction can have dramatic effects on orientational correlation as well as on both the translational and angular velocity distribution functions even in the Boltzmann (dilute) limit. The dependence of orientational correlation on friction coefficient (μ) is found to be nonmonotonic, and the Coulomb friction plays a dual role of enhancing or diminishing the orientational correlation, depending on the value of the tangential restitution coefficient (which characterizes the roughness of particles). From the sticking limit (i.e., with no sliding contact) of rough particles, decreasing the Coulomb friction is found to reduce the density and spatial velocity correlations which, together with diminished orientational correlation for small enough μ, are responsible for the transition from non-gaussian to gaussian distribution functions in the double limit of small friction (μ→0) and nearly elastic particles (e→1). This double limit in fact corresponds to perfectly smooth particles, and hence the maxwellian (gaussian) is indeed a solution of the Boltzmann equation for a frictional granular gas in the limit of elastic collisions and zero Coulomb friction at any roughness. The high-velocity tails of both distribution functions seem to follow stretched exponentials even in the presence of Coulomb friction, and the related velocity exponents deviate strongly from a gaussian with increasing friction.
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of the deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows the use of continuous data is employed. In this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method with the Bernoulli-Bernoulli RBM, the Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained from the proposed multilayer deep Gaussian-Bernoulli type RBM.
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan
2017-08-01
The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
NASA Astrophysics Data System (ADS)
Zhou, GuoQuan; Cai, YangJian; Dai, ChaoQing
2013-05-01
A kind of hollow vortex Gaussian beam is introduced. Based on the Collins integral, an analytical propagation formula of a hollow vortex Gaussian beam through a paraxial ABCD optical system is derived. Due to the special distribution of the optical field, which is caused by the initial vortex phase, the dark region of a hollow vortex Gaussian beam will not disappear upon propagation. The analytical expressions for the beam propagation factor, the kurtosis parameter, and the orbital angular momentum density of a hollow vortex Gaussian beam passing through a paraxial ABCD optical system are also derived, respectively. The beam propagation factor is determined by the beam order and the topological charge. The kurtosis parameter and the orbital angular momentum density depend on beam order n, topological charge m, parameter γ, and transfer matrix elements A and D. As a numerical example, the propagation properties of a hollow vortex Gaussian beam in free space are demonstrated. The hollow vortex Gaussian beam has eminent propagation stability and has crucial application prospects in optical micromanipulation.
Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis
2013-06-01
We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
Synchronization of an ensemble of oscillators regulated by their spatial movement.
Sarkar, Sumantra; Parmananda, P
2010-12-01
Synchronization for a collection of oscillators residing in a finite two dimensional plane is explored. The coupling between any two oscillators in this array is unidirectional, viz., master-slave configuration. Initially the oscillators are distributed randomly in space and their autonomous time-periods follow a Gaussian distribution. The duty cycles of these oscillators, which work under an on-off scenario, are normally distributed as well. It is realized that random hopping of oscillators is a necessary condition for observing global synchronization in this ensemble of oscillators. Global synchronization in the context of the present work is defined as the state in which all the oscillators are rendered identical. Furthermore, there exists an optimal amplitude of random hopping for which the attainment of this global synchronization is the fastest. The present work is deemed to be of relevance to the synchronization phenomena exhibited by pulse coupled oscillators such as a collection of fireflies. © 2010 American Institute of Physics.
Measurement of Hubble constant: non-Gaussian errors in HST Key Project data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Meghendra; Gupta, Shashikant; Pandey, Ashwini
2016-08-01
Assuming the Central Limit Theorem, experimental uncertainties in any data set are expected to follow a Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test this assumption and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in the above measurement are non-Gaussian.
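A hedged sketch of this type of check, assuming a standard one-sample Kolmogorov-Smirnov test against a zero-mean Gaussian; the residuals array is a synthetic placeholder rather than the HST Key Project data, and estimating the scale from the same data makes the nominal p-value only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder "uncertainties": replace with (measured H0 - reference) / quoted error.
residuals = rng.standard_t(df=3, size=36)        # deliberately heavy-tailed example

# One-sample Kolmogorov-Smirnov test against a zero-mean Gaussian whose scale is
# estimated from the data (this makes the nominal p-value only approximate).
stat, p_value = stats.kstest(residuals, "norm", args=(0.0, residuals.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A small p-value indicates the uncertainties are inconsistent with a zero-mean Gaussian.
```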
Radiation pressure acceleration of corrugated thin foils by Gaussian and super-Gaussian beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adusumilli, K.; Goyal, D.; Tripathi, V. K.
Rayleigh-Taylor instability of radiation pressure accelerated ultrathin foils by laser having Gaussian and super-Gaussian intensity distribution is investigated using a single fluid code. The foil is allowed to have ring shaped surface ripples. The radiation pressure force on such a foil is non-uniform with finite transverse component F_r; F_r varies periodically with r. Subsequently, the ripple grows as the foil moves ahead along z. With a Gaussian beam, the foil acquires an overall curvature due to non-uniformity in radiation pressure and gets thinner. In the process, the ripple perturbation is considerably washed off. With super-Gaussian beam, the ripple is found to be more strongly washed out. In order to avoid transmission of the laser through the thinning foil, a criterion on the foil thickness is obtained.
Double Wigner distribution function of a first-order optical system with a hard-edge aperture.
Pan, Weiqing
2008-01-01
The effect of an apertured optical system on Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By introducing a hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, the analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are also compared with the numerical integral results, and they show that the analytical results are proper and ascendant.
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of long tails of probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid. Mech. 117). However, there is clear deviation from the reflected Gaussian model for source in the intermittent region of TBL particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations and these are found to exhibit exponential distribution.
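A minimal sketch of the reflected Gaussian mean-concentration model referred to above (an image source mirrored below the wall); the parameter values are illustrative, not those of the experiment.

```python
import numpy as np

def reflected_gaussian(z, s_z, sigma_z):
    """Mean concentration profile for an elevated source at height s_z above a
    perfectly reflecting wall at z = 0 (image source at -s_z); arbitrary units."""
    return (np.exp(-(z - s_z) ** 2 / (2.0 * sigma_z ** 2))
            + np.exp(-(z + s_z) ** 2 / (2.0 * sigma_z ** 2)))

z = np.linspace(0.0, 1.0, 101)                    # wall-normal distance, e.g. z/delta
profile = reflected_gaussian(z, s_z=0.33, sigma_z=0.1)
profile /= profile.max()                          # normalise like a measured mean profile
```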
On the potential of models for location and scale for genome-wide DNA methylation data
2014-01-01
Background With the help of epigenome-wide association studies (EWAS), increasing knowledge on the role of epigenetic mechanisms such as DNA methylation in disease processes is obtained. In addition, EWAS aid the understanding of behavioral and environmental effects on DNA methylation. In terms of statistical analysis, specific challenges arise from the characteristics of methylation data. First, methylation β-values represent proportions with skewed and heteroscedastic distributions. Thus, traditional modeling strategies assuming a normally distributed response might not be appropriate. Second, recent evidence suggests that not only mean differences but also variability in site-specific DNA methylation associates with diseases, including cancer. The purpose of this study was to compare different modeling strategies for methylation data in terms of model performance and performance of downstream hypothesis tests. Specifically, we used the generalized additive models for location, scale and shape (GAMLSS) framework to compare beta regression with Gaussian regression on raw, binary logit and arcsine square root transformed methylation data, with and without modeling a covariate effect on the scale parameter. Results Using simulated and real data from a large population-based study and an independent sample of cancer patients and healthy controls, we show that beta regression does not outperform competing strategies in terms of model performance. In addition, Gaussian models for location and scale showed an improved performance as compared to models for location only. The best performance was observed for the Gaussian model on binary logit transformed β-values, referred to as M-values. Our results further suggest that models for location and scale are specifically sensitive towards violations of the distribution assumption and towards outliers in the methylation data. Therefore, a resampling procedure is proposed as a mode of inference and shown to diminish type I error rate in practically relevant settings. We apply the proposed method in an EWAS of BMI and age and reveal strong associations of age with methylation variability that are validated in an independent sample. Conclusions Models for location and scale are promising tools for EWAS that may help to understand the influence of environmental factors and disease-related phenotypes on methylation variability and its role during disease development. PMID:24994026
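For concreteness, the binary logit ("M-value") transform of methylation β-values mentioned above can be sketched as follows; the clipping constant is an assumption added to avoid infinities, not part of the paper.

```python
import numpy as np

def beta_to_m(beta, eps=1e-6):
    """Binary logit ('M-value') transform of methylation beta-values."""
    beta = np.clip(beta, eps, 1.0 - eps)          # avoid +/- infinity at 0 and 1
    return np.log2(beta / (1.0 - beta))

def m_to_beta(m):
    """Inverse transform back to the beta (proportion) scale."""
    return 2.0 ** m / (2.0 ** m + 1.0)

betas = np.array([0.05, 0.50, 0.95])
print(beta_to_m(betas))                           # approximately [-4.25, 0.0, 4.25]
```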
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method of statistical source separation of space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique relies upon projection pursuit, i.e., looking for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity using contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The resulting residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, which represents an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated into the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated. The tested non-Gaussian sources (dyads and triads, of two and three dimensions, respectively) lead to a dense data concentration along certain curves or surfaces, near which the cluster centroids of the joint probability density function tend to be located. This favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading model's non-Gaussian dyad is associated with a positive correlation between 1) the squared anomaly of the extratropical jet-stream and 2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, which are only possible thanks to joint non-Gaussianity. Such triadic synergies are accounted for by an information-theoretic measure: the interaction information. The dominant model's triad occurs between anomalies of 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments, modelled as an order-k Markov process, that boils down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) a complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
2009-11-01
… is estimated using the Gaussian kernel function: c′(w, i) = Σ_{j=1..N} c(w, j) exp[−(i − j)² / (2σ²)]  (2), where i and j are absolute positions of the corresponding terms in the document, and N is the length of the document; c(w, j) is the actual count of term w at position j. The PLM P(·|D, i) needs to … probability of relevance well. The distribution of relevance can be approximated as follows: p(i|θ_rel) = Σ_j δ(Q_j, i) / (Σ_i Σ_j δ(Q_j, i))  (10)
Cold dark matter and degree-scale cosmic microwave background anisotropy statistics after COBE
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Stompor, Radoslaw; Juszkiewicz, Roman
1993-01-01
We conduct a Monte Carlo simulation of the cosmic microwave background (CMB) anisotropy in the UCSB South Pole 1991 degree-scale experiment. We examine cold dark matter cosmology with large-scale structure seeded by the Harrison-Zel'dovich hierarchy of Gaussian-distributed primordial inhomogeneities normalized to the COBE-DMR measurement of large-angle CMB anisotropy. We find it statistically implausible (in the sense of a low cumulative probability, F < 5 percent, of not measuring a cosmological delta-T/T signal) that the degree-scale cosmological CMB anisotropy predicted in such models could have escaped detection at the level of sensitivity achieved in the South Pole 1991 experiment.
Propagation of a cosh-Gaussian beam through an optical system in turbulent atmosphere.
Chu, Xiuxiang
2007-12-24
The propagation of a cosh-Gaussian beam through an arbitrary ABCD optical system in turbulent atmosphere has been investigated. The analytical expressions for the average intensity at any receiver plane are obtained. As an elementary example, the average intensity and its radius at the image plane of a cosh-Gaussian beam through a thin lens are studied. To show the effects of a lens on the average intensity and the intensity radius of the laser beam in turbulent atmosphere, the properties of a collimated cosh-Gaussian beam and a focused cosh-Gaussian beam for direct propagation in turbulent atmosphere are studied and numerically calculated. The average intensity profiles of a cosh-Gaussian beam through a lens can have a shape similar to that of the initial beam for a longer propagation distance than that of a collimated cosh-Gaussian beam for direct propagation. As the propagation distance increases, the average intensity radius at the image plane of a cosh-Gaussian beam through a thin lens will be smaller than that at the focal plane of a focused cosh-Gaussian beam for direct propagation. Meanwhile, the intensity distributions at the image plane of a cosh-Gaussian beam through a lens with different w(0) and Omega(0) are also studied.
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian Mixture Model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments prove that the proposed method not only has good robustness and detection performance but also good adaptability. Even in special cases, such as when the grayscale changes greatly, the proposed method performs well.
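A hedged sketch of an analogous pipeline using OpenCV's built-in mixture-of-Gaussians background subtractor (MOG2); this is not the authors' improved algorithm, and the video file name, parameter values, and morphological clean-up step are illustrative choices.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")             # placeholder input video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                        # per-pixel mixture-of-Gaussians model
    mask[mask == 127] = 0                         # MOG2 marks detected shadows with 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress small noise blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # 'contours' are the moving-target candidates for this frame.
cap.release()
```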
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fit to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.
Quantum key distribution using gaussian-modulated coherent states
NASA Astrophysics Data System (ADS)
Grosshans, Frédéric; Van Assche, Gilles; Wenger, Jérôme; Brouri, Rosa; Cerf, Nicolas J.; Grangier, Philippe
2003-01-01
Quantum continuous variables are being explored as an alternative means to implement quantum key distribution, which is usually based on single photon counting. The former approach is potentially advantageous because it should enable higher key distribution rates. Here we propose and experimentally demonstrate a quantum key distribution protocol based on the transmission of gaussian-modulated coherent states (consisting of laser pulses containing a few hundred photons) and shot-noise-limited homodyne detection; squeezed or entangled beams are not required. Complete secret key extraction is achieved using a reverse reconciliation technique followed by privacy amplification. The reverse reconciliation technique is in principle secure for any value of the line transmission, against gaussian individual attacks based on entanglement and quantum memories. Our table-top experiment yields a net key transmission rate of about 1.7 megabits per second for a loss-free line, and 75 kilobits per second for a line with losses of 3.1dB. We anticipate that the scheme should remain effective for lines with higher losses, particularly because the present limitations are essentially technical, so that significant margin for improvement is available on both the hardware and software.
Dvořák, Martin; Svobodová, Jana; Dubský, Pavel; Riesová, Martina; Vigh, Gyula; Gaš, Bohuslav
2015-03-01
Although the classical formula of peak resolution was derived to characterize the extent of separation only for Gaussian peaks of equal areas, it is often used even when the peaks follow non-Gaussian distributions and/or have unequal areas. This practice can result in misleading information about the extent of separation in terms of the severity of peak overlap. We propose here the use of the equivalent peak resolution value, a term based on relative peak overlap, to characterize the extent of separation that had been achieved. The definition of equivalent peak resolution is not constrained either by the form(s) of the concentration distribution function(s) of the peaks (Gaussian or non-Gaussian) or the relative area of the peaks. The equivalent peak resolution value and the classically defined peak resolution value are numerically identical when the separated peaks are Gaussian and have identical areas and SDs. Using our new freeware program, Resolution Analyzer, one can calculate both the classically defined and the equivalent peak resolution values. With the help of this tool, we demonstrate here that the classical peak resolution values mischaracterize the extent of peak overlap even when the peaks are Gaussian but have different areas. We show that under ideal conditions of the separation process, the relative peak overlap value is easily accessible by fitting the overall peak profile as the sum of two Gaussian functions. The applicability of the new approach is demonstrated on real separations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
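A minimal sketch of the underlying idea: fit the overall profile as a sum of two Gaussians and compute a relative peak overlap. The specific overlap definition used here (shared area divided by the smaller peak area) is an illustrative assumption, not necessarily the definition implemented in the Resolution Analyzer freeware.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)))

# Synthetic electropherogram: two partially overlapping peaks of unequal area.
x = np.linspace(0.0, 10.0, 2001)
y = two_gaussians(x, 1.0, 4.0, 0.4, 0.3, 5.0, 0.4)
y += np.random.default_rng(2).normal(0.0, 0.005, x.size)

popt, _ = curve_fit(two_gaussians, x, y, p0=[1, 4, 0.5, 0.3, 5, 0.5])
g1 = two_gaussians(x, *popt[:3], 0.0, 0.0, 1.0)   # first fitted component alone
g2 = two_gaussians(x, 0.0, 0.0, 1.0, *popt[3:])   # second fitted component alone

dx = x[1] - x[0]
shared = np.minimum(g1, g2).sum() * dx            # area common to both peaks
overlap = shared / min(g1.sum() * dx, g2.sum() * dx)
print(f"relative peak overlap = {overlap:.3f}")
```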
Closer look at time averages of the logistic map at the edge of chaos
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian
2009-05-01
The probability distribution of sums of iterates of the logistic map at the edge of chaos has recently been shown [U. Tirnakli, Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which, under appropriate constraints, maximizes the nonadditive entropy Sq, which is the basis of nonextensive statistical mechanics. This analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular its central part. This is important in view of a recent q-generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate, for the logistic map with a parameter in a small vicinity of the critical point, under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region, and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large body of already available analytical and numerical evidence that the edge of chaos is well described in terms of the entropy Sq and its associated concepts.
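A minimal numerical sketch of the quantity studied: centred, rescaled sums of logistic-map iterates at the Feigenbaum point, whose histogram can be compared with Gaussian and q-Gaussian candidates. The ensemble size and number of summed iterates are small illustrative choices, far from those used in the published analyses.

```python
import numpy as np

r_c = 3.5699456718709449        # Feigenbaum accumulation point of x -> r*x*(1 - x)
n_sum = 2 ** 12                 # iterates summed per realisation (illustrative)
n_real = 1000                   # ensemble of random initial conditions (illustrative)

rng = np.random.default_rng(3)
sums = np.empty(n_real)
for k in range(n_real):
    x = rng.uniform(0.0, 1.0)
    for _ in range(1000):       # discard a transient
        x = r_c * x * (1.0 - x)
    s = 0.0
    for _ in range(n_sum):
        x = r_c * x * (1.0 - x)
        s += x
    sums[k] = s

y = (sums - sums.mean()) / sums.std()             # centred, rescaled sums
hist, edges = np.histogram(y, bins=60, density=True)
# 'hist' can now be compared with Gaussian and q-Gaussian candidate densities.
```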
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and a Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
Ivković, Miloš; Kuceyeski, Amy; Raj, Ashish
2012-01-01
Whole brain weighted connectivity networks were extracted from high resolution diffusion MRI data of 14 healthy volunteers. A statistically robust technique was proposed for the removal of questionable connections. Unlike most previous studies our methods are completely adapted for networks with arbitrary weights. Conventional statistics of these weighted networks were computed and found to be comparable to existing reports. After a robust fitting procedure using multiple parametric distributions it was found that the weighted node degree of our networks is best described by the normal distribution, in contrast to previous reports which have proposed heavy tailed distributions. We show that post-processing of the connectivity weights, such as thresholding, can influence the weighted degree asymptotics. The clustering coefficients were found to be distributed either as gamma or power-law distribution, depending on the formula used. We proposed a new hierarchical graph clustering approach, which revealed that the brain network is divided into a regular base-2 hierarchical tree. Connections within and across this hierarchy were found to be uncommonly ordered. The combined weight of our results supports a hierarchically ordered view of the brain, whose connections have heavy tails, but whose weighted node degrees are comparable. PMID:22761649
Hermite-cosine-Gaussian laser beam and its propagation characteristics in turbulent atmosphere.
Eyyuboğlu, Halil Tanyer
2005-08-01
Hermite-cosine-Gaussian (HcosG) laser beams are studied. The source plane intensity of the HcosG beam is introduced and its dependence on the source parameters is examined. By application of the Fresnel diffraction integral, the average receiver intensity of HcosG beam is formulated for the case of propagation in turbulent atmosphere. The average receiver intensity is seen to reduce appropriately to various special cases. When traveling in turbulence, the HcosG beam initially experiences the merging of neighboring beam lobes, and then a TEM-type cosh-Gaussian beam is formed, temporarily leading to a plain cosh-Gaussian beam. Eventually a pure Gaussian beam results. The numerical evaluation of the normalized beam size along the propagation axis at selected mode indices indicates that relative spreading of higher-order HcosG beam modes is less than that of the lower-order counterparts. Consequently, it is possible at some propagation distances to capture more power by using higher-mode-indexed HcosG beams.
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments has made the detection of road signs a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect the frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines the two-dimensional Gaussian model and the normalized red channel together, which can largely enhance the contrast between the red signs and the background. The proposed shape context matching method can match shapes under significant noise and is utilized to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method can reach a higher detection rate in detecting signs viewed from different directions.
Multiple jet study data correlations. [data correlation for jet mixing flow of air jets
NASA Technical Reports Server (NTRS)
Walker, R. E.; Eberhardt, R. G.
1975-01-01
Correlations are presented which allow determination of penetration and mixing of multiple cold air jets injected normal to a ducted subsonic heated primary air stream. Correlations were obtained over jet-to-primary stream momentum flux ratios of 6 to 60 for locations from 1 to 30 jet diameters downstream of the injection plane. The range of geometric and operating variables makes the correlations relevant to gas turbine combustors. Correlations were obtained for the mixing efficiency between jets and primary stream using an energy exchange parameter. Also jet centerplane velocity and temperature trajectories were correlated and centerplane dimensionless temperature distributions defined. An assumption of a Gaussian vertical temperature distribution at all stations is shown to result in a reasonable temperature field model. Data are presented which allow comparison of predicted and measured values over the range of conditions specified above.
A description of the pseudorapidity distributions in heavy ion collisions at RHIC and LHC energies
NASA Astrophysics Data System (ADS)
Jiang, Z. J.; Zhang, Y.; Zhang, H. L.; Deng, H. P.
2015-09-01
The charged particles produced in nucleus-nucleus collisions are classified into two parts: One is from the hot and dense matter created in collisions. The other is from leading particles. The hot and dense matter is assumed to expand and generate particles according to BJP hydrodynamics, a theory put forward by A. Bialas, R.A. Janik and R. Peschanski. The leading particles are argued to possess a Gaussian rapidity distribution with the normalization constant equaling the number of participants. A comparison is made between the theoretical results and the experimental measurements performed by BRAHMS and PHOBOS Collaborations at BNL-RHIC in Au-Au and Cu-Cu collisions at √{sNN} = 200 GeV and by ALICE Collaboration at CERN-LHC in Pb-Pb collisions at √{sNN} = 2.76 TeV. The theoretical results are well consistent with experimental data.
State-space receptive fields of semicircular canal afferent neurons in the bullfrog
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
2001-01-01
Receptive fields are commonly used to describe spatial characteristics of sensory neuron responses. They can be extended to characterize temporal or dynamical aspects by mapping neural responses in dynamical state spaces. The state-space receptive field of a neuron is the probability distribution of the dynamical state of the stimulus-generating system conditioned upon the occurrence of a spike. We have computed state-space receptive fields for semicircular canal afferent neurons in the bullfrog (Rana catesbeiana). We recorded spike times during broad-band Gaussian noise rotational velocity stimuli, computed the frequency distribution of head states at spike times, and normalized these to obtain conditional pdfs for the state. These state-space receptive fields quantify what the brain can deduce about the dynamical state of the head when a single spike arrives from the periphery. © 2001 Elsevier Science B.V. All rights reserved.
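A minimal sketch of the construction described: histogram the stimulus state at spike times and normalize to obtain the conditional pdf. The stimulus and spike train below are synthetic placeholders, not recorded bullfrog data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder stimulus: band-limited Gaussian-noise head velocity and its integral.
dt = 0.001
velocity = np.convolve(rng.normal(size=60_000), np.ones(50) / 50, mode="same")
position = np.cumsum(velocity) * dt
spike_idx = np.flatnonzero(rng.random(velocity.size) < 0.002)   # placeholder spike train

# Conditional pdf of the state given a spike: p(position, velocity | spike).
H_spike, xe, ye = np.histogram2d(position[spike_idx], velocity[spike_idx],
                                 bins=40, density=True)
# Marginal state distribution p(position, velocity) on the same bins, for comparison.
H_all, _, _ = np.histogram2d(position, velocity, bins=[xe, ye], density=True)
# H_spike is the estimated state-space receptive field of this (synthetic) neuron.
```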
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neumann-Pearson detection.
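A generic sketch of the two-threshold Wald sequential test for a known (determinate) signal, here in white Gaussian noise rather than the correlated noise analysed in the paper; the thresholds and error rates are illustrative.

```python
import numpy as np

def wald_sprt(x, s, sigma, alpha=0.01, beta=0.01):
    """Two-threshold Wald test of H1 'known signal s present' vs H0 'noise only'.
    Returns the decision and the number of samples consumed."""
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for n, xn in enumerate(x, start=1):
        # log-likelihood-ratio increment for white Gaussian noise of std sigma
        llr += (xn * s[n - 1] - 0.5 * s[n - 1] ** 2) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(x)

rng = np.random.default_rng(5)
signal = 0.5 * np.ones(500)                       # known determinate signal
obs = signal + rng.normal(0.0, 1.0, 500)          # observations generated under H1
print(wald_sprt(obs, signal, sigma=1.0))
```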
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.
Ginzburg, D; Mann, A
2014-03-10
A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.
Kong, Weipeng; Sugita, Atsushi; Taira, Takunori
2012-07-01
We have demonstrated high-order Hermite-Gaussian (HG) mode generation based on 2D gain-distribution control of edge-pumped, composite all-ceramic Yb:YAG/YAG microchip lasers using a V-type cavity. HG(mn) modes with output powers from several hundred milliwatts to several watts are achieved. We also generated different kinds of vortex arrays directly from the oscillator at the same power level. In addition, a doughnut-shaped mode of more than 7 W can be generated in the same cavity.
Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media
NASA Astrophysics Data System (ADS)
Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen
2018-06-01
A new class of pulse source whose correlation is modeled by the convolution of two legitimate temporal correlation functions is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train to a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed laser material processing.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and statistical model in the patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potentials to be used as a complementary tool for NPC characterization. PMID:24466318
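As one concrete example of the non-Gaussian models compared above, a sketch of fitting the stretched-exponential model S(b) = S0·exp[-(b·DDC)^α] to a multi-b DWI signal; the b-values and signal below are synthetic placeholders, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sem(b, s0, ddc, alpha):
    """Stretched-exponential model: S(b) = S0 * exp(-(b * DDC) ** alpha)."""
    return s0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0, 100, 200, 400, 600, 800, 1000, 1200, 1500], float)   # s/mm^2
signal = sem(b, 1.0, 1.1e-3, 0.8)
signal += np.random.default_rng(6).normal(0.0, 0.01, b.size)          # synthetic noise

(s0, ddc, alpha), _ = curve_fit(sem, b, signal, p0=[signal[0], 1e-3, 0.9],
                                bounds=([0.0, 1e-5, 0.1], [2.0, 1e-2, 1.0]))
print(f"DDC = {ddc:.2e} mm^2/s, alpha = {alpha:.2f}")
```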
Aleil, Boris; Meyer, Nicolas; Wolff, Valérie; Kientz, Daniel; Wiesel, Marie-Louise; Gachet, Christian; Cazenave, Jean-Pierre; Lanza, François
2006-10-01
Soluble glycoprotein V (sGPV) is a new plasma marker of thrombosis released from the platelet surface by thrombin. sGPV levels are increased in patients with atherothrombotic diseases, but the determinants of sGPV levels are unknown in the general population. Identification of these potential confounding factors is needed for correct design and analysis of clinical studies on cardiovascular diseases. The aim of this study was to determine the normal range of plasma values and the factors controlling sGPV levels in a population of normal individuals. Three hundred blood donors were recruited at the Etablissement Français du Sang-Alsace for the measurement of plasma levels of sGPV, platelet factor 4 (PF4), thrombin-antithrombin complexes (TAT) and D-dimers. The plasma level of sGPV was (median [interquartile range]) 27.5 [23.5-34.4] microg/l and displayed a Gaussian distribution. sGPV had a lower interindividual coefficient of variation (33%) than PF4 (176%), TAT (87%) or D-dimers (82%). sGPV levels were independent of age and sex but sensitive to red cell (r = 0.412; p < 0.0001) and platelet counts (r = 0.267; p = 0.001), total cholesterol (r = -0.313; p < 0.0001), food intake (r = 0.184; p = 0.0014) and smoking (r = -0.154; p = 0.039). Contrary to PF4 and TAT, sGPV did not differ between venous and arterial blood samples of 12 healthy individuals. Red cell and platelet counts, total cholesterol, current smoking and recent food intake are important determinants of sGPV levels and must be taken into account in clinical studies using sGPV as a thrombosis marker. Normal distribution of sGPV levels in the general population supports its use in clinical applications.
Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix
NASA Astrophysics Data System (ADS)
Saxena, Nupur; Kumar, Pragati; Gupta, Vinay
2015-05-01
The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si-NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si-NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si-NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and the particles grow to an almost uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that the dispersion of Si-NCs can be controlled by the annealing temperature. The results are supported by selected area diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. The effect of the particle size distribution on the PL spectrum is discussed on the basis of the tight binding approximation (TBA) method, using Gaussian and log-normal particle size distributions. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution and can be controlled by the annealing parameters.
NASA Astrophysics Data System (ADS)
Loikith, P. C.; Neelin, J. D.; Meyerson, J.
2017-12-01
Regions of shorter-than-Gaussian warm and cold side temperature distribution tails are shown to occur in spatially coherent patterns in the current climate. Under such conditions, warming may be manifested in more complex ways than if the underlying distribution were close to Gaussian. For example, under a uniform warm shift, the simplest prototype for future warming, a location with a short warm side tail would experience a greater increase in extreme warm exceedances compared to if the distribution were Gaussian. Similarly, for a location with a short cold side tail, a uniform warm shift would result in a rapid decrease in extreme cold exceedances. Both scenarios carry major societal and environmental implications including but not limited to negative impacts on human and ecosystem health, agriculture, and the economy. It is therefore important for climate models to be able to realistically reproduce short tails in simulations of historical climate in order to boost confidence in projections of future temperature extremes. Overall, climate models contributing to the fifth phase of the Coupled Model Intercomparison Project capture many of the principal observed regions of short tails. This suggests the underlying dynamics and physics occur on scales resolved by the models, and helps build confidence in model projections of extremes. Furthermore, most GCMs show more rapid changes in exceedances of extreme temperature thresholds in regions of short tails. Results therefore suggest that the shape of the tails of the underlying temperature distribution is an indicator of how rapidly a location will experience changes to extreme temperature occurrence under future warming.
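A small simulation illustrating the prototype argument above: under the same uniform warm shift, a location whose warm tail is shorter than Gaussian sees a larger relative increase in exceedances of its own extreme-warm threshold. The bounded distribution and shift size are illustrative assumptions, not climate data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
shift = 0.5                                       # uniform warm shift, in std-dev units

samples = {
    "Gaussian tail": rng.normal(0.0, 1.0, n),
    "short warm tail": rng.uniform(-np.sqrt(3), np.sqrt(3), n),  # unit variance, bounded
}

for name, x in samples.items():
    thr = np.quantile(x, 0.95)                    # that location's own extreme-warm threshold
    before = np.mean(x > thr)
    after = np.mean(x + shift > thr)
    print(f"{name:16s}: exceedance {before:.1%} -> {after:.1%}")
```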
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
Gaussian theory for spatially distributed self-propelled particles
NASA Astrophysics Data System (ADS)
Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza
2016-12-01
Obtaining a reduced description with particle and momentum flux densities starting from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set to zero the Fourier modes of the orientation distribution starting from a given order. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the direction of particles. First, by means of simulation of the microscopic model, we justify that the distribution of individual directions fits well to a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with results of simulations. We find that the global polarization in the GA exhibits a hysteresis in dependence on the noise intensity. It shows qualitatively the same behavior as we find in particle simulations. Moreover, both global polarizations agree perfectly for low noise intensities. The spatiotemporal structures of the GA are also in agreement with simulations. We conclude that the GA shows qualitative agreement for a wide range of noise intensities. In particular, for low noise intensities the agreement with simulations is better than that of other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.
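A minimal sketch of checking the wrapped-Gaussian assumption for particle directions: estimate the circular mean and wrapped-normal width from the mean resultant length and compare with the empirical histogram; the directions below are synthetic placeholders rather than output of the microscopic model.

```python
import numpy as np

rng = np.random.default_rng(8)
theta = np.mod(rng.normal(0.7, 0.6, 50_000), 2 * np.pi)   # placeholder particle directions

# Moment estimates for a wrapped normal: R = |<e^{i theta}>|, sigma^2 = -2 ln R.
z = np.exp(1j * theta).mean()
mu_hat = np.angle(z)
sigma_hat = np.sqrt(-2.0 * np.log(np.abs(z)))

def wrapped_normal_pdf(t, mu, sigma, n_terms=20):
    """Wrapped normal density, summing shifted Gaussian branches."""
    k = np.arange(-n_terms, n_terms + 1)
    d = t[:, None] - mu + 2.0 * np.pi * k[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2)).sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi))

grid = np.linspace(0.0, 2.0 * np.pi, 200)
pdf_fit = wrapped_normal_pdf(grid, mu_hat, sigma_hat)
hist, edges = np.histogram(theta, bins=60, range=(0.0, 2.0 * np.pi), density=True)
# Comparing 'hist' with 'pdf_fit' quantifies how well a wrapped Gaussian describes the data.
```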
NASA Astrophysics Data System (ADS)
Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo
2017-05-01
Alluvial aquifers are often characterized by the presence of braided high-permeable paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance to predict flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information like groundwater levels can constrain the channel positions using inverse modeling or data assimilation, but the method should be able to handle non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an extension of approximately 3 km2. A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform the one with a multiGaussian model and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site. The synthetic results showed that only for a much larger number of conditioning piezometric heads, multiple point geostatistics and ERT could improve aquifer characterization. This shows that state of the art stochastic methods need to be supported by abundant and high-quality subsurface data.
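The normal-score transform at the heart of the NS-EnKF can be sketched in a few lines: each value is replaced by the standard Gaussian quantile of its empirical CDF rank, and the inverse mapping goes back through the empirical quantiles. The snippet below is a generic illustration, not the authors' implementation.

```python
# Sketch of the normal-score transform underlying the NS-EnKF: each value is
# mapped to a standard Gaussian quantile through its empirical CDF rank.
# This is an illustrative stand-alone version, not the authors' code.
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    n = len(x)
    ranks = rankdata(x)                 # 1..n, ties averaged
    u = ranks / (n + 1)                 # empirical CDF values in (0, 1)
    return norm.ppf(u)

def inverse_normal_score(z, x_ref):
    # map Gaussian values back through the empirical quantiles of x_ref
    u = norm.cdf(z)
    return np.quantile(x_ref, u)

x = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=500)  # skewed field values
z = normal_score(x)                     # approximately N(0, 1)
print(round(z.mean(), 3), round(z.std(), 3))
```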
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the pdf of the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of the wavelet coefficients. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in improved visual quality. On this basis, several OCT despeckling algorithms are obtained based on using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is obtained for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
Nanolaser Spectroscopy of Genetically Engineered Yeast: New Tool for a Better Brew?
NASA Astrophysics Data System (ADS)
Gourley, Paul L.; Hendricks, Judy K.; Naviaux, Robert K.; Yaffe, Michael P.
2006-03-01
A basic function of the cell membrane is to selectively take up ions or molecules from its environment to concentrate them in the interior. This concentration difference results in an osmotic pressure difference across the membrane. Ultimately, this pressure and its fluctuation from cell to cell will be limited by the availability and fluctuations of the solute concentrations in solution, the extent of inter-cell communication, and the state of respiring intracellular mitochondria that fuel the process. To measure these fluctuations, we have employed a high-speed nanolaser technique that samples the osmotic pressure in individual yeast cells and isolated mitochondria. We analyzed 2 yeast cell strains, normal baker’s yeast and a genetically-altered version, that differ only by the presence of mitochondrial DNA. The absence of mitochondrial DNA results in the complete loss of all the mtDNA-encoded proteins and RNAs, and loss of the pigmented, heme-containing cytochromes. These cells have mitochondria, but the mitochondria lack most normal respiratory chain complexes. The frequency distributions in the nanolaser spectra produced by wild-type and modified cells and mitochondria show a striking shift from Gaussian to Poissonian distributions, revealing a powerful novel method for studying the statistical physics of yeast.
Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro
We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. These parameters also affect the monitoring burden, so network operators face a trade-off between monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observations of backbone traffic, which is spatially uncorrelated and exhibits temporal long-range dependence. We then derive the equations for detectability. With those equations, we can answer practical questions that arise in actual network operations: what sampling rate to set in order to detect a given volume of anomaly, or, if the required sampling rate is too high for actual operation, what granularity is optimal for detecting the anomaly given a lower limit on the sampling rate.
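The flavor of such detectability equations can be illustrated with a toy calculation: treat the per-bin normal traffic as Gaussian, account roughly for how packet sampling rescales its mean and variance, and read off false positive and false negative ratios for a threshold rule. All numbers and the thinning approximation below are illustrative assumptions, not the paper's exact formulas.

```python
# Illustrative calculation in the spirit of the paper: with normal traffic
# volume modeled as Gaussian, a detection threshold gives a false positive
# ratio, and an added anomaly volume gives a false negative ratio. The
# numbers and the sampling scaling below are assumptions for illustration.
from math import sqrt
from scipy.stats import norm

mean, std = 100.0, 10.0        # normal traffic per time bin (arbitrary units)
anomaly = 25.0                 # anomaly volume to be detected
sampling = 0.01                # packet sampling rate

# Under independent packet sampling, the sampled mean scales with the rate and
# the relative fluctuation grows (treated here as Poisson-like thinning).
m = mean * sampling
s = sqrt((std * sampling) ** 2 + m)    # thinning adds roughly Poisson variance

threshold = m + 3 * s                  # "3-sigma" detection rule
fpr = norm.sf(threshold, loc=m, scale=s)
fnr = norm.cdf(threshold, loc=m + anomaly * sampling, scale=s)
print(f"FPR = {fpr:.4f}, FNR = {fnr:.4f}")
```

With these illustrative numbers the false negative ratio is close to one, showing how aggressive sampling can make a moderate anomaly effectively undetectable.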
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions that lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
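The paper implements the resampling in Excel; the same percentile-bootstrap idea is sketched below in Python for concreteness, using a simulated skewed reference sample of 40 values as an illustrative stand-in for real reference data.

```python
# Percentile-bootstrap estimate of a 95% reference interval (2.5th and 97.5th
# percentiles) with resampling, analogous to the Excel procedure described in
# the paper. The simulated reference sample below is purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
reference_sample = rng.lognormal(mean=1.0, sigma=0.4, size=40)  # skewed, n = 40

n_boot = 1000
lower, upper = [], []
for _ in range(n_boot):
    resample = rng.choice(reference_sample, size=len(reference_sample), replace=True)
    lower.append(np.percentile(resample, 2.5))
    upper.append(np.percentile(resample, 97.5))

print("reference interval estimate:",
      round(np.mean(lower), 2), "-", round(np.mean(upper), 2))
```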
Propagation and spatiotemporal coupling characteristics of ultra-short Gaussian vortex pulse
NASA Astrophysics Data System (ADS)
Nie, Jianye; Liu, Guodong; Zhang, Rongzhu
2018-05-01
Based on the Collins diffraction integral formula, the propagation equation of an ultra-short Gaussian vortex pulse beam has been derived. Using this equation, the variations of the intensity distribution of the vortex pulse during propagation are calculated. In particular, the spatiotemporal coupling characteristics of ultra-short vortex beams are discussed in detail. The results show that key parameters such as transverse distance, transmission distance, pulse width, and topological charge number significantly influence the spatiotemporal coupling characteristics. As the transverse distance increases, the waveforms of the pulses become visibly distorted, and when the transmission distance exceeds 50 mm, the transverse intensity distribution gradually changes into a Gaussian type. In addition, the initial pulse width affects the distribution of the light field; however, when the initial pulse width is larger than 3 fs, the spatiotemporal coupling effect becomes insignificant. The topological charge number does not affect the time delay characteristics: as the topological charge number increases, the pulse waveform distorts gradually, but no time delay occurs.
Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi
2013-01-01
Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity and a sub-Gaussian distribution model for the tissue activity. Our approach was applied to 22 PET measurement sets of small animals, which were obtained with the three most frequently used cardiac radiotracers, namely desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to human PET measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably with those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
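As a generic illustration of the ICA step (not the authors' algorithm or data), the sketch below unmixes two synthetic time-activity sources from their observed mixtures with scikit-learn's FastICA; the curves and mixing matrix are placeholders.

```python
# Generic ICA sketch (not the authors' algorithm): unmix two synthetic
# "time-activity" sources from observed mixtures with FastICA. The synthetic
# curves and mixing matrix are illustrative placeholders.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 60, 300)                         # minutes
blood = np.exp(-t / 3.0) + 0.05                     # sharply peaked curve
tissue = 1.0 - np.exp(-t / 15.0)                    # slow uptake curve
sources = np.c_[blood, tissue]
mixing = np.array([[0.8, 0.2], [0.4, 0.6]])
observed = sources @ mixing.T + 0.01 * np.random.default_rng(0).standard_normal((300, 2))

ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(observed)             # recovered sources (up to scale/sign)
print(estimated.shape)
```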
NASA Astrophysics Data System (ADS)
Belov, A. V.; Kurkov, Andrei S.; Chikolini, A. V.
1990-08-01
An offset method is modified to allow an analysis of the distribution of fields in a single-mode fiber waveguide without recourse to the Gaussian approximation. A new approximation for the field is obtained for fiber waveguides with a step refractive index profile and a special analysis employing the Hankel transformation is applied to waveguides with a distributed refractive index. The field distributions determined by this method are compared with the corresponding distributions calculated from the refractive index of a preform from which the fibers are drawn. It is shown that these new approaches can be used to determine the dimensions of a mode spot defined in different ways and to forecast the dispersion characteristics of single-mode fiber waveguides.
Testing the shape of distributions of weather data
NASA Astrophysics Data System (ADS)
Baccon, Ana L. P.; Lunardi, José T.
2016-08-01
The characterization of the statistical distributions of observed weather data is of crucial importance both for the construction and for the validation of weather models, such as weather generators (WGs). An important class of WGs (e.g., the Richardson-type generators) reduces the time series of each variable to a time series of its residual elements, and the residuals are often assumed to be normally distributed. In this work we propose an approach to investigate whether the shape assumed for the distribution of residuals is consistent with the observed data of a given site. Specifically, the procedure tests whether the same distribution shape of the residual noise is maintained over time. The proposed approach is an adaptation to climate time series of a procedure first introduced to test the shapes of distributions of growth rates of business firms aggregated in large panels of short time series. We illustrate the procedure by applying it to the residual time series of maximum temperature at a given location, and investigate the empirical consistency of two assumptions, namely i) the most common assumption that the distribution of the residuals is Gaussian and ii) that the residual noise has a time-invariant shape which coincides with the empirical distribution of all the residual noise of the whole time series pooled together.
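For contrast with the panel-based test proposed in the paper, the snippet below shows the naive single-series check one might otherwise apply to residuals, combining a Shapiro-Wilk test with sample skewness and kurtosis; the heavy-tailed test data are simulated.

```python
# Simple illustration of checking a Gaussian-shape assumption for residuals
# (the paper's actual procedure is a panel-based test adapted from firm-growth
# studies; this is only the naive single-series check for comparison).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
residuals = rng.standard_t(df=5, size=2000)   # heavier-tailed than Gaussian

w, p_value = stats.shapiro(residuals[:500])   # Shapiro-Wilk on a subsample
print(f"Shapiro-Wilk p = {p_value:.4f}")
print(f"skewness = {stats.skew(residuals):.2f}, "
      f"excess kurtosis = {stats.kurtosis(residuals):.2f}")
```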
Wang, Minghao; Yuan, Xiuhua; Ma, Donglin
2017-04-01
Nonuniformly correlated partially coherent beams (PCBs) have extraordinary propagation properties, making it possible to further improve the performance of free-space optical communications. In this paper, a series of PCBs with varying degrees of coherence in the radial direction, academically called radial partially coherent beams (RPCBs), are considered. RPCBs with arbitrary coherence distributions can be created by adjusting the amplitude profile of a spatial modulation function imposed on a uniformly correlated phase screen. Since RPCBs cannot be well characterized by the coherence length, a modulation depth factor is introduced as an indicator of the overall distribution of coherence. By wave optics simulation, free-space and atmospheric propagation properties of RPCBs with (inverse) Gaussian and super-Gaussian coherence distributions are examined in comparison with conventional Gaussian Schell-model beams. Furthermore, the impacts of varying central coherent areas are studied. Simulation results reveal that under comparable overall coherence, beams with a highly coherent core and a less coherent margin exhibit a smaller beam spread and greater on-axis intensity, which is mainly due to the self-focusing phenomenon right after the beam exits the transmitter. Particularly, those RPCBs with super-Gaussian coherence distributions will repeatedly focus during propagation, resulting in even greater intensities. Additionally, RPCBs also have a considerable ability to reduce scintillation. And it is demonstrated that those properties have made RPCBs very effective in improving the mean signal-to-noise ratio of small optical receivers, especially in relatively short, weakly fluctuating links.
Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.
Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A
2007-12-01
By recourse to appropriate information theory quantifiers (normalized Shannon entropy and the Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifier gaps are found in the transition between the continuous processes and their associated noises.
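The Bandt-Pompe symbolization itself is compact enough to sketch: ordinal patterns of a chosen embedding dimension are counted along the series and the normalized Shannon entropy of their frequencies is reported. The embedding dimension and test signals below are illustrative, and the intensive statistical complexity measure is omitted.

```python
# Sketch of the Bandt-Pompe symbolization: ordinal patterns of length d are
# counted along the series and the normalized Shannon entropy of their
# distribution is computed. Embedding dimension and test signals are
# illustrative; the intensive statistical complexity measure is omitted.
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, d=4):
    patterns = Counter(
        tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    h = -(p * np.log(p)).sum()
    return h / np.log(factorial(d))        # normalized to [0, 1]

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(10_000)
correlated = np.cumsum(white_noise)          # strongly correlated signal
print(round(permutation_entropy(white_noise), 3))   # close to 1
print(round(permutation_entropy(correlated), 3))    # lower than for white noise
```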
Tracer diffusion in a sea of polymers with binding zones: mobile vs. frozen traps.
Samanta, Nairhita; Chakrabarti, Rajarshi
2016-10-19
We use molecular dynamics simulations to investigate the tracer diffusion in a sea of polymers with specific binding zones for the tracer. These binding zones act as traps. Our simulations show that the tracer can undergo normal yet non-Gaussian diffusion under certain circumstances, e.g., when the polymers with traps are frozen in space and the volume fraction and the binding strength of the traps are moderate. In this case, as the tracer moves, it experiences a heterogeneous environment and exhibits confined continuous time random walk (CTRW) like motion resulting in a non-Gaussian behavior. Also the long time dynamics becomes subdiffusive as the number or the binding strength of the traps increases. However, if the polymers are mobile then the tracer dynamics is Gaussian but could be normal or subdiffusive depending on the number and the binding strength of the traps. In addition, with increasing binding strength and number of polymer traps, the probability of the tracer being trapped increases. On the other hand, removing the binding zones does not result in trapping, even at comparatively high crowding. Our simulations also show that the trapping probability increases with the increasing size of the tracer and for a bigger tracer with the frozen polymer background the dynamics is only weakly non-Gaussian but highly subdiffusive. Our observations are in the same spirit as found in many recent experiments on tracer diffusion in polymeric materials and question the validity of using Gaussian theory to describe diffusion in a crowded environment in general.
On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.
2011-01-01
This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings was found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violations of, the Central Limit Theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the period of the system impulse response is much greater than the rate of peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. The effect of a spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than a higher level of damping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Vipin K.; Sharma, Anamika
2013-05-15
We estimate the ponderomotive force on an expanded inhomogeneous electron density profile, created in the later phase of a laser-irradiated diamond-like ultrathin foil. When ions are uniformly distributed along the plasma slab and the electron density obeys Poisson's equation with a space charge potential equal to the negative of the ponderomotive potential, φ = −φ_p = −(mc²/e)(γ − 1), where γ = (1 + |a|²)^(1/2) and |a| is the normalized local laser amplitude inside the slab, the net ponderomotive force on the slab per unit area is demonstrated analytically to be equal to the radiation pressure force for both overdense and underdense plasmas. In case the electron density is taken to be frozen as a Gaussian profile with peak density close to the relativistic critical density, the ponderomotive force has a non-monotonic spatial variation and sums up over all electrons per unit area to equal the radiation pressure force at all laser intensities. The same result is obtained for the case of a Gaussian ion density profile and a self-consistent electron density profile obeying Poisson's equation with φ = −φ_p.
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
Lyu, Siwei; Simoncelli, Eero P.
2011-01-01
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
Dynamic generation of Ince-Gaussian modes with a digital micromirror device
NASA Astrophysics Data System (ADS)
Ren, Yu-Xuan; Fang, Zhao-Xiang; Gong, Lei; Huang, Kun; Chen, Yue; Lu, Rong-De
2015-04-01
The Ince-Gaussian (IG) beam with an elliptical profile, as a connection between Hermite-Gaussian (HG) and Laguerre-Gaussian (LG) beams, has shown unique advantages in applications such as quantum entanglement and optical micromanipulation. However, its dynamic generation at high switching frequency is still challenging. Here, we experimentally report the rapid generation of Ince-Gaussian beams using a digital micro-mirror device (DMD), which in principle supports switching frequencies of up to 5.2 kHz. The configurable properties of the DMD allow us to observe the quasi-smooth variation from LG (ellipticity ɛ = 0) through IG to HG (ɛ = ∞) beams. This approach might pave the way to high-speed quantum communication based on IG beams. Additionally, the characterized axial-plane intensity distribution exhibits a 3D mould that could potentially be employed for optical micromanipulation.
Non-Gaussian quantum states generation and robust quantum non-Gaussianity via squeezing field
NASA Astrophysics Data System (ADS)
Tang, Xu-Bing; Gao, Fang; Wang, Yao-Xiong; Kuang, Sen; Shuang, Feng
2015-03-01
Recent studies show that quantum non-Gaussian states, or the use of non-Gaussian operations, can improve entanglement distillation, quantum swapping, teleportation, and cloning. In this work, employing a strategy of non-Gaussian operations (namely subtracting and adding a single photon), we propose a scheme to generate non-Gaussian quantum states named single-photon-added and -subtracted coherent (SPASC) superposition states by implementing Bell measurements, and then investigate the corresponding nonclassical features. By squeezing the input field, we demonstrate that the robustness of non-Gaussianity can be improved. The controllable phase-space distribution offers the possibility of approximately generating displaced coherent superposition states (DCSS). The fidelity can reach F ≥ 0.98 and F ≥ 0.90 for amplitude sizes z = 1.53 and 2.36, respectively. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203061 and 61074052), the Outstanding Young Talent Foundation of Anhui Province, China (Grant No. 2012SQRL040), and the Natural Science Foundation of Anhui Province, China (Grant No. KJ2012Z035).
Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Steinwolf, Alexander
2005-01-01
The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation can be additionally regarded for convenience in usage after the model is constructed.
NASA Astrophysics Data System (ADS)
Liu, L.; Neretnieks, I.
Canisters with spent nuclear fuel will be deposited in fractured crystalline rock in the Swedish concept for a final repository. The fractures intersect the canister holes at different angles and have variable apertures and therefore locally varying flowrates. Our previous model, which assumed fractures with a constant aperture and a 90° intersection angle, is now extended to arbitrary intersection angles and stochastically variable apertures. It is shown that the previous basic model can be simply amended to account for these effects. More importantly, it has been found that the distributions of the volumetric and the equivalent flow rates are all close to Normal for both fractal and Gaussian fractures, with the mean of the distribution of the volumetric flow rate being determined solely by the hydraulic aperture, and that of the equivalent flow rate being determined by the mechanical aperture. Moreover, the standard deviation of the volumetric flow rates of the many realizations increases with increasing roughness and spatial correlation length of the aperture field, and so does that of the equivalent flow rates. Thus, two simple statistical relations can be developed to describe the stochastic properties of fluid flow and solute transport through a single fracture with spatially variable apertures. This obviates the need to simulate each fracture that intersects a canister in great detail, and allows the use of complex fractures also in very large fracture network models used in performance assessment.
On the Five-Moment Hamburger Maximum Entropy Reconstruction
NASA Astrophysics Data System (ADS)
Summy, D. P.; Pullin, D. I.
2018-05-01
We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ₃, μ₄) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ₃, μ₄) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensibly by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
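A minimal sketch of drawing one GSG-type sample Y(x) = U(x)G(x) on a one-dimensional grid is given below; the exponential covariance for G and the lognormal choice for the subordinator U are illustrative assumptions rather than the authors' exact specification, but they suffice to produce the heavy-tailed increments described above.

```python
# Minimal sketch of a generalized sub-Gaussian (GSG) sample, Y(x) = U(x) G(x),
# with G a correlated Gaussian field on a 1-D grid and U a non-negative
# subordinator (here lognormal). The covariance model and subordinator choice
# are illustrative assumptions, not the authors' exact specification.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(7)
n, corr_len = 1000, 20.0

x = np.arange(n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
G = L @ rng.standard_normal(n)                              # correlated Gaussian field

U = rng.lognormal(mean=0.0, sigma=0.5, size=n)              # subordinator
Y = U * G

dY = Y[1:] - Y[:-1]                                         # increments at lag 1
print("excess kurtosis of increments:", round(kurtosis(dY), 2))  # > 0: heavy tails
```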
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and the accuracy of predicting the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
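The distinction between the two kernel models amounts to fitting a lateral profile with a sum of two versus three radial Gaussians. The sketch below does this with a synthetic core-plus-halo profile and scipy's curve_fit; the profile and starting values are illustrative, not the authors' beam data.

```python
# Generic sketch of fitting double- and triple-Gaussian lateral kernel models
# to a pencil-beam lateral dose profile with curve_fit. The synthetic profile
# (narrow core plus broad low-dose halo) is illustrative, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def n_gauss(r, *params):
    # params = (w1, s1, w2, s2, ...): weights and sigmas of radial Gaussians
    out = np.zeros_like(r)
    for w, s in zip(params[0::2], params[1::2]):
        out += w * np.exp(-r**2 / (2 * s**2))
    return out

r = np.linspace(0, 50, 200)                       # mm
profile = n_gauss(r, 1.0, 4.0, 0.02, 20.0)        # core + faint halo
profile += 1e-4 * np.random.default_rng(0).standard_normal(r.size)

p2, _ = curve_fit(n_gauss, r, profile, p0=[1, 3, 0.01, 15], maxfev=20000)
p3, _ = curve_fit(n_gauss, r, profile, p0=[1, 3, 0.01, 10, 0.005, 25], maxfev=20000)

for name, p in (("double", p2), ("triple", p3)):
    rms = np.sqrt(np.mean((n_gauss(r, *p) - profile) ** 2))
    print(f"{name} Gaussian fit: RMS residual = {rms:.2e}")
```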
Spatiotemporal modeling of node temperatures in supercomputers
Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...
2016-06-10
Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
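A stripped-down version of the marginal model described above (Gaussian bulk plus a generalized Pareto tail for exceedances over a high threshold) can be sketched as follows; the simulated temperatures and the 95th-percentile threshold are illustrative assumptions, and the Gaussian-copula and GMRF components are not reproduced.

```python
# Sketch of the marginal model described above: a Normal fit to the bulk of
# node temperatures plus a generalized Pareto fit to exceedances over a high
# threshold. Threshold choice and the simulated data are illustrative; the
# Gaussian-copula and GMRF spatio-temporal parts are not reproduced here.
import numpy as np
from scipy.stats import norm, genpareto

rng = np.random.default_rng(11)
temps = 55 + 5 * rng.standard_normal(5000)          # bulk of node temperatures (deg C)
temps = np.concatenate([temps, 68 + rng.exponential(2.0, 50)])   # a few hot excursions

mu, sigma = norm.fit(temps)
threshold = np.quantile(temps, 0.95)
excesses = temps[temps > threshold] - threshold
xi, loc, scale = genpareto.fit(excesses, floc=0.0)  # fix location at 0 for exceedances

p_exceed = np.mean(temps > threshold)
p_above_75 = p_exceed * genpareto.sf(75.0 - threshold, xi, loc=0.0, scale=scale)
print(f"bulk fit: mean={mu:.1f}, sd={sigma:.1f}; P(T > 75 C) ~ {p_above_75:.2e}")
```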
Meinerz, Kelsey; Beeman, Scott C; Duan, Chong; Bretthorst, G Larry; Garbow, Joel R; Ackerman, Joseph J H
2018-01-01
Recently, a number of MRI protocols have been reported that seek to exploit the effect of dissolved oxygen (O₂, paramagnetic) on the longitudinal ¹H relaxation of tissue water, thus providing image contrast related to tissue oxygen content. However, tissue water relaxation is dependent on a number of mechanisms, and this raises the issue of how best to model the relaxation data. This problem, the model selection problem, occurs in many branches of science and is optimally addressed by Bayesian probability theory. High signal-to-noise, densely sampled, longitudinal ¹H relaxation data were acquired from rat brain in vivo and from a cross-linked bovine serum albumin (xBSA) phantom, a sample that recapitulates the relaxation characteristics of tissue water in vivo. Bayesian-based model selection was applied to a cohort of five competing relaxation models: (i) monoexponential, (ii) stretched-exponential, (iii) biexponential, (iv) Gaussian (normal) R₁-distribution, and (v) gamma R₁-distribution. Bayesian joint analysis of multiple replicate datasets revealed that water relaxation of both the xBSA phantom and in vivo rat brain was best described by a biexponential model, while xBSA relaxation datasets truncated to remove evidence of the fast relaxation component were best modeled as a stretched exponential. In all cases, estimated model parameters were compared to the commonly used monoexponential model. Reducing the sampling density of the relaxation data and adding Gaussian-distributed noise served to simulate cases in which the data are acquisition-time or signal-to-noise restricted, respectively. As expected, reducing either the number of data points or the signal-to-noise increases the uncertainty in estimated parameters and, ultimately, reduces support for more complex relaxation models.
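The full analysis uses Bayesian model selection; as a much simpler illustration of the same model-comparison idea, the sketch below fits mono- and biexponential recovery models to a synthetic relaxation curve by least squares and compares them with BIC. All parameter values are illustrative.

```python
# Much simpler stand-in for the Bayesian model selection used in the paper:
# fit mono- and biexponential recovery models to a synthetic relaxation curve
# by least squares and compare them with BIC. Parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def mono(t, a, r1):
    return a * (1 - np.exp(-r1 * t))

def biexp(t, a1, r1, a2, r2):
    return a1 * (1 - np.exp(-r1 * t)) + a2 * (1 - np.exp(-r2 * t))

rng = np.random.default_rng(5)
t = np.linspace(0.05, 8.0, 60)                       # seconds
signal = biexp(t, 0.7, 2.5, 0.3, 0.4) + 0.005 * rng.standard_normal(t.size)

def bic(model, popt):
    rss = np.sum((signal - model(t, *popt)) ** 2)
    return t.size * np.log(rss / t.size) + len(popt) * np.log(t.size)

p_mono, _ = curve_fit(mono, t, signal, p0=[1.0, 1.0], maxfev=20000)
p_bi, _ = curve_fit(biexp, t, signal, p0=[0.5, 2.0, 0.5, 0.5], maxfev=20000)
print(f"BIC mono = {bic(mono, p_mono):.1f}, BIC biexp = {bic(biexp, p_bi):.1f}")
# the model with the lower BIC (here the biexponential) is preferred
```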
Matsuoka, A J; Abbas, P J; Rubinstein, J T; Miller, C A
2000-11-01
Experimental results from humans and animals show that electrically evoked compound action potential (EAP) responses to constant-amplitude pulse train stimulation can demonstrate an alternating pattern, due to the combined effects of highly synchronized responses to electrical stimulation and refractory effects (Wilson et al., 1994). One way to improve signal representation is to reduce the level of across-fiber synchrony and hence, the level of the amplitude alternation. To accomplish this goal, we have examined EAP responses in the presence of Gaussian noise added to the pulse train stimulus. Addition of Gaussian noise at a level approximately -30 dB relative to EAP threshold to the pulse trains decreased the amount of alternation, indicating that stochastic resonance may be induced in the auditory nerve. The use of some type of conditioning stimulus such as Gaussian noise may provide a more 'normal' neural response pattern.
Bernatas, J J; Mohamed Ali, I; Ali Ismaël, H; Barreh Matan, A
2008-12-01
The purpose of this report was to describe a tuberculin survey conducted in 2001 to assess the trend in the annual risk for tuberculosis infection in Djibouti and compare resulting data with those obtained in a previous survey conducted in 1994. In 2001 cluster sampling allowed selection of 5599 school children between the ages of 6 and 10 years including 31.2% (1747/5599) without a BCG vaccination scar. In this sample the annual risk of infection (ARI) estimated using cutoff points of 6 mm, 10 mm, and 14 mm corrected by a factor of 1/0.82 and a mode value (18 mm) determined according to the "mirror" method were 4.67%, 3.64%, 3.19% and 2.66% respectively. The distribution of positive tuberculin skin reaction sizes was significantly different from a normal (Gaussian) distribution. In 1994 a total of 5257 children were selected using the same method. The distribution of positive reactions was not significantly different from a Gaussian distribution and 28.6% (1505/5257) did not have a BCG scar. The ARI estimated using cutoff points of 6 mm, 10 mm, and 14 mm corrected by a factor of 1/0.82 and a mode value (17 mm) determined according to the "mirror" method were 2.68%, 2.52%, 2.75% and 3.32% respectively. Tuberculin skin reaction size among positive skin test reactors was correlated with the presence of a BCG scar, and its mean was significantly higher among children with a BCG scar. The proportion of positive skin test reactors was also higher in the BCG scar group regardless of the cutoff point selected. Comparison of prevalence rates and ARI values did not allow any clear conclusion to be drawn, mainly because of a drastic difference in the positive reaction distribution profiles between the two studies. The distribution of skin test reaction sizes in the 1994 study could be modeled by a Gaussian distribution, whereas that in the 2001 study could not. A partial explanation for the positive reaction distribution observed in the 2001 study might be the existence of cross-reactions with environmental mycobacteria.
Compensation of Gaussian curvature in developable cones is local
NASA Astrophysics Data System (ADS)
Wang, Jin W.; Witten, Thomas A.
2009-10-01
We use the angular deficit scheme [V. Borrelli, F. Cazals, and J.-M. Morvan, Comput. Aided Geom. Des. 20, 319 (2003)] to determine the distribution of Gaussian curvature in developable cones (d-cones) [E. Cerda, S. Chaieb, F. Melo, and L. Mahadevan, Nature (London) 401, 46 (1999)] numerically. These d-cones are formed by pushing a thin elastic sheet into a circular container. Negative Gaussian curvatures are identified at the rim where the sheet touches the container. Around the rim there are two narrow bands with positive Gaussian curvatures. The integral of the (negative) Gaussian curvature near the rim is almost completely compensated by that of the two adjacent bands. This suggests that the Gauss-Bonnet theorem which constrains the integral of Gaussian curvature globally does not explain the spontaneous curvature cancellation phenomenon [T. Liang and T. A. Witten, Phys. Rev. E 73, 046604 (2006)]. The locality of the compensation seems to increase for decreasing d-cone thickness. The angular deficit scheme also provides a way to confirm the curvature cancellation phenomenon.