Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
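The positivity failure the abstract addresses is easy to reproduce. The sketch below (illustrative parameters, not from the paper) builds a truncated Gram-Charlier A series around a standard Gaussian using the probabilists' Hermite recurrence, and shows that the result still integrates to one yet dips below zero for a moderately skewed target:

```python
import numpy as np

def hermite_prob(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the standard recurrence."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gram_charlier_pdf(x, skew, excess_kurt):
    """Truncated Gram-Charlier A series around a standard Gaussian.
    Not manifestly positive -- the flaw discussed in the abstract."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    correction = (1
                  + skew / 6 * hermite_prob(3, x)
                  + excess_kurt / 24 * hermite_prob(4, x))
    return phi * correction

x = np.linspace(-5, 5, 1001)
f = gram_charlier_pdf(x, skew=0.8, excess_kurt=0.0)
print("total mass:", np.trapz(f, x))       # still ~1 (Hermite orthogonality)
print("minimum of truncated series:", f.min())  # negative in the left tail
```

Because the Hermite corrections integrate to zero against the Gaussian weight, total mass is preserved, but nothing keeps the truncated sum non-negative.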
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
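A minimal one-dimensional sketch of the two-step recipe described above, with assumed stand-ins for the targets (a first-order Butterworth spectrum and an exponential marginal, neither taken from the paper): color white Gaussian noise to the desired spectrum, then push the colored Gaussian through Φ followed by the target inverse CDF:

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
n = 1 << 16

# Step 1: color white Gaussian noise to a target spectral shape
# (a hypothetical first-order low-pass, purely for illustration).
white = rng.standard_normal(n)
b, a = signal.butter(1, 0.1)
colored = signal.lfilter(b, a, white)
colored = (colored - colored.mean()) / colored.std()  # standardize

# Step 2: memoryless transform onto the desired marginal pdf
# (here exponential with unit mean) via Phi -> inverse CDF.
u = stats.norm.cdf(colored)
y = stats.expon.ppf(u)

print("sample mean (exponential(1) target):", y.mean())
```

The second step is exact for the marginal distribution; as the abstract notes, the spectrum is only approximately preserved through the nonlinearity, which is why the method is an engineering approach.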
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
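The Box-Cox step can be sketched with SciPy's maximum-likelihood fit of the power-law exponent. Lognormal toy data (an assumption, not the mine-field imagery) is convenient because the log transform, i.e. λ ≈ 0, Gaussianizes it exactly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.6, size=5000)  # clearly non-Gaussian

# Maximum-likelihood estimate of the Box-Cox exponent lambda.
transformed, lam = stats.boxcox(data)

print("fitted lambda (should be near 0 for lognormal data):", lam)
print("skewness before:", stats.skew(data), "after:", stats.skew(transformed))
```

The fitted λ near zero recovers the log transform, and the skewness of the transformed sample collapses toward the Gaussian value of zero.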
Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams
NASA Technical Reports Server (NTRS)
Steely, Sidney L.
1993-01-01
The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes of Hermite-Gaussian laser beams; Sturm's theorem provides a direct proof.
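The quantum-mechanical side of the correspondence can be checked numerically in one dimension (the paper uses Mathematica and a two-dimensional oscillator; this NumPy sketch, with arbitrarily chosen grid sizes, integrates the oscillator probability density between the classical turning points ±√(2n+1) in natural units):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def fraction_inside_classical(n, npts=20001):
    """Probability that harmonic-oscillator state n lies inside the
    classical turning points x = +/- sqrt(2n+1) (natural units)."""
    turning = sqrt(2 * n + 1)
    x = np.linspace(-turning, turning, npts)
    c = np.zeros(n + 1)
    c[n] = 1.0
    Hn = hermval(x, c)                     # physicists' Hermite H_n(x)
    psi2 = Hn**2 * np.exp(-x**2) / (2**n * factorial(n) * sqrt(pi))
    return np.trapz(psi2, x)

for n in (0, 5, 50):
    print(n, fraction_inside_classical(n))
```

The fraction grows with mode order toward unity, the numerical counterpart of the Correspondence Principle argument in the abstract (for n = 0 it is erf(1) ≈ 0.8427).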
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is simply distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. (1998). However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
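A one-variable sketch of the proposed truncation (with illustrative numbers, not taken from the abstract): all Gaussian mass below the bound becomes a point mass at the bound, so the probability of, e.g., exactly zero sea-ice concentration is finite rather than zero:

```python
import numpy as np
from scipy import stats

def truncate_with_delta(mu, sigma, lower=0.0):
    """Constrain N(mu, sigma^2) to x >= lower by moving all mass below
    the bound into a point mass AT the bound, instead of redistributing
    it equally over the allowed region."""
    p_delta = stats.norm.cdf(lower, mu, sigma)   # P(x = lower) afterwards
    def pdf(x):
        x = np.asarray(x, dtype=float)
        return np.where(x >= lower, stats.norm.pdf(x, mu, sigma), 0.0)
    return p_delta, pdf

p0, pdf = truncate_with_delta(mu=0.1, sigma=0.2)

# Continuous part plus the delta mass integrates to one.
x = np.linspace(0, 5, 100001)
total = np.trapz(pdf(x), x) + p0
print("P(x = 0) =", p0, " total mass =", total)
```

Here P(x = 0) = Φ((0 − μ)/σ) ≈ 0.309, a finite probability of the constrained value, which is exactly the property the equal-redistribution scheme lacks.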
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere potentials at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
NASA Astrophysics Data System (ADS)
Selim, M. M.; Bezák, V.
2003-06-01
The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function. The optical length X (the integral of the extinction function over the rod length L) is then a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T²(X), R²(X), R(X)T(X), etc.) are averaged term by term according to the Gaussian statistics, they become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a 'modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the root-mean-square deviation of X is sufficiently small in comparison with the mean X̄, the standard Gaussian averaging works well provided that the summation in the series representing the variable T^(m-j)(X) R^j(X) (m = 1, 2, ..., j = 1, ..., m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.
Li, Ye; Yu, Lin; Zhang, Yixin
2017-05-29
Applying the angular spectrum theory, we derive the expression for a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish a model for the received probability density of orbital angular momentum (OAM) modes of the beam propagating through anisotropic oceanic turbulence. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that the influence of anisotropic oceanic turbulence on the received probability of signal OAM modes is smaller than that of isotropic oceanic turbulence under the same conditions, and the effect of salinity fluctuation on the received probability of the signal OAM modes is larger than the effect of temperature fluctuation. In the regime of strong dissipation of kinetic energy per unit mass of fluid and a weak dissipation rate of temperature variance, we can decrease the effects of turbulence on the received probability of signal OAM modes by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting the destruction caused by oceanic turbulence.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Kurtosis, skewness, and non-Gaussian cosmological density perturbations
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1993-01-01
Cosmological topological defects as well as some nonstandard inflation models can give rise to non-Gaussian density perturbations. Skewness and kurtosis are the third and fourth moments that measure the deviation of a distribution from a Gaussian. Measurement of these moments for the cosmological density field and for the microwave background temperature anisotropy can provide a test of the Gaussian nature of the primordial fluctuation spectrum. In the case of the density field, the importance of measuring the kurtosis is stressed since it will be preserved through the weakly nonlinear gravitational evolution epoch. Current constraints on skewness and kurtosis of primeval perturbations are obtained from the observed density contrast on small scales and from recent COBE observations of temperature anisotropies on large scales. It is also shown how, in principle, future microwave anisotropy experiments might be able to reveal the initial skewness and kurtosis. It is shown that present data argue that if the initial spectrum is adiabatic, then it is probably Gaussian, but non-Gaussian isocurvature fluctuations are still allowed, and these are what topological defects provide.
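Sample skewness and excess kurtosis, the third- and fourth-moment deviation measures discussed above, are one-liners with SciPy. The toy comparison below (a Gaussian against a χ²(4) sample, chosen only for illustration) recovers the known values √2 and 3 for the latter:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
gauss = rng.standard_normal(200000)
chi2_sample = rng.chisquare(df=4, size=200000)   # skewed, heavy-tailed

# For chi^2(df): skewness = sqrt(8/df), excess kurtosis = 12/df.
for name, sample in [("gaussian", gauss), ("chi2(4)", chi2_sample)]:
    print(name,
          "skewness:", stats.skew(sample),
          "excess kurtosis:", stats.kurtosis(sample))
```

Both statistics vanish for the Gaussian sample, which is what makes them usable as tests of Gaussianity for a density or temperature-anisotropy field.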
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
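The Gaussian-vector stage of such a procedure can be sketched by Cholesky-factoring the target cross-spectral density matrix (here a hypothetical frequency-independent 2×2 matrix, not one from the paper) and coloring independent complex Gaussian spectra with the factor:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4096
freqs = np.fft.rfftfreq(n)

# Hypothetical 2x2 cross-spectral density: two unit-power channels
# with a frequency-independent cross-spectrum of 0.8.
S = np.array([[1.0, 0.8],
              [0.8, 1.0]])
L = np.linalg.cholesky(S)                  # S = L L^T

# Independent complex Gaussian spectra, correlated through L,
# then transformed back to two real time histories.
W = (rng.standard_normal((2, freqs.size))
     + 1j * rng.standard_normal((2, freqs.size))) / np.sqrt(2)
X = L @ W
x = np.fft.irfft(X, n=n)

r = np.corrcoef(x[0], x[1])[0, 1]
print("sample correlation between channels:", r)
```

The time-domain correlation between the two channels reproduces the prescribed cross-spectrum (0.8 here); in the full method each waveform would then pass through its own ZMNL function to acquire its non-Gaussian marginal.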
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
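The method-of-moments step can be sketched as minimization of the convex dual over the Lagrange multipliers. With only the first two moments constrained (an illustrative choice; the paper's applications use more), the fitted maximum-entropy density must come out as the standard Gaussian, i.e. λ₂ → 1/2:

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy density p(x) ∝ exp(-lambda_1 x - lambda_2 x^2) on a grid,
# constrained to match the first two moments (mean 0, variance 1).
x = np.linspace(-8, 8, 2001)
targets = np.array([0.0, 1.0])             # E[x], E[x^2]

def dual(lmbda):
    """Convex dual: log partition function plus lambda . targets."""
    logp = -(lmbda[0] * x + lmbda[1] * x**2)
    Z = np.trapz(np.exp(logp), x)
    return np.log(Z) + lmbda @ targets

res = minimize(dual, x0=np.array([0.0, 0.1]), method="Nelder-Mead")
lam = res.x
p = np.exp(-(lam[0] * x + lam[1] * x**2))
p /= np.trapz(p, x)

print("multipliers:", lam)                 # lambda_2 approaches 1/2
print("variance of the fit:", np.trapz(x**2 * p, x))
```

Stationarity of the dual forces the moments of p to equal the targets; the Bayesian extension in the paper would put posterior distributions, and hence error bars, on these multipliers.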
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a non-Gaussian random-process model. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise
NASA Astrophysics Data System (ADS)
Artyushenko, V. M.; Volovach, V. I.
2018-01-01
We considered the problems of the synthesis and analysis of discriminators, when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case, the discriminator of the tracking meter should contain the nonlinear transformation unit, the characteristics of which are determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of the discriminators working under the above conditions are obtained. It is shown that the efficiency of non-linear processing depends on the ratio of power of FM noise to the power of Gaussian noise. The analysis of the information loss of signal transformation caused by the linear section of discriminatory characteristics of the unit of nonlinear transformations of the discriminator is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian noise and mismatch errors.
USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.
Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J
2018-03-15
The most interesting events in a Radiological Monitoring Network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which is then no longer Gaussian. Within this work the probability of having a dose more than 2 standard deviations above the mean is proposed as a surveillance statistic for higher doses. This probability is estimated by using Hermite polynomials to reconstruct the PDF. The result is that the probability is ~6 ± 1%, much higher than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
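The size of the effect can be imitated with any skewed surrogate for the dose records. The sketch below uses a lognormal sample (an assumption purely for illustration, not the network's data) and compares the empirical probability of exceeding the mean by two standard deviations with the Gaussian value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical stand-in for skewed H*(10) records: a lognormal sample.
dose = rng.lognormal(mean=0.0, sigma=0.8, size=100000)

threshold = dose.mean() + 2 * dose.std()
p_exceed = (dose > threshold).mean()
p_gauss = stats.norm.sf(2)                 # one-sided Gaussian tail, ~2.3%

print("P(X > mean + 2*std):", p_exceed, " vs Gaussian:", p_gauss)
```

The skewed sample exceeds the two-sigma level noticeably more often than a Gaussian would, which is the qualitative point behind tuning alarm levels to the reconstructed, non-Gaussian PDF.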
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
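The kernel density estimation building block can be sketched with SciPy's Gaussian KDE on a deliberately non-Gaussian (bimodal) toy sample; the fitted pdf integrates to one and recovers both modes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Bimodal, clearly non-Gaussian sample.
sample = np.concatenate([rng.normal(-2, 0.5, 2000),
                         rng.normal(2, 0.5, 2000)])

kde = stats.gaussian_kde(sample)           # bandwidth via Scott's rule
x = np.linspace(-5, 5, 1001)
pdf = kde(x)

print("total mass:", np.trapz(pdf, x))
print("density at 0 vs at the modes:", pdf[500], pdf[300], pdf[700])
```

A second-order (Gaussian-assumption) fit would place most mass near zero, between the modes; the multivariate form of the same estimator is what the paper plugs into its Bayesian source pdf.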
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
Annular wave packets at Dirac points in graphene and their probability-density oscillation.
Luo, Ji; Valencia, Daniel; Lu, Junqiang
2011-12-14
Wave packets in graphene whose central wave vector is at Dirac points are investigated by numerical calculations. Starting from an initial Gaussian function, these wave packets form into annular peaks that propagate to all directions like ripple-rings on water surface. At the beginning, electronic probability alternates between the central peak and the ripple-rings and transient oscillation occurs at the center. As time increases, the ripple-rings propagate at the fixed Fermi speed, and their widths remain unchanged. The axial symmetry of the energy dispersion leads to the circular symmetry of the wave packets. The fixed speed and widths, however, are attributed to the linearity of the energy dispersion. Interference between states that, respectively, belong to two branches of the energy dispersion leads to multiple ripple-rings and the probability-density oscillation. In a magnetic field, annular wave packets become confined and no longer propagate to infinity. If the initial Gaussian width differs greatly from the magnetic length, expanding and shrinking ripple-rings form and disappear alternatively in a limited spread, and the wave packet resumes the Gaussian form frequently. The probability thus oscillates persistently between the central peak and the ripple-rings. If the initial Gaussian width is close to the magnetic length, the wave packet retains the Gaussian form and its height and width oscillate with a period determined by the first Landau energy. The wave-packet evolution is determined jointly by the initial state and the magnetic field, through the electronic structure of graphene in a magnetic field. © 2011 American Institute of Physics
The statistics of peaks of Gaussian random fields. [cosmological density fluctuations]
NASA Technical Reports Server (NTRS)
Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.
1986-01-01
A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
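The one-dimensional analogue of these upcrossing densities can be checked numerically with Rice's formula, which gives the expected rate of level upcrossings of a Gaussian process in terms of its spectral moments. The sketch below is illustrative only: the Gaussian-shaped spectrum and its cutoff are assumed values, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

# Smooth, stationary 1-D Gaussian field: low-pass filtered white noise
# (assumed Gaussian-shaped filter with an illustrative cutoff of 0.01)
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0)
filt = np.exp(-(freqs / 0.01) ** 2)
x = np.fft.irfft(np.fft.rfft(white) * filt, n)

# Spectral moments estimated from the realization:
# lambda_0 = Var(x), lambda_2 = Var(dx/dt)
dx = np.gradient(x)
lam0, lam2 = x.var(), dx.var()

# Rice's formula: expected number of zero upcrossings over the record
expected = np.sqrt(lam2 / lam0) / (2.0 * np.pi) * n
observed = np.sum((x[:-1] < 0) & (x[1:] >= 0))
```

The counted upcrossings should track the Rice prediction closely for a smooth process like this one.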
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h^-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scales of the fluctuations in the variance and the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean]
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to a wide range of empirical data. The complexity of this distribution makes the use of computational tools an essential element, so there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools written in Python for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most of the literature in the area). The analysis allows one to identify outliers in the empirical datasets and to determine judiciously whether there is a need for data trimming and, if so, at which points it should be done.
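ExGUtils' own API is not reproduced here; as a minimal stand-in sketch, SciPy's exponnorm (the exponentially modified Gaussian, parametrized by K = tau/sigma) can illustrate the maximum-likelihood ex-Gaussian fit the abstract describes. The parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)

# Simulated reaction times (seconds): ex-Gaussian = Normal(mu, sigma) + Exponential(tau)
mu, sigma, tau = 0.40, 0.05, 0.10
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# SciPy parametrizes the ex-Gaussian as exponnorm(K, loc, scale) with K = tau/sigma;
# starting guesses help the generic MLE optimizer converge
K_hat, loc_hat, scale_hat = exponnorm.fit(rts, 2.0, loc=0.35, scale=0.05)
tau_hat = K_hat * scale_hat  # recover the exponential mean
```

With 5000 samples the maximum-likelihood estimates land close to the generating parameters.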
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
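The core idea, Gaussianizing a skewed density with a maximum-likelihood Box-Cox transformation, can be sketched with SciPy; the lognormal "posterior" sample below is an illustrative assumption, not data from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A strongly skewed one-dimensional "posterior" sample (Box-Cox needs positive data)
sample = rng.lognormal(mean=0.0, sigma=0.7, size=20000)

# scipy.stats.boxcox finds the lambda that maximizes the Gaussian likelihood
# of the transformed data, returning (transformed sample, optimal lambda)
transformed, lam = stats.boxcox(sample)

skew_before = stats.skew(sample)
skew_after = stats.skew(transformed)
```

For a lognormal input the optimal lambda is near zero (a log transform), and the skewness of the transformed sample collapses toward the Gaussian value of zero.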
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle-degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are an important class of joint components in aircraft structures.
Adaptive detection of noise signal according to Neyman-Pearson criterion
NASA Astrophysics Data System (ADS)
Padiryakov, Y. A.
1985-03-01
Optimum detection according to the Neyman-Pearson criterion is considered in the case of a random Gaussian noise signal, stationary during measurement, and a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities; it is known a priori that sample A from the signal channel is either the sum of signal and interference or interference alone, and that sample B from the reference interference channel is an interference with the same spectral density as that of the interference in sample A under both hypotheses. The probability of correct detection is maximized on the average, first in the 2N-dimensional space of signal spectral density and interference spectral density readings, by fixing the probability of false alarm at each point so as to stabilize it at a constant level against variation of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratio of sample A readings to sample B readings.
Non-Gaussianity in a quasiclassical electronic circuit
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi J.; Hayakawa, Hisao
2017-05-01
We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuations of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable for correctly estimating the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.
Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation
NASA Technical Reports Server (NTRS)
Steinwolf, Alexander; Rizzi, Stephen A.
2003-01-01
The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
Koga, D; Chian, A C-L; Miranda, R A; Rempel, E L
2007-04-01
The link between phase coherence and non-Gaussian statistics is investigated using magnetic field data observed in the solar wind turbulence near the Earth's bow shock. The phase coherence index Cphi, which characterizes the degree of phase correlation (i.e., nonlinear wave-wave interactions) among scales, displays a behavior similar to kurtosis and reflects a departure from Gaussianity in the probability density functions of magnetic field fluctuations. This demonstrates that nonlinear interactions among scales are the origin of intermittency in the magnetic field turbulence.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
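The quasi-periodic covariance kernel at the heart of this method has a standard closed form, k(Δt) = A² exp(-Δt²/2ℓ² - Γ sin²(πΔt/P)); the sketch below builds it in plain NumPy and draws one light-curve-like sample. The hyperparameter values are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp=1.0, l_evol=20.0, gamma=1.0, period=7.0):
    """Quasi-periodic GP kernel: squared-exponential decay times an
    exp-sine-squared periodic term with period P (days, illustrative)."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-dt**2 / (2.0 * l_evol**2)
                           - gamma * np.sin(np.pi * dt / period)**2)

t = np.linspace(0.0, 60.0, 200)
K = quasi_periodic_kernel(t, t)

# Small jitter on the diagonal for numerical stability, then one GP draw
K_j = K + 1e-8 * np.eye(len(t))
rng = np.random.default_rng(2)
sample = rng.multivariate_normal(np.zeros(len(t)), K_j)
```

The resulting draw is periodic on short timescales but slowly loses phase coherence over the evolution length, mimicking spot evolution.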
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified ASD is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
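The two-step construction described above can be sketched as follows. The one-pole low-pass ASD shape and the heavy-tailed Student-t target distribution are illustrative assumptions, not examples from the report; the zero-memory map y = F_target^{-1}(Φ(x)) is the CDF-based monotonic transform the abstract describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2**14

# Step 1: colored Gaussian noise with a specified ASD shape
# (assumed one-pole low-pass with cutoff 0.05 cycles/sample)
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0)
shape = 1.0 / np.sqrt(1.0 + (freqs / 0.05)**2)
colored = np.fft.irfft(np.fft.rfft(white) * shape, n)
colored /= colored.std()

# Step 2: monotonic zero-memory transform through the two CDFs,
# y = F_target^{-1}(Phi(x)); target is a zero-mean Student-t (heavy tails)
u = stats.norm.cdf(colored)
y = stats.t(df=6).ppf(u)
```

Because the map is monotonic and odd, zero-crossings and peak locations of the Gaussian waveform are preserved, while the amplitude distribution acquires the heavier tails of the target pdf.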
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry a latent assumption that the probability density is Gaussian, which is not necessarily true in many real life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building a SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
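The Parzen-window entropy estimate underlying such a criterion is simple to state: the error density is approximated by a mean of Gaussian kernels centered on the error samples, and the entropy by the negative mean log of that estimate. The bandwidth and error distributions below are illustrative assumptions.

```python
import numpy as np

def parzen_entropy(errors, bandwidth=0.2):
    """Shannon entropy estimate of an error distribution using a Gaussian
    Parzen window: p_hat(e) = mean_j N(e; e_j, bandwidth^2). The self-term
    is included, which is acceptable for a rough sketch."""
    e = np.asarray(errors)
    diffs = e[:, None] - e[None, :]
    kernel = (np.exp(-0.5 * (diffs / bandwidth)**2)
              / (bandwidth * np.sqrt(2.0 * np.pi)))
    p_hat = kernel.mean(axis=1)
    return -np.mean(np.log(p_hat))

rng = np.random.default_rng(4)
h_narrow = parzen_entropy(rng.normal(0.0, 0.5, 800))   # tightly tracked errors
h_wide = parzen_entropy(rng.normal(0.0, 2.0, 800))     # poorly tracked errors
```

A controller minimizing this quantity drives the error distribution toward a narrower (lower-entropy) shape, which is the intuition behind the criterion.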
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
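The propagate-weight-resample cycle of a bootstrap particle filter can be illustrated on a toy one-dimensional linear system; this is a minimal sketch under assumed dynamics and noise levels, not the orbit model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy scalar system (illustrative): x_k = 0.95 x_{k-1} + w_k, z_k = x_k + v_k
n_steps, n_particles = 100, 2000
q_std, r_std = 0.3, 0.5

# Simulate a truth trajectory and noisy measurements
x_true = np.zeros(n_steps)
z = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = 0.95 * x_true[k - 1] + rng.normal(0.0, q_std)
    z[k] = x_true[k] + rng.normal(0.0, r_std)

# Bootstrap PF: propagate particles through the transition density,
# weight by the measurement likelihood, resample to fight degeneracy
particles = rng.normal(0.0, 1.0, n_particles)
estimates = np.zeros(n_steps)
for k in range(1, n_steps):
    particles = 0.95 * particles + rng.normal(0.0, q_std, n_particles)
    w = np.exp(-0.5 * ((z[k] - particles) / r_std) ** 2)
    w /= w.sum()
    estimates[k] = np.dot(w, particles)          # posterior-mean estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

Even in this linear-Gaussian toy case, where the Kalman filter would be optimal, the weighted particle cloud approximates the full posterior rather than only its first two moments.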
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
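The linearized law of error propagation invoked here is the familiar Jacobian sandwich C_y = J C_x J^T, mapping the element covariance through the (linearized) propagation map. The numerical values below are made-up illustrations, not asteroid data.

```python
import numpy as np

# Law of error propagation for a linearized mapping y = f(x):
# C_y = J C_x J^T, with J the Jacobian of f evaluated at the estimate.
C_x = np.array([[0.04, 0.01],
                [0.01, 0.09]])   # assumed 2x2 element covariance
J = np.array([[1.0, 2.0],
              [0.0, 1.0]])       # assumed Jacobian of the propagation map

C_y = J @ C_x @ J.T

# 1-sigma uncertainties of the propagated quantities
sigmas = np.sqrt(np.diag(C_y))
```

The propagated covariance remains symmetric, and its diagonal gives the marginal positional uncertainties that define the uncertainty ellipsoid.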
Analysis of low altitude atmospheric turbulence data measured in flight
NASA Technical Reports Server (NTRS)
Ganzer, V. M.; Joppa, R. G.; Vanderwees, G.
1977-01-01
All three components of turbulence were measured simultaneously in flight at each wing tip of a Beech D-18 aircraft. The flights were conducted at low altitude, 30.5 - 61.0 meters (100-200 ft.), over water in the presence of wind driven turbulence. Statistical properties of the flight-measured turbulence were compared with Gaussian and non-Gaussian turbulence models. Spatial characteristics of the turbulence were analyzed using the data from flight perpendicular and parallel to the wind. The probability density distributions of the vertical gusts show distinctly non-Gaussian characteristics. The distributions of the longitudinal and lateral gusts are generally Gaussian. In the inertial subrange, the power spectra agree better with the Dryden spectrum at some points, while at other points the von Karman spectrum is a better approximation. In the low frequency range the data show peaks or dips in the power spectral density. The cross spectra between vertical gusts and gusts in the direction of the mean wind were compared with a matched non-Gaussian model. The real component of the cross spectrum is in general close to the non-Gaussian model. The imaginary component, however, indicated a larger phase shift between these two gust components than was found in previous research.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
Fluctuations and intermittent poloidal transport in a simple toroidal plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goud, T. S.; Ganesh, R.; Saxena, Y. C.
In a simple magnetized toroidal plasma, fluctuation induced poloidal flux is found to be significant in magnitude. The probability distribution function of the fluctuation induced poloidal flux is observed to be strongly non-Gaussian in nature; however, in some cases, the distribution shows good agreement with the analytical form [Carreras et al., Phys. Plasmas 3, 2664 (1996)], assuming a coupling between the near Gaussian density and poloidal velocity fluctuations. The observed non-Gaussian nature of the fluctuation induced poloidal flux and other plasma parameters, such as density and fluctuating poloidal velocity, in this device is due to the intermittent and bursty nature of poloidal transport. In the simple magnetized torus used here, such an intermittent fluctuation induced poloidal flux is found to play a crucial role in generating the poloidal flow.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if non-linearity is incorporated in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance, and Hurst coefficient of the model space, to draw the initial model, which is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
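A standard way to realize such fractal statistics is spectral synthesis: draw random phases against a power-law amplitude spectrum S(f) ~ f^(-beta), where for fractional Gaussian noise beta = 2H - 1 with Hurst exponent H. The sketch below is a generic power-law-noise generator under that assumption, not the paper's sampler.

```python
import numpy as np

def power_law_noise(n, beta, rng):
    """Spectral synthesis of noise with power spectrum S(f) ~ f^(-beta).
    For fractional Gaussian noise, beta = 2H - 1 (Hurst exponent H)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)       # zero DC component
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()            # standardize the realization

rng = np.random.default_rng(6)
model = power_law_noise(4096, beta=0.6, rng=rng)   # H = 0.8: persistent
```

With H greater than 0.5 the realization is persistent (positively correlated), which is the long-range-correlated behavior attributed to Earth properties above.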
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
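The transformation step can be sketched with inverse-transform sampling; the distribution parameters below are illustrative, not those of the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Inverse-transform sketch: map uniform random numbers through inverse CDFs to
# obtain load histories with prescribed non-Gaussian marginals (exponential and
# Weibull shown here; parameter values are illustrative).
n = 10000
u = rng.uniform(size=n)

exponential_loads = -np.log(1.0 - u)                # exponential, unit rate
weibull_loads = (-np.log(1.0 - u)) ** (1.0 / 2.0)   # Weibull, shape k = 2

# Peak statistics: count interior local maxima of one load history.
x = weibull_loads
n_peaks = int(((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])).sum())
```

The same uniform stream can be pushed through any invertible CDF, which is how the different marginal distributions in the study can share one underlying random-number sequence.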
Statistics of Advective Stretching in Three-dimensional Incompressible Flows
NASA Astrophysics Data System (ADS)
Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.
2009-09-01
We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the finite-time Lyapunov exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalized cumulative stretching (log S) for a globally chaotic flow with spatially heterogeneous stretching behavior is not Gaussian, and that the coefficient of variation of the distribution does not decrease with time as t^{-1/2}. However, when the time dependence of the flow and its three-dimensionality are increased so that the stretching behavior becomes more spatially uniform, stretching becomes exponential (log S ∼ t) and the probability density function of log S becomes Gaussian. We term these behaviors weak and strong chaotic mixing, respectively. We find that for strong chaotic mixing the coefficient of variation of the Gaussian distribution does decrease with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.
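The random multiplicative stretching process invoked at the end can be sketched directly; the increment statistics below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random multiplicative stretching sketch: log S_t is a sum of i.i.d.
# increments, so it becomes Gaussian (central limit theorem) and its
# coefficient of variation decays like t**-0.5.
n_markers, n_steps = 5000, 400
log_increments = rng.normal(loc=0.1, scale=0.3, size=(n_markers, n_steps))
log_S = np.cumsum(log_increments, axis=1)

def coeff_of_variation(sample):
    return float(sample.std() / sample.mean())

cv_100 = coeff_of_variation(log_S[:, 99])
cv_400 = coeff_of_variation(log_S[:, 399])   # roughly cv_100 / 2
```

Quadrupling the time halves the coefficient of variation, the t^{-1/2} signature of strong chaotic mixing described in the abstract.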
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood-ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. A comparison is made with Neyman-Pearson detection.
NASA Astrophysics Data System (ADS)
Monfared, Yashar E.; Ponomarenko, Sergey A.
2017-10-01
We explore theoretically and numerically extreme event excitation in stimulated Raman scattering in gases. We consider gas-filled hollow-core photonic crystal fibers as a particular system realization. We show that moderate amplitude pump fluctuations obeying Gaussian statistics lead to the emergence of heavy-tailed non-Gaussian statistics as coherent seed Stokes pulses are amplified on propagation along the fiber. We reveal the crucial role that coherent memory effects play in causing non-Gaussian statistics of the system. We discover that extreme events can occur even at the initial stage of stimulated Raman scattering when one can neglect energy depletion of an intense, strongly fluctuating Gaussian pump source. Our analytical results in the undepleted pump approximation explicitly illustrate power-law probability density generation as the input pump noise is transferred to the output Stokes pulses.
Mean Field Approach to the Giant Wormhole Problem
NASA Astrophysics Data System (ADS)
Gamba, A.; Kolokolov, I.; Martellini, M.
We introduce a Gaussian probability density for the space-time distribution of wormholes, thus effectively taking wormhole interaction into account. Using a mean-field approximation for the free energy, we show that giant wormholes are probabilistically suppressed in a homogeneous, isotropic “large” universe.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
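How heavy-tailed q-exponential statistics emerge from gamma-distributed randomness can be illustrated with the standard superstatistics construction (an exponential variable whose rate is itself gamma distributed); this is a related construction, not necessarily the paper's exact two-gamma-variable mapping.

```python
import numpy as np

rng = np.random.default_rng(3)

# Superstatistics sketch: conditioning an exponential on a gamma-distributed
# rate gives a q-exponential (Lomax) marginal with the power-law tail
# P(X > x) = (1 + theta*x)**(-k). Parameters are illustrative.
n, k, theta = 200000, 6.0, 1.0
rate = rng.gamma(shape=k, scale=theta, size=n)
x = rng.exponential(scale=1.0 / rate)

# Tail is heavier than any single exponential with the same mean (0.2 here);
# the exact tail probability at x = 1 is 2**-6, about 0.0156.
tail_fraction = float((x > 1.0).mean())
```

The gamma shape parameter k plays the role of the complexity index: smaller k gives a larger q and a heavier tail.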
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
Stochastic characteristics and Second Law violations of atomic fluids in Couette flow
NASA Astrophysics Data System (ADS)
Raghavan, Bharath V.; Karimi, Pouyan; Ostoja-Starzewski, Martin
2018-04-01
Using Non-equilibrium Molecular Dynamics (NEMD) simulations, we study the statistical properties of an atomic fluid undergoing planar Couette flow, in which particles interact via a Lennard-Jones potential. We draw a connection between local density contrast and temporal fluctuations in the shear stress, which arise naturally through the equivalence between the dissipation function and entropy production according to the fluctuation theorem. We focus on the shear stress and the spatio-temporal density fluctuations and study the autocorrelations and spectral densities of the shear stress. The bispectral density of the shear stress is used to measure the degree of departure from a Gaussian model and the degree of nonlinearity induced in the system owing to the applied strain rate. More evidence is provided by the probability density function of the shear stress. We use information theory to account for the departure from Gaussian statistics and to develop a more general probability distribution function that captures this broad range of effects. By accounting for negative shear stress increments, we show how this distribution preserves the violations of the Second Law of Thermodynamics observed in planar Couette flow of atomic fluids, and also how it captures the non-Gaussian nature of the system by allowing for non-zero higher moments. We also demonstrate how the temperature affects the bandwidth of the shear stress and how the density affects its power spectral density, thus determining the conditions under which the shear stress acts as a narrow-band or wide-band random process. We show that changes in the statistical characteristics of the parameters of interest occur at a critical strain rate at which an ordering transition occurs in the fluid, causing shear thinning and affecting its stability. A critical strain rate of this kind is also predicted by the Loose-Hess stability criterion.
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters and a much discussed question is how to estimate them when a series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
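A minimal GARCH(1,1) simulator with Gaussian innovations illustrates the model being estimated; the parameter values (omega, alpha, beta) are illustrative, not estimates for the NYSE or FTSE series.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_garch(omega, alpha, beta, n, rng):
    """GARCH(1,1) with Gaussian innovations: var_t = omega + alpha*r_{t-1}^2
    + beta*var_{t-1}, started at the stationary variance."""
    z = rng.normal(size=n)                      # i.i.d. Gaussian innovations
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)          # stationary variance
    for t in range(n):
        r[t] = np.sqrt(var) * z[t]
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

r = simulate_garch(omega=1e-5, alpha=0.10, beta=0.85, n=50000, rng=rng)

# Volatility clustering: squared returns are positively autocorrelated, and
# the returns are leptokurtic even though the innovations are Gaussian.
acf1_sq = float(np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1])
kurtosis = float(((r - r.mean()) ** 4).mean() / r.var() ** 2)
```

The excess kurtosis generated by the recursion is what makes moment-based estimators (variance, kurtosis, and here the standardized 6th moment) informative about the three parameters.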
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
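The simulation idea can be sketched as exact fractional Gaussian noise (via a Cholesky factor of the increment covariance) plus a reflecting wall at the origin; the path length, walker count, and reflection rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def fgn_covariance(n, hurst):
    """Covariance matrix of fractional Gaussian noise increments,
    gamma(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return gamma[idx]

n_steps, n_walkers, H = 256, 200, 0.75        # H > 1/2: superdiffusion, alpha = 2H
L = np.linalg.cholesky(fgn_covariance(n_steps, H) + 1e-12 * np.eye(n_steps))

positions = np.zeros((n_walkers, n_steps + 1))
for w in range(n_walkers):
    xi = L @ rng.normal(size=n_steps)         # one correlated-increment path
    x = 0.0
    for t in range(n_steps):
        x = abs(x + xi[t])                    # reflection at the wall x = 0
        positions[w, t + 1] = x

msd = (positions ** 2).mean(axis=0)           # grows anomalously with time
```

The O(n^3) Cholesky factorization limits this sketch to short paths; large-scale studies would use a circulant-embedding (Davies-Harte) generator instead.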
A non-gaussian model of continuous atmospheric turbulence for use in aircraft design
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Joppa, R. G.; Ganzer, V. M.
1976-01-01
A non-Gaussian model of atmospheric turbulence is presented and analyzed. The model is restricted to the regions of the atmosphere where the turbulence is steady or continuous, and the assumptions of homogeneity and stationarity are justified. Also spatial distribution of turbulence is neglected, so the model consists of three independent, stationary stochastic processes which represent the vertical, lateral, and longitudinal gust components. The non-Gaussian and Gaussian models are compared with experimental data, and it is shown that the Gaussian model underestimates the number of high velocity gusts which occur in the atmosphere, while the non-Gaussian model can be adjusted to match the observed high velocity gusts more satisfactorily. Application of the proposed model to aircraft response is investigated, with particular attention to the response power spectral density, the probability distribution, and the level crossing frequency. A numerical example is presented which illustrates the application of the non-Gaussian model to the study of an aircraft autopilot system. Listings and sample results of a number of computer programs used in working with the model are included.
Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Steinwolf, Alexander
2005-01-01
The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation is additionally convenient to use once the model is constructed.
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
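The final PDF representation, a Gaussian mixture, can be evaluated in a few lines; the weights, means, and variances below are illustrative stand-ins for the conditional Gaussian statistics such an algorithm would produce.

```python
import numpy as np

# Each ensemble member contributes a full Gaussian (not a point mass), so a
# small number of members recovers a smooth, possibly fat-tailed density.
def gaussian_mixture_pdf(x, means, variances, weights):
    x = np.asarray(x, dtype=float)[..., None]
    comp = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    return comp @ weights

xs = np.linspace(-6.0, 8.0, 1401)
pdf = gaussian_mixture_pdf(xs,
                           means=np.array([-1.0, 0.0, 2.0]),
                           variances=np.array([0.5, 1.0, 2.0]),
                           weights=np.array([0.3, 0.5, 0.2]))

# Trapezoidal check that the mixture integrates to ~1 over the grid.
dx = xs[1] - xs[0]
total = float((pdf.sum() - 0.5 * (pdf[0] + pdf[-1])) * dx)
```

Contrast with a particle method: with point masses, O(100) samples cannot resolve tails, whereas O(100) Gaussians can.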
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted by a generalized Gaussian density (GGD), and the similarity of two subbands is computed as the Jensen-Shannon divergence of their GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands at different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
Inflation in random Gaussian landscapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki
2017-05-01
We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has attracted a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
Diffusion of active chiral particles
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.
2016-12-01
The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v̂ at time t, and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, for which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.
Stable Lévy motion with inverse Gaussian subordinator
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Gajda, J.
2017-09-01
In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with inverse Gaussian process. The NIG process found many interesting applications, especially in financial data description. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional order moments and asymptotic tail behavior. We show the connection of the process with continuous time random walk. Further, the governing fractional partial differential equations for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of sample mean square displacement, the main tool in detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by inverse Gaussian subordinator we propose a step-by-step procedure of parameters estimation. At the end, we show how the examined process can be useful to model financial time series.
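The subordination construction can be sketched by evaluating a drifted Brownian motion at inverse Gaussian "operational times," which gives NIG-type increments; substituting a stable law for the Gaussian parent would give the process studied in the paper. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Subordination sketch: the physical-time increments of the process are the
# parent process evaluated over random inverse Gaussian time increments tau.
n = 100000
tau = rng.wald(mean=1.0, scale=2.0, size=n)          # IG subordinator increments
increments = 0.05 * tau + np.sqrt(tau) * rng.normal(size=n)

# The random time change makes the marginal leptokurtic (kurtosis > 3),
# one of the distributional features discussed in the abstract.
kurtosis = float(((increments - increments.mean()) ** 4).mean()
                 / increments.var() ** 2)
```

Note that `rng.wald` samples the inverse Gaussian law directly; summing the increments gives a path of the time-changed process.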
NASA Astrophysics Data System (ADS)
Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin
2018-01-01
The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then the performances of the modified Liu distribution (the Liu distribution with the obtained peakedness coefficient estimate), the Gaussian distribution, and the Gram-Charlier distribution are analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or the slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution changes with incidence angle and with wind speed.
Basis adaptation in homogeneous chaos spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Ghanem, Roger
2014-02-01
We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces, for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memoryless transformation of a Gaussian process whose covariance is uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
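The cross-section model can be sketched as a truncated Karhunen-Loève expansion of a Gaussian process pushed through the exponential; the exponential covariance kernel, grid, and truncation level below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gaussian process g(x) on a 1-D grid with an assumed exponential covariance;
# exponentiating g memorylessly yields a strictly positive lognormal
# cross-section realization sigma(x).
x = np.linspace(0.0, 1.0, 200)
cov_g = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # correlation length 0.2

eigvals, eigvecs = np.linalg.eigh(cov_g)
order = np.argsort(eigvals)[::-1]                        # keep the largest modes
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_modes = 20                                             # KL truncation
xi = rng.normal(size=n_modes)                            # independent standard normals
g = eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
sigma = np.exp(g)                                        # one realization
```

New realizations require only a fresh vector xi, which is what makes Karhunen-Loève sampling efficient for the Woodcock Monte Carlo loop described above.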
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validation of its results.
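The propagation-of-distributions step can be sketched by sampling correlated inputs from a multivariate Gaussian and pushing them through a measurement model; the resistance values, covariance, and ratio model below are hypothetical, not the paper's SPRT data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Correlated Gaussian inputs (hypothetical resistances in ohms with an
# illustrative covariance) propagated through a measurement model.
mean = np.array([25.5, 100.2])
cov = np.array([[1.0e-6, 5.0e-7],
                [5.0e-7, 4.0e-6]])
samples = rng.multivariate_normal(mean, cov, size=200000)

# Measurement model: resistance ratio W = R / R_ref.
w = samples[:, 1] / samples[:, 0]
w_hat = float(w.mean())            # estimate of the measurand
u_w = float(w.std(ddof=1))         # standard uncertainty from the sample spread
```

Because the samples carry the full joint distribution, the off-diagonal covariance terms (the correlations the abstract emphasizes) are propagated automatically, with no linearization.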
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events and the estimated white noise becomes almost Gaussian only as result of the uncorrelation condition.
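A toy version of the separation can be sketched with a moving average of |X| as the volatility estimator, a simple stand-in for the paper's uncorrelation criterion; the volatility signal and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# X_t = V_t * Z_t with a slow, strictly positive volatility modulating
# Gaussian white noise (both assumed for illustration).
n = 20000
t = np.arange(n)
v_true = 1.0 + 0.5 * np.sin(2 * np.pi * t / 2000.0)
x = v_true * rng.normal(size=n)

# Estimate V by a moving average of |X|, corrected for E|Z| = sqrt(2/pi),
# then divide it out to recover an approximately white, Gaussian Z.
window = 501
v_est = np.convolve(np.abs(x), np.ones(window) / window, mode="same")
v_est *= np.sqrt(np.pi / 2.0)
z = (x / v_est)[window:-window]       # trim boundary-biased edges

# After separation, |Z| should be nearly uncorrelated at lag 1.
resid_corr = float(np.corrcoef(np.abs(z[:-1]), np.abs(z[1:]))[0, 1])
```

The paper's criterion tunes the estimator until exactly this residual correlation of |Z| vanishes, rather than fixing a window length in advance.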
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
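For the equal-prior yes/no paradigm, the discrete-grid computation reduces to a textbook overlap identity, shown here as a stand-in for (not a restatement of) the paper's general formula: the ideal observer errs only where the two probability masses overlap.

```python
import numpy as np

# Discretize both distributions on a common grid and apply
# Pc_max = 1 - 0.5 * sum(min(p0, p1)) for equal priors.
xs = np.linspace(-8.0, 8.0, 4001)
dx = xs[1] - xs[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p0 = gauss(xs, 0.0, 1.0) * dx        # noise-alone probability mass
p1 = gauss(xs, 1.0, 1.0) * dx        # signal-plus-noise, d' = 1
pc_max = float(1.0 - 0.5 * np.minimum(p0, p1).sum())
```

For equal-variance Gaussians this should recover the analytical value Phi(d'/2), about 0.6915 at d' = 1, illustrating the article's point that grid-based results stay accurate when the sampling resolution is high enough; the same two lines work for any non-Gaussian p0 and p1 tabulated on the grid.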
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density of the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. The available published data take the form of summary statistics: nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward-model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data-likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty.
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
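The Gauss-Hermite moment computation mentioned in the abstract can be sketched generically. The function below is an illustrative stand-alone version, not the authors' implementation: it evaluates an expectation under a Gaussian proposal density by the standard change of variables onto the Hermite weight.

```python
import numpy as np

def gaussian_expectation(f, mu, sigma, order=20):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.
    hermgauss nodes/weights are for the weight exp(-t^2), so substitute
    x = mu + sqrt(2)*sigma*t and divide by sqrt(pi)."""
    t, w = np.polynomial.hermite.hermgauss(order)
    x = mu + np.sqrt(2.0) * sigma * t
    return np.sum(w * f(x)) / np.sqrt(np.pi)

# Second moment of N(1, 2^2): E[X^2] = mu^2 + sigma^2 = 5
print(gaussian_expectation(lambda x: x**2, 1.0, 2.0))  # 5.0 to machine precision
```

For polynomial integrands of degree at most 2*order - 1 the quadrature is exact, which is what makes it attractive for fast moment evaluation inside a likelihood.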
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.
2016-12-01
Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict its location and timing precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussianity of the probability density function (PDF) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed a significantly non-Gaussian PDF with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.
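A toy stochastic ensemble Kalman filter update of the kind used in such experiments can be sketched as follows. This is an illustrative scalar-observation version with arbitrary parameters, not the BDA system; the perturbed-observation form shown here is one standard variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, y_obs, h, obs_var):
    """Stochastic ensemble Kalman filter update for one scalar observation.
    ensemble: (n_members, n_state); h: linear observation operator (row vector)."""
    n, _ = ensemble.shape
    hx = ensemble @ h                      # predicted observations, shape (n,)
    x_mean = ensemble.mean(axis=0)
    p_xy = (ensemble - x_mean).T @ (hx - hx.mean()) / (n - 1)  # cross covariance
    p_yy = np.var(hx, ddof=1) + obs_var                        # innovation variance
    gain = p_xy / p_yy                                         # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed = y_obs + np.sqrt(obs_var) * rng.standard_normal(n)
    return ensemble + np.outer(perturbed - hx, gain)

ensemble = rng.normal(0.0, 1.0, size=(100, 2))   # prior: N(0, I), 100 members
h = np.array([1.0, 0.0])                          # observe the first component only
analysis = enkf_update(ensemble, y_obs=2.0, h=h, obs_var=1.0)
print(analysis[:, 0].mean())   # pulled toward (prior 0 + obs 2) / 2 = 1
```

With prior variance and observation variance both 1, the gain is about 0.5, so the analysis mean of the observed component lands near 1; the Gaussian error assumption discussed in the abstract is exactly what this linear update encodes.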
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
The 6dFGS Peculiar Velocity Field
NASA Astrophysics Data System (ADS)
Springob, Chris M.; Magoulas, C.; Colless, M.; Mould, J.; Erdogdu, P.; Jones, D. H.; Lucey, J.; Campbell, L.; Merson, A.; Jarrett, T.
2012-01-01
The 6dF Galaxy Survey (6dFGS) is an all southern sky galaxy survey, including 125,000 redshifts and a Fundamental Plane (FP) subsample of 10,000 peculiar velocities, making it the largest peculiar velocity sample to date. We have fit the FP using a maximum likelihood fit to a tri-variate Gaussian. We subsequently compute a Bayesian probability distribution for every possible peculiar velocity for each of the 10,000 galaxies, derived from the tri-variate Gaussian probability density distribution, accounting for our selection effects and measurement errors. We construct a predicted peculiar velocity field from the 2MASS redshift survey, and compare our observed 6dFGS velocity field to the predicted field. We discuss the resulting agreement between the observed and predicted fields, and the implications for measurements of the bias parameter and bulk flow.
Criticality and Phase Transition in Stock-Price Fluctuations
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2006-02-01
We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDFs) of the log returns. The temporal dependence of fat tails in the PDF of a ten-minute log return shows a gradual, systematic increase in the probability of the appearance of large increments on approaching Black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the Black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards the scale invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.
NASA Astrophysics Data System (ADS)
Klimenko, V. V.
2017-12-01
We obtain expressions for the probabilities of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike, and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is most pronounced for a low detection threshold. Similarity is observed between the behavior of the distributions of the inter-discharge intervals in a thundercloud and that of the noise spikes for a varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of the noise. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in discharge emergence in a thundercloud.
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
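The modulation construction behind the IGG model can be illustrated by Monte Carlo: the irradiance is the product of a large-scale inverse-Gaussian factor and a small-scale gamma factor. The parameter values below are arbitrary, and this is a sketch of the product structure only, not the authors' fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Large-scale fluctuations: inverse Gaussian with unit mean (numpy's wald sampler).
large = rng.wald(mean=1.0, scale=4.0, size=n)
# Small-scale fluctuations: gamma with unit mean (shape a, scale 1/a).
a = 4.0
small = rng.gamma(shape=a, scale=1.0 / a, size=n)

irradiance = large * small    # modulation process: IGG-distributed sample

print(irradiance.mean())      # ≈ 1, the product of independent unit-mean factors
```

Normalizing both factors to unit mean keeps the mean irradiance at one, so the scintillation strength is controlled entirely by the two shape parameters.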
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.
1980-01-01
Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data along with some limited field data were compared. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Timescales of isotropic and anisotropic cluster collapse
NASA Astrophysics Data System (ADS)
Bartelmann, M.; Ehlers, J.; Schneider, P.
1993-12-01
From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al.
on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-deSitter universe is derived.
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
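The Gaussianizing effect of a Box-Cox transformation can be sketched on a one-dimensional example. The lognormal stand-in for a skewed posterior and the choice lambda = 0 (the logarithmic limit of the transform) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def boxcox(x, lam):
    """One-parameter Box-Cox transform for x > 0; lam = 0 reduces to log."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def skewness(a):
    """Sample skewness: zero for a symmetric (e.g. Gaussian) distribution."""
    a = a - a.mean()
    return (a**3).mean() / (a**2).mean() ** 1.5

rng = np.random.default_rng(0)
# Strongly skewed stand-in for samples from a non-Gaussian posterior.
x = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)

print(skewness(x))             # large positive skew before the transform
print(skewness(boxcox(x, 0)))  # ≈ 0: lam = 0 Gaussianizes a lognormal exactly
```

In the paper the lambda parameters are fitted from an initial likelihood evaluation per dimension; here the exact value is known because the toy distribution is lognormal by construction.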
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text]. The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions, even with orders in the millions, and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
Bourlier, Christophe
2005-07-10
The emissivity of two-dimensional anisotropic rough sea surfaces with non-Gaussian statistics is investigated. The emissivity derivation is of importance for retrieval of the sea-surface temperature or equivalent temperature of a rough sea surface by infrared thermal imaging. The well-known Cox-Munk slope probability-density function, considered non-Gaussian, is used for the emissivity derivation, in which the skewness and the kurtosis (related to the third- and fourth-order statistics, respectively) are included. The shadowing effect, which is significant for grazing angles, is also taken into account. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets reflecting locally the light in the specular direction. In addition, multiple reflections are ignored. Numerical results of the emissivity are presented for Gaussian and non-Gaussian statistics, for moderate wind speeds, for near-infrared wavelengths, for emission angles ranging from 0 degrees (nadir) to 90 degrees (horizon), and according to the wind direction. In addition, the emissivity is compared with both measurements and a Monte Carlo ray-tracing method.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and a standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
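The moment sequences in question can be checked numerically. Below is a minimal sketch of the Raney numbers under the standard parameterization R_{p,r}(n) = r/(pn + r) * C(pn + r, n), which reduces to the Fuss-Catalan numbers at r = 1 and to the ordinary Catalan numbers at p = 2, r = 1.

```python
from math import comb

def raney(p, r, n):
    """Raney numbers R_{p,r}(n) = r/(p*n + r) * C(p*n + r, n); they generalize
    the Fuss-Catalan numbers, which are recovered at r = 1."""
    return r * comb(p * n + r, n) // (p * n + r)

# Fuss-Catalan numbers with p = 2, r = 1 are the Catalan numbers 1, 1, 2, 5, 14, 42, ...
print([raney(2, 1, n) for n in range(6)])
```

The integer division is exact here because r * C(pn + r, n) is always divisible by pn + r, a classical property of these sequences.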
Naik, Ganesh R; Kumar, Dinesh K
2011-01-01
The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies due to movement of the electrode position or due to changes in contraction level. This research deals with evaluating the non-Gaussianity of the surface electromyogram (sEMG) signal using higher-order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDFs) of sEMG signals were non-Gaussian. For lower contraction levels (below 30% of MVC), the PDF tends towards a Gaussian process. These measures were verified by computing the kurtosis values for different MVCs.
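The kurtosis-based non-Gaussianity measure can be sketched with synthetic data. The Laplace distribution below is an illustrative stand-in for a spiky, non-Gaussian sEMG record, not actual EMG data.

```python
import numpy as np

def excess_kurtosis(a):
    """Fourth-order statistic; zero for a Gaussian process, positive for
    heavy-tailed (spiky) signals."""
    a = a - a.mean()
    return (a**4).mean() / (a**2).mean() ** 2 - 3.0

rng = np.random.default_rng(0)
n = 100_000
gaussian_like = rng.standard_normal(n)   # stand-in for a near-Gaussian low-MVC record
spiky = rng.laplace(size=n)              # stand-in for a non-Gaussian, spiky record

print(excess_kurtosis(gaussian_like))  # ≈ 0
print(excess_kurtosis(spiky))          # ≈ 3, the Laplace excess kurtosis
```

A threshold on excess kurtosis of this kind is one simple way to classify records as Gaussian-like or non-Gaussian, in the spirit of the HOS analysis described above.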
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ∝ ω³ is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ*, where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new, since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
NASA Astrophysics Data System (ADS)
Waubke, Holger; Kasess, Christian H.
2016-11-01
Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists; therefore, a wide range of approximate solutions, notably statistical linearization, has been developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution makes it possible to calculate the transient behavior explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively, with a small number of iterations and without instabilities, for specific parameter sets.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
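The closing claim, projecting real numbers that sum to one onto the probability simplex under Euclidean distance, can be sketched compactly. The function below is an illustrative reimplementation, not the authors' code: it zeroes the most negative entries, carrying their mass as a uniform correction to the remaining ones. The sort makes this O(d log d); the projection itself is linear if the values arrive already sorted.

```python
def closest_probability_distribution(mu):
    """Project a list of reals summing to one onto the probability
    simplex (nearest point in Euclidean distance). Illustrative sketch."""
    d = len(mu)
    # Visit values in descending order, remembering the original positions.
    order = sorted(range(d), key=lambda i: mu[i], reverse=True)
    vals = [mu[i] for i in order]
    acc = 0.0          # mass accumulated from entries that were zeroed
    lam = [0.0] * d
    i = d
    # Zero the smallest entries while spreading their (negative) mass
    # over the rest would still leave the current entry negative.
    while i > 0 and vals[i - 1] + acc / i < 0:
        acc += vals[i - 1]
        i -= 1
    for j in range(i):
        lam[j] = vals[j] + acc / i
    # Undo the sort so the output matches the input ordering.
    out = [0.0] * d
    for rank, idx in enumerate(order):
        out[idx] = lam[rank]
    return out
```

For eigenvalues of a candidate matrix μ, applying this to the spectrum and reassembling with the original eigenvectors yields the nearest physical state in the sense described above.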
Weyer-Menkhoff, I; Thrun, M C; Lötsch, J
2018-05-01
Pain in response to noxious cold has a complex molecular background probably involving several types of sensors. A recent observation has been the multimodal distribution of human cold pain thresholds. This study aimed at analysing reproducibility and stability of this observation and further exploration of data patterns supporting a complex background. Pain thresholds to noxious cold stimuli (range 32-0 °C, tonic: temperature decrease -1 °C/s, phasic: temperature decrease -8 °C/s) were acquired in 148 healthy volunteers. The probability density distribution was analysed using machine-learning derived methods implemented as Gaussian mixture modeling (GMM), emergent self-organizing maps and self-organizing swarms of data agents. The probability density function of pain responses was trimodal (mean thresholds at 25.9, 18.4 and 8.0 °C for tonic and 24.5, 18.1 and 7.5 °C for phasic stimuli). Subjects' association with Gaussian modes was consistent between both types of stimuli (weighted Cohen's κ = 0.91). Patterns emerging in self-organizing neuronal maps and swarms could be associated with different trends towards decreasing cold pain sensitivity in different Gaussian modes. On self-organizing maps, the third Gaussian mode emerged as particularly distinct. Thresholds at, roughly, 25 and 18 °C agree with known working temperatures of TRPM8 and TRPA1 ion channels, respectively, and hint at relative local dominance of either channel in respective subjects. Data patterns suggest involvement of further distinct mechanisms in cold pain perception at lower temperatures. Findings support data science approaches to identify biologically plausible hints at complex molecular mechanisms underlying human pain phenotypes. Sensitivity to pain is heterogeneous. Data-driven computational research approaches allow the identification of subgroups of subjects with a distinct pattern of sensitivity to cold stimuli. 
The subgroups are reproducible with different types of noxious cold stimuli. The subgroups show patterns that hint at distinct, inter-individually different types of underlying molecular background. © 2018 European Pain Federation - EFIC®.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although the surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case, or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN.
The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike the MCMC-based Bayesian approach, the marginal pdf, mean, variance, and covariance are obtained independently of one another. Third, the probability density and cumulative distribution functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. The process takes the bivariate Gaussian probability density provided by the lightning location error ellipse for the most likely location of a stroke and integrates it to obtain the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
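The integration step described above, the mass of a bivariate Gaussian inside a circle about an arbitrary point, has no elementary closed form, but it is easy to estimate numerically. The sketch below uses plain Monte Carlo as a simple stand-in for the numerical integration actually employed; all parameter names are illustrative.

```python
import math
import random

def prob_within_radius(mu_x, mu_y, sx, sy, rho,
                       cx, cy, radius, n=200_000, seed=1):
    """Monte Carlo estimate of the probability that a stroke whose
    location follows a bivariate Gaussian (error-ellipse) density with
    mean (mu_x, mu_y), standard deviations (sx, sy) and correlation rho
    lies within `radius` of the facility point (cx, cy). Sketch only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        # Correlated bivariate Gaussian draw via a Cholesky-style mix.
        x = mu_x + sx * z1
        y = mu_y + sy * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            hits += 1
    return hits / n
```

For the symmetric sanity check (unit circular Gaussian centered on the facility), the result should approach 1 - exp(-r²/2).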
Statistical properties of two sine waves in Gaussian noise.
NASA Technical Reports Server (NTRS)
Esposito, R.; Wilson, L. R.
1973-01-01
A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and of the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).
Synthesis of generalized surface plasmon beams
NASA Astrophysics Data System (ADS)
Martinez-Niconoff, G.; Munoz-Lopez, J.; Martinez-Vara, P.
2009-08-01
Surface plasmon modes can be considered the analogs of plane waves for homogeneous media. The extension to partially coherent surface plasmon beams is obtained by means of the incoherent superposition of the interference between surface plasmon modes, whose profile is controlled by associating a probability density function with the structural parameters implicit in their representation. We show computational simulations for cosine, Bessel, Gaussian, and dark hollow surface plasmon beams.
Leukocyte Recognition Using EM-Algorithm
NASA Astrophysics Data System (ADS)
Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.
This document describes a method for classifying images of blood cells. Three different classes of cells are used: band neutrophils, eosinophils, and lymphocytes. The image pattern is projected onto a lower-dimensional subspace using PCA; the probability density function for each class is modeled as a Gaussian mixture fitted with the EM algorithm. A new cell image is classified using the maximum a posteriori decision rule.
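A minimal sketch of the decision stage follows, assuming a one-dimensional PCA feature and already-fitted mixture parameters; the actual system uses multivariate projections and EM-estimated parameters, so every number below is illustrative.

```python
import math

def gauss_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_pdf(x, components):
    """Gaussian-mixture density; components = [(weight, mean, variance), ...]."""
    return sum(w * gauss_pdf(x, m, v) for w, m, v in components)

def classify(x, class_models, priors):
    """Maximum a posteriori rule: argmax over classes of p(x | c) * P(c)."""
    return max(class_models,
               key=lambda c: mixture_pdf(x, class_models[c]) * priors[c])
```

With `class_models` mapping each cell class to its fitted mixture and `priors` holding class frequencies, `classify` returns the MAP label for a projected image feature.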
What distinguishes individual stocks from the index?
NASA Astrophysics Data System (ADS)
Wagner, F.; Milaković, M.; Alfarano, S.
2010-01-01
Stochastic volatility models decompose the time series of financial returns into the product of a volatility factor and an iid noise factor. Assuming a slow dynamic for the volatility factor, we show via nonparametric tests that both the index and its individual stocks share a common volatility factor. While the noise component is Gaussian for the index, individual stock returns turn out to require a leptokurtic noise. Thus we propose a two-component model for stocks, given by the sum of Gaussian noise, which reflects market-wide fluctuations, and Laplacian noise, which incorporates firm-specific factors such as firm profitability or growth performance, both of which are known to be Laplacian distributed. In the case of purely Gaussian noise, the chi-squared probability for the density of individual stock returns is typically on the order of 10^-20, while it increases to values of O(1) when the Laplace component is added.
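Sampling the proposed two-component noise is straightforward. The sketch below adds Gaussian market noise to an independent Laplace firm-specific term, drawing the Laplace variate as a difference of two exponentials; the parameter values are illustrative, not fitted to data.

```python
import random

def two_component_return(rng, sigma_g, b_l):
    """One draw from the sum of Gaussian noise N(0, sigma_g^2) and an
    independent Laplace(0, b_l) term. A Laplace variate is the
    difference of two iid Exp(1) variates scaled by b_l. Sketch only."""
    g = rng.gauss(0.0, sigma_g)
    l = b_l * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return g + l
```

The variance of such a sample is σ_g² + 2b², and its kurtosis exceeds the Gaussian value of 3 whenever the Laplace term contributes.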
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Detection methods for non-Gaussian gravitational wave stochastic backgrounds
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2003-04-01
A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. 
Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
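The intermittent ("popcorn") regime is easy to mimic with a toy generator in the idealized short-burst limit adopted above: each sample hosts an event with probability equal to the duty cycle, on top of white Gaussian detector noise. All parameters are illustrative.

```python
import random

def popcorn_background(n, duty_cycle, burst_sigma, noise_sigma, seed=0):
    """Toy time series for an intermittent stochastic background:
    at each sample an event is present with probability `duty_cycle`,
    contributing a Gaussian-amplitude burst on top of Gaussian detector
    noise. Event durations are one sample, matching the idealized
    short-burst limit. Sketch only."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        burst = rng.gauss(0.0, burst_sigma) if rng.random() < duty_cycle else 0.0
        out.append(burst + rng.gauss(0.0, noise_sigma))
    return out
```

The marginal distribution of such a series is a Gaussian mixture; its total variance is duty_cycle·burst_sigma² + noise_sigma², and it grows markedly non-Gaussian as the duty cycle shrinks.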
NASA Astrophysics Data System (ADS)
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) in contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields.
Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.
Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.
2012-01-01
To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list-mode reconstruction algorithm for a small animal PET scanner. The impact of the LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variation (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence-signal-emitting sources, 1) an ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and the LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of the positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear.
For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction. PMID:23032702
Crossing statistics of laser light scattered through a nanofluid.
Arshadi Pirlar, M; Movahed, S M S; Razzaghi, D; Karimzadeh, R
2017-09-01
In this paper, we investigate the crossing statistics of speckle patterns formed in the Fresnel diffraction region by a laser beam scattering through a nanofluid. We extend zero-crossing statistics to assess the dynamical properties of the nanofluid. According to the joint probability density function of laser beam fluctuation and its time derivative, the theoretical frameworks for Gaussian and non-Gaussian regimes are revisited. We count the number of crossings not only at zero level but also for all available thresholds to determine the average speed of moving particles. Using a probabilistic framework in determining crossing statistics, a priori Gaussianity is not essentially considered; therefore, even in the presence of deviation from Gaussian fluctuation, this modified approach is capable of computing relevant quantities, such as mean value of speed, more precisely. Generalized total crossing, which represents the weighted summation of crossings for all thresholds to quantify small deviation from Gaussian statistics, is introduced. This criterion can also manipulate the contribution of noises and trends to infer reliable physical quantities. The characteristic time scale for having successive crossings at a given threshold is defined. In our experimental setup, we find that increasing sample temperature leads to more consistency between Gaussian and perturbative non-Gaussian predictions. The maximum number of crossings does not necessarily occur at mean level, indicating that we should take into account other levels in addition to zero level to achieve more accurate assessments.
Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions
NASA Astrophysics Data System (ADS)
Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.
2015-12-01
Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.
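The qualitative effect of sub-Gaussian modeling on one-point statistics can be illustrated with the classical variance-mixture construction, Y = U·G with G Gaussian and U an independent positive modulating factor. This is only a one-point sketch with an arbitrarily chosen lognormal U, not the full spatial GSG model described above.

```python
import math
import random

def sub_gaussian_samples(n, sigma_u, seed=0):
    """Draw n variates Y = U * G with G ~ N(0, 1) and U an independent
    lognormal modulating factor. The marginal of Y is symmetric but
    leptokurtic: sharper-peaked and heavier-tailed than a Gaussian.
    Illustrative variance-mixture sketch only."""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(0.0, sigma_u)) * rng.gauss(0.0, 1.0)
            for _ in range(n)]
```

For this lognormal choice the kurtosis is 3·exp(4σ_u²), so any σ_u > 0 produces the heavy-tailed marginal behavior the model is meant to capture.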
Capacity and optimal collusion attack channels for Gaussian fingerprinting games
NASA Astrophysics Data System (ADS)
Wang, Ying; Moulin, Pierre
2007-02-01
In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint.
That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.
Najafi, M N; Nezhadhaghighi, M Ghasemi
2017-03-01
We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interaction and random charged impurities at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian, with unusual Kondev relations, and can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that ungated graphene has conformal invariance and the random zero-charge-density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.
NASA Astrophysics Data System (ADS)
Troncossi, M.; Di Sante, R.; Rivola, A.
2016-10-01
In the field of vibration qualification testing, random excitations are typically imposed on the tested system in terms of a power spectral density (PSD) profile. This is one of the most popular ways to control the shaker or slip table for durability tests. However, these excitations (and the corresponding system responses) exhibit a Gaussian probability distribution, whereas not all real-life excitations are Gaussian, causing the response to be non-Gaussian as well. In order to introduce non-Gaussian peaks, a further parameter, i.e., kurtosis, has to be controlled in addition to the PSD. However, depending on the specimen behaviour and input signal characteristics, the use of non-Gaussian excitations with high kurtosis and a given PSD does not automatically imply a non-Gaussian stress response. For an experimental investigation of these coupled features, suitable measurement methods need to be developed in order to estimate the stress amplitude response at critical failure locations and consequently evaluate the input signals most representative of real-life, non-Gaussian excitations. In this paper, a simple test rig with a notched cantilevered specimen was developed to measure the response and examine the kurtosis values in the case of stationary Gaussian, stationary non-Gaussian, and burst non-Gaussian excitation signals. The laser Doppler vibrometry technique was used in this type of test for the first time, in order to estimate the specimen stress amplitude response as proportional to the differential displacement measured at the notch section ends. A method based on accelerometer measurements to correct for the occasional signal dropouts occurring during the experiment is described. The results demonstrate the ability of the test procedure to evaluate the output signal features and therefore to select the most appropriate input signal for the fatigue test.
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers
1989-05-01
suffers. - Waveform classification, e.g. speech recognition, seismic analysis (i.e. discrimination between earthquakes and nuclear explosions), target... assuming Gaussian distributions (B-G); d) Bayes classifier with probability densities estimated with the k-nearest-neighbour method (B-kNN); e) the nearest-neighbour... A range of classifiers is chosen, including a fast, easily computable and often-used classifier (B-G), and reliable but complex classifiers (B-kNN and NNR
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
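Sigma-point filters such as those surveyed propagate a deterministic point set rather than linearized equations for the mean and covariance. A minimal sketch of the standard symmetric sigma-point construction (the kappa scaling and weights follow the generic unscented transform, not any specific filter in the survey) shows that the point set reproduces a chosen mean and covariance exactly.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    # symmetric 2n+1 sigma-point set of the generic unscented transform
    n = len(mean)
    L = np.linalg.cholesky((n + kappa)*cov)   # scaled matrix square root
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2*n + 1, 1.0/(2.0*(n + kappa)))
    w[0] = kappa/(n + kappa)
    return np.array(pts), w

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts, w = sigma_points(mean, cov)
# the weighted sample mean and covariance of the points match exactly
rec_mean = (w[:, None]*pts).sum(axis=0)
diff = pts - rec_mean
rec_cov = (w[:, None, None]*np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)
```

Pushing these points through a nonlinear measurement model and recomputing weighted moments is what replaces the Jacobian-based propagation of the extended Kalman filter.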
Dynamics of a Landau-Zener non-dissipative system with fluctuating energy levels
NASA Astrophysics Data System (ADS)
Fai, L. C.; Diffo, J. T.; Ateuafack, M. E.; Tchoffo, M.; Fouokeng, G. C.
2014-12-01
This paper considers a Landau-Zener (two-level) system influenced by three-dimensional Gaussian and non-Gaussian coloured noise and finds a general form of the time-dependent diabatic quantum bit (qubit) flip transition probabilities in the fast, intermediate and slow noise limits. The qubit flip probability is observed to mimic (for low-frequency noise) that of the standard LZ problem. The qubit flip probability is also observed to be a measure of the quantum coherence of states. The transition probability is observed to be tailored by non-Gaussian low-frequency noise and otherwise by Gaussian low-frequency coloured noise. Intermediate and fast noise limits are observed to alter the memory of the system in time and are found to improve and control quantum information processing.
MAI statistics estimation and analysis in a DS-CDMA system
NASA Astrophysics Data System (ADS)
Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.
2018-05-01
A primary limitation of Direct-Sequence Code Division Multiple Access (DS-CDMA) link performance and system capacity is multiple access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works have assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterizing the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both the variance and the mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate this by computer simulations.
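The Gaussian approximation of MAI rests on the central limit theorem: the interference is a sum of many per-user cross-correlation terms. A toy Monte Carlo sketch (chip-synchronous users, random ±1 sequences and equal received energies are illustrative assumptions, not the paper's exact model) checks that the summed interference has near-Gaussian kurtosis.

```python
import random, statistics
random.seed(1)

N = 31   # chips per spreading sequence (illustrative)
K = 20   # equal-energy interfering users (illustrative)

def mai_sample():
    # each interferer contributes the normalized cross-correlation of two
    # independent random ±1 chip sequences (chip-synchronous model); the
    # chip-wise product of two random ±1 sequences is again a random ±1 sequence
    return sum(sum(random.choice((-1, 1)) for _ in range(N))/N
               for _ in range(K))

samples = [mai_sample() for _ in range(5000)]
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
kurt = statistics.fmean((x - mean)**4 for x in samples)/var**2
# Gaussian reference: mean 0, variance K/N, kurtosis 3
```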
NASA Astrophysics Data System (ADS)
Bugaev, Edgar; Klimai, Peter
2012-05-01
We consider the process of primordial black hole (PBH) formation originating from primordial curvature perturbations produced during the waterfall transition (with tachyonic instability) at the end of hybrid inflation. It is known that in such inflation models rather large curvature perturbation amplitudes can be reached, which can potentially cause significant PBH production in the early Universe. The probability distributions of density perturbation amplitudes in this case can be strongly non-Gaussian, which requires special treatment. We calculated PBH abundances and PBH mass spectra for the model and analyzed their dependence on the model parameters. We obtained constraints on the parameters of the inflationary potential using the available limits on βPBH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zang, L., E-mail: l-zang@center.iae.kyoto-u.ac.jp; Kasajima, K.; Hashimoto, K.
Edge fluctuation in a supersonic molecular-beam injection (SMBI) fueled plasma has been measured using an electrostatic probe array. After SMBI, the plasma stored energy (W_p) temporarily decreased, then started to increase. The local plasma fluctuation and fluctuation-induced particle transport before and after SMBI have been analyzed. In a short duration (∼4 ms) just after SMBI, the broad-band low-frequency density fluctuation increased, and the probability density function (PDF) changed from a nearly Gaussian to a positively skewed non-Gaussian one. This suggests that intermittent structures were produced by SMBI. The fluctuation-induced particle transport was also greatly enhanced during this short duration. About 4 ms after SMBI, the low-frequency broad-band density fluctuation decreased, and the PDF returned to a nearly Gaussian shape. The fluctuation-induced particle transport was also reduced. Compared with conventional gas puffing, the W_p degradation window is very short due to the short injection period of SMBI. After this short degradation window, fluctuation-induced particle transport was reduced and W_p entered its climbing phase. Therefore, the short period of influence on the edge fluctuation might be an advantage of this novel fueling technique. On the other hand, although their roles are not identified at present, coherent MHD modes are also suppressed by the application of SMBI. These MHD modes are thought to be de-excited due to a sudden change of the edge density and/or excitation conditions.
The effect of unresolved contaminant stars on the cross-matching of photometric catalogues
NASA Astrophysics Data System (ADS)
Wilson, Tom J.; Naylor, Tim
2017-07-01
A fundamental process in astrophysics is the matching of two photometric catalogues. It is crucial that the correct objects be paired and that their photometry does not suffer from any spurious additional flux. We compare the positions of sources in the Wide-field Infrared Survey Explorer (WISE), INT Photometric Hα Survey, Two Micron All Sky Survey and AAVSO Photometric All Sky Survey with Gaia Data Release 1 astrometric positions. We find that the separations are described by a combination of a Gaussian distribution, wider than naively assumed based on the quoted uncertainties, and a large wing, which some authors ascribe to proper motions. We show that this is caused by flux contamination from blended stars not treated separately. We provide linear fits between the quoted Gaussian uncertainty and the core fit to the separation distributions. We show that at least one in three of the stars in the faint half of a given catalogue will suffer from flux contamination above the 1 per cent level when the density of catalogue objects per point-spread-function area is above approximately 0.005. This has important implications for the creation of composite catalogues. It matters for any closest-neighbour match, as a given fraction of matches will be flux contaminated, while some matches will be missed due to significant astrometric perturbation by faint contaminants. In the case of probability-based matching, this contamination affects the probability density function of matches as a function of distance. This effect results in up to 50 per cent fewer counterparts being returned as matches, assuming Gaussian astrometric uncertainties, for WISE-Gaia matching in crowded Galactic plane regions, compared with a closest-neighbour match.
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
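In the default Gaussian case described above, the BMA predictive pdf is simply a weighted mixture of Gaussians centered on the member forecasts. A minimal sketch with hypothetical forecasts, weights and spreads (all values below are illustrative, not from the paper's catchment data):

```python
import math

def bma_pdf(y, forecasts, weights, sigmas):
    # weighted mixture of Gaussians centered on the (bias-corrected) forecasts
    return sum(w*math.exp(-0.5*((y - f)/s)**2)/(s*math.sqrt(2.0*math.pi))
               for f, w, s in zip(forecasts, weights, sigmas))

forecasts = [2.0, 2.5, 3.2]   # hypothetical member forecasts
weights = [0.5, 0.3, 0.2]     # posterior model probabilities (sum to one)
sigmas = [0.4, 0.6, 0.5]      # member spreads

# since the weights sum to one, the mixture density integrates to one
dy = 0.01
area = sum(bma_pdf(-5.0 + k*dy, forecasts, weights, sigmas)*dy for k in range(1500))
```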
Fusion of Imaging and Inertial Sensors for Navigation
2006-09-01
combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random
Prediction of the turbulent wake with second-order closure
NASA Technical Reports Server (NTRS)
Taulbee, D. B.; Lumley, J. L.
1981-01-01
A turbulence was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, whose second moment corresponds to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations with this model are presented, together with interpretations of the results.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the searched-for parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case.
I show that the case of independent uniform priors can be approximated using the TMVN. Numerical equivalence to Bayesian inversions using Markov Chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the MCMC-based Bayesian approach. First, the need for computing power is largely reduced. Second, unlike MCMC-based Bayesian approaches, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
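The abstract finds the single-truncated MAP with the Lawson-Hanson NNLS algorithm. As a stand-in for that solver, a simple projected-gradient iteration for min ||Gm - d||² subject to m ≥ 0 (with a synthetic random design matrix and noiseless data, both illustrative assumptions) recovers a nonnegative slip vector.

```python
import numpy as np
np.random.seed(0)

def nn_map(G, d, iters=5000):
    # projected gradient descent for min ||G m - d||^2 subject to m >= 0;
    # a simple stand-in for the Lawson-Hanson NNLS solver named in the abstract
    lr = 1.0/np.linalg.norm(G.T @ G, 2)      # step 1/L, L = largest eigenvalue
    m = np.zeros(G.shape[1])
    for _ in range(iters):
        m = np.maximum(0.0, m - lr*(G.T @ (G @ m - d)))
    return m

G = np.random.randn(30, 5)                    # synthetic "Green's functions"
m_true = np.array([0.0, 1.5, 0.0, 0.7, 2.0])  # nonnegative slip (illustrative)
d = G @ m_true                                # noiseless synthetic data
m_hat = nn_map(G, d)
```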
Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor
2017-10-27
Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
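The Hafnian that links Gaussian states to photon-pattern probabilities is defined as the sum, over all perfect matchings of the matrix indices, of the products of the matched entries. A brute-force sketch (practical only for tiny symmetric matrices; real Gaussian Boson sampling matrices are far beyond this) makes the definition concrete.

```python
def hafnian(A):
    # Hafnian: sum over all perfect matchings of products of matrix entries.
    # Brute-force enumeration; only sensible for small symmetric matrices.
    n = len(A)
    if n % 2:
        return 0

    def matchings(rest):
        # yield all perfect matchings of the index list `rest`
        if not rest:
            yield []
            return
        i = rest[0]
        for k in range(1, len(rest)):
            for m in matchings(rest[1:k] + rest[k+1:]):
                yield [(i, rest[k])] + m

    total = 0
    for match in matchings(list(range(n))):
        p = 1
        for i, j in match:
            p *= A[i][j]
        total += p
    return total

# K4 with unit weights has exactly 3 perfect matchings
h = hafnian([[1]*4 for _ in range(4)])
```

The #P-hardness mentioned in the abstract reflects exactly this combinatorial growth: the number of perfect matchings of 2k indices is (2k-1)!!.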
NASA Astrophysics Data System (ADS)
Chabdarov, Shamil M.; Nadeev, Adel F.; Chickrin, Dmitry E.; Faizullin, Rashid R.
2011-04-01
In this paper we discuss an unconventional detection technique also known as the "full resolution receiver". This receiver uses Gaussian probability mixtures for interference-structure adaptation. The full resolution receiver is an alternative to conventional matched-filter receivers in the case of non-Gaussian interference. For the DS-CDMA forward channel in the presence of complex interference, a significant performance increase was shown.
Gaussian vs non-Gaussian turbulence: impact on wind turbine loads
NASA Astrophysics Data System (ADS)
Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.
2014-12-01
In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Principal Orthogonal Decomposition in which non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain exactly the same second-order statistics, including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include the non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (like in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with both the non-Gaussian and the newly constructed Gaussian fields, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields.
The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimations there is, on the other hand, a tendency for non-Gaussian effects to increase the overall dynamical load, and hence they can be of importance in wind energy load estimations.
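A simpler one-dimensional relative of the POD-based transform described above is Fourier phase randomization: it preserves the spectral amplitudes of a stationary signal (hence its second-order statistics) while producing an approximately Gaussian surrogate. The cubed filtered noise used as input below is purely illustrative and unrelated to the LES fields of the abstract.

```python
import numpy as np
np.random.seed(2)

def gaussianize_keep_spectrum(u):
    # phase-randomization surrogate: keep the spectral amplitudes,
    # scramble the phases -> approximately Gaussian, same power spectrum
    U = np.fft.rfft(u - u.mean())
    phases = np.exp(1j*np.random.uniform(0.0, 2.0*np.pi, U.size))
    phases[0] = 1.0                 # DC bin must stay real
    if u.size % 2 == 0:
        phases[-1] = 1.0            # Nyquist bin must stay real
    return np.fft.irfft(np.abs(U)*phases, n=u.size) + u.mean()

kurt = lambda v: ((v - v.mean())**4).mean()/v.var()**2

# strongly non-Gaussian input: cubed low-pass-filtered noise (illustrative)
x = np.convolve(np.random.randn(4096), np.ones(16)/16.0, mode='same')**3
y = gaussianize_keep_spectrum(x)
px = np.abs(np.fft.rfft(x - x.mean()))**2   # power spectrum of the original
py = np.abs(np.fft.rfft(y - y.mean()))**2   # ... and of the surrogate
```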
Non-Gaussian behavior in jamming / unjamming transition in dense granular materials
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.
2013-06-01
Experiments on the penetration of a cylindrical intruder into a two-dimensional dense and disordered granular medium were reported recently, showing the jamming/unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming/unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we fit the PDFs with a q-Gaussian (Tsallis) function. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using a stretched exponential, and sketch some conclusions concerning the nature of the correlations in a confined granular flow.
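The q-Gaussian (Tsallis) fitting function used above reduces to an ordinary Gaussian in the limit q → 1. A small sketch of the (unnormalized) profile checks that limit numerically; the parameter values are illustrative.

```python
import math

def q_gaussian(x, q, beta):
    # unnormalized Tsallis q-Gaussian; reduces to exp(-beta*x^2) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta*x*x)
    base = 1.0 - (1.0 - q)*beta*x*x
    if base <= 0.0:          # compact support when q < 1
        return 0.0
    return base**(1.0/(1.0 - q))

# numerical check of the q -> 1 Gaussian limit on a grid
err = max(abs(q_gaussian(x/10.0, 1.0001, 1.0) - math.exp(-(x/10.0)**2))
          for x in range(-30, 31))
```

For q > 1 the tails decay as a power law, which is why a fitted q-value above one signals heavier-than-Gaussian tails in the velocity PDFs.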
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurements (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. Posterior Cramér-Rao bounds are also used for performance evaluation.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Hydrodynamic Flow Fluctuations in √sNN = 5.02 TeV PbPb Collisions
NASA Astrophysics Data System (ADS)
Castle, James R.
The collective, anisotropic expansion of the medium created in ultrarelativistic heavy-ion collisions, known as flow, is characterized through a Fourier expansion of the final-state azimuthal particle density. In the Fourier expansion, flow harmonic coefficients vn correspond to shape components in the final-state particle density, which are a consequence of similar spatial anisotropies in the initial-state transverse energy density of a collision. Flow harmonic fluctuations are studied for PbPb collisions at √sNN = 5.02 TeV using the CMS detector at the CERN LHC. Flow harmonic probability distributions p(vn) are obtained using particles with 0.3 < pT < 3.0 GeV/c and |η| < 1.0 by removing finite-multiplicity resolution effects from the observed azimuthal particle density through an unfolding procedure. Cumulant elliptic flow harmonics (n = 2) are determined from the moments of the unfolded p(v2) distributions and used to construct observables in 5% wide centrality bins up to 60% that relate to the initial-state spatial anisotropy. Hydrodynamic models predict that fluctuations in the initial-state transverse energy density will lead to a non-Gaussian component in the elliptic flow probability distributions that manifests as a negative skewness. A statistically significant negative skewness is observed for all centrality bins, as evidenced by a splitting between the higher-order cumulant elliptic flow harmonics. The unfolded p(v2) distributions are transformed assuming a linear relationship between the initial-state spatial anisotropy and final-state flow and are fitted with elliptic power law and Bessel-Gaussian parametrizations to infer information on the nature of initial-state fluctuations. The elliptic power law parametrization is found to provide a more accurate description of the fluctuations than the Bessel-Gaussian parametrization.
In addition, the event-shape engineering technique, where events are further divided into classes based on an observed ellipticity, is used to study fluctuation-driven differences in the initial-state spatial anisotropy for a given collision centrality that would otherwise be destroyed by event-averaging techniques. Correlations between the first and second moments of p(vn) distributions and event ellipticity are measured for harmonic orders n = 2-4 by coupling event-shape engineering to the unfolding technique.
2018-01-01
An Automated Energy Detection Algorithm Based on Morphological Filter (ARL-TR-8271, US Army Research Laboratory, JAN 2018). ...statistical moments of order 2, 3, and 4. The probability density function (PDF) of the vibrational time series of a good bearing has a Gaussian...
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and it is not understood why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σT² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
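The convolution mechanism described above can be checked with a toy Monte Carlo: drawing a local variance from an exponential distribution and then a zero-mean Gaussian conditioned on it yields a Laplace-like marginal with exponential tails and kurtosis 6 (the unit mean variance below is an arbitrary illustrative choice, not a fitted RBC parameter).

```python
import random, math, statistics
random.seed(3)

def sample_dT(mean_var=1.0):
    # variance drawn from an exponential law, then a conditional Gaussian,
    # mimicking the convolution of G(dT|eps) over an exponential variance law
    var = random.expovariate(1.0/mean_var)
    return random.gauss(0.0, math.sqrt(var))

samples = [sample_dT() for _ in range(200000)]
m = statistics.fmean(samples)
v = statistics.pvariance(samples)
kurt = statistics.fmean((x - m)**4 for x in samples)/v**2
# the marginal is Laplace-like: exponential tails, kurtosis 6 (Gaussian: 3)
```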
NASA Astrophysics Data System (ADS)
Alimi, Jean-Michel; de Fromont, Paul
2018-04-01
The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under-massive regions.
Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology
NASA Astrophysics Data System (ADS)
Wang, Xin; Szalay, Alex
2016-03-01
The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subjected to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted. Special measures are taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single-machine infinite-bus power system. The numerical analysis yields the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
A new probability distribution model of turbulent irradiance based on Born perturbation theory
NASA Astrophysics Data System (ADS)
Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo
2010-10-01
The subject of the PDF (Probability Density Function) of irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. It is therefore considered that the new model exactly reflects Born perturbation theory. Simulation results support the accuracy of this new model.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with both mutually independent multiplicative and additive noises. By using the proposed stochastic differential equation a method to evaluate a default probability under a given risk buffer is proposed.
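The construction above can be sketched numerically. This is a rough one-dimensional illustration with assumed coefficients, not the paper's M-dimensional derivation: simulate dX = -γX dt + √(2B) X dW₁ + √(2A) dW₂ with mutually independent multiplicative and additive noises, and check that the stationary sample is heavier-tailed than a Gaussian, as a q-Gaussian stationary density should be.

```python
# Sketch: stationary sample of an SDE with independent multiplicative and
# additive Gaussian noises. Parameters gamma, A, B are assumptions chosen so
# the stationary density is a power-law-tailed (q-Gaussian-like) distribution.
import math
import random

random.seed(7)
gamma, A, B = 1.0, 0.5, 0.2
dt, steps = 0.01, 200_000
sd = math.sqrt(dt)

x, xs = 0.0, []
for k in range(steps):
    dW1 = random.gauss(0.0, sd)   # multiplicative noise increment
    dW2 = random.gauss(0.0, sd)   # additive noise increment
    x += -gamma * x * dt + math.sqrt(2 * B) * x * dW1 + math.sqrt(2 * A) * dW2
    if k > steps // 2:            # discard the transient
        xs.append(x)

m = sum(xs) / len(xs)
var = sum((v - m) ** 2 for v in xs) / len(xs)
# Fraction of samples beyond 3 sigma; a Gaussian would give about 0.0027.
frac3 = sum(abs(v - m) > 3 * math.sqrt(var) for v in xs) / len(xs)
print(frac3)
```

For these coefficients the stationary density is proportional to (A + Bx²) raised to a negative power, so the 3σ exceedance fraction comes out visibly above the Gaussian value — the heavy-tail signature that motivates the q-Gaussian description.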
NASA Astrophysics Data System (ADS)
Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.
2017-04-01
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
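The "lognormal PDF convolved with Poisson noise" model for galaxy counts can be sketched as a forward simulation. All numbers below are illustrative assumptions: each cell draws a mean-zero lognormal density contrast δ, then galaxy counts N ~ Poisson(N̄(1 + δ)).

```python
# Sketch of the counts-in-cells forward model: lognormal density contrast
# per cell, Poisson-sampled galaxy counts. nbar and sigma are assumptions.
import math
import random

random.seed(3)

def lognormal_delta(sigma):
    """Density contrast with zero mean from a lognormal field."""
    g = random.gauss(0.0, sigma)
    return math.exp(g - 0.5 * sigma ** 2) - 1.0   # mean-one lognormal minus 1

def poisson(lam):
    """Knuth's algorithm; adequate for the modest means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

nbar, sigma, ncells = 20.0, 0.5, 20_000
counts = [poisson(nbar * (1.0 + lognormal_delta(sigma))) for _ in range(ncells)]

mean_n = sum(counts) / ncells
var_n = sum((c - mean_n) ** 2 for c in counts) / ncells
# Variance exceeds the Poisson expectation (= mean) because of clustering.
print(mean_n, var_n > mean_n)
```

The super-Poisson variance is exactly why the measured count PDF is wider than shot noise alone, and why the lognormal-convolved-with-Poisson fit in the abstract is the natural model to test.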
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
NASA Astrophysics Data System (ADS)
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7 catalog, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc^-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc^-1.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and sources of uncertainty. Single CPT samplings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow a Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assume that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit the search ranges for detecting pulse peaks in the photoplethysmogram (PPG) and synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms were collected simultaneously during a submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
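A minimal sketch of the scoring idea follows. The priors, candidate values and the recursive update rule below are assumptions for illustration, not the authors' exact implementation: candidates inside one RR interval are scored by a Gaussian pdf over the RR-normalized PTT, the best one is accepted, and the Gaussian parameters are updated.

```python
# Sketch: Gaussian scoring of pulse-peak candidates by RR-normalized PTT,
# with a simple recursive mean/variance update (all numbers are assumed).
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pick_peak(candidates_ms, rr_ms, mu, sigma):
    """Return the candidate PTT (ms) whose RR-normalized value is most probable."""
    return max(candidates_ms, key=lambda ptt: gauss_pdf(ptt / rr_ms, mu, sigma))

mu, sigma, n = 0.30, 0.03, 20          # running Gaussian parameters (assumed priors)
beats = [                              # (candidate PTTs in ms, RR interval in ms)
    ([180.0, 250.0, 310.0], 800.0),
    ([200.0, 265.0], 850.0),
]
picked = []
for candidates, rr in beats:
    ptt = pick_peak(candidates, rr, mu, sigma)
    picked.append(ptt)
    z = ptt / rr
    n += 1                             # recursive update keeps the pdf adaptive
    mu += (z - mu) / n
    sigma = math.sqrt(sigma ** 2 + ((z - mu) ** 2 - sigma ** 2) / n)
print(picked)                          # -> [250.0, 265.0]
```

In both beats the candidate whose normalized PTT sits closest to the running mean (0.30) wins, which is the essence of the probabilistic selection described in the abstract.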
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1993-01-01
One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.
Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.
2016-06-07
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configurable to adjust the number of false alerts. The anomaly detector can be used with a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
Performance of synchronous optical receivers using atmospheric compensation techniques.
Belmonte, Aniceto; Khan, Joseph
2008-09-01
We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. Here the inverse problem is formulated as Bayesian inference, and Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 km over the southern part of the Pannonian basin. The interpretation model is a right vertical cylinder. The parameters of the model are obtained from the minimum problem solved by the Simplex method.
Some New Twists to Problems Involving the Gaussian Probability Integral
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1997-01-01
Using an alternate form of the Gaussian probability integral discovered a number of years ago, it is shown that the solution to a number of previously considered communication problems can be simplified and in some cases made more accurate(i.e., exact rather than bounded).
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
New stochastic approach for extreme response of slow drift motion of moored floating structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Shunji; Okazaki, Takashi
1995-12-31
A new stochastic method for investigating the slow drift response statistics of moored floating structures is described. Assuming that the wave drift excitation process can be driven by a Gaussian white noise process, an exact stochastic equation governing the time evolution of the response Probability Density Function (PDF) is derived on the basis of the projection operator technique from statistical physics. In order to obtain an approximate solution of this generalized Fokker-Planck (GFP) equation, the authors develop a renormalized perturbation technique, a kind of singular perturbation method, and solve the GFP equation taking into account up to third-order moments of a non-Gaussian excitation. As an example of the present method, a closed form of the joint PDF is derived for the linear response in surge motion subjected to a non-Gaussian wave drift excitation, and it is represented by the product of a form factor and quasi-Cauchy PDFs. In this case, the motion displacement and velocity processes are not mutually independent if the excitation process has a significant third-order moment. From a comparison between the response PDF given by the present solution and the exact one derived by Naess, it is found that the present solution is effective for calculating both the response PDF and the joint PDF. Furthermore, it is shown that displacement-velocity independence is satisfied if the damping coefficient in the equation of motion is not too large, and that both the non-Gaussian property of the excitation and the damping coefficient should be taken into account when estimating the exceedance probability of the response.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of the work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
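Sampling a random Hamiltonian from the GUE, as used for the quenches above, can be sketched with a minimal construction: real Gaussian entries on the diagonal, complex Gaussian entries on the upper triangle mirrored to enforce Hermiticity. The variance convention below is an assumption; different GUE normalizations rescale it.

```python
# Sketch: draw a Hermitian matrix from the Gaussian unitary ensemble (GUE).
# Scale/normalization conventions are assumed for illustration.
import random

random.seed(5)

def gue(n, scale=1.0):
    h = [[0j] * n for _ in range(n)]
    for i in range(n):
        h[i][i] = complex(random.gauss(0.0, scale), 0.0)   # real diagonal
        for j in range(i + 1, n):
            z = complex(random.gauss(0.0, scale / 2 ** 0.5),
                        random.gauss(0.0, scale / 2 ** 0.5))
            h[i][j] = z
            h[j][i] = z.conjugate()                        # enforce Hermiticity
    return h

H = gue(4)
hermitian = all(H[i][j] == H[j][i].conjugate()
                for i in range(4) for j in range(4))
print(hermitian)   # -> True
```

Repeating such draws for an initial and a final Hamiltonian and diagonalizing them is then all that is needed to build a Monte Carlo histogram of the quench work, which is the object the paper characterizes analytically.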
Higher-order cumulants and spectral kurtosis for early detection of subterranean termites
NASA Astrophysics Data System (ADS)
de la Rosa, Juan José González; Moreno Muñoz, Antonio
2008-02-01
This paper deals with termite detection in unfavorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the setting involves non-destructive termite detection. Fourth-order cumulants in the time and frequency domains enhance the detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants single out distinctive time instances, complementing the sliding variance, which only reveals power excesses in the signal, even for low-amplitude impulses. The spectral kurtosis reveals the non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Validated estimators have been used to compute the higher-order statistics. The new findings are shown via graphical examples.
Orbital angular momentum mode of Gaussian beam induced by atmospheric turbulence
NASA Astrophysics Data System (ADS)
Cheng, Mingjian; Guo, Lixin; Li, Jiangting; Yan, Xu; Dong, Kangjun
2018-02-01
Superposition theory of spiral harmonics is employed to numerically study the transmission properties of the orbital angular momentum (OAM) modes of a Gaussian beam affected by atmospheric turbulence. Results show that the Gaussian beam carries no OAM at the source, but various OAM modes appear after the beam is affected by atmospheric turbulence. As the atmospheric turbulence strength increases, lower-order OAM modes appear first, followed by higher-order OAM modes. The spreading of Gaussian beams in the atmosphere grows with the increasing topological charge of the OAM modes caused by atmospheric turbulence. As the topological charge increases, the mode probability density of the OAM generated by atmospheric turbulence decreases, and its peak position gradually deviates from the center of the Gaussian beam spot. Our results may be useful for improving the performance of long-distance laser digital spiral imaging systems.
Zeng, Yuehua
2018-01-01
The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
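The two slip-rate uncertainty models described above can be illustrated with a hedged sketch (the rate and sigma values are made up): draw Monte Carlo samples from a 2σ-truncated Gaussian and from a boxcar (uniform) distribution about the same preferred rate, as would feed a hazard calculation.

```python
# Sketch: sampling fault-slip rates from a double-truncated 2-sigma Gaussian
# (via rejection) and from a boxcar over the same range. Values are assumed.
import random

random.seed(11)
mean_rate, sigma = 5.0, 1.0   # mm/yr, illustrative

def truncated_gaussian():
    while True:               # rejection sampling within +/- 2 sigma
        x = random.gauss(mean_rate, sigma)
        if abs(x - mean_rate) <= 2 * sigma:
            return x

def boxcar():
    return random.uniform(mean_rate - 2 * sigma, mean_rate + 2 * sigma)

ng = [truncated_gaussian() for _ in range(1000)]
nb = [boxcar() for _ in range(1000)]
gmean = sum(ng) / len(ng)
bmean = sum(nb) / len(nb)
print(gmean, bmean)
```

Both samplers share the same support and mean, but the boxcar places more weight near the bounds; it is exactly this difference in spread that drives the larger relative ground-motion uncertainty reported for the boxcar PDF.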
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
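The model in this abstract lends itself to a direct simulation sketch (all parameters below are assumptions): a perfect integrate-and-fire neuron, dv/dt = μ + η(t), driven by asymmetric dichotomous noise that jumps between two values with different transition rates; the simulation collects the interspike intervals whose statistics the paper derives exactly.

```python
# Sketch: perfect integrate-and-fire neuron driven by asymmetric dichotomous
# (two-state Markov) noise. Parameters are illustrative assumptions.
import random

random.seed(2)
mu, vth = 1.0, 1.0                  # mean drive and firing threshold
eta_plus, eta_minus = 0.5, -0.5     # the two noise states
k_plus, k_minus = 2.0, 1.0          # asymmetric escape rates out of each state
dt, t_max = 1e-3, 500.0

v, state, t, last_spike, isis = 0.0, +1, 0.0, 0.0, []
while t < t_max:
    eta = eta_plus if state > 0 else eta_minus
    v += (mu + eta) * dt                 # perfect (non-leaky) integration
    if random.random() < (k_plus if state > 0 else k_minus) * dt:
        state = -state                   # Markov switching of the noise
    if v >= vth:                         # spike and reset
        isis.append(t - last_spike)
        last_spike, v = t, 0.0
    t += dt

mean_isi = sum(isis) / len(isis)
print(len(isis), mean_isi)
```

With these rates the noise spends two-thirds of the time in the negative state, so the mean interspike interval is somewhat longer than 1/μ; serial ISI correlations induced by the colored noise are what the first-passage-time analysis in the paper captures exactly.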
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
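The core computation can be sketched as numerical integration of a bivariate Gaussian density over a disk at an arbitrary location. One simplification is assumed here: the error ellipse is taken axis-aligned with standard deviations sx and sy, whereas the operational technique handles general (rotated, correlated) ellipses.

```python
# Sketch: midpoint-rule integration of an axis-aligned bivariate Gaussian pdf
# over a disk of radius r centered at (cx, cy), i.e. P(stroke within r).
import math

def prob_in_circle(cx, cy, r, mx, my, sx, sy, n=400):
    """Grid integration of the pdf over the disk centered at (cx, cy)."""
    total, h = 0.0, 2.0 * r / n
    for i in range(n):
        x = cx - r + (i + 0.5) * h
        half_sq = r * r - (x - cx) ** 2
        if half_sq <= 0.0:
            continue
        half = math.sqrt(half_sq)        # disk's half-height at this x
        hy = 2.0 * half / n
        for j in range(n):
            y = cy - half + (j + 0.5) * hy
            p = math.exp(-0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2)) \
                / (2.0 * math.pi * sx * sy)
            total += p * h * hy
    return total

p_near = prob_in_circle(0, 0, 10, 0, 0, 1, 2)   # disk engulfs the ellipse: ~1
p_far = prob_in_circle(50, 0, 1, 0, 0, 1, 2)    # disk far from the ellipse: ~0
print(p_near, p_far)
```

Note that the disk center (cx, cy) need not lie on, or even inside, the error ellipse, which is exactly the generality the abstract emphasizes.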
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method, in conjunction with a finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezák, Viktor, E-mail: bezak@fmph.uniba.sk
Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
NASA Astrophysics Data System (ADS)
Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.
2018-05-01
In this work, nonlinear aspects of a high-intensity q-Gaussian laser beam propagating in collisionless plasma with an upward exponential density ramp are studied. The nonlinearity in the dielectric function of the plasma is introduced through ponderomotive nonlinearity. The differential equation governing the dimensionless beam-width parameter is obtained using the Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and solved numerically by the fourth-order Runge-Kutta method. The effect of the exponential density ramp on self-focusing of the q-Gaussian beam is systematically examined for various values of q and compared with results for a Gaussian beam propagating in collisionless plasma of uniform density. It is found that the exponential plasma density ramp causes the laser beam to become more focused.
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements over a substantially long time so as to obtain converged statistics for the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, including sensor-drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements with the source height limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here the emphasis is on concentration fluctuations when the plume is released in the outer layer. Profiles of the mean and root-mean-square (r.m.s.) concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths differ between the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
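The reflected Gaussian model mentioned above has a compact form: the wall-normal profile is a Gaussian centered at the source height plus its mirror image in the wall. A sketch (arbitrary normalization and illustrative parameter values):

```python
import numpy as np

def reflected_gaussian(z, s_z, sigma_z):
    """Wall-normal mean-concentration shape for an elevated source at
    height s_z: a Gaussian about the source plus its mirror image in
    the wall, which enforces zero scalar flux through the surface."""
    return (np.exp(-(z - s_z)**2 / (2 * sigma_z**2)) +
            np.exp(-(z + s_z)**2 / (2 * sigma_z**2)))

z = np.linspace(0.0, 1.0, 201)          # z/delta, say
profile = reflected_gaussian(z, s_z=0.2, sigma_z=0.1)
```

The image term matters only near the wall; for a source well into the outer layer it is negligible, which is why deviations there point to physics (intermittency) rather than reflection effects.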
Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum
2017-12-01
Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer is affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. We then examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of the feedforward convergence is a crucial factor for correlation-dependent spike control and thus must be considered important for understanding the mechanism of information transfer in the brain.
On the robustness of the q-Gaussian family
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio; Rodríguez, Antonio; Tsallis, Constantino
2015-12-01
We introduce three deformations, called α-, β- and γ-deformation respectively, of an N-body probabilistic model, first proposed by Rodríguez et al. (2008), having q-Gaussians as N → ∞ limiting probability distributions. The proposed α- and β-deformations are asymptotically scale-invariant, whereas the γ-deformation is not. We prove that, for both α- and β-deformations, the resulting deformed triangles still have q-Gaussians as limiting distributions, with a value of q independent of (dependent on) the deformation parameter in the α-case (β-case). In contrast, the γ-case, where we have used the celebrated Q-numbers and the Gauss binomial coefficients, yields other limiting probability distribution functions, outside the q-Gaussian family. These results suggest that scale-invariance might play an important role regarding the robustness of the q-Gaussian family.
Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.
Lin, Lanny; Goodrich, Michael A
2014-12-01
During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary across search areas due to environmental elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, producing efficient paths with payoffs near the optimal.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
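For a Gaussian kernel, the quadratic (Renyi order-2) entropy mentioned above has a convenient closed form: the integral of the squared kernel density estimate reduces to a double sum of Gaussians evaluated at pairwise differences. A minimal sketch (bandwidth choice is an illustrative assumption, not the optimal rule derived in the paper):

```python
import numpy as np

def quadratic_entropy(x, h):
    """Renyi quadratic entropy H2 = -log(integral of f(x)^2 dx) using a
    Gaussian kernel density estimate with bandwidth h; the integral
    collapses exactly to (1/n^2) * sum_{i,j} N(x_i - x_j; 0, 2 h^2)."""
    x = np.asarray(x, dtype=float)
    var = 2.0 * h * h
    diff = x[:, None] - x[None, :]
    kern = np.exp(-diff**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return -np.log(kern.mean())

rng = np.random.default_rng(2)
h2 = quadratic_entropy(rng.normal(size=500), h=0.3)
print(h2)   # near log(2*sqrt(pi)) ~ 1.27 for a standard normal,
            # biased slightly upward by the kernel smoothing
```

The negative exponential of this quantity is (up to the kernel) the Friedman-Tukey index referred to in the abstract.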
Non-Gaussian statistics of soliton timing jitter induced by amplifier noise.
Ho, Keang-Po
2003-11-15
Based on first-order perturbation theory of the soliton, the Gordon-Haus timing jitter induced by amplifier noise is found to be non-Gaussian distributed. Both frequency and timing jitter have larger tail probabilities than the Gaussian distribution given by the linearized perturbation theory. The timing jitter shows a larger discrepancy from the Gaussian distribution than does the frequency jitter.
Scaling and intermittency in incoherent α-shear dynamo
NASA Astrophysics Data System (ADS)
Mitra, Dhrubaditya; Brandenburg, Axel
2012-03-01
We consider mean-field dynamo models with fluctuating α effect, both with and without large-scale shear. The α effect is chosen to be Gaussian white noise with zero mean and a given covariance. In the presence of shear, we show analytically that (in infinitely large domains) the mean-squared magnetic field shows exponential growth. The growth rate of the fastest growing mode is proportional to the shear rate. This result agrees with earlier numerical results of Yousef et al. and the recent analytical treatment by Heinemann, McWilliams & Schekochihin who use a method different from ours. In the absence of shear, an incoherent α2 dynamo may also be possible. We further show by explicit calculation of the growth rate of third- and fourth-order moments of the magnetic field that the probability density function of the mean magnetic field generated by this dynamo is non-Gaussian.
Study on typhoon characteristic based on bridge health monitoring system.
Wang, Xu; Chen, Bin; Sun, Dezhang; Wu, Yinqiang
2014-01-01
Through the wind velocity and direction monitoring system installed on the Jiubao Bridge over the Qiantang River, Hangzhou, Zhejiang province, China, a full range of wind velocity and direction data was collected during typhoon HAIKUI in 2012. Based on these data, it was found that turbulence intensity is lower at higher observation elevations, and that the longitudinal and lateral turbulence intensities vary with mean wind speed in essentially the same way. The gust factor increases with mean wind speed; its rate of change clearly decreases as wind speed goes down, with only an inconspicuous increase when wind speed is high. The peak factor shows no pronounced change with either time or mean wind speed. The probability density function (PDF) of the fluctuating wind speed follows a Gaussian distribution. The turbulence integral scale increases with mean wind speed, and its PDF does not follow a Gaussian distribution. The power spectrum of the observed fluctuating velocity is in accordance with the von Karman spectrum.
Kullback-Leibler divergence measure of intermittency: Application to turbulence
NASA Astrophysics Data System (ADS)
Granero-Belinchón, Carlos; Roux, Stéphane G.; Garnier, Nicolas B.
2018-01-01
For generic systems exhibiting power law behaviors, and hence multiscale dependencies, we propose a simple tool to analyze multifractality and intermittency, after noticing that these concepts are directly related to the deformation of a probability density function from Gaussian at large scales to non-Gaussian at smaller scales. Our framework is based on information theory and uses Shannon entropy and Kullback-Leibler divergence. We provide an extensive application to three-dimensional fully developed turbulence, seen here as a paradigmatic complex system where intermittency was historically defined and the concepts of scale invariance and multifractality were extensively studied and benchmarked. We compute our quantity on experimental Eulerian velocity measurements, as well as on synthetic processes and phenomenological models of fluid turbulence. Our approach is very general and does not require any underlying model of the system, although it can probe the relevance of such a model.
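The core quantity here, the divergence between a scale-dependent PDF and its Gaussian reference, can be sketched directly from a histogram. The example below (binning and sample sizes are illustrative assumptions) shows the divergence near zero for Gaussian data and larger for a heavy-tailed surrogate:

```python
import numpy as np

def kl_from_gaussian(x, bins=100):
    """D(p || g): Kullback-Leibler divergence between the empirical PDF
    of x and the Gaussian with the same mean and variance. Near zero
    for Gaussian data; grows as the PDF deforms away from Gaussianity
    (e.g. heavy tails of velocity increments at small scales)."""
    p, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    mu, sig = x.mean(), x.std()
    g = np.exp(-(centers - mu)**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / g[m])) * width)

rng = np.random.default_rng(3)
d_gauss = kl_from_gaussian(rng.normal(size=50_000))
d_heavy = kl_from_gaussian(rng.laplace(size=50_000))
print(d_gauss, d_heavy)   # heavy-tailed data sit farther from Gaussian
```

Applied to velocity increments at a range of scales, such a curve traces the deformation from Gaussian at large scales to non-Gaussian at small scales that the paper uses to quantify intermittency.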
Relative performance of selected detectors
NASA Astrophysics Data System (ADS)
Ranney, Kenneth I.; Khatri, Hiralal; Nguyen, Lam H.; Sichina, Jeffrey
2000-08-01
The quadratic polynomial detector (QPD) and the radial basis function (RBF) family of detectors -- including the Bayesian neural network (BNN) -- might well be considered workhorses within the field of automatic target detection (ATD). The QPD works reasonably well when the data is unimodal, and it also achieves the best possible performance if the underlying data follow a Gaussian distribution. The BNN, on the other hand, has been applied successfully in cases where the underlying data are assumed to follow a multimodal distribution. We compare the performance of a BNN detector and a QPD for various scenarios synthesized from a set of Gaussian probability density functions (pdfs). This data synthesis allows us to control parameters such as modality and correlation, which, in turn, enables us to create data sets that can probe the weaknesses of the detectors. We present results for different data scenarios and different detector architectures.
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Robust Lee local statistic filter for removal of mixed multiplicative and impulse noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Astola, Jaakko T.
2004-05-01
A robust version of the Lee local statistic filter, able to effectively suppress mixed multiplicative and impulse noise in images, is proposed. The performance of the proposed modification is studied for a set of test images, several values of multiplicative noise variance, Gaussian and Rayleigh probability density functions of speckle, and different characteristics of impulse noise. The advantages of the designed filter in comparison to the conventional Lee local statistic filter and some other filters able to cope with mixed multiplicative+impulse noise are demonstrated.
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. Super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance are presented. Interpretations and various applications of the results are discussed.
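A numerical sketch of such a model: the growth rate r carries Gaussian white noise and acts on N^q with q > 1 (self-reinforcing growth). The Heun predictor-corrector scheme converges to the Stratonovich solution, matching the interpretation used in the paper; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

def grow(n0, r, q, sigma, dt, steps):
    """Heun (predictor-corrector) integration, converging to the
    Stratonovich solution of dN = (r + sigma*xi(t)) N^q dt with q > 1:
    super-exponential growth with multiplicative environmental noise."""
    n = n0
    for _ in range(steps):
        dw = rng.normal(scale=np.sqrt(dt))
        incr = lambda x: (r * dt + sigma * dw) * x**q
        pred = n + incr(n)                     # Euler predictor
        n = n + 0.5 * (incr(n) + incr(pred))   # trapezoidal corrector
    return n

print(grow(1.0, 0.1, 1.2, 0.05, dt=0.01, steps=1000))  # one realization, t = 10
```

With sigma = 0 the closed-form solution N(t) = (1 - r(q-1)t)^(-1/(q-1)) exhibits the finite-time singularity characteristic of super-exponential growth, which the scheme reproduces before blow-up.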
Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.
Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang
2017-02-14
In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. To eliminate the adverse effects of geomagnetic activity and powerline interference, the measured field data were first preprocessed to suppress baseline wandering and harmonics using symmetric wavelet transform and least-squares methods. Statistical analysis was then performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for the preprocessed ELF atmospheric noise after application of a hole-puncher operator, whereas for VLF atmospheric noise a symmetric α-stable (SαS) distribution more accurately fits the heavy tail of the envelope probability density function (PDF).
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
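The underlying Monte Carlo computation is simple to sketch: draw the three Gaussian components, take the vector norm, and summarize. For equal standard deviations the magnitude follows a Maxwell distribution, which gives a convenient check:

```python
import numpy as np

rng = np.random.default_rng(4)

def dv_magnitude_stats(sigmas, n=400_000):
    """Monte Carlo statistics of |dv| for a TCM delta-v vector with
    independent zero-mean Gaussian components of possibly unequal
    standard deviations."""
    dv = rng.normal(scale=sigmas, size=(n, 3))
    mag = np.linalg.norm(dv, axis=1)
    return mag.mean(), mag.std(), np.percentile(mag, [50.0, 95.0, 99.0])

mean, std, (p50, p95, p99) = dv_magnitude_stats(np.array([1.0, 1.0, 1.0]))
print(mean)   # equal sigmas -> Maxwell distribution, mean = 2*sqrt(2/pi)
```

The percentiles correspond to points of the inverse cumulative distribution that the algorithm approximates; unequal sigmas (where no simple closed form exists) are handled by the same call.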
NASA Astrophysics Data System (ADS)
Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.
2018-02-01
We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional-disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
A Probabilistic, Facility-Centric Approach to Lightning Strike Location
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2012-01-01
A new probabilistic facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
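A minimal sketch of the step-and-rest picture: draw gamma-distributed step lengths and rest periods and accumulate them. The shape and scale values below are illustrative assumptions (in practice they depend on bed elevation and flow, as the study describes):

```python
import numpy as np

rng = np.random.default_rng(5)

def particle_travel(n_steps, step_shape, step_scale, rest_shape, rest_scale):
    """Total distance moved and total rest time for a bed particle whose
    step lengths and rest periods are two-parameter gamma variates."""
    steps = rng.gamma(step_shape, step_scale, n_steps)
    rests = rng.gamma(rest_shape, rest_scale, n_steps)
    return steps.sum(), rests.sum()

dist, rest = particle_travel(1000, step_shape=2.0, step_scale=0.05,
                             rest_shape=1.5, rest_scale=20.0)
print(dist / rest)   # mean virtual transport velocity of the particle
```

Because rest periods dominate the elapsed time, the ratio of accumulated step length to accumulated rest time approximates the particle's long-run transport velocity.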
NASA Astrophysics Data System (ADS)
Vio, R.; Vergès, C.; Andreani, P.
2017-08-01
The matched filter (MF) is one of the most popular and reliable techniques for detecting signals of known structure whose amplitude is smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, the MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, the MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced, called the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
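The two-stage procedure described above can be sketched in a few lines: filter white Gaussian noise in the frequency domain to impose the target spectrum, then map each sample through the Gaussian CDF followed by the target inverse CDF. The low-pass filter and the exponential target below are illustrative choices, not the paper's examples:

```python
import math
import numpy as np

rng = np.random.default_rng(6)
n = 1 << 14

# target spectral content: a simple low-pass amplitude filter (assumed)
freqs = np.fft.rfftfreq(n)                     # cycles per sample
H = 1.0 / np.sqrt(1.0 + (freqs / 0.05) ** 2)

# step 1: color a white Gaussian sample set to the desired spectrum
white = rng.normal(size=n)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                       # ~N(0,1) marginally

# step 2: memoryless inverse transform -- Gaussian CDF, then the target
# inverse CDF (here a unit-mean exponential as the desired distribution)
phi = 0.5 * (1.0 + np.vectorize(math.erf)(colored / math.sqrt(2.0)))
phi = np.clip(phi, 1e-12, 1.0 - 1e-12)
x = -np.log(1.0 - phi)

print(x.mean(), x.min())
```

The pointwise nonlinearity in step 2 distorts the spectrum somewhat, which is exactly the approximation the paper quantifies with its sufficiency condition.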
Non-gaussian statistics of pencil beam surveys
NASA Technical Reports Server (NTRS)
Amendola, Luca
1994-01-01
We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability distribution of the power spectrum peaks by means of an Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if a non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.
Modeling turbulent/chemistry interactions using assumed pdf methods
NASA Technical Reports Server (NTRS)
Gaffney, R. L., Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.
1992-01-01
Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
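The assumed-pdf averaging described above can be sketched numerically: the mean rate coefficient is the integral of the instantaneous rate against the assumed temperature pdf. The Arrhenius parameters and temperature statistics below are hypothetical, and only the Gaussian pdf (not the beta density also used in the paper) is shown.

```python
import numpy as np
from scipy import stats, integrate

# hypothetical Arrhenius rate coefficient k(T) = A * exp(-Ta / T)
A, Ta = 1.0e8, 15000.0
def k(T):
    return A * np.exp(-Ta / T)

Tmean, Trms = 1200.0, 150.0     # mean temperature and fluctuation intensity (K)

# mean rate coefficient <k> = ∫ k(T) p(T) dT with an assumed Gaussian pdf
pdf = stats.norm(Tmean, Trms).pdf
kbar, _ = integrate.quad(lambda T: k(T) * pdf(T), Tmean - 6 * Trms, Tmean + 6 * Trms)
```

Because the Arrhenius rate is strongly convex in temperature at these conditions, the fluctuation-averaged coefficient `kbar` exceeds the rate evaluated at the mean temperature, illustrating the amplification effect the abstract reports.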
Spectrum sensing based on cumulative power spectral density
NASA Astrophysics Data System (ADS)
Nasser, A.; Mansour, A.; Yao, K. C.; Abdallah, H.; Charara, H.
2017-12-01
This paper presents new spectrum sensing algorithms based on the cumulative power spectral density (CPSD). The proposed detectors examine the CPSD of the received signal to make a decision on the absence/presence of the primary user (PU) signal. These detectors require the whiteness of the noise in the band of interest. The false alarm and detection probabilities are derived analytically and simulated under Gaussian and Rayleigh fading channels. Our proposed detectors outperform the energy detector (ED) and the cyclostationary detector (CSD). Moreover, in the presence of noise uncertainty (NU), they are shown to be more robust than the ED, with less performance loss. To mitigate the NU, we modified our algorithms to be independent of the noise variance.
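The idea behind a CPSD detector can be sketched as follows: under white noise the normalised cumulative periodogram grows linearly with frequency, so a deviation from the linear ramp signals a PU. The threshold, tone frequency, and amplitude below are illustrative assumptions, not the paper's detector design or its analytic thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

def cpsd_detect(x, thresh=0.05):
    # Under noise only (white), the normalised cumulative PSD grows linearly
    # with frequency; a PU signal concentrates power and bends the curve.
    psd = np.abs(np.fft.rfft(x)) ** 2
    cpsd = np.cumsum(psd) / psd.sum()
    uniform = np.linspace(0.0, 1.0, cpsd.size)
    return np.max(np.abs(cpsd - uniform)) > thresh

n = 4096
noise = rng.standard_normal(n)                                 # PU absent
tone = noise + 0.8 * np.sin(2 * np.pi * 0.125 * np.arange(n))  # PU present
```

Note that, as in the abstract, the decision statistic needs no estimate of the noise variance because the cumulative spectrum is normalised by the total power.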
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we will consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
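The baseline ellipsoid-based scoring discussed above can be sketched with a Mahalanobis distance built from a centroid and covariance estimate. This is the traditional sample-statistics baseline the paper critiques, not its robust Khachiyan minimum-volume ellipsoid; the clutter model, outlier placement, and threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical background clutter with a few large "lever-arm" outliers mixed in
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.8], [0.8, 1.0]], size=5000)
X[:20] += 25.0                                   # outliers far from the bulk

mu = np.median(X, axis=0)                        # robust centroid estimate
C = np.cov(X, rowvar=False)                      # sample covariance (outlier-sensitive)

# anomaly score: squared Mahalanobis distance to the background ellipsoid
Ci = np.linalg.inv(C)
d2 = np.einsum('ij,jk,ik->i', X - mu, Ci, X - mu)
alarms = d2 > 20.0                               # chi-square-like threshold
```

Even this simple scheme flags the injected outliers, but note how the contaminated covariance inflates the ellipsoid, which is exactly the confounding effect that motivates robust estimators.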
NASA Astrophysics Data System (ADS)
Hassan, M. A. M.; Nour El-Din, M. S. M.; Ellithi, A.; Hosny, H.; Salama, T. N. E.
2017-10-01
In the framework of the Glauber optical limit approximation, where the Coulomb effect is taken into account, the elastic scattering differential cross section for halo nuclei on ^12C at 800 MeV/N has been calculated. Its sensitivity to the halo densities and to the root mean square radii of the core and halo is the main goal of the current study. The projectile nuclei are taken to be one-neutron and two-neutron halos. The calculations are carried out for Gaussian-Gaussian, Gaussian-Oscillator and Gaussian-2s phenomenological densities for each considered projectile in the mass number range 6-29. Also included is a comparison between the results obtained with the phenomenological densities and those obtained with the microscopic LSSM densities of ^6He and ^11Li and the microscopic GCM density of ^11Be, where the density of the target nucleus ^12C obtained from electron-^12C scattering is used. The zero range approximation is adopted in the calculations. We find that the sensitivity of the elastic scattering differential cross section to the halo density is clear if the nucleus appears as two distinct clusters, core and halo.
Cosine-Gaussian Schell-model sources.
Mei, Zhangrong; Korotkova, Olga
2013-07-15
We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed, may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture-induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
NASA Astrophysics Data System (ADS)
Karakatsanis, L. P.; Iliopoulos, A. C.; Pavlos, E. G.; Pavlos, G. P.
2018-02-01
In this paper, we perform statistical analysis of time series derived from Earth's climate. The time series concern the Geopotential Height (GH) and correspond to temporal and spatial components of the global distribution of monthly average values during the period 1948-2012. The analysis is based on Tsallis non-extensive statistical mechanics, and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsens, qrel}, the reconstructed phase space, and the estimation of the correlation dimension and of the Hurst exponent from rescaled range analysis (R/S). The deviation of the Tsallis q-triplet from unity indicates a non-Gaussian (Tsallis q-Gaussian), non-extensive character with heavy-tailed probability density functions (PDFs), multifractal behavior, and long-range dependence for all time series considered. Noticeable differences in the q-triplet estimates were also found between time series from distinct spatial or temporal regions. Moreover, the reconstructed phase space revealed a lower-dimensional fractal set in the GH dynamical phase space (strong self-organization), and the estimated Hurst exponents indicated multifractality, non-Gaussianity and persistence. The analysis provides significant information for identifying and characterizing the dynamical characteristics of Earth's climate.
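The rescaled range (R/S) analysis mentioned above can be sketched as follows: the Hurst exponent is the slope of log(R/S) against log(window size). The dyadic window schedule and the white-noise sanity check are illustrative choices, not the paper's processing pipeline.

```python
import numpy as np

def hurst_rs(x, min_window=64):
    # classical rescaled-range (R/S) estimate: slope of log E[R/S] vs log n
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_window
    while n <= len(x) // 4:
        chunks = x[: (len(x) // n) * n].reshape(-1, n)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)    # range of cumulative deviations
        S = chunks.std(axis=1)                   # chunk standard deviation
        rs.append(np.mean(R / S))
        sizes.append(n)
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h = hurst_rs(rng.standard_normal(2**14))         # white noise: H near 0.5
```

For an uncorrelated series the estimate should sit near 0.5; persistent (long-range dependent) series such as the GH data push H above 0.5, with a known small-sample upward bias at short windows.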
Exploring the propagation of relativistic quantum wavepackets in the trajectory-based formulation
NASA Astrophysics Data System (ADS)
Tsai, Hung-Ming; Poirier, Bill
2016-03-01
In the context of nonrelativistic quantum mechanics, Gaussian wavepacket solutions of the time-dependent Schrödinger equation provide useful physical insight. This is not the case for relativistic quantum mechanics, however, for which both the Klein-Gordon and Dirac wave equations result in strange and counterintuitive wavepacket behaviors, even for free-particle Gaussians. These behaviors include zitterbewegung and other interference effects. As a potential remedy, this paper explores a new trajectory-based formulation of quantum mechanics, in which the wavefunction plays no role [Phys. Rev. X, 4, 040002 (2014)]. Quantum states are represented as ensembles of trajectories, whose mutual interaction is the source of all quantum effects observed in nature—suggesting a “many interacting worlds” interpretation. It is shown that the relativistic generalization of the trajectory-based formulation results in well-behaved free-particle Gaussian wavepacket solutions. In particular, probability density is positive and well-localized everywhere, and its spatial integral is conserved over time—in any inertial frame. Finally, the ensemble-averaged wavepacket motion is along a straight line path through spacetime. In this manner, the pathologies of the wave-based relativistic quantum theory, as applied to wavepacket propagation, are avoided.
NASA Astrophysics Data System (ADS)
Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf
2018-04-01
A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
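The MCMC machinery described above can be illustrated with a minimal random-walk Metropolis sampler on a multimodal target. This is a generic sketch, not the DREAM algorithm or the UCODE_2014 implementation; the bimodal posterior, proposal scale, and burn-in length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # hypothetical multimodal 1-D posterior: equal mixture of N(-3,1) and N(3,1)
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0.0, 2.5)              # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                  # Metropolis accept/reject
    chain.append(x)

chain = np.array(chain[5_000:])                   # discard burn-in
lo, hi = np.quantile(chain, [0.025, 0.975])       # 95% credible interval
```

The credible interval is read directly from posterior-sample quantiles, which is why, as the abstract notes, MCMC intervals need few distributional assumptions but many model runs.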
Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.
2016-02-01
In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume would remain constant for flood volumes less than 1.0 × 10^5 m^3/s day, but present a significant decreasing trend for flood volumes larger than 1.7 × 10^5 m^3/s day; and (ii) the bivariate risk for flood peak-duration would not change significantly for flood durations less than 8 days, and would then decrease significantly as the duration becomes larger. The probability density functions (pdfs) of the flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
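The joint-return-period calculation at the core of the framework can be sketched as follows. A Clayton copula is used here purely as an illustrative stand-in for the fitted copulas, and the marginal non-exceedance probabilities are hypothetical values (e.g. a 100-year peak and a 50-year volume) rather than outputs of the Gaussian-mixture marginals.

```python
import numpy as np

def clayton_cdf(u, v, theta=2.0):
    # Clayton copula, an illustrative stand-in for the fitted copulas in the paper
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# hypothetical marginal non-exceedance probabilities from the fitted marginals
# (e.g. 100-year flood peak, 50-year flood volume)
u_peak, v_vol = 0.99, 0.98

# "OR" joint return period: either variable exceeds its design value (in years,
# assuming one flood event per year on average)
T_or = 1.0 / (1.0 - clayton_cdf(u_peak, v_vol))
# "AND" joint return period: both variables exceed simultaneously
T_and = 1.0 / (1.0 - u_peak - v_vol + clayton_cdf(u_peak, v_vol))
```

As expected, the "OR" return period is shorter than either marginal return period, while the "AND" return period is longer; the bivariate risk in the paper is derived from these joint return periods.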
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
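The integral described above (a bivariate Gaussian density integrated over a disk that is offset from the error-ellipse center) can be checked by Monte Carlo. The ellipse parameters, point of interest, and radius below are hypothetical; the paper's own method uses an adapted analytic/numerical integration rather than sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical location error ellipse: bivariate Gaussian for the stroke position
mean = np.array([1.2, -0.4])                      # most likely stroke location (km)
cov = np.diag([0.5**2, 0.3**2])                   # error-ellipse semi-axes as std devs

poi = np.array([0.0, 0.0])                        # point of interest (not the ellipse center)
radius = 1.0                                      # km

# Monte Carlo integration of the bivariate density over the disk around the point
samples = rng.multivariate_normal(mean, cov, size=200_000)
p = (np.linalg.norm(samples - poi, axis=1) <= radius).mean()
```

The estimate `p` is the probability that the stroke fell within the specified radius of the point of interest, which is the quantity engineers would compare against a damage-risk threshold.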
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method of statistical source separation for space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA): whereas ICA is restricted to scalar sources, ISA looks for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field. The rationale of the technique builds upon projection pursuit, looking for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources; sources are thus sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The resulting residuals, after spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, an advantage with respect to the Independent Components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated.
The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the clusters' centroids of the joint probability density function tend to be located. That favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading model's non-Gaussian dyad is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet-stream and 2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, possible only thanks to joint non-Gaussianity. That kind of triadic synergy is accounted for by an information-theoretic measure: the Interaction Information. The dominant model's triad occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli
2017-10-07
The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinical settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then adjusted using the Gaussian probability density function. Finally, Bayes decision theory is used to build a probabilistic classifier with the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that the proposed MFRB method achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals
NASA Astrophysics Data System (ADS)
Buchert, Thomas; France, Martin J.; Steiner, Frank
2017-05-01
Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak, (1-2)σ level of non-Gaussianity of the foreground-corrected, masked Planck 2015 maps.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is addressed in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excess-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The conducted analysis makes it possible to conclude that a model based on the Poisson random process is applicable for the statistical description of the variations {δfoF2} and for probabilistic estimates of their magnitude during heliogeophysical disturbances.
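The Kolmogorov-criterion comparison described above can be sketched as follows. The synthetic skewed sample is a hypothetical stand-in for the {δfoF2} disturbances, and a fitted exponential plays the role of the non-Gaussian model family; neither is the paper's actual data or model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical stand-in for the {δfoF2} samples: skewed, heavy-tailed
# disturbances that a Gaussian model cannot capture
sample = rng.exponential(scale=1.0, size=1000) - 1.0

# Kolmogorov-Smirnov comparison against a fitted Gaussian (clearly rejected)
_, p_gauss = stats.kstest(sample, 'norm', args=(sample.mean(), sample.std(ddof=1)))
# ... and against a fitted member of the "correct" non-Gaussian family
_, p_model = stats.kstest(sample, 'expon', args=stats.expon.fit(sample))
```

The skewed model family attains a far larger coincidence probability than the Gaussian, mirroring the paper's P ≈ 0.7-0.9 agreement for its Poisson-based model. (Strictly, using fitted parameters in a KS test biases the p-value; it is adequate for this qualitative comparison.)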
Topology of large-scale structure in seeded hot dark matter models
NASA Technical Reports Server (NTRS)
Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.
1992-01-01
The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and the parameters of the Gaussian are obtained from the statistical information of the best individuals via a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
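The Gaussian-EDA loop described above (sample the model, select the best individuals, refit the Gaussian from their statistics, keep an elite record) can be sketched as follows. This is a generic Gaussian EDA with truncation selection, not FEGEDA's specific fast learning rule; the benchmark and population sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(X):
    # benchmark objective: minimise the sum of squares (optimum at 0)
    return np.sum(X**2, axis=1)

dim, pop, n_elite = 10, 200, 40
mu = rng.uniform(-2.0, 2.0, dim)                 # initial Gaussian model mean
sigma = np.full(dim, 5.0)                        # initial model std devs
best_x, best_f = None, np.inf                    # elitism: best-so-far record

for _ in range(100):
    X = rng.normal(mu, sigma, size=(pop, dim))   # sample the Gaussian model
    f = sphere(X)
    order = np.argsort(f)
    if f[order[0]] < best_f:                     # elitist update
        best_f, best_x = f[order[0]], X[order[0]]
    elite = X[order[:n_elite]]
    mu = elite.mean(axis=0)                      # refit the model from the
    sigma = elite.std(axis=0) + 1e-12            # statistics of the best individuals
```

The model variance contracts around the optimum over the generations; the elitist record guards against losing the best solution when the model is refit.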
Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences
NASA Astrophysics Data System (ADS)
Aurzada, Frank; Buck, Micha
2018-02-01
We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.
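The persistence probability studied above can be estimated by Monte Carlo for the simplest case: the one-sided event that an integrated Gaussian random walk (a discrete analogue of integrated Brownian motion) stays positive. The path counts and horizon below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def persistence_prob(n_steps, n_paths=20_000):
    # Monte Carlo estimate of the one-sided persistence probability
    # P(S_1 > 0, ..., S_n > 0) for the integrated Gaussian random walk
    z = rng.standard_normal((n_paths, n_steps))
    s = np.cumsum(np.cumsum(z, axis=1), axis=1)   # integrated partial sums
    return (s > 0).all(axis=1).mean()

p = persistence_prob(100)   # decays like a power law in n_steps
```

In continuous time, Molchan's results give a persistence exponent of 1/4 for integrated Brownian motion, so the estimate should decay roughly like n^(-1/4) as the horizon grows.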
Non-local bias in the halo bispectrum with primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tellarini, Matteo; Ross, Ashley J.; Wands, David
2015-07-01
Primordial non-Gaussianity can lead to a scale-dependent bias in the density of collapsed halos relative to the underlying matter density. The galaxy power spectrum already provides constraints on local-type primordial non-Gaussianity complementary to those from the cosmic microwave background (CMB), while the bispectrum contains additional shape information and has the potential to outperform CMB constraints in the future. We develop the bias model for the halo density contrast in the presence of local-type primordial non-Gaussianity, deriving a bivariate expansion up to second order in terms of the local linear matter density contrast and the local gravitational potential in Lagrangian coordinates. Nonlinear evolution of the matter density introduces a non-local tidal term in the halo model. Furthermore, the presence of local-type non-Gaussianity in the Lagrangian frame leads to a novel non-local convective term in the Eulerian frame that is proportional to the displacement field when going beyond the spherical collapse approximation. We use an extended Press-Schechter approach to evaluate the halo mass function and thus the halo bispectrum. We show that including these non-local terms in the halo bispectra can lead to corrections of up to 25% for some configurations, on large scales or at high redshift.
Bayesian ionospheric multi-instrument 3D tomography
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Vierinen, Juha; Roininen, Lassi
2017-04-01
The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. Especially the vertical electron density profile is determined predominantly by the regularisation. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics and on other available ionospheric measurements and their statistics. Updating the prior with measurements yields the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements, and how reliable the final result is. In addition, the combination of different measurements and the temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires the inversion of large covariance matrices, which is computationally infeasible. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. The approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for low Earth orbit beacon satellite signals, from GNSS receiver networks, and from EISCAT ionosondes and incoherent scatter radars.
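The sparse-precision idea behind the Gaussian Markov random field approach can be sketched in a toy 1D setting: a second-difference smoothness prior gives a sparse precision matrix, and the Gaussian posterior mean follows from a single sparse linear solve. The grid size, forward operator, noise level, and prior weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 200  # grid points of a hypothetical 1D electron-density profile

# GMRF prior: sparse precision built from a second-difference (smoothness) operator
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
Q_prior = 1e2 * (D.T @ D) + 1e-4 * sp.identity(n)

# Linear forward model: m sparse "line-integral" measurements y = A x + noise
m = 50
A = sp.random(m, n, density=0.05, format="csr", random_state=0)
x_true = np.exp(-((np.arange(n) - 80) / 25.0) ** 2)  # smooth test profile
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(m)

# Gaussian posterior: Q_post = Q_prior + A^T A / sigma^2 stays sparse,
# and the posterior mean solves Q_post mu = A^T y / sigma^2
Q_post = (Q_prior + (A.T @ A) / sigma**2).tocsc()
mu = spla.spsolve(Q_post, (A.T @ y) / sigma**2)
```

The point of the sketch is that the posterior precision inherits the sparsity of the prior precision and the forward operator, so no dense covariance matrix is ever formed.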
Global tracking of space debris via CPHD and consensus
NASA Astrophysics Data System (ADS)
Wei, Baishen; Nener, Brett; Liu, Weifeng; Ma, Liang
2017-05-01
Space debris tracking is of great importance for safe operation of spacecraft. This paper presents an algorithm that achieves global tracking of space debris with a multi-sensor network. The sensor network has unknown and possibly time-varying topology. A consensus algorithm is used to effectively counteract the effects of data incest. Gaussian Mixture-Cardinalized Probability Hypothesis Density (GM-CPHD) filtering is used to estimate the state of the space debris. As an example of the method, 45 clusters of sensors are used to achieve global tracking. The performance of the proposed approach is demonstrated by simulation experiments.
A Low Fidelity Simulation To Examine The Design Space For An Expendable Active Decoy
2017-12-01
Naval Postgraduate School, Monterey, CA 93943-5000. ...discussed in Aircraft Model Section III.E), and read the associated integration improvement factor off the left side of Figure 10. Here V0 is the root mean squared noise level, and Eq. (18) gives the density of the envelope of Gaussian noise, p_N(V) = (V / V0^2) exp(-V^2 / (2 V0^2)). The probability density function for Gaussian noise is
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (~0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
Reaction-diffusion on the fully-connected lattice: A + A → A
NASA Astrophysics Data System (ADS)
Turban, Loïc; Fortin, Jean-Yves
2018-04-01
Diffusion-coagulation can be simply described by dynamics in which particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and the survival-time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme-value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.
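The process described above is easy to simulate directly. The sketch below is a minimal Monte Carlo version of A + A → A on the complete graph (every site a neighbour of every other), not the paper's exact calculation; the lattice size and seed are arbitrary.

```python
import random

def coalesce_fully_connected(N, seed=0):
    """Random walk with coalescence on the fully-connected lattice of N sites.
    Each step: a uniformly chosen particle jumps to a uniformly chosen site;
    if that site is occupied, the two particles merge (A + A -> A).
    Returns the number of steps until a single particle survives."""
    rng = random.Random(seed)
    occupied = set(range(N))  # start fully occupied: one particle per site
    steps = 0
    while len(occupied) > 1:
        steps += 1
        src = rng.choice(tuple(occupied))
        dst = rng.randrange(N)
        if dst == src:
            continue  # jump onto its own site: nothing changes
        occupied.discard(src)
        occupied.add(dst)  # if dst was occupied, the set keeps one copy: a merge
    return steps
```

Since each step removes at most one particle, at least N - 1 steps are always needed; the strong sample-to-sample fluctuations of the survival time are what the paper characterises exactly.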
Statistical properties of Charney-Hasegawa-Mima zonal flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.
2015-05-15
A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relax to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.
NASA Astrophysics Data System (ADS)
Zhou, GuoQuan; Cai, YangJian; Dai, ChaoQing
2013-05-01
A kind of hollow vortex Gaussian beam is introduced. Based on the Collins integral, an analytical propagation formula for a hollow vortex Gaussian beam through a paraxial ABCD optical system is derived. Due to the special distribution of the optical field, which is caused by the initial vortex phase, the dark region of a hollow vortex Gaussian beam will not disappear upon propagation. The analytical expressions for the beam propagation factor, the kurtosis parameter, and the orbital angular momentum density of a hollow vortex Gaussian beam passing through a paraxial ABCD optical system are also derived. The beam propagation factor is determined by the beam order and the topological charge. The kurtosis parameter and the orbital angular momentum density depend on beam order n, topological charge m, parameter γ, and transfer matrix elements A and D. As a numerical example, the propagation properties of a hollow vortex Gaussian beam in free space are demonstrated. The hollow vortex Gaussian beam has excellent propagation stability and promising application prospects in optical micromanipulation.
The area of isodensity contours in cosmological models and galaxy surveys
NASA Technical Reports Server (NTRS)
Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.
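The contour crossing statistic above has a simple one-dimensional illustration: for a Gaussian random line, Rice's formula predicts that the crossing rate of level u falls off as exp(-u²/2σ²), which is the "Gaussian shape of the contour crossing curve" the abstract refers to. The sketch below is an illustrative Monte Carlo check under assumed smoothing parameters, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

def crossings_per_length(field, level):
    """Fraction of unit sample intervals on which a 1D line through the
    field crosses the given contour level (the contour-crossing statistic)."""
    s = field - level
    return np.count_nonzero(s[:-1] * s[1:] < 0) / len(field)

# Smoothed Gaussian random line: white noise convolved with a Gaussian kernel
n, L = 400_000, 25  # samples and smoothing length (arbitrary choices)
kernel = np.exp(-0.5 * (np.arange(-3 * L, 3 * L + 1) / L) ** 2)
field = np.convolve(rng.standard_normal(n), kernel, mode="valid")
field /= field.std()  # unit-variance Gaussian line
```

For a unit-variance Gaussian line the crossing rate at level 1 should be about exp(-1/2) ≈ 0.61 of the rate at level 0, independent of the smoothing length.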
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
NASA Astrophysics Data System (ADS)
Kumar, Santosh
2015-11-01
We provide a proof to a recent conjecture by Forrester (2014 J. Phys. A: Math. Theor. 47 065202) regarding the algebraic and arithmetic structure of Meijer G-functions which appear in the expression for probability of all eigenvalues real for the product of two real Gaussian matrices. In the process we come across several interesting identities involving Meijer G-functions.
Anomalous and non-Gaussian diffusion in Hertzian spheres
NASA Astrophysics Data System (ADS)
Ouyang, Wenze; Sun, Bin; Sun, Zhiwei; Xu, Shenghua
2018-09-01
By means of molecular dynamics simulations, we study the non-Gaussian diffusion in the fluid of Hertzian spheres. The time-dependent non-Gaussian parameter, an indicator of dynamic heterogeneity, increases with temperature. When the temperature is high enough, the dynamic heterogeneity becomes very significant, and, counterintuitively, both the maximum of the non-Gaussian parameter and the position of its peak decrease monotonically with increasing density. By fitting the curves of the self-intermediate scattering function, we find that the characteristic relaxation time τα is, surprisingly, not coupled to the time τmax at which the non-Gaussian parameter reaches its maximum. The intriguing features of non-Gaussian diffusion at sufficiently high temperatures can be associated with the weakly correlated mean-field behaviour of Hertzian spheres. In particular, τmax is nearly inversely proportional to the density at extremely high temperatures.
NASA Astrophysics Data System (ADS)
Aygun, M.; Kucuk, Y.; Boztosun, I.; Ibraheem, Awad A.
2010-12-01
The elastic scattering angular distributions of a 6He projectile on various medium- and heavy-mass target nuclei, including 12C, 27Al, 58Ni, 64Zn, 65Cu, 197Au, 208Pb and 209Bi, have been examined by using few-body and Gaussian-shaped density distributions at various energies. The microscopic real parts of the complex nuclear optical potential have been obtained by using the double-folding model for each of the density distributions, and the phenomenological imaginary potentials have been taken to be of Woods-Saxon type. Comparative results for the few-body and Gaussian-shaped density distributions, together with the experimental data, are presented within the framework of the optical model.
Gaussian statistics for palaeomagnetic vectors
Love, J.J.; Constable, C.G.
2003-01-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. 
With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico is consistent with the widely held suspicion that directional data are more accurate than intensity data.
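The two limiting regimes of the intensity distribution described above (Maxwell for large relative dispersion, approximately normal for small relative dispersion) are easy to verify by direct simulation of isotropic Gaussian vectors. The sketch below uses arbitrary parameter values and is only an illustration of the model, not the paper's estimation machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensities(mean_vec, sigma, n=200_000):
    """Draw isotropic Gaussian vectors v = mean_vec + sigma * N(0, I_3)
    and return their intensities |v|."""
    v = mean_vec + sigma * rng.standard_normal((n, 3))
    return np.linalg.norm(v, axis=1)

# Large relative dispersion (zero mean): |v| is Maxwell, with E|v| = sigma*sqrt(8/pi)
F_maxwell = intensities(np.zeros(3), 1.0)
# Small relative dispersion: |v| approaches a normal centred near |mean_vec|
F_normal = intensities(np.array([50.0, 0.0, 0.0]), 1.0)
```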
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. 
With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series and show that it leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
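The "original MMSE nonlinearity" that the paper approximates can be evaluated by brute-force quadrature: for y = x + n with Gaussian noise and a Laplacian prior (a standard super-Gaussian example), the posterior mean E[x|y] is a ratio of two integrals. The sketch below is a generic numerical reference, with assumed parameter values, not the paper's closed-form estimator.

```python
import numpy as np

def mmse_laplace(y, sigma=1.0, b=1.0, grid=None):
    """Numerical MMSE estimate E[x|y] for y = x + n, n ~ N(0, sigma^2),
    under a Laplacian (super-Gaussian) prior p(x) proportional to exp(-|x|/b).
    Brute-force quadrature on a symmetric grid."""
    if grid is None:
        grid = np.linspace(-30.0, 30.0, 20001)
    w = np.exp(-0.5 * ((y - grid) / sigma) ** 2) * np.exp(-np.abs(grid) / b)
    return np.sum(grid * w) / np.sum(w)
```

The resulting nonlinearity is an odd shrinkage function: it pulls the noisy observation toward zero, more strongly for small |y|.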
Goerg, Georg M.
2015-01-01
I present a parametric, bijective transformation to generate heavy-tail versions of arbitrary random variables. The tail behavior of this heavy-tail Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X Gaussian it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
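For the Gaussian case the transformation and its Lambert W inverse take a particularly simple form: z = u·exp(δu²/2) maps a standard normal u to a Tukey-h heavy-tailed variable, and the principal branch of Lambert W inverts it exactly. This is a minimal Python sketch of that pair (the paper's reference implementation is the R package LambertW).

```python
import numpy as np
from scipy.special import lambertw

def heavy_tail(u, delta):
    """Forward Tukey-h / Lambert W x Gaussian transform: z = u * exp(delta*u^2/2)."""
    u = np.asarray(u, dtype=float)
    return u * np.exp(0.5 * delta * u**2)

def remove_heavy_tail(z, delta):
    """Inverse transform via the principal Lambert W branch:
    u = sign(z) * sqrt(W(delta * z^2) / delta), recovering the light-tailed input."""
    z = np.asarray(z, dtype=float)
    if delta == 0:
        return z
    return np.sign(z) * np.sqrt(np.real(lambertw(delta * z**2)) / delta)
```

Since δz² ≥ 0, the principal branch W₀ is real-valued here and the round trip is exact to floating precision.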
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
The estimation of pollutant concentration in agricultural drainage is the key point of quantitative research on agricultural non-point-source pollution load. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, while the pollutant concentration in the agricultural drainage is treated as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil is expressed by the inverse Gaussian probability density function, and its behaviour at different crop growth periods is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated with this model. The results show that the simulated results agree approximately with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
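The inverse Gaussian kernel at the heart of the model above is easy to write down and check numerically. The parameter values (mean travel time mu and shape lam) below are arbitrary placeholders, not the calibrated values from the Qing Tong Xia study.

```python
import numpy as np

def inverse_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian density, used here as an impulse-response kernel for
    pollutant travel time: f(t) = sqrt(lam/(2*pi*t^3)) * exp(-lam*(t-mu)^2/(2*mu^2*t)).
    mu is the mean travel time, lam the shape parameter."""
    t = np.asarray(t, dtype=float)
    return np.sqrt(lam / (2.0 * np.pi * t**3)) * np.exp(
        -lam * (t - mu) ** 2 / (2.0 * mu**2 * t)
    )

# Concentration response to a unit impulse input (fertilization + irrigation event)
t = np.linspace(1e-3, 60.0, 60000)
pdf = inverse_gaussian_pdf(t, mu=5.0, lam=10.0)
```

A well-formed kernel should integrate to one with mean mu, which is what a quick Riemann-sum check confirms.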
Stochastic bifurcation in a model of love with colored noise
NASA Astrophysics Data System (ADS)
Yue, Xiaokui; Dai, Honghua; Yuan, Jianping
2015-07-01
In this paper, we examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are considered briefly, including the equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influences of Gaussian colored noise with different parameters are explored, such as the phase plots, top Lyapunov exponents, stationary probability density function (PDF) and stochastic bifurcation. The stochastic P-bifurcation, through a qualitative change of the stationary PDF, is observed, and a bifurcation diagram in the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
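Gaussian colored noise with a prescribed correlation time, as used above, is conventionally generated from an Ornstein-Uhlenbeck process. The sketch below uses the exact discrete-time OU update with autocorrelation (D/τ)·exp(-|Δt|/τ); the parameter values are illustrative, not those of the love model.

```python
import numpy as np

def ou_colored_noise(n_steps, dt, tau, D, seed=2):
    """Gaussian colored noise via the exact Ornstein-Uhlenbeck update:
    stationary variance D/tau, autocorrelation (D/tau) * exp(-|t-s|/tau)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)      # one-step correlation
    var = D / tau                # stationary variance
    x = np.empty(n_steps)
    x[0] = np.sqrt(var) * rng.standard_normal()
    for k in range(1, n_steps):
        x[k] = rho * x[k - 1] + np.sqrt(var * (1.0 - rho**2)) * rng.standard_normal()
    return x
```

Because the update uses the exact transition density, the sample path is stationary from the first step and can be fed directly into a noisy dynamical model at any step size.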
NASA Astrophysics Data System (ADS)
Ogunsua, B. O.; Laoye, J. A.
2018-05-01
In this paper, the Tsallis non-extensive q-statistics of ionospheric dynamics is investigated using the total electron content (TEC) obtained from two Global Positioning System (GPS) receiver stations. The investigation covers geomagnetically quiet and storm periods. The micro-density variation of the ionospheric total electron content was extracted from the TEC data by detrending. The detrended total electron content, which represents the variation in the internal dynamics of the system, was further analysed within non-extensive statistical mechanics using q-Gaussian methods. Our results reveal that for all the analysed data sets the Tsallis Gaussian probability distribution (q-Gaussian) fits with values q > 1. There is no distinct difference in pattern between the values of q_quiet and q_storm; however, the values of q vary with geophysical conditions and possibly with local dynamics at the two stations. Also observed are the asymmetric pattern of the q-Gaussian and a highly significant correlation between the q-index values obtained at the two GPS receiver stations for the storm periods, compared to the quiet periods. The factors responsible for this variation can mostly be attributed to the varying mechanisms resulting in the self-reorganisation of the system dynamics during storm periods. The results show the existence of long-range correlation for both quiet and storm periods at the two stations.
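The q-Gaussian family used above interpolates between the ordinary Gaussian (q → 1) and heavy, power-law tails (q > 1). A minimal sketch, normalised numerically on a grid rather than with the closed-form normalisation constant:

```python
import numpy as np

def q_gaussian(x, q, beta=1.0):
    """Tsallis q-Gaussian density for 1 < q < 3 (full real-line support):
    p(x) proportional to [1 + (q-1)*beta*x^2]^(-1/(q-1)),
    normalised numerically on the supplied uniform grid."""
    x = np.asarray(x, dtype=float)
    p = (1.0 + (q - 1.0) * beta * x**2) ** (-1.0 / (q - 1.0))
    dx = x[1] - x[0]
    return p / (p.sum() * dx)
```

As q → 1 the density reduces to exp(-beta·x²) up to normalisation, while q = 2 gives a Cauchy-like shape whose tails dominate the Gaussian's everywhere far from the origin.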
NASA Astrophysics Data System (ADS)
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for the determination of the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young modulus of the matrix, while the 3D homogenization scheme is based on a numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. The entire series of deterministic solutions with varying Young modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young modulus affects the effective stiffness tensor components and their Probability Density Function (PDF).
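The response-function pipeline above (deterministic runs, polynomial recovery, stochastic Taylor expansion, Monte Carlo cross-check) can be sketched with a scalar toy problem. The closed-form "effective stiffness" below is a hypothetical stand-in for the series of ABAQUS runs, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the deterministic FEM homogenisation runs: a smooth
# effective-stiffness response to the matrix Young modulus E (hypothetical)
def effective_stiffness(E):
    return 3.0 + 0.8 * E + 0.05 * E**2

E0, sE = 10.0, 1.0  # Gaussian input: mean and standard deviation
E_design = np.linspace(E0 - 3 * sE, E0 + 3 * sE, 11)  # design points
coeff = np.polyfit(E_design, effective_stiffness(E_design), deg=2)  # LS recovery
p = np.poly1d(coeff)
dp, d2p = p.deriv(1), p.deriv(2)

# Second-order stochastic (Taylor) perturbation about the mean input:
# E[Y] ~ p(E0) + p''(E0) sE^2 / 2,  Var[Y] ~ p'(E0)^2 sE^2 + p''(E0)^2 sE^4 / 2
mean_pert = p(E0) + 0.5 * d2p(E0) * sE**2
var_pert = dp(E0) ** 2 * sE**2 + 0.5 * d2p(E0) ** 2 * sE**4

# Monte Carlo cross-check on the same recovered response function
samples = p(E0 + sE * rng.standard_normal(400_000))
```

For a quadratic response with Gaussian input the second-order perturbation formulas are exact, so the two estimates should agree to Monte Carlo accuracy.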
Analysis of randomly time varying systems by gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
On the streaming model for redshift-space distortions
NASA Astrophysics Data System (ADS)
Kuruvilla, Joseph; Porciani, Cristiano
2018-06-01
The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. In the first case, we recover the classic equation while we demonstrate that modifications are necessary for unordered pairs. We then discuss several statistical properties of the pairwise velocities for DM particles and haloes by using a suite of high-resolution N-body simulations. We test the often used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians which is known in statistics as the generalised hyperbolic distribution and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.
Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J
2018-05-29
Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained that were classified as Gaussian, power law-like containing a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDF.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
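The core PM operation applied to the magnitudes can be sketched as follows: the sorted magnitudes of a vector are replaced by a fixed sorted template arranged in the same rank order, and the rank permutation itself is the label sent over the public channel. This is only a schematic illustration; the template and label encoding are assumptions, not the paper's code construction.

```python
import numpy as np

def pm_quantize(x, template):
    """Permutation-modulation quantisation sketch: replace the magnitudes of x
    by a fixed sorted template in matching rank order, keeping the signs.
    Returns the quantised vector and the rank permutation (the PM label)."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))          # permutation label: magnitude ranks
    y = np.empty_like(x)
    y[order] = np.sort(np.asarray(template, dtype=float))  # smallest |x| -> smallest value
    return np.sign(x) * y, order
```

Note the label reveals only the magnitude ordering, which is consistent with the paper's point that the sign information, carried separately, is what the low-SNR error correction must protect.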
Non-Gaussian and Multivariate Noise Models for Signal Detection.
1982-09-01
follow, some of the basic results of asymptotic theory are presented, both to make the notation clear and to give some background for the ... densities are considered within a detection framework. The discussions include specific examples and also some general methods of density generation ... densities generated by a memoryless, nonlinear transformation of a correlated, Gaussian source is discussed in some detail. A member of this class has the
Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities
NASA Astrophysics Data System (ADS)
Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii I.
A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations, as well as the intervals between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We considered the stationary time cases of both small-scale and large-scale rarefied inhomogeneities. Mathematically, such a problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained the expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U-⟨U⟩) are equal to zero and the even moments depend on the intensity. It was found that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard deviation of the scattered field for all considered cases of inhomogeneities. The value of the average intensity of the field is of the order of the standard deviation of the field intensity fluctuations, and drops as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities.
The value of the average intensity is an oscillating function of the average fluctuation length if the standard deviation of the fluctuations of the inhomogeneity length is greater than the wavelength. When the standard deviation of the fluctuations of the medium inhomogeneity extension is smaller than the wavelength, the average intensity depends only weakly on the average fluctuation extension. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, and internal gravity waves with a chaotic phase. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).
Online Reinforcement Learning Using a Probability Density Estimation.
Agostini, Alejandro; Celaya, Enric
2017-01-01
Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
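The component-wise forgetting idea can be sketched with a toy one-dimensional online Gaussian mixture. The class name, constants, and the responsibility-based modulation rule below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

# Toy online density estimator: a 1-D Gaussian mixture whose per-component
# forgetting is modulated by the responsibility (the "new information") each
# sample provides, so rarely visited components are barely forgotten.
class OnlineGMM:
    def __init__(self, means, var=1.0, base_forget=0.05):
        k = len(means)
        self.w = np.full(k, 1.0 / k)        # mixture weights
        self.mu = np.array(means, float)    # component means
        self.var = np.full(k, var)          # component variances
        self.base_forget = base_forget

    def _comp(self, x):
        return self.w * np.exp(-0.5 * (x - self.mu) ** 2 / self.var) \
               / np.sqrt(2 * np.pi * self.var)

    def pdf(self, x):
        return self._comp(x).sum()

    def update(self, x):
        comp = self._comp(x)
        r = comp / comp.sum()               # responsibilities for sample x
        # forgetting factor modulated per component by new information
        lam = self.base_forget * r
        self.w = (1 - self.base_forget) * self.w + self.base_forget * r
        self.mu += lam * (x - self.mu)
        self.var += lam * ((x - self.mu) ** 2 - self.var)
        self.var = np.maximum(self.var, 1e-3)  # keep variances positive

rng = np.random.default_rng(0)
model = OnlineGMM(means=[-2.0, 2.0])
for x in rng.normal(1.5, 0.5, size=2000):   # biased sampling near 1.5
    model.update(x)
```

After training, the component far from the sampled region keeps its parameters almost unchanged, while the nearby component tracks the data.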
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution of the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity, and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
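The RVT step can be illustrated on a toy monotone transformation; the exponential/transmissivity pairing below is an invented stand-in, not the paper's Milne-problem solution:

```python
import numpy as np

# RVT for a monotone map y = g(x): f_Y(y) = f_X(g^{-1}(y)) |d g^{-1}/dy|.
# Here X is exponential with rate 1 (a stand-in for a random optical depth)
# and Y = exp(-X) plays the role of a transmissivity.

def f_X(x):
    return np.exp(-x) * (x >= 0)

def f_Y(y):
    x = -np.log(y)          # g^{-1}(y)
    jac = 1.0 / y           # |d g^{-1}(y) / dy|
    return f_X(x) * jac

# RVT predicts Y is uniform on (0, 1): f_Y(y) = 1 identically.
ys = np.linspace(0.05, 0.95, 10)
print(f_Y(ys))              # all (numerically) ones

# Monte Carlo check against the analytic density
rng = np.random.default_rng(1)
samples = np.exp(-rng.exponential(1.0, size=100_000))
hist, _ = np.histogram(samples, bins=10, range=(0.0, 1.0), density=True)
```

The histogram of the simulated transmissivities matches the RVT-derived density to within sampling noise.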
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
A Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
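The redundancy-plus-stability idea can be sketched as follows: if two redundant channels observe the same signal, the signal cancels in their difference, and Gaussianity is stable under differencing, so a normality test on the difference probes the error model. The synthetic channels below are assumptions for illustration, not WECC data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 20, 5000))          # common true signal

# Two redundant channels per scenario: Gaussian vs. Laplace (heavy-tailed) errors
gauss_err = signal + rng.normal(0, 0.01, (2, 5000))
laplace_err = signal + rng.laplace(0, 0.01, (2, 5000))

# The signal cancels in the difference, leaving only measurement error
d_gauss = gauss_err[0] - gauss_err[1]
d_laplace = laplace_err[0] - laplace_err[1]

# D'Agostino-Pearson normality test on each difference series
_, p_gauss = stats.normaltest(d_gauss)
_, p_laplace = stats.normaltest(d_laplace)
```

A large p-value is consistent with Gaussian errors; the heavy-tailed channel pair is rejected decisively.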
The Self-Organization of a Spoken Word
Holden, John G.; Rajaraman, Srinivasan
2012-01-01
Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
Digital image analysis of a turbulent flame
NASA Astrophysics Data System (ADS)
Zucherman, L.; Kawall, J. G.; Keffer, J. F.
1988-01-01
Digital image analysis of cine pictures of an unconfined rich premixed turbulent flame has been used to determine structural characteristics of the turbulent/non-turbulent interface of the flame. The results, comprising various moments of the interface position, probability density functions and correlation functions, establish that the instantaneous flame-interface position is essentially a Gaussian random variable with a superimposed quasi-periodical component. The latter is ascribable to a pulsation caused by the convection and the stretching of ring vortices present within the flame. To a first approximation, the flame can be considered similar to a three-dimensional axisymmetric turbulent jet, with superimposed ring vortices, in which combustion occurs.
NASA Astrophysics Data System (ADS)
Yeung, Chuck
2018-06-01
The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗, t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗, t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗, t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions, but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.
fNL‑gNL mixing in the matter density field at higher orders
NASA Astrophysics Data System (ADS)
Gressel, Hedda A.; Bruni, Marco
2018-06-01
In this paper we examine how primordial non-Gaussianity contributes to nonlinear perturbative orders in the expansion of the density field at large scales in the matter dominated era. General Relativity is an intrinsically nonlinear theory, establishing a nonlinear relation between the metric and the density field. Representing the metric perturbations with the curvature perturbation ζ, it is known that nonlinearity produces effective non-Gaussian terms in the nonlinear perturbations of the matter density field δ, even if the primordial ζ is Gaussian. Here we generalise these results to the case of a non-Gaussian primordial ζ. Using a standard parametrization of primordial non-Gaussianity in ζ in terms of fNL, gNL, hNL, …, we show how at higher order (from third and higher) nonlinearity also produces a mixing of these contributions to the density field at large scales, e.g. both fNL and gNL contribute to the third order in δ. This is the main result of this paper. Our analysis is based on the synergy between a gradient expansion (aka long-wavelength approximation) and standard perturbation theory at higher order. In essence, mathematically the equations for the gradient expansion are equivalent to those of first order perturbation theory, thus first-order results convert into gradient expansion results and, vice versa, the gradient expansion can be used to derive results in perturbation theory at higher order and large scales.
The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise
NASA Astrophysics Data System (ADS)
Guo, Qin; Sun, Zhongkui; Xu, Wei
2016-05-01
The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian-colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumor and an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of tumor cell population. The mean first passage time is calculated to quantify the effects of noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from the Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in cross-correlated degree will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.
Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts
NASA Astrophysics Data System (ADS)
Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid
2016-08-01
This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members, and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. CRPS is a scoring rule for distributional forecasts. In Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the weather research and forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores, including the Brier score, attributes diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.
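The objective being minimized has a closed form for a Gaussian predictive PDF (Gneiting et al. 2005); a minimal sketch of that formula, which either BFGS or PSO would then optimize over the NGR coefficients:

```python
import numpy as np
from scipy.stats import norm

# Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y:
#   CRPS = sigma * [ z (2 Phi(z) - 1) + 2 phi(z) - 1/sqrt(pi) ],  z = (y - mu)/sigma
def crps_gaussian(mu, sigma, y):
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

# Sanity checks: CRPS is smallest when the forecast is centered on the
# observation and grows as the forecast mean drifts away.
print(crps_gaussian(0.0, 1.0, 0.0))   # ~0.2337 = (sqrt(2) - 1)/sqrt(pi)
print(crps_gaussian(2.0, 1.0, 0.0))   # larger
```

In NGR, mu and sigma are parametrized by the ensemble mean and variance, and the mean CRPS over a training set is minimized with respect to those coefficients.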
Optimization of spectroscopic surveys for testing non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Dalal, Neal
We investigate optimization strategies to measure primordial non-Gaussianity with future spectroscopic surveys. We forecast measurements coming from the 3D galaxy power spectrum and compute constraints on the primordial non-Gaussianity parameters fNL and nNG. After studying the dependence of those parameters upon survey specifications such as redshift range, area, and number density, we assume a reference mock survey and investigate the trade-off between number density and area surveyed. We then define the observational requirements to reach a detection of fNL of order 1. Our results show that power spectrum constraints on non-Gaussianity from future spectroscopic surveys can improve on current CMB limits, but the multi-tracer technique and higher order correlations will be needed in order to reach an even better precision in the measurements of the non-Gaussianity parameter fNL.
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Semisupervised Gaussian Process for Automated Enzyme Search.
Mellor, Joseph; Grigoras, Ioana; Carbonell, Pablo; Faulon, Jean-Loup
2016-06-17
Synthetic biology is today harnessing the design of novel and greener biosynthesis routes for the production of added-value chemicals and natural products. The design of novel pathways often requires a detailed selection of enzyme sequences to import into the chassis at each of the reaction steps. To address such design requirements in an automated way, we present here a tool for exploring the space of enzymatic reactions. Given a reaction and an enzyme, the tool provides a probability estimate that the enzyme catalyzes the reaction. Our tool first considers the similarity of a reaction to known biochemical reactions with respect to signatures around their reaction centers. Signatures are defined based on chemical transformation rules by using extended connectivity fingerprint descriptors. A semisupervised Gaussian process model associated with the similar known reactions then provides the probability estimate. The Gaussian process model uses information about both the reaction and the enzyme in providing the estimate. These estimates were validated experimentally by applying the Gaussian process model to a newly identified metabolite in Escherichia coli in order to search for the enzymes catalyzing its associated reactions. Furthermore, we show with several pathway design examples how such an ability to assign probability estimates to enzymatic reactions provides the potential to assist in bioengineering applications, providing experimental validation to our proposed approach. To the best of our knowledge, the proposed approach is the first application of Gaussian processes dealing with biological sequences and chemicals; the use of a semisupervised Gaussian process framework is also novel in the context of machine learning applied to bioinformatics. However, the ability of an enzyme to catalyze a reaction depends on the affinity between the substrates of the reaction and the enzyme. This affinity is generally quantified by the Michaelis constant KM.
Therefore, we also demonstrate using Gaussian process regression to predict KM given a substrate-enzyme pair.
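The final regression step can be conveyed with a minimal plain-numpy Gaussian process sketch using an RBF kernel. The descriptors and log-KM targets here are synthetic placeholders, not the paper's substrate-enzyme features or its semisupervised model:

```python
import numpy as np

# Minimal GP regression: posterior mean and variance with an RBF kernel.
def rbf(A, B, length=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X, y, Xs, noise=1e-2, length=1.0):
    K = rbf(X, X, length) + noise * np.eye(len(X))   # training covariance
    Ks = rbf(Xs, X, length)                          # test-train covariance
    mean = Ks @ np.linalg.solve(K, y)                # posterior mean
    var = rbf(Xs, Xs, length).diagonal() - np.einsum(
        'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))    # posterior variance
    return mean, var

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (40, 1))                # synthetic 1-D descriptors
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)   # synthetic "log KM" targets
Xs = np.array([[0.0], [1.0]])
mean, var = gp_predict(X, y, Xs)
```

The posterior variance also gives the calibrated uncertainty that makes GP models attractive for ranking candidate enzymes.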
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
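The "puzzle" itself can be reproduced in a few lines of generalized least squares, using the textbook PPP numbers: two measurements of the same quantity with 10% independent errors and a 20% fully correlated normalization error. These values are the standard example, not necessarily those used in the paper:

```python
import numpy as np

# Textbook PPP setup: m = (1.5, 1.0), each with 10% independent error plus
# a fully correlated 20% normalization error.
m = np.array([1.5, 1.0])
stat = 0.10 * m                       # independent (statistical) errors
common = 0.20 * m                     # fully correlated normalization error
V = np.diag(stat ** 2) + np.outer(common, common)

# Generalized least squares for a single common value x:
#   x = (1' V^-1 m) / (1' V^-1 1)
ones = np.ones(2)
Vinv = np.linalg.inv(V)
x = ones @ Vinv @ m / (ones @ Vinv @ ones)
print(round(x, 2))  # 0.88 -- below BOTH measurements: the "puzzle"
```

The abstract's point is that this 0.88 is only "wrong" under a multiplicative-error interpretation of the common uncertainty; under an additive, value-proportional interpretation it is the legitimate most probable value.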
Stochastic analysis of a pulse-type prey-predator model
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Anomalous transport in fluid field with random waiting time depending on the preceding jump length
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Guo-Hua
2016-11-01
Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of the non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for the Gaussian distribution and the Lévy flight, with the probability density function of the waiting time being quadratically dependent on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
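A direct stochastic simulation of such a coupled walk, with the mean waiting time quadratic in the preceding jump length, might look like the sketch below (the Gaussian jump distribution, exponential waiting times, and tau0 scale are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(4)

def ctrw(n_steps, tau0=1.0):
    """One coupled CTRW trajectory: waiting time depends on the last jump."""
    x, t = 0.0, 0.0
    for _ in range(n_steps):
        jump = rng.normal(0.0, 1.0)              # Gaussian jump length
        x += jump
        # mean waiting time quadratic in the preceding jump length
        t += rng.exponential(tau0 * jump ** 2)
    return x, t

walks = [ctrw(500) for _ in range(200)]
xs = np.array([w[0] for w in walks])             # final positions
ts = np.array([w[1] for w in walks])             # elapsed times
```

With E[jump^2] = 1, the mean elapsed time after n steps is about n * tau0, while the position remains an unbiased random walk; larger excursions are penalized with longer rests, which is the energy-dependent coupling the model describes.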
Examples of measurement uncertainty evaluations in accordance with the revised GUM
NASA Astrophysics Data System (ADS)
Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.
2016-11-01
The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was conducted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case when the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
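A minimal sketch of the MCM propagation of distributions with a dominant non-Gaussian (rectangular) input, showing how the asymmetric output distribution forces an empirical-quantile coverage interval rather than y ± k·u(y). The measurement model and numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000

# Inputs: one Gaussian, one dominant rectangular (non-Gaussian) quantity
x1 = rng.normal(10.0, 0.1, N)
x2 = rng.uniform(0.9, 1.3, N)

# Nonlinear measurement model (illustrative): output is right-skewed
y = x1 * x2 ** 2

y_est, u_y = y.mean(), y.std(ddof=1)        # estimate, standard uncertainty
lo, hi = np.quantile(y, [0.025, 0.975])     # empirical 95% coverage interval

# Asymmetry: the distances from the estimate to the two ends differ,
# so a symmetric y_est +/- k*u_y interval would misstate the coverage.
print(y_est - lo, hi - y_est)
```

This is the GUM Supplement 1 style of evaluation: the whole distribution of y is propagated, and the coverage interval is read off its quantiles.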
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
The structure and statistics of interstellar turbulence
NASA Astrophysics Data System (ADS)
Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.
2017-06-01
We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher-order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey, varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable H I, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ray, Laura R.
1991-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluations of the system's eigenvalues allow the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
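The procedure can be sketched for a hypothetical second-order system with uncertain-but-bounded (uniform, hence non-Gaussian) parameters; the plant, bounds, and confidence-interval form below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
N_TRIALS = 20_000

def unstable_fraction(n_trials=N_TRIALS):
    count = 0
    for _ in range(n_trials):
        # uncertain-but-bounded stiffness and damping (uniform distributions)
        k = rng.uniform(0.5, 1.5)
        c = rng.uniform(-0.05, 0.35)     # damping can go slightly negative
        A = np.array([[0.0, 1.0],
                      [-k, -c]])         # closed-loop state matrix
        # Monte Carlo eigenvalue evaluation: unstable if any Re(lambda) >= 0
        if np.real(np.linalg.eigvals(A)).max() >= 0.0:
            count += 1
    return count / n_trials

p_instab = unstable_fraction()
# binomial (normal-approximation) confidence half-width conveys MC accuracy
half = 1.96 * np.sqrt(p_instab * (1 - p_instab) / N_TRIALS)
```

For this system the characteristic polynomial is s^2 + c s + k, so instability occurs exactly when the sampled damping is non-positive; the Monte Carlo estimate should land near that analytic probability (0.05/0.40 = 0.125) within its confidence interval.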
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schramm, D.N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schramm, D.N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the {Omega} = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Descrimination between cold'' and hot'' non-baryonic candidates is shown to depend on the assumed seeds'' that stimulate structure formation. Gaussian density fluctuations,more » such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.« less
NASA Astrophysics Data System (ADS)
Schramm, David N.
1992-07-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
NASA Astrophysics Data System (ADS)
Schramm, D. N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the omega = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between 'cold' and 'hot' non-baryonic candidates is shown to depend on the assumed 'seeds' that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages, and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
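The comparison the abstract describes, eigenvalues of a covariance matrix built from an embedded signal versus those of a Gaussian random process of the same dimension and length, can be sketched as follows. The embedding delay, surrogate count, and noisy sine-wave test signal are illustrative choices, not the authors' settings:

```python
import numpy as np

def embed(signal, dim, delay=1):
    """Time-delay embedding: rows are [x(t), x(t+delay), ..., x(t+(dim-1)*delay)]."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])

def eigenvalue_comparison(signal, dim, n_surrogates=200, seed=0):
    """Compare covariance eigenvalues of the embedded signal with those of
    Gaussian white noise of the same shape (a simple surrogate test)."""
    rng = np.random.default_rng(seed)
    X = embed(signal, dim)
    ev = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]   # descending
    surr = np.empty((n_surrogates, dim))
    for k in range(n_surrogates):
        G = rng.standard_normal(X.shape)
        surr[k] = np.sort(np.linalg.eigvalsh(np.cov(G.T)))[::-1]
    # Fraction of Gaussian surrogates whose smallest eigenvalue exceeds the
    # signal's: large values flag the trailing directions as noise-like.
    p = np.mean(surr[:, -1] > ev[-1])
    return ev, p

# A lightly noised sine wave: nearly all variance lives in one slow direction
# of a 5-dimensional embedding with unit delay.
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
ev, p = eigenvalue_comparison(x, dim=5)
```

The surrogate fraction `p` is a crude stand-in for the statistical test in the paper; the paper's actual validity estimate is derived differently.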
Plasma oscillations in spherical Gaussian shaped ultracold neutral plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tianxing; Lu, Ronghua, E-mail: lurh@siom.ac.cn; Guo, Li
2016-04-15
Collective plasma oscillations are investigated in an ultracold neutral plasma with a non-uniform density profile. Instead of the widely used planar configuration, we derive the plasma oscillation equations for a spherically symmetric distribution with a Gaussian density profile. Damping of the radial oscillation is found. The Tonks-Dattner resonances of the ultracold neutral plasma under an applied RF field are also calculated.
EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai
To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signal, i.e., Gaussian or Laplacian. The PDF of EMG signals associated with upper-limb motions is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
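The two estimators and the quoted SNR definition (mean of the windowed feature divided by its fluctuation across windows) are easy to reproduce. The Laplacian surrogate below is an illustrative stand-in for real EMG, not the paper's data, and the window length is an assumption; for Laplacian-distributed samples MAV is expected to come out with the higher SNR, consistent with the abstract's recommendation:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mav(x):
    return np.mean(np.abs(x))

def feature_snr(signal, window=256, feature=rms):
    """SNR of an amplitude estimator: mean of the windowed feature divided
    by its standard deviation across windows."""
    n = len(signal) // window
    vals = np.array([feature(signal[i * window:(i + 1) * window])
                     for i in range(n)])
    return vals.mean() / vals.std(ddof=1)

rng = np.random.default_rng(0)
laplace_emg = rng.laplace(scale=1.0, size=256 * 1000)  # Laplacian surrogate EMG
snr_mav = feature_snr(laplace_emg, feature=mav)
snr_rms = feature_snr(laplace_emg, feature=rms)
```

For i.i.d. samples both SNRs scale like the square root of the window length, so the window choice sets the overall level but not the MAV-versus-RMS ordering.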
Scanziani, Alessio; Singh, Kamaljit; Blunt, Martin J; Guadagnini, Alberto
2017-06-15
Multiphase flow in porous media is strongly influenced by the wettability of the system, which affects the arrangement of the interfaces of different phases residing in the pores. We present a method for estimating the effective contact angle, which quantifies the wettability and controls the local capillary pressure within the complex pore space of natural rock samples, based on the physical constraint of constant curvature of the interface between two fluids. This algorithm is able to extract a large number of measurements from a single rock core, resulting in a characteristic distribution of effective in situ contact angle for the system, that is modelled as a truncated Gaussian probability density distribution. The method is first validated on synthetic images, where the exact angle is known analytically; then the results obtained from measurements within the pore space of rock samples imaged at a resolution of a few microns are compared to direct manual assessment. Finally the method is applied to X-ray micro computed tomography (micro-CT) scans of two Ketton cores after waterflooding, that display water-wet and mixed-wet behaviour. The resulting distribution of in situ contact angles is characterized in terms of a mixture of truncated Gaussian densities. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than that of both reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
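The independent-source comparison mentioned at the end of the abstract can be illustrated with a toy Monte Carlo: N point sources drawn from an isotropic Gaussian density have no intrinsic eccentricity, so the ellipticity ε₂ = |⟨r² e^{2iφ}⟩| / ⟨r²⟩ is pure fluctuation and its RMS scales roughly like 1/√N. The source counts and event statistics below are illustrative, not taken from the paper:

```python
import numpy as np

def rms_eccentricity(n_sources, n_events=4000, seed=3):
    """RMS of the fluctuation-driven ellipticity epsilon_2 for n_sources
    independent points drawn from an isotropic 2D Gaussian density."""
    rng = np.random.default_rng(seed)
    eps2 = np.empty(n_events)
    for ev in range(n_events):
        # Positions encoded as complex numbers z = x + i y.
        z = rng.standard_normal(n_sources) + 1j * rng.standard_normal(n_sources)
        z -= z.mean()                                  # recentre each event
        eps2[ev] = abs(np.mean(z ** 2)) / np.mean(np.abs(z) ** 2)
    return np.sqrt(np.mean(eps2 ** 2))

small_n, large_n = rms_eccentricity(25), rms_eccentricity(400)
```

Increasing the source count by a factor of 16 should shrink the RMS ellipticity by roughly a factor of 4, the familiar small-relative-amplitude regime assumed in the paper's Gaussian description.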
Divergence of perturbation theory in large scale structures
NASA Astrophysics Data System (ADS)
Pajer, Enrico; van der Woude, Drian
2018-05-01
We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.
Probabilistic density function method for nonlinear dynamical systems driven by colored noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
2016-05-01
We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integro-differential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified Large-Eddy-Diffusivity-type closure. Additionally, we introduce the generalized local linearization (LL) approximation for deriving a computable PDF equation in the form of a second-order partial differential equation (PDE). We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary auto-correlation time. We apply the proposed PDF method to the analysis of a set of Kramers equations driven by exponentially auto-correlated Gaussian colored noise to study the dynamics and stability of a power grid.
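For a linear system the stationary PDF under exponentially correlated Gaussian noise is known exactly, which makes a convenient Monte Carlo cross-check for any closure of the kind described above. The sketch below simulates dx/dt = -x + ξ(t) with Ornstein-Uhlenbeck noise ξ of intensity D and correlation time τ, for which the stationary density is Gaussian with variance D/(1+τ); the parameter values are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)
D, tau = 1.0, 0.5            # noise intensity and auto-correlation time
dt, n_steps = 0.01, 400_000  # Euler-Maruyama step and trajectory length
x, xi = 0.0, 0.0
samples = np.empty(n_steps)
dW = rng.standard_normal(n_steps) * np.sqrt(dt)
for i in range(n_steps):
    # OU colored noise: d(xi) = -(xi/tau) dt + (sqrt(2 D)/tau) dW
    xi += -xi / tau * dt + np.sqrt(2 * D) / tau * dW[i]
    x += -x * dt + xi * dt   # linear system driven by the colored noise
    samples[i] = x

burn = n_steps // 10                  # discard the transient
var_mc = samples[burn:].var()
var_theory = D / (1 + tau)            # exact stationary variance
```

In the white-noise limit τ → 0 the variance reduces to D, so the factor 1/(1+τ) is precisely the colored-noise correction that a PDF-equation closure must capture.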
Xiao, Yanwen; Xu, Wei; Wang, Liang
2016-03-01
This paper studies the stochastic Van der Pol vibro-impact system with fractional derivative damping under Gaussian white noise excitation. The equations of the original system are simplified by a non-smooth transformation, and the stochastic averaging approach is applied to the simplified equation. The fractional derivative damping term is then evaluated by a numerical scheme, and the fourth-order Runge-Kutta method is used to obtain numerical results, which fit the analytical solutions and confirm that the proposed analytical treatment is feasible. In this context, the effects of noise excitation, the restitution condition, and fractional derivative damping on the stationary probability density functions (PDFs) of the response are considered; the stochastic P-bifurcation is also explored by varying the coefficient of fractional derivative damping and the restitution coefficient. These system parameters not only influence the response PDFs of the system but can also cause stochastic P-bifurcation.
Yi, Qu; Zhan-ming, Li; Er-chao, Li
2012-11-01
A new fault detection and diagnosis (FDD) problem based on the output probability density functions (PDFs) of non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Unlike conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented, with a tuning parameter introduced so that the residual is as sensitive as possible to the fault. Stability and convergence analyses are performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example demonstrates the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Yanwen; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Wang, Liang
2016-03-15
This paper studies the stochastic Van der Pol vibro-impact system with fractional derivative damping under Gaussian white noise excitation. The equations of the original system are simplified by a non-smooth transformation, and the stochastic averaging approach is applied to the simplified equation. The fractional derivative damping term is then evaluated by a numerical scheme, and the fourth-order Runge-Kutta method is used to obtain numerical results, which fit the analytical solutions and confirm that the proposed analytical treatment is feasible. In this context, the effects of noise excitation, the restitution condition, and fractional derivative damping on the stationary probability density functions (PDFs) of the response are considered; the stochastic P-bifurcation is also explored by varying the coefficient of fractional derivative damping and the restitution coefficient. These system parameters not only influence the response PDFs of the system but can also cause stochastic P-bifurcation.
Balsillie, J.H.; Donoghue, J.F.; Butler, K.M.; Koch, J.L.
2002-01-01
Two-dimensional plotting tools can be of invaluable assistance in analytical scientific pursuits, and have been widely used in the analysis and interpretation of sedimentologic data. We consider, in this work, the use of arithmetic probability paper (APP). Most statistical computer applications do not allow for the generation of APP plots, because of apparent intractable nonlinearity of the percentile (or probability) axis of the plot. We have solved this problem by identifying an equation(s) for determining plotting positions of Gaussian percentiles (or probabilities), so that APP plots can easily be computer generated. An EXCEL example is presented, and a programmed, simple-to-use EXCEL application template is hereby made publicly available, whereby a complete granulometric analysis including data listing, moment measure calculations, and frequency and cumulative APP plots, is automatically produced.
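The "equation for determining plotting positions" is the standard normal quantile function: placing cumulative probability p at the linear coordinate z = Φ⁻¹(p) makes the percentile axis of arithmetic probability paper linear, so a Gaussian sample plots as a straight line. A minimal sketch using only the Python standard library (the grain-size values are illustrative, not from the paper's data):

```python
from statistics import NormalDist

def app_positions(probabilities):
    """Plotting positions on arithmetic probability paper: the linear-axis
    coordinate of cumulative probability p is the standard normal quantile
    z = Phi^{-1}(p)."""
    return [NormalDist().inv_cdf(p) for p in probabilities]

# Cumulative percentiles of a perfectly Gaussian grain-size distribution
# (mean 2 phi, std 0.5 phi) fall on a straight line in (z, size) coordinates.
pcts = [0.05, 0.16, 0.50, 0.84, 0.95]
z = app_positions(pcts)
sizes = [2.0 + 0.5 * zi for zi in z]
```

In a spreadsheet the same positions come from the inverse-normal function (NORM.S.INV in Excel), which is presumably how the template described above linearizes the probability axis.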
Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph
In quantum gravity theories, when the scattering energy is comparable to the Planck energy, the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation for one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally periodic rectangular potential barrier system is obtained. Results show that a non-zero minimal length uncertainty tends to shift the resonant tunneling energies in the positive direction. Scattering through a locally periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens in the minimal length cases when the double-rectangular potential barrier is symmetric but narrows when it is asymmetric. A numerical solution that exploits Wronskians is used to calculate the transmission probabilities through the Pöschl-Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl-Teller well and the Gaussian barrier is smaller in the minimal length cases than in the non-minimal length case. For the double-Gaussian barrier, the probability of passage at energies above the resonant tunneling energy is larger in the minimal length cases. The approach is exact and applicable to many types of scattering potential.
Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference.
Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing
2016-01-01
In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications.
Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference
Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing
2016-01-01
In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications. PMID:27851817
Gaussification and entanglement distillation of continuous-variable systems: a unifying picture.
Campbell, Earl T; Eisert, Jens
2012-01-13
Distillation of entanglement using only Gaussian operations is an important primitive in quantum communication, quantum repeater architectures, and distributed quantum computing. Existing distillation protocols for continuous degrees of freedom are only known to converge to a Gaussian state when measurements yield precisely the vacuum outcome. In sharp contrast, non-Gaussian states can be deterministically converted into Gaussian states while preserving their second moments, albeit by usually reducing their degree of entanglement. In this work, based on a novel instance of a noncommutative central limit theorem, we introduce a picture general enough to encompass the known protocols leading to Gaussian states, and new classes of protocols including multipartite distillation. This gives the experimental option of balancing the merits of success probability against entanglement produced.
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
Statistics of multi-look AIRSAR imagery: A comparison of theory with measurements
NASA Technical Reports Server (NTRS)
Lee, J. S.; Hoppel, K. W.; Mango, S. A.
1993-01-01
The intensity and amplitude statistics of SAR images, such as L-band HH for SEASAT and SIR-B, and C-band VV for ERS-1, have been extensively investigated for various terrain, ground cover and ocean surfaces. Less well known are the statistics between multiple channels of polarimetric or interferometric SARs, especially for multi-look processed data. In this paper, we investigate the probability density functions (PDFs) of phase differences, the magnitudes of complex products, and the amplitude ratios between polarization channels (i.e., HH, HV, and VV) using 1-look and 4-look AIRSAR polarimetric data. Measured histograms are compared with theoretical PDFs recently derived from a complex Gaussian model.
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
NASA Astrophysics Data System (ADS)
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimates at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, thereby increasing the previously calculated capacity of a nondispersive nonlinear optical fiber channel in the intermediate power region. For the small dispersion case we also find analytical expressions for simple correlators of the output signals in our noisy channel.
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated from the Selden function, including the relativistic polarization correction. Noise in the signal is modeled as photon noise plus Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum, and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
The effect of halo nuclear density on reaction cross-section for light ion collision
NASA Astrophysics Data System (ADS)
Hassan, M. A. M.; Nour El-Din, M. S. M.; Ellithi, A.; Ismail, E.; Hosny, H.
2015-08-01
In the framework of the optical limit approximation (OLA), the reaction cross-section for halo nucleus-stable nucleus collisions at intermediate energies has been studied. The projectile nuclei are taken to be one-neutron halo (1NHP) and two-neutron halo (2NHP) nuclei. The calculations are carried out for Gaussian-Gaussian (GG), Gaussian-Oscillator (GO), and Gaussian-2S (G2S) densities for each considered projectile. Stable nuclei with mass numbers in the range 4-28 are used as targets. An analytic expression for the phase-shift function has been derived. The zero-range approximation is adopted in the calculations, and the in-medium effect is also studied. The obtained results are analyzed and compared with the geometrical reaction cross-section and the available experimental data.
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To better understand the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion for peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case for standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Čársky, Petr; Čurík, Roman; Varga, Štefan
2012-03-21
The objective of this paper is to show that density fitting (the resolution-of-the-identity approximation) can also be applied to Coulomb integrals of the type (k_1(1) k_2(1) | g_1(2) g_2(2)), where the k and g symbols refer to plane-wave functions and Gaussians, respectively. We have shown how to achieve the accuracy of these integrals needed in wave-function MO and density-functional-theory-type calculations using mixed Gaussian and plane-wave basis sets. The crucial issues for achieving such high accuracy are the application of constraints conserving the number of electrons and the components of the dipole moment, optimization of the auxiliary basis set, and elimination of round-off errors in the matrix inversion. © 2012 American Institute of Physics
The Nosé–Hoover looped chain thermostat for low temperature thawed Gaussian wave-packet dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coughtrie, David J.; Tew, David P.
2014-05-21
We have used a generalised coherent state resolution of the identity to map the quantum canonical statistical average for a general system onto a phase-space average over the centre and width parameters of a thawed Gaussian wave packet. We also propose an artificial phase-space density that has the same behaviour as the canonical phase-space density in the low-temperature limit, and have constructed a novel Nosé-Hoover looped chain thermostat that generates this density in conjunction with variational thawed Gaussian wave-packet dynamics. This forms a new platform for evaluating statistical properties of quantum condensed-phase systems that has an explicit connection to the time-dependent Schrödinger equation, whilst retaining many of the appealing features of path-integral molecular dynamics.
Gaussian model for emission rate measurement of heated plumes using hyperspectral data
NASA Astrophysics Data System (ADS)
Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.
2018-02-01
This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
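The column-density step has a closed form when the profile is Gaussian: a volume-fraction profile n(s) = n₀ exp(-(s - s₀)²/(2σ²)) along the line of sight integrates to n₀ σ √(2π). A minimal numerical check, with illustrative parameter values rather than the paper's:

```python
import numpy as np

# Column density of a presumed Gaussian molecular-density profile along a
# line of sight: integral of n0 * exp(-(s - s0)^2 / (2 sigma^2)) ds equals
# n0 * sigma * sqrt(2 pi). (n0, sigma, s0 are illustrative values.)
n0, sigma, s0 = 3.0e22, 0.4, 0.0      # peak density [m^-3], width and centre [m]
s = np.linspace(-5.0, 5.0, 20001)     # path samples spanning many widths
ds = s[1] - s[0]
numeric = np.sum(n0 * np.exp(-(s - s0) ** 2 / (2 * sigma ** 2))) * ds
closed_form = n0 * sigma * np.sqrt(2 * np.pi)
```

Integrating the product of this column density and the velocimetry-derived normal velocity along a control surface then gives the emission-rate estimate described above.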
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at kmax = 0.5h Mpc-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the `amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
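The core comparison, Fisher information under a diagonal (Gaussian) power spectrum covariance versus one with band-to-band correlations, can be illustrated with a toy one-parameter amplitude model. All numbers below (bins, mode counts, correlation coefficient) are assumptions for illustration, not the survey configurations of the paper:

```python
import numpy as np

# Toy band powers P(k; A) = A * k^{-1.5} with a single amplitude parameter A.
k = np.linspace(0.1, 0.5, 20)
A = 1.0
P = A * k ** -1.5
dP_dA = k ** -1.5                      # derivative of the model w.r.t. the amplitude

# Gaussian covariance: diagonal, variance ~ 2 P^2 / N_modes (toy mode counts).
n_modes = 1000.0 * k ** 2
cov_gauss = np.diag(2.0 * P ** 2 / n_modes)

# Non-Gaussian covariance: same variances plus positive band-to-band
# correlations (r = 0.5, an assumed value).
sig = np.sqrt(np.diag(cov_gauss))
cov_ng = 0.5 * np.outer(sig, sig)
np.fill_diagonal(cov_ng, sig ** 2)

# Fisher information F = dP' C^{-1} dP for each covariance.
F_gauss = dP_dA @ np.linalg.solve(cov_gauss, dP_dA)
F_ng = dP_dA @ np.linalg.solve(cov_ng, dP_dA)
```

In this toy setup the correlated covariance carries markedly less information on the amplitude than the diagonal one, which is the qualitative effect the abstract quantifies.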
Rapidity window dependences of higher order cumulants and diffusion master equation
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo
2015-10-01
We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We show that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that a variety of information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, for which the following approaches are adopted: establishing a lookup table for the Gaussian probability matrix to avoid repetitive probability calculations on all pixels, employing a blocking detection method on each block of pixels to further decrease the complexity, and changing the structure of the lookup table from 3D to 1D with a simpler data type to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method, which decides the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
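The lookup-table idea is straightforward to sketch: precompute the Gaussian probability for every possible 8-bit intensity once, so per-pixel evaluation becomes an array index rather than an exponential call. The component parameters below are illustrative assumptions:

```python
import numpy as np

def build_lookup(mu, var, levels=256):
    """1D lookup table of Gaussian probabilities for all 8-bit intensities."""
    x = np.arange(levels, dtype=float)
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# One table per mixture component (assumed parameters); evaluating a pixel's
# probability is then a pure array lookup instead of an exp() call.
table = build_lookup(mu=128.0, var=400.0)

image = np.array([[120, 130], [200, 90]], dtype=np.uint8)
probs = table[image]                     # vectorized lookup for every pixel
```

Because the table has only 256 entries per component, it is tiny compared with evaluating the density at every pixel of a large image.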
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
The statistics of primordial density fluctuations
NASA Astrophysics Data System (ADS)
Barrow, John D.; Coles, Peter
1990-05-01
The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.
Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis
2013-06-01
We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f{sub NL} values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters, accepting a set of weighted 3D points that correspond to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which treats the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
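For reference, the standard point-input EM baseline that the Gaussian-input algorithm extends can be sketched in a few lines. This is a minimal 1D version with extreme-value initialization, not the authors' implementation:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1D Gaussian mixture (the standard point-input version)."""
    mu = np.linspace(x.min(), x.max(), k)   # spread initial means over the data range
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

The 1e-9 floor on the variance is a crude guard against the singularity problem the abstract mentions; the Gaussian-input algorithm removes that problem in a principled way.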
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
Chaudret, Robin; Gresh, Nohad; Narth, Christophe; Lagardère, Louis; Darden, Thomas A; Cisneros, G Andrés; Piquemal, Jean-Philip
2014-09-04
We demonstrate as a proof of principle the capabilities of a novel hybrid MM'/MM polarizable force field to integrate short-range quantum effects in molecular mechanics (MM) through the use of Gaussian electrostatics. This leads to a further gain in accuracy in the representation of the first coordination shell of metal ions. It uses advanced electrostatics and couples two point dipole polarizable force fields, namely, the Gaussian electrostatic model (GEM), a model based on density fitting, which uses fitted electronic densities to evaluate nonbonded interactions, and SIBFA (sum of interactions between fragments ab initio computed), which resorts to distributed multipoles. To understand the benefits of the use of Gaussian electrostatics, we first evaluate the accuracy of GEM, which is a pure density-based Gaussian electrostatics model, on a test Ca(II)-H2O complex. GEM is shown to further improve the agreement of MM polarization with ab initio reference results. Indeed, GEM introduces nonclassical effects by modeling the short-range quantum behavior of electric fields and therefore enables a straightforward (and selective) inclusion of the sole overlap-dependent exchange-polarization repulsive contribution by means of a Gaussian damping function acting on the GEM fields. The S/G-1 scheme is then introduced. Upon limiting the use of Gaussian electrostatics to metal centers only, it is shown to be able to capture the dominant quantum effects at play on the metal coordination sphere. S/G-1 is able to accurately reproduce ab initio total interaction energies within closed-shell metal complexes, including the separate contributions of induction, polarization, and charge transfer. Applications of the method are provided for various systems including the HIV-1 NCp7-Zn(II) metalloprotein. S/G-1 is then extended to heavy metal complexes.
Tested on Hg(II) water complexes, S/G-1 is shown to accurately model polarization up to the quadrupolar response level. This opens up the possibility of embodying explicit scalar relativistic effects in molecular mechanics thanks to the direct transferability of ab initio pseudopotentials. Therefore, incorporating a GEM-like electron density for a metal cation enables the introduction of unambiguous short-range quantum effects within any point-dipole-based polarizable force field without the need for extensive parametrization.
Recovering dark-matter clustering from galaxies with Gaussianization
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Neyrinck, Mark; Norberg, Peder; Cole, Shaun
2016-04-01
The Gaussianization transform has been proposed as a method to remove the issues of scale-dependent galaxy bias and non-linearity from galaxy clustering statistics, but these benefits have yet to be thoroughly tested for realistic galaxy samples. In this paper, we test the effectiveness of the Gaussianization transform for different galaxy types by applying it to realistic simulated blue and red galaxy samples. We show that in real space, the shapes of the Gaussianized power spectra of both red and blue galaxies agree with that of the underlying dark matter, with the initial power spectrum, and with each other to smaller scales than do the statistics of the usual (untransformed) density field. However, we find that the agreement in the Gaussianized statistics breaks down in redshift space. We attribute this to the fact that red and blue galaxies exhibit very different fingers of god in redshift space. After applying a finger-of-god compression, the agreement on small scales between the Gaussianized power spectra is restored. We also compare the Gaussianization transform to the clipped galaxy density field and find that while both methods are effective in real space, they have more complicated behaviour in redshift space. Overall, we find that Gaussianization can be useful in recovering the shape of the underlying dark-matter power spectrum to k ˜ 0.5 h Mpc-1 and of the initial power spectrum to k ˜ 0.4 h Mpc-1 in certain cases at z = 0.
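A common way to realize the Gaussianization transform is rank-order mapping: replace each field value by the standard normal quantile of its rank. This is a minimal sketch; the exact transform used in such analyses may differ in detail:

```python
import numpy as np
from scipy.stats import norm

def gaussianize(field):
    """Rank-order Gaussianization: map each value to the standard normal
    quantile of its rank, preserving the ordering of the field values."""
    flat = field.ravel()
    ranks = flat.argsort().argsort()          # 0 .. N-1
    u = (ranks + 0.5) / flat.size             # uniform scores in (0, 1)
    return norm.ppf(u).reshape(field.shape)

# A skewed, positive-tailed stand-in for a galaxy density field.
rng = np.random.default_rng(0)
delta = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64)) - 1.0
g = gaussianize(delta)
```

Because the mapping is monotonic, it removes the skewed one-point distribution while leaving the spatial ordering (and hence much of the clustering information) intact.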
Geometrical Description of fractional quantum Hall quasiparticles
NASA Astrophysics Data System (ADS)
Park, Yeje; Yang, Bo; Haldane, F. D. M.
2012-02-01
We examine a description of fractional quantum Hall quasiparticles and quasiholes suggested by a recent geometrical approach (F. D. M. Haldane, Phys. Rev. Lett. 108, 116801 (2011)) to FQH systems, where the local excess electric charge density in the incompressible state is given by a topologically-quantized ``guiding-center spin'' times the Gaussian curvature of a ``guiding-center metric tensor'' that characterizes the local shape of the correlation hole around electrons in the fluid. We use a phenomenological energy function with two ingredients: the shear distortion energy of area-preserving distortions of the fluid, and a local (short-range) approximation to the Coulomb energy of the fluctuation of charge density associated with the Gaussian curvature. Quasiparticles and quasiholes of the 1/3 Laughlin state are modeled as ``punctures'' in the incompressible fluid which then relax by geometric distortion which generates Gaussian curvature, giving rise to the charge-density profile around the topological excitation.
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering
NASA Astrophysics Data System (ADS)
Milder, A. L.; Froula, D. H.
2017-10-01
A.B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Z v_osc^2/v_th^2 ≳ 1, this heating drives the electron distribution toward a super-Gaussian shape.
Quasineutral plasma expansion into infinite vacuum as a model for parallel ELM transport
NASA Astrophysics Data System (ADS)
Moulton, D.; Ghendrih, Ph; Fundamenski, W.; Manfredi, G.; Tskhakaya, D.
2013-08-01
An analytic solution for the expansion of a plasma into vacuum is assessed for its relevance to the parallel transport of edge localized mode (ELM) filaments along field lines. This solution solves the 1D1V Vlasov-Poisson equations for the adiabatic (instantaneous source), collisionless expansion of a Gaussian plasma bunch into an infinite space in the quasineutral limit. The quasineutral assumption is found to hold as long as λD0/σ0 ≲ 0.01 (where λD0 is the initial Debye length at peak density and σ0 is the parallel length of the Gaussian filament), a condition that is physically realistic. The inclusion of a boundary at x = L and consequent formation of a target sheath is found to have a negligible effect when L/σ0 ≳ 5, a condition that is physically plausible. Under the same condition, the target flux densities predicted by the analytic solution are well approximated by the ‘free-streaming’ equations used in previous experimental studies, strengthening the notion that these simple equations are physically reasonable. Importantly, the analytic solution predicts a zero heat flux density so that a fluid approach to the problem can be used equally well, at least when the source is instantaneous. It is found that, even for JET-like pedestal parameters, collisions can affect the expansion dynamics via electron temperature isotropization, although this is probably a secondary effect. Finally, the effect of a finite duration, τsrc, for the plasma source is investigated. As is found for an instantaneous source, when L/σ0 ≳ 5 the presence of a target sheath has a negligible effect, at least up to the explored range of τsrc = L/cs (where cs is the sound speed at the initial temperature).
Empirical prediction intervals improve energy forecasting
Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick
2017-01-01
Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
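The CRPS used here has a closed form for a Gaussian forecast density, which makes the evaluation cheap. A sketch follows; the forecast numbers are illustrative, not AEO values:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(x, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation x
    (lower is better)."""
    z = (x - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Score two competing forecast densities against the same observation.
obs = 10.0
sharp = crps_gaussian(obs, mu=10.5, sigma=1.0)   # sharp but slightly biased
wide = crps_gaussian(obs, mu=10.0, sigma=5.0)    # unbiased but very uncertain
```

Here the sharp, slightly biased forecast scores better than the wide unbiased one, which is exactly the calibration/sharpness trade-off the CRPS is designed to rank.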
Work statistics of charged noninteracting fermions in slowly changing magnetic fields.
Yi, Juyeon; Talkner, Peter
2011-04-01
We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^2. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes. ©2011 American Physical Society
NASA Astrophysics Data System (ADS)
Rezaei Kh., S.; Bailer-Jones, C. A. L.; Hanson, R. J.; Fouesneau, M.
2017-02-01
We present a non-parametric model for inferring the three-dimensional (3D) distribution of dust density in the Milky Way. Our approach uses the extinction measured towards stars at different locations in the Galaxy at approximately known distances. Each extinction measurement is proportional to the integrated dust density along its line of sight (LoS). Making simple assumptions about the spatial correlation of the dust density, we can infer the most probable 3D distribution of dust across the entire observed region, including along sight lines which were not observed. This is possible because our model employs a Gaussian process to connect all LoS. We demonstrate the capability of our model to capture detailed dust density variations using mock data and simulated data from the Gaia Universe Model Snapshot. We then apply our method to a sample of giant stars observed by APOGEE and Kepler to construct a 3D dust map over a small region of the Galaxy. Owing to our smoothness constraint and its isotropy, we provide one of the first maps which does not show the "fingers of God" effect.
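The Gaussian-process machinery that connects the lines of sight can be illustrated with standard GP conditioning in one dimension. The kernel, length scale, and toy data below are assumptions for illustration, not the paper's model of extinction integrals:

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance: encodes smooth spatial correlation."""
    return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# Noisy observations of a smooth 1D field at scattered positions.
xt = np.array([0.0, 0.7, 1.5, 2.3, 3.1, 4.0])
yt = np.sin(xt)
noise = 1e-4

# Standard GP conditioning: posterior mean at new positions, including
# locations where nothing was observed.
xs = np.linspace(0.0, 4.0, 41)
K = rbf(xt, xt) + noise * np.eye(len(xt))
mean = rbf(xs, xt) @ np.linalg.solve(K, yt)
```

The posterior mean interpolates smoothly between the observed positions, which is the mechanism that lets the dust map fill in sight lines that were never observed.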
DOE Office of Scientific and Technical Information (OSTI.GOV)
Withers, L. P., E-mail: lpwithers@mitre.org; Narducci, F. A., E-mail: francesco.narducci@navy.mil
2015-06-15
The recent single-photon double-slit experiment of Steinberg et al., based on a weak measurement method proposed by Wiseman, showed that, by encoding the photon’s transverse momentum behind the slits into its polarization state, the momentum profile can subsequently be measured on average, from a difference of the separated fringe intensities for the two circular polarization components. They then integrated the measured average velocity field, to obtain the average trajectories of the photons enroute to the detector array. In this paper, we propose a modification of their experiment, to demonstrate that the average particle velocities and trajectories change when the mode of detection changes. The proposed experiment replaces a single detector by a pair of detectors with a given spacing between them. The pair of detectors is configured so that it is impossible to distinguish which detector received the particle. The pair of detectors is then analogous to the simple pair of slits, in that it is impossible to distinguish which slit the particle passed through. To establish the paradoxical outcome of the modified experiment, the theory and explicit three-dimensional formulas are developed for the bilocal probability and current densities, and for the average velocity field and trajectories as the particle wavefunction propagates in the volume of space behind the Gaussian slits. Examples of these predicted results are plotted. Implementation details of the proposed experiment are discussed.
Embedding the photon with its relativistic mass as a particle into the electromagnetic wave.
Altmann, Konrad
2018-01-22
The particle picture presented by the author in the paper "A particle picture of the optical resonator" [K. Altmann, ASSL 2014 Conference Paper ATu2A.29], which shows that the probability density of a photon propagating with a Gaussian wave can be computed by the use of a Schrödinger equation, is generalized to the case of a wave with arbitrary shape of the phase front. Based on a consideration of the changing propagation direction of the relativistic mass density propagating with the electromagnetic wave, a transverse force acting on the photon is derived. The expression obtained for this force makes it possible to show that the photon moves within a transverse potential that, in combination with a Schrödinger equation, allows one to describe the transverse quantum mechanical motion of the photon by the use of matter wave theory, even though the photon has no rest mass. The obtained results are verified for the plane, the spherical, and the Gaussian wave. Additional verification is provided by the fact that the equation describing the Gouy phase shift could be derived from this particle picture in full agreement with wave optics. Further verification comes from the fact that, within the range of validity of paraxial wave optics, Snell's law could also be derived from this particle picture. Numerical validation of the obtained results for the case of the general wave is under development.
On the Five-Moment Hamburger Maximum Entropy Reconstruction
NASA Astrophysics Data System (ADS)
Summy, D. P.; Pullin, D. I.
2018-05-01
We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ_3, μ_4) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ_3, μ_4) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
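The quartic-exponential form of the MER PDF is easy to explore numerically: choose exponent coefficients, normalize by quadrature, and compute moments. The coefficients below are assumed for illustration and produce the bimodal structure mentioned above:

```python
import numpy as np

# Maximum-entropy PDF constrained by five moments: exp of a quartic polynomial.
# The coefficients below are assumed for illustration; lam[4] < 0 ensures
# normalizability, and this particular choice gives a bimodal density.
lam = [0.0, 0.0, 2.0, 0.0, -1.0]

x = np.linspace(-6.0, 6.0, 120001)
dx = x[1] - x[0]
p = np.exp(sum(l * x ** i for i, l in enumerate(lam)))
p /= p.sum() * dx                        # normalize by quadrature (uniform grid)

moments = [float((x ** n * p).sum() * dx) for n in range(5)]
```

Fitting the inverse problem (coefficients from prescribed moments) requires a Newton iteration on exactly these quadrature moments, which is where the singular line makes the reconstruction delicate.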
A Gaussian measure of quantum phase noise
NASA Technical Reports Server (NTRS)
Schleich, Wolfgang P.; Dowling, Jonathan P.
1992-01-01
We study the width of the semiclassical phase distribution of a quantum state in its dependence on the average number of photons ⟨m⟩ in this state. As a measure of phase noise, we choose the width, δφ, of the best Gaussian approximation to the dominant peak of this probability curve. For a coherent state, this width decreases with the square root of ⟨m⟩, whereas for a truncated phase state it decreases linearly with increasing ⟨m⟩. For an optimal phase state, δφ decreases exponentially but so does the area caught underneath the peak: all the probability is stored in the broad wings of the distribution.
Simulations of the gyroid phase in diblock copolymers with the Gaussian disphere model
NASA Astrophysics Data System (ADS)
Karatchentsev, A.; Sommer, J.-U.
2010-12-01
Pure melts of asymmetric diblock copolymers are studied by means of the off-lattice Gaussian disphere model with Monte-Carlo kinetics. In this model, a diblock copolymer chain is mapped onto two soft repulsive spheres with fluctuating radii of gyration and distance between centers of mass of the spheres. Microscopic input quantities of the model such as the combined probability distribution for the radii of gyration and the distance between the spheres as well as conditional monomer number densities assigned to each block were derived in the previous work of F. Eurich and P. Maass [J. Chem. Phys. 114, 7655 (2001)] within an underlying Gaussian chain model. The polymerization degree of the whole chain as well as those of the individual blocks are freely tunable parameters thus enabling a precise determination of the regions of stability of various phases. The model neglects entanglement effects which are irrelevant for the formation of ordered structures in diblock copolymers and which would otherwise unnecessarily increase the equilibration time of the system. The gyroid phase was reproduced in between the cylindrical and lamellar phases in systems with box sizes being commensurate with the size of the unit cell of the gyroid morphology. The region of stability of the gyroid phase was studied in detail and found to be consistent with the prediction of the mean-field theory. Packing frustration was observed in the form of increased radii of gyration of both blocks of the chains located close to the gyroid nodes.
Tsallis non-extensive statistics and solar wind plasma complexity
NASA Astrophysics Data System (ADS)
Pavlos, G. P.; Iliopoulos, A. C.; Zastenker, G. N.; Zelenyi, L. M.; Karakatsanis, L. P.; Riazantseva, M. O.; Xenakis, M. N.; Pavlos, E. G.
2015-03-01
This article presents novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event, which took place on 26th September 2011. Solar wind plasma is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields (B⃗, E⃗) and matter fields (particle and current densities or bulk plasma distributions). This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level. It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), to explain the complexity of the solar wind dynamics, since these theories include smooth and differentiable spatial-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian non-extensive statistics with heavy tails probability distribution functions, which are related to the q-extension of CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion theory and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992).
NASA Astrophysics Data System (ADS)
Hughes, Ifan G.
2018-03-01
There is extensive use of monochromatic lasers to select atoms with a narrow range of velocities in many atomic physics experiments. For the commonplace situation of the inhomogeneous Doppler-broadened (Gaussian) linewidth exceeding the homogeneous (Lorentzian) natural linewidth by typically two orders of magnitude, a substantial narrowing of the velocity class of atoms interacting with the light can be achieved. However, this is not always the case, and here we show that for a certain parameter regime there is essentially no selection: all of the atoms interact with the light in accordance with the velocity probability density. An explanation of this effect is provided, emphasizing the importance of the long tail of the constituent Lorentzian distribution in a Voigt profile.
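The dominance of the Lorentzian tail in the Voigt profile can be seen by evaluating the two constituent line shapes far from resonance: even with a Gaussian (Doppler) width two orders of magnitude larger than the Lorentzian (natural) width, the Lorentzian wins in the wings. A small illustration with made-up widths:

```python
import math

def gaussian_pdf(v, sigma):
    """Normalized Gaussian (Doppler) line shape of width sigma."""
    return math.exp(-v**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian_pdf(v, gamma):
    """Normalized Lorentzian (natural) line shape of half-width gamma."""
    return (gamma / math.pi) / (v**2 + gamma**2)

sigma = 100.0   # Doppler width (arbitrary units)
gamma = 1.0     # natural half-width, two orders of magnitude smaller
v = 1000.0      # detuning 10 Gaussian widths from resonance
ratio = lorentzian_pdf(v, gamma) / gaussian_pdf(v, sigma)
```

At 10 Gaussian widths the exp(-50) factor crushes the Gaussian, while the Lorentzian has only fallen off as 1/v², so the narrow component dominates the wing by many orders of magnitude.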
Decaying two-dimensional turbulence in a circular container.
Schneider, Kai; Farge, Marie
2005-12-09
We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 × 10⁴ in a circular container with no-slip boundary conditions. Starting with random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer in the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed, with exponential tails for negative values.
NASA Astrophysics Data System (ADS)
Auvinen, Jussi; Bernhard, Jonah E.; Bass, Steffen A.; Karpenko, Iurii
2018-04-01
We determine the probability distributions of the shear viscosity over the entropy density ratio η/s in the quark-gluon plasma formed in Au + Au collisions at √(s_NN) = 19.6, 39, and 62.4 GeV, using Bayesian inference and Gaussian process emulators for a model-to-data statistical analysis that probes the full input parameter space of a transport + viscous hydrodynamics hybrid model. We find the most likely value of η/s to be larger at smaller √(s_NN), although the uncertainties still allow for a constant value between 0.10 and 0.15 for the investigated collision energy range.
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
NASA Astrophysics Data System (ADS)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.
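As a point of reference for the speedup claim, the standard Monte Carlo estimate of the PDF of a scalar performance parameter y is simply a normalized histogram of repeated model evaluations; this is the baseline the surrogate-accelerated MMC method improves on. A sketch with a hypothetical performance function:

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(x):
    # Hypothetical scalar performance parameter y = f(x); stands in for an
    # expensive system model evaluated at uncertain inputs x.
    return np.sum(x**2, axis=1)

# Standard Monte Carlo baseline: sample the uncertain inputs, evaluate y,
# and estimate its PDF with a density-normalized histogram.
x = rng.standard_normal((100_000, 3))
y = performance(x)
pdf, edges = np.histogram(y, bins=50, density=True)
widths = np.diff(edges)
total_mass = np.sum(pdf * widths)  # ≈ 1.0 for a normalized density
```

The weakness of this baseline, which motivates adaptive importance sampling, is that tail probabilities of y are resolved only as well as the handful of samples that happen to land there.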
NASA Astrophysics Data System (ADS)
Xin, Wei; Zhao, Yu-Wei; Sudu; Eerdunchaolu
2018-05-01
Considering a Hydrogen-like impurity and the thickness effect, the eigenvalues and eigenfunctions of the electronic ground and first excited states in a quantum dot (QD) are derived by using the Lee-Low-Pines-Pekar variational method with the harmonic and Gaussian potentials as the transverse and longitudinal confinement potentials, respectively. A two-level system is constructed on the basis of those two states, and the electronic quantum transition affected by an electromagnetic field is discussed in terms of two-level system theory. The results indicate that the Gaussian potential reflects the real confinement potential more accurately than the parabolic one; that the influence of the thickness of the QD on the electronic transition probability is interesting and significant, and cannot be ignored; and that the electronic transition probability Γ is influenced significantly by physical quantities such as the strength of the electron-phonon coupling α, the electric-field strength F, the magnetic-field cyclotron frequency ωc, the barrier height V0 and the confinement range L of the asymmetric Gaussian potential, suggesting that the transport and optical properties of the QD can be further manipulated through these physical quantities.
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
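For context on why PCA captures only second-order statistics: it compresses a particle ensemble by projecting onto the leading eigenvectors of the sample covariance, so any skewness or kurtosis in the state PDF plays no role in the construction. A minimal sketch with a hypothetical six-dimensional particle cloud (not the orbit scenarios of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Particle-filter-like ensemble: 5000 samples of a hypothetical 6-dimensional
# state vector, with one deliberately non-Gaussian (skewed) coordinate.
particles = rng.standard_normal((5000, 6))
particles[:, 0] = rng.exponential(1.0, 5000)

# PCA compression via SVD of the centered ensemble: keep the k leading
# principal components, then reconstruct.
mean = particles.mean(axis=0)
centered = particles - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3
compressed = centered @ Vt[:k].T           # reduced representation
reconstructed = compressed @ Vt[:k] + mean

# The selection of Vt[:k] used only the covariance; higher moments such as
# the skewness of coordinate 0 are invisible to this criterion.
err = np.mean((particles - reconstructed) ** 2)
```

ICA, by contrast, explicitly optimizes a non-Gaussianity measure when choosing its components, which is why the paper argues it preserves the higher-order information a particle filter provides.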
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e. it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and then Bayes rule is used to aggregate the MUPT characterization probabilities to calculate muscle level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than the discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
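The Bayes-rule aggregation step can be sketched as follows: per-MUPT likelihoods are read off category-conditional Gaussian PDFs and multiplied (a naive independence assumption) before normalizing to muscle-level probabilities. All category names and parameter values below are illustrative, not taken from the paper:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-category Gaussian PDFs of a single MUPT feature
# (e.g. MUP duration in ms); parameters are made up for the sketch.
categories = {"normal": (10.0, 2.0), "myopathic": (7.0, 2.0), "neurogenic": (14.0, 3.0)}
prior = {c: 1.0 / 3.0 for c in categories}

def muscle_posterior(feature_values):
    """Naive-Bayes aggregation: multiply per-MUPT likelihoods, then normalize."""
    post = {}
    for c, (mu, sigma) in categories.items():
        like = prior[c]
        for x in feature_values:
            like *= gaussian_pdf(x, mu, sigma)
        post[c] = like
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

probs = muscle_posterior([13.0, 15.0, 12.5])  # three MUPTs from one muscle
```

With longer feature vectors the products underflow, so a real implementation would sum log-likelihoods instead; the normalization step is unchanged.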
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density function fields. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we show that higher-order moments can be estimated accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions.
This novel approach provides a method of characterizing increment fields using only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
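The NIG fitting step described above can be sketched with scipy, whose `norminvgauss` distribution exposes the four parameters (two shape parameters plus location and scale) through generic maximum likelihood estimation; the synthetic heavy-tailed sample below merely stands in for increment data, and the parameter values are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic heavy-tailed, mildly skewed sample standing in for velocity
# increments delta-u (illustrative, not one of the datasets in the study).
sample = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0,
                                size=5000, random_state=rng)

# Maximum likelihood fit of the four NIG parameters; starting guesses for
# the shape parameters keep the optimizer in the valid region |b| < a.
a, b, loc, scale = stats.norminvgauss.fit(sample, 2.0, 0.0)
```

The four fitted numbers (a, b, loc, scale) are exactly the "four pdf parameters" the abstract refers to as a compact characterization of each increment field.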
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
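For reference, the generalized Gaussian density underlying the method can be written in terms of its standard deviation and a shape factor (shape 2 recovers the Gaussian, shape 1 the Laplacian); this is the standard textbook form, not code from the paper:

```python
import math

def gg_pdf(x, sigma, nu):
    """Generalized Gaussian density with standard deviation sigma and shape nu.

    nu = 2 recovers the Gaussian; nu = 1 the Laplacian; small nu gives the
    peaky, heavy-tailed shapes typical of wavelet coefficients."""
    # eta scales the density so that its variance equals sigma**2.
    eta = math.sqrt(math.gamma(3.0 / nu) / math.gamma(1.0 / nu)) / sigma
    norm = nu * eta / (2.0 * math.gamma(1.0 / nu))
    return norm * math.exp(-(eta * abs(x)) ** nu)
```

In the space-varying scheme of the paper, sigma and nu would be re-estimated from local moments inside each wavelet frame rather than held fixed.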
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting-type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at θ² = ζ∞² σ_PSF², with ζ∞² ≈ 2.51. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
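Eq. (17) of Li & Ma (1983) and the quoted high-count optimum are short enough to state in code; the PSF width below is a hypothetical value:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Eq. (17) of Li & Ma (1983): significance of an on/off counting excess.

    alpha is the on/off exposure ratio; the sign is taken from whether the
    on-source counts exceed the scaled background."""
    if n_on <= 0 or n_off <= 0:
        raise ValueError("Eq. (17) requires positive counts")
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    sign = 1.0 if n_on > alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (term_on + term_off))

# High-count-limit optimum quoted in the abstract: theta^2 ≈ 2.51 sigma_PSF^2.
sigma_psf = 0.05  # deg, hypothetical PSF width
theta_opt = math.sqrt(2.51) * sigma_psf
```

For equal exposures and no excess (n_on = n_off, alpha = 1) the significance is zero, as it must be, and a modest excess of 150 on-counts over 100 off-counts yields roughly 3 sigma.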
Quantum diffusion during inflation and primordial black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pattison, Chris; Assadullahi, Hooshyar; Wands, David
We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ∼1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.
Quantum diffusion during inflation and primordial black holes
NASA Astrophysics Data System (ADS)
Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David
2017-10-01
We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining voxels, which are assumed to be mixtures of Gaussians. These voxels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
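The Expectation-Maximization stage can be illustrated on a simplified one-dimensional, two-class version of the problem (the paper's mixtures are multidimensional with full covariance matrices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D intensities from two tissue-like classes, a simplified
# stand-in for the T1/T2 voxel intensities segmented in the paper.
x = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(5.0, 1.0, 1000)])

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Expectation-Maximization for a two-component Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])      # spread-out initialization
sigma = np.array([x.std(), x.std()])
for _ in range(100):
    # E-step: responsibility of each component for each voxel intensity.
    r = w * np.stack([normal_pdf(x, m, s) for m, s in zip(mu, sigma)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
```

The final responsibilities `r` give the soft class memberships that a hard segmentation would threshold, and which the paper's third stage regularizes with a Markov Random Field.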
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such an analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
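The assignment of the most probable vegetation type can be sketched as an argmax over per-type trait densities; here each type is reduced to a single Gaussian with made-up centroids and a shared diagonal covariance, a simplification of the paper's Gaussian mixture density functions:

```python
import numpy as np

# Illustrative trait centroids (leaf mass per area, stem-specific density,
# seed mass) for three hypothetical vegetation types; all values invented.
types = {
    "tropical forest": np.array([80.0, 0.6, 2.0]),
    "grassland":       np.array([40.0, 0.4, 0.5]),
    "boreal forest":   np.array([120.0, 0.5, 3.0]),
}
cov = np.diag([100.0, 0.01, 0.25])  # shared diagonal covariance, assumed

def mvn_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian."""
    d = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

def most_probable_type(traits):
    """Assign a grid cell the vegetation type with the highest trait density."""
    return max(types, key=lambda t: mvn_logpdf(traits, types[t], cov))

cell_type = most_probable_type(np.array([45.0, 0.42, 0.6]))
```

In the paper this argmax is applied per grid cell of the global trait maps, with each type's density being a fitted mixture rather than a single Gaussian.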
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods that reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step for the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and another nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined. Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7].
In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
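The Gaussian sum representation at the heart of this approach writes the state pdf as a weighted mixture of Gaussian densities; as long as the weights sum to one, the mixture is itself a normalized pdf. A minimal sketch (the component parameters are arbitrary, and the adaptive weight update of Ref. [2] is not reproduced):

```python
import numpy as np

def gaussian_sum_pdf(x, weights, means, sigmas):
    """Weighted sum of Gaussian densities approximating an arbitrary pdf."""
    comps = [w * np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
             for w, m, s in zip(weights, means, sigmas)]
    return np.sum(comps, axis=0)

# A bimodal state pdf represented by two components; because the weights sum
# to one, the mixture integrates to one as well.
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.5])
sigmas = np.array([0.5, 1.0])

x = np.linspace(-10, 10, 4001)
p = gaussian_sum_pdf(x, weights, means, sigmas)
dx = x[1] - x[0]
mass = np.sum(p) * dx   # numerical check of normalization, ≈ 1
```

The filtering problem then reduces to propagating each component's mean and covariance through the dynamics and choosing the weights, which is where the integral-square-difference minimization of Ref. [2] enters.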
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu
2017-07-15
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
NASA Astrophysics Data System (ADS)
Tsilifis, Panagiotis; Ghanem, Roger G.
2017-07-01
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
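The basis rotation used above relies on a standard fact: an orthogonal rotation of uncorrelated standard Gaussian inputs yields inputs that are again uncorrelated standard Gaussians, so the underlying measure is unchanged while the functional representation can be made to concentrate in fewer directions. A quick numerical check of that invariance:

```python
import numpy as np

rng = np.random.default_rng(4)

# Uncorrelated standard Gaussian inputs xi (the underlying Gaussian Hilbert
# space), and a random orthogonal rotation A of that basis.
xi = rng.standard_normal((200_000, 4))
A = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # orthogonal matrix
eta = xi @ A.T

# The rotated inputs eta have (sample) covariance close to the identity:
# rotation preserves the Gaussian measure.
cov = np.cov(eta, rowvar=False)
```

Basis adaptation exploits this freedom by choosing the rotation so that the quantity of interest depends strongly on only the first few rotated coordinates, which is what permits the reduced representations described in the abstract.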
Antibunching and unconventional photon blockade with Gaussian squeezed states
NASA Astrophysics Data System (ADS)
Lemonde, Marc-Antoine; Didier, Nicolas; Clerk, Aashish A.
2014-12-01
Photon antibunching is a quantum phenomenon typically observed in strongly nonlinear systems where photon blockade suppresses the probability of detecting two photons at the same time. Antibunching has also been reported with Gaussian states, where optimized amplitude squeezing yields classically forbidden values of the intensity correlation, g⁽²⁾(0) < 1. As a consequence, observation of antibunching is not necessarily a signature of photon-photon interactions. To clarify the significance of the intensity correlations, we derive a sufficient condition for deducing whether a field is non-Gaussian based on a g⁽²⁾(0) measurement. We then show that the Gaussian antibunching obtained with a degenerate parametric amplifier is close to the ideal case reached using dissipative squeezing protocols. We finally shed light on the so-called unconventional photon blockade effect predicted in a driven two-cavity setup with surprisingly weak Kerr nonlinearities, stressing that it is a particular realization of optimized Gaussian amplitude squeezing.
Football fever: goal distributions and non-Gaussian statistics
NASA Astrophysics Data System (ADS)
Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.
2009-02-01
Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the cold war times as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
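The overdispersion that favors the negative binomial over the Poissonian model can be illustrated by letting the scoring rate itself fluctuate: a gamma-mixed Poisson is exactly a negative binomial, and its variance exceeds its mean. The rate parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# Goal-like counts: a Poisson process has variance equal to its mean, while
# a gamma-mixed Poisson (= negative binomial) is overdispersed, giving the
# fatter tails seen in the football score data.
lam = rng.gamma(shape=3.0, scale=0.5, size=50_000)  # fluctuating scoring rate
goals_nb = rng.poisson(lam)                         # negative binomial counts
goals_poisson = rng.poisson(1.5, size=50_000)       # fixed rate, same mean

dispersion_nb = goals_nb.var() / goals_nb.mean()          # > 1
dispersion_poisson = goals_poisson.var() / goals_poisson.mean()  # ≈ 1
```

The self-affirmation mechanism proposed in the paper is a different microscopic route to the same macroscopic signature: a variance-to-mean ratio above one.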
NASA Astrophysics Data System (ADS)
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP, together with the Hartree-Fock (HF) method) has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated momentum distributions of this orbital occur in the small momentum region (i.e. the large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to that of newer models such as X3LYP and SAOP.
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.
NASA Astrophysics Data System (ADS)
Fukuda, Kunito; Asakawa, Naoki
2017-02-01
Reported is the observation of dark spin-dependent electrical conduction in a Schottky barrier diode with pentacene (PSBD) using electrically detected magnetic resonance (EDMR) at room temperature. It is suggested that spin-dependent conduction exists in pentacene thin films, which is explored by examining the anisotropic linewidth of the EDMR signal and current density-voltage (J-V) measurements. The EDMR spectrum can be decomposed into Gaussian and Lorentzian components. The dependence of the two signals on the applied voltage was consistent with the J-V characteristics of the PSBD rather than those of the electron-only Al/pentacene/Al device, indicating that the spin-dependent conduction is due to bipolaron formation associated with hole polaronic hopping processes. The applied-voltage dependence of the ratio of the intensity of the Gaussian line to that of the Lorentzian line suggests that increasing current density makes the conducting paths more dispersive, thereby resulting in an increased fraction of the Gaussian line due to the more dispersive g-factor.
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly, there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm, which seeks the most probable posterior distribution over the states within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for their relationships in the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems. We discuss the potential and limitations of the new approach.
We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given parameters in the model, which also allows inference of parameters such as observation errors, and of parameters in the model and the model error representation, particularly if the latter is written as a deterministic form with small additive noise. We stress that our approach can address very long time windows and weak constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of future directions for our approach.
Flat-top beam for laser-stimulated pain
NASA Astrophysics Data System (ADS)
McCaughey, Ryan; Nadeau, Valerie; Dickinson, Mark
2005-04-01
One of the main problems during laser stimulation in human pain research is the risk of tissue damage caused by excessive heating of the skin. This risk has been reduced by using a laser beam with a flat-top (or super-Gaussian) intensity profile instead of the conventional Gaussian beam. A finite difference approximation to the heat conduction equation has been applied to model the temperature distribution in skin as a result of irradiation by flat-top and Gaussian profile CO2 laser beams. The model predicts that a 15 mm diameter, 15 W, 100 ms CO2 laser pulse with an order-6 super-Gaussian profile produces a maximum temperature 6 °C less than a Gaussian beam with the same energy density. A super-Gaussian profile was created by passing a Gaussian beam through a pair of zinc selenide aspheric lenses which refract the more intense central region of the beam towards the less intense periphery. The profiles of the lenses were determined by geometrical optics. In human pain trials the super-Gaussian beam required more power than the Gaussian beam to reach sensory and pain thresholds.
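The equal-energy comparison between a Gaussian and an order-6 super-Gaussian profile can be sketched numerically; the beam radius and peak values below are arbitrary illustrative units, not the 15 mm / 15 W experimental parameters:

```python
import math

def intensity(r, w, order, peak=1.0):
    """Radial intensity of a super-Gaussian beam; order=1 is an ordinary Gaussian."""
    return peak * math.exp(-2.0 * (r / w) ** (2 * order))

def power(w, order, peak=1.0, rmax=5.0, steps=5000):
    # integrate I(r) * 2*pi*r dr numerically (midpoint rule)
    dr = rmax / steps
    return sum(intensity((i + 0.5) * dr, w, order, peak) * 2.0 * math.pi * (i + 0.5) * dr * dr
               for i in range(steps))

# scale the order-6 flat-top so it delivers the same power as the Gaussian
peak_sg = power(1.0, 1) / power(1.0, 6)
print(peak_sg)  # < 1: the equal-energy flat-top has a lower peak intensity
```

The lower peak intensity at equal delivered energy is the mechanism behind the reduced maximum skin temperature reported above.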
NASA Astrophysics Data System (ADS)
Shukri, Seyfan Kelil
2017-01-01
We have performed kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is taken to be either Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with charge carrier density for both DOSs. In contrast, the mobility of the charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a significant effect on the charge transport properties as a function of density. On the other hand, for the same distribution width and at low carrier density, the change in the conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
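The square-root rule can be checked by simulation in the 1D Gaussian case: the square root of an N(0,1) density is proportional to an N(0,2) density, so a search distribution with standard deviation √2 should beat a more diffuse one. A minimal sketch, with a hypothetical closeness threshold eps:

```python
import math, random

def trials_to_find(eps=0.05, search_sigma=math.sqrt(2.0), rng=random):
    """Sample search points from N(0, search_sigma^2) until one lands within eps of the target."""
    target = rng.gauss(0.0, 1.0)          # hidden target drawn from N(0, 1)
    n = 1
    while abs(rng.gauss(0.0, search_sigma) - target) > eps:
        n += 1
    return n

random.seed(7)
runs = 4000
opt = sum(trials_to_find() for _ in range(runs)) / runs                    # sqrt-of-target rule
wide = sum(trials_to_find(search_sigma=3.0) for _ in range(runs)) / runs   # too diffuse
print(opt, wide)  # the sqrt-of-target search needs fewer trials on average
```

Distributions narrower than the target (e.g. σ = 1) do badly for the opposite reason: they almost never reach targets drawn from the tails, so the mean search time blows up.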
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates, and in particular Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolation assumes Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
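A minimal sketch of the Box-Cox step, using a synthetic lognormal sample as a stand-in for skewed positive rainfall data (not the radar-gauge data of the study):

```python
import math, random

def box_cox(x, lam):
    """Box-Cox power transform of a positive value; lam -> 0 recovers log(x)."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(xs):
    m = sum(xs) / len(xs)
    s2 = sum((v - m) ** 2 for v in xs) / len(xs)
    return sum((v - m) ** 3 for v in xs) / len(xs) / s2 ** 1.5

random.seed(0)
rain = [math.exp(random.gauss(0.0, 1.0)) for _ in range(20000)]  # skewed, positive sample
print(skewness(rain), skewness([box_cox(v, 0.25) for v in rain]))
# the lambda = 0.25 transform strongly reduces the skewness toward Gaussian
```

The sensitivity the abstract mentions is visible here too: rescaling the sample (a change of units) changes which λ best symmetrizes the transformed values.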
Simulations of Gaussian electron guns for RHIC electron lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pikin, A.
Simulations of two versions of the electron gun for the RHIC electron lens are presented. The electron guns have to generate an electron beam with a Gaussian radial density profile. To achieve the Gaussian electron emission profile on the cathode we used a combination of the gun electrodes and shaping of the cathode surface. The dependence of electron gun performance parameters on the geometry of the electrodes and the margins for electrode positioning are presented.
PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules
NASA Astrophysics Data System (ADS)
Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.
2018-03-01
The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
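The analytic position-to-momentum transform the code relies on rests on the fact that a Gaussian stays Gaussian under Fourier transformation. A numerical check for a single 1D primitive with a hypothetical exponent:

```python
import math

alpha = 0.7                      # hypothetical Gaussian exponent

def gauss_x(x):
    """Position-space Gaussian primitive exp(-alpha * x^2)."""
    return math.exp(-alpha * x * x)

def gauss_p_numeric(p, L=12.0, n=4000):
    """Numerical Fourier transform of the position-space Gaussian at momentum p."""
    dx = 2.0 * L / n
    return sum(gauss_x(-L + i * dx) * math.cos(p * (-L + i * dx)) * dx for i in range(n))

p = 1.3
# analytic result: sqrt(pi/alpha) * exp(-p^2 / (4 alpha)) -- again a Gaussian
analytic = math.sqrt(math.pi / alpha) * math.exp(-p * p / (4.0 * alpha))
print(gauss_p_numeric(p), analytic)
```

This closure under Fourier transformation is what lets the package obtain momentum-space basis functions analytically rather than by numerical transform of the full orbital.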
Raman-Scattering Line Profiles of the Symbiotic Star AG Peg
NASA Astrophysics Data System (ADS)
Lee, Seong-Jae; Hyung, Siek
2017-06-01
The high dispersion Hα and Hβ line profiles of the symbiotic star AG Peg consist of top double-Gaussian and bottom components. We investigated the formation of the broad wings with the Raman scattering mechanism. Adopting the same physical parameters from the photo-ionization study of Kim and Hyung (2008) for the white dwarf and the ionized gas shell, Monte Carlo simulations were carried out for a rotating accretion disk geometry of non-symmetrical latitude angles from -7° < θ < +7° to -16° < θ < +16°. The smaller latitude angle of the disk corresponds to the approaching side of the disk responsible for the weak blue Gaussian profile, while the wider latitude angle corresponds to the other side of the disk responsible for the strong red Gaussian profile. We confirmed that the shell has the high gas density of ~10^9.85 cm^-3 in the ionized zone of AG Peg derived in the previous photo-ionization model study. The simulation with various HI shell column densities (characterized by a thickness ΔD × gas number density nH) shows that the HI gas shell with a column density N_HI ≈ 3-5 × 10^19 cm^-2 fits the observed line profiles well. The estimated rotation speed of the accretion disk shell is in the range of 44-55 km s^-1. We conclude that the kinematically incoherent structure involving the outflowing gas from the giant star caused the asymmetry of the disk and the double Gaussian profiles found in AG Peg.
NASA Technical Reports Server (NTRS)
Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew
2013-01-01
High cycle fatigue of metals typically occurs through long-term exposure to time varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and the stress response. In real life, however, it is common to experience non-Gaussian acceleration inputs, and these can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world, turbulent boundary layer pressure fluctuations in the aerospace sector, or, more generally, wind, wave, or high-amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time invariant system. Finally, an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimations and a method to accelerate shaker tests by generating high kurtosis, non-Gaussian drive signals.
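The Gaussianization of a filtered non-Gaussian input can be illustrated with a toy one-pole filter; the filter pole and spike rate below are hypothetical and unrelated to the paper's algorithm:

```python
import random

def kurtosis(xs):
    m = sum(xs) / len(xs)
    s2 = sum((v - m) ** 2 for v in xs) / len(xs)
    return sum((v - m) ** 4 for v in xs) / len(xs) / s2 ** 2

random.seed(3)
n = 60000
# leptokurtic input: occasional Gaussian spikes (kurtosis far above the Gaussian value 3)
x = [random.gauss(0.0, 1.0) if random.random() < 0.05 else 0.0 for _ in range(n)]
y, state = [], 0.0
for v in x:                       # one-pole low-pass filter with a long impulse response
    state = 0.98 * state + v
    y.append(state)
print(kurtosis(x), kurtosis(y))   # the response kurtosis is pulled toward 3
```

Because the long impulse response averages many input peaks, the Central Limit Theorem drives the output toward Gaussianity, which is the behavior the paper quantifies through the input kurtosis, input spectrum, and frequency response function.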
Super-resolving random-Gaussian apodized photon sieve.
Sabatyan, Arash; Roshaninejad, Parisa
2012-09-10
A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The randomness of the dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima are evidently suppressed, transmission has increased enormously, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.
Diffraction of cosine-Gaussian-correlated Schell-model beams.
Pan, Liuzhan; Ding, Chaoliang; Wang, Haixia
2014-05-19
The expression for the spectral density of cosine-Gaussian-correlated Schell-model (CGSM) beams diffracted by an aperture is derived and used to study the changes in the spectral density distribution of CGSM beams upon propagation, where the effect of aperture diffraction is emphasized. It is shown that, compared with that of GSM beams, the spectral density distribution of CGSM beams diffracted by an aperture exhibits a dip and shows a dark-hollow intensity distribution when the order parameter n is large enough. The central intensity increases with increasing truncation parameter of the aperture. A comparative study of the spectral density distributions of CGSM beams with and without an aperture is performed. Furthermore, the effect of the order parameter n and the spatial coherence of CGSM beams on the spectral density distribution is discussed in detail. The results obtained may be useful in optical particle manipulation.
On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.
2011-01-01
This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings was found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violations of, the Central Limit Theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the period of the system impulse response is much greater than the rate of peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. The effect of a spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than a higher level of damping.
NASA Astrophysics Data System (ADS)
Bezák, V.
2003-02-01
The Waxman-Peck theory of population genetics is discussed in regard to soil bacteria. Each bacterium is understood as a carrier of a phenotypic parameter p. The central objective is the calculation of the probability density with respect to p, Φ(p,t;p0), of the carriers living at time t>0, provided that initially, at t0=0, all bacteria carried the phenotypic parameter p0=0. The theory involves two small parameters: the mutation probability μ and a parameter γ involved in a function w(p) defining the fitness of the bacteria to survive the generation time τ and give birth to an offspring. The mutation from a state p to a state q is defined by a Gaussian with a dispersion σ_m^2. The author focuses attention on a function φ(p,t) which determines uniquely the function Φ(p,t;p0) and satisfies a linear equation (Waxman’s equation). The Green function of this equation is mathematically identical to the one-particle Bloch density matrix, where μ characterizes the order of magnitude of the potential energy. (In the x representation, the potential energy is proportional to the inverted Gaussian with the dispersion σ_m^2.) The author solves Waxman’s equation in the standard style of a perturbation theory and discusses how the solution depends on the choice of the fitness function w(p). In a sense, the function c(p) = 1 - w(p)/w(0) is analogous to the dispersion function E(p) of fictitious quasiparticles. In contrast to Waxman’s approximation, where c(p) was taken as a quadratic function, c(p) ≈ γp^2, the author exemplifies the problem with another function, c(p) = γ[1 - exp(-ap^2)], where γ is small but a may be large. The author shows that the use of this function in the theory of population genetics is the same as the use of a nonparabolic dispersion law E = E(p) in the density-matrix theory. With a general function c(p), the distribution function Φ(p,t;0) is composed of a δ-function component, N(t)δ(p), and a blurred component.
When discussing the limiting transition for t→∞, the author shows that his function c(p) implies that N(t)→N(∞)≠0, in contrast with the asymptotics N(t)→0 resulting from the use of Waxman’s function c(p) ~ p^2.
Universality of local dissipation scales in buoyancy-driven turbulence.
Zhou, Quan; Xia, Ke-Qing
2010-03-26
We report an experimental investigation of the local dissipation scale field η in turbulent thermal convection. Our results reveal two types of universality of η. The first one is that, for the same flow, the probability density functions (PDFs) of η are insensitive to turbulent intensity and large-scale inhomogeneity and anisotropy of the system. The second is that the small-scale dissipation dynamics in buoyancy-driven turbulence can be described by the same models developed for homogeneous and isotropic turbulence. However, the exact functional form of the PDF of the local dissipation scale is not universal with respect to different types of flows, but depends on the integral-scale velocity boundary condition, which is found to have an exponential, rather than Gaussian, distribution in turbulent Rayleigh-Bénard convection.
Correlated continuous time random walk and option pricing
NASA Astrophysics Data System (ADS)
Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao
2016-04-01
In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. We then apply this process to the option pricing problem. Supposing the price of the underlying is driven by this CCTRW, we find this model captures the subdiffusive character of financial markets. By using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing the obtained model with the classical Black-Scholes model, we find the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also presented to confirm that the obtained results fit the real data well.
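For reference, the classical Black-Scholes benchmark against which the CCTRW price is compared can be sketched as follows (the parameter values are hypothetical):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Classical Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# at-the-money call: S = K = 100, r = 5%, sigma = 20%, one year to expiry
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # 10.45
```

The paper's claim is that a subdiffusive underlying priced under the CCTRW model yields values above this Gaussian benchmark.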
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
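In the simplest case, with no background and a Gamma prior, the Poisson likelihood gives a closed-form Gamma posterior for the source intensity; a minimal sketch of that conjugate case (counts and exposure are hypothetical, and the paper's background and overlapping-aperture cases are omitted):

```python
import math

def log_posterior(rate, counts, exposure, alpha=1.0, beta=0.0):
    """Unnormalized log posterior for a Poisson source intensity with a Gamma(alpha, beta) prior."""
    if rate <= 0.0:
        return float("-inf")
    # Gamma(counts + alpha, exposure + beta) posterior, up to a constant
    return (counts + alpha - 1.0) * math.log(rate) - (exposure + beta) * rate

# hypothetical aperture: 12 counts in an exposure of 4 time units, flat prior (alpha=1, beta=0)
counts, exposure = 12, 4.0
mode = (counts + 1.0 - 1.0) / (exposure + 0.0)   # Gamma posterior mode
print(mode)  # 3.0
```

Propagating such a posterior as the prior for the next observation simply updates the Gamma shape and rate, which is advantage (2) listed in the abstract.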
Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin
2016-10-01
An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channels, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of channel estimation is negligible.
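The histogram stage of such an estimator can be sketched for a toy binary-input Gaussian channel; the paper's weighted tail fitting and non-Gaussian channel models are omitted here, and the bin layout is a hypothetical choice:

```python
import math, random

def hist_density(samples, edges):
    """Nonparametric histogram density estimate on the given bin edges."""
    counts = [0] * (len(edges) - 1)
    for s in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    n, w = len(samples), edges[1] - edges[0]
    return [c / (n * w) for c in counts]

random.seed(5)
edges = [-6 + 0.5 * i for i in range(25)]
p1 = hist_density([random.gauss(+1, 1) for _ in range(50000)], edges)  # "bit = 1" received
p0 = hist_density([random.gauss(-1, 1) for _ in range(50000)], edges)  # "bit = 0" received
centers = [(edges[i] + edges[i + 1]) / 2 for i in range(len(edges) - 1)]
# empirical LLR wherever both histograms are populated; analytic LLR here is 2x
llr = {x: math.log(a / b) for x, a, b in zip(centers, p1, p0) if a > 0 and b > 0}
print(llr[0.25])  # close to 2 * 0.25 = 0.5
```

The tail fitting in the paper addresses exactly the bins this sketch discards: where one histogram is empty, the raw ratio is undefined and an extrapolated linear fit is needed.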
Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1992-01-01
In the present growth-rate calculations, a protoplanet on a circular orbit is embedded in a disk of planetesimals with a triaxial Gaussian velocity dispersion and uniform surface density. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
NASA Astrophysics Data System (ADS)
Ding, Jian; Li, Li
2018-05-01
We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios: to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near-Gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and Gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The Gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOK in such applications.
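A classic member of the Gaussian HOK family mentioned above is the fourth-order Gaussian kernel obtained from Hermite polynomials, K4(x) = (3 - x^2) φ(x) / 2, which integrates to 1 while its second moment vanishes; a numerical check:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def k4(x):
    """Fourth-order Gaussian kernel built from Hermite polynomials."""
    return 0.5 * (3.0 - x * x) * phi(x)

def moment(kernel, k, L=10.0, n=20000):
    # numerical k-th moment of the kernel over [-L, L]
    dx = 2.0 * L / n
    return sum(((-L + i * dx) ** k) * kernel(-L + i * dx) * dx for i in range(n))

print(moment(k4, 0), moment(k4, 2))  # normalization 1, vanishing second moment
```

Killing the second moment is what raises the kernel's order: the leading bias term of the density estimate, proportional to that moment, disappears, at the price of the kernel taking negative values for |x| > √3.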
Gaussian and Airy wave packets of massive particles with orbital angular momentum
NASA Astrophysics Data System (ADS)
Karlovets, Dmitry V.
2015-01-01
While wave-packet solutions for relativistic wave equations are oftentimes thought to be approximate (paraxial), we demonstrate, by employing a null-plane- (light-cone-) variable formalism, that there is a family of such solutions that are exact. A scalar Gaussian wave packet in the transverse plane is generalized so that it acquires a well-defined z component of the orbital angular momentum (OAM), while it may not acquire a typical "doughnut" spatial profile. Such quantum states and beams, in contrast to the Bessel states, may have an azimuthal-angle-dependent probability density and finite uncertainty of the OAM, which is determined by the packet's width. We construct a well-normalized Airy wave packet, which can be interpreted as a one-particle state for a relativistic massive boson, show that its center moves along the same quasiclassical straight path, and, which is more important, spreads with time and distance exactly as a Gaussian wave packet does, in accordance with the uncertainty principle. It is explained that this fact does not contradict the well-known "nonspreading" feature of the Airy beams. While the effective OAM for such states is zero, its uncertainty (or the beam's OAM bandwidth) is found to be finite, and it depends on the packet's parameters. A link between exact solutions for the Klein-Gordon equation in the null-plane-variable formalism and the approximate ones in the usual approach is indicated; generalizations of these states for a boson in the external field of a plane electromagnetic wave are also presented.
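The spreading the abstract compares against is the textbook nonrelativistic law for a free Gaussian packet of initial width $\sigma_0$ and mass $m$ (a standard result, quoted here only for context):

```latex
\sigma(t) = \sigma_0 \sqrt{1 + \left( \frac{\hbar t}{2 m \sigma_0^{2}} \right)^{2}}
```

so narrower packets spread faster, in accordance with the uncertainty principle; the abstract's point is that the exact Airy and OAM-carrying packets obey the same behavior.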
Fractal scaling analysis of groundwater dynamics in confined aquifers
NASA Astrophysics Data System (ADS)
Tu, Tongbi; Ercan, Ali; Kavvas, M. Levent
2017-10-01
Groundwater closely interacts with surface water and even climate systems in most hydroclimatic settings. Fractal scaling analysis of groundwater dynamics is of significance in modeling hydrological processes by considering potential temporal long-range dependence and scaling crossovers in the groundwater level fluctuations. In this study, it is demonstrated that the groundwater level fluctuations in confined aquifer wells with long observations exhibit site-specific fractal scaling behavior. Detrended fluctuation analysis (DFA) was utilized to quantify the monofractality, and multifractal detrended fluctuation analysis (MF-DFA) and multiscale multifractal analysis (MMA) were employed to examine the multifractal behavior. The DFA results indicated that fractals exist in groundwater level time series, and it was shown that the estimated Hurst exponent is closely dependent on the length and specific time interval of the time series. The MF-DFA and MMA analyses showed that different levels of multifractality exist, which may be partially due to a broad probability density distribution with infinite moments. Furthermore, it is demonstrated that the underlying distribution of groundwater level fluctuations exhibits either non-Gaussian characteristics, which may be fitted by the Lévy stable distribution, or Gaussian characteristics depending on the site characteristics. However, fractional Brownian motion (fBm), which has been identified as an appropriate model to characterize groundwater level fluctuation, is Gaussian with finite moments. Therefore, fBm may be inadequate for the description of physical processes with infinite moments, such as the groundwater level fluctuations in this study. It is concluded that there is a need for generalized governing equations of groundwater flow processes that can model both the long-memory behavior and the Brownian finite-memory behavior.
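A minimal sketch of the order-1 DFA procedure used in this study, applied to synthetic white noise rather than well records (series length, scales, and seed are illustrative); the slope of log F(s) versus log s estimates the scaling exponent, which should be near 0.5 for uncorrelated noise:

```python
import numpy as np

def dfa(x, scales):
    """Order-1 detrended fluctuation analysis: RMS fluctuation F(s) per scale s."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                      # white noise: expect alpha ~ 0.5
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # fitted scaling exponent
```

MF-DFA generalizes this by replacing the mean-square average over segments with q-th-order moments, which is how the multifractal spectra in the study are obtained.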
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
Realistic continuous-variable quantum teleportation with non-Gaussian resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell'Anno, F.; De Siena, S.; CNR-INFM Coherentia, Napoli, Italy, and CNISM and INFN Sezione di Napoli, Gruppo Collegato di Salerno, Baronissi, SA
2010-01-15
We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.
Kota, V K B; Chavda, N D; Sahu, R
2006-04-01
Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength lambda exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by lambda = lambda_c and lambda = lambda_F, respectively; lambda_F > lambda_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain (lambda > lambda_F), in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends from the Gaussian regime down to the BW regime, up to lambda = lambda_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2016-12-01
Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent probability density functions of a normal distribution. It is described by its mean m and standard deviation s. A smaller standard deviation implies less spread from the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean-square weighted deviations based on the sums of the pixel values over all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this, taken around 22:20 (local time) on January 31, 2015, is illustrated in Fig. 1b. The local time is 8 hours ahead of coordinated universal time (UTC). This image is produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operated at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. The last conversion step ensures the same scales for both the Gaussian model and the pixel distribution of the raw signals. Subtraction of the Gaussian model from the raw data produces a moonless image as shown in Fig. 1c.
This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
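A toy version of the moon-subtraction and cloud-cover steps described above, run on a synthetic image with assumed widths sx and sy (in practice these are estimated from the weighted second moments of the rows/columns, as the abstract explains):

```python
import numpy as np

def gaussian2d(shape, x0, y0, sx, sy, amp):
    """2D Gaussian surface with peak value amp at (x0, y0)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-0.5 * (((xx - x0) / sx) ** 2 + ((yy - y0) / sy) ** 2))

# Synthetic "grayscale" frame: a moon blob on a faint uniform background.
bg = 0.02
img = gaussian2d((100, 100), x0=40, y0=60, sx=5.0, sy=7.0, amp=0.9) + bg

# Locate the mean (x0, y0) at the maximum signal, as in the abstract:
y0, x0 = np.unravel_index(np.argmax(img), img.shape)
model = gaussian2d(img.shape, x0, y0, sx=5.0, sy=7.0, amp=img.max() - bg)
moonless = img - model                      # moon removed, background remains

# Cloud cover: fraction of pixels whose value exceeds the 0.07 threshold.
cover = float(np.mean(moonless > 0.07))
```

Here the cloudless synthetic sky yields a cover near zero; on a real frame the residual bright pixels above the threshold are the clouds.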
Adiabatic elimination of inertia of the stochastic microswimmer driven by α -stable noise
NASA Astrophysics Data System (ADS)
Noetel, Joerg; Sokolov, Igor M.; Schimansky-Geier, Lutz
2017-10-01
We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter will be modeled by a symmetric Lévy-stable (α-stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity, and the velocity's variance. At time scales larger than the relaxation time of the torque τϕ, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales t ≫ τϕ, is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as a normal, diffusive, Brownian motion with Gaussian increments.
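The symmetric α-stable driving noise can be sampled with the standard Chambers-Mallows-Stuck formula; the sketch below (with illustrative parameters, not the paper's) integrates the heading angle under a constant torque plus stable kicks, scaling the noise by dt^(1/α) per step in accordance with the self-similarity of stable processes:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(42)
dt, omega, alpha, n = 0.01, 1.0, 1.5, 10_000   # illustrative parameters
xi = symmetric_stable(alpha, n, rng)
# Heading angle: constant torque omega plus alpha-stable angular kicks.
phi = np.cumsum(omega * dt + dt ** (1.0 / alpha) * xi)
```

For α = 2 the sampler reduces to a Gaussian with variance 2, and for α = 1 to a Cauchy variable, so the same routine covers the whole symmetric stable family.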
NASA Astrophysics Data System (ADS)
Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid
2013-02-01
We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.
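The Gaussian-pdf Bayesian step underlying such joint inversions reduces, for a linearized forward operator, to damped least squares; the toy sketch below (random operator, synthetic data, illustrative covariances, not the lithospheric model itself) shows the MAP estimate with a weak Gaussian prior acting as parameter damping:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((20, 5))        # linearized forward operator (toy)
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true + 0.01 * rng.standard_normal(20)   # synthetic data with noise

Cd_inv = np.eye(20) / 0.01**2           # Gaussian data precision
Cm_inv = np.eye(5) / 10.0**2            # weak Gaussian prior (parameter damping)
m0 = np.zeros(5)                        # a priori parameter values

# MAP estimate of the Gaussian posterior (Bayesian damped least squares):
A = G.T @ Cd_inv @ G + Cm_inv
b = G.T @ Cd_inv @ d + Cm_inv @ m0
m_map = np.linalg.solve(A, b)
```

Smoothing constraints and a priori crustal thicknesses, as mentioned in the abstract, enter the same normal equations as additional rows of the prior precision.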
Chen, Chunyi; Yang, Huamin
2016-08-22
The changes in the radial content of orbital-angular-momentum (OAM) photonic states, described by Laguerre-Gaussian (LG) modes with a radial index of zero and subject to turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend only on two nondimensional parameters, i.e., the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes radially-adjacent-mode mixing, besides azimuthally-adjacent-mode mixing, in the propagated photonic states; the former is weaker than the latter. With the same Fresnel ratio, the probabilities that a photon can be found in the zero-index radial mode of intended OAM states behave very similarly as functions of the relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in the probabilities as the relative turbulence strength increases. A photon can be found in various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between various OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on what digit is sent.
Statistics of Stokes variables for correlated Gaussian fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliyahu, D.
1994-09-01
The joint and marginal probability distribution functions of the Stokes variables are derived for correlated Gaussian fields [an extension of D. Eliyahu, Phys. Rev. E 47, 2881 (1993)]. The statistics depend only on the first-moment (averaged) Stokes variables and have a universal form for S_1, S_2, and S_3. The statistics of the variables describing the Cartesian coordinates of the Poincaré sphere are also given.
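For instantaneous fields the Stokes variables satisfy the exact identity S_0^2 = S_1^2 + S_2^2 + S_3^2; a quick numerical sketch with correlated circular-Gaussian field components (the correlation value rho is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Correlated circular-Gaussian field components E_x, E_y (unit mean intensity):
z1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
z2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
rho = 0.6
Ex, Ey = z1, rho * z1 + np.sqrt(1 - rho**2) * z2

# Instantaneous Stokes variables:
S0 = np.abs(Ex)**2 + np.abs(Ey)**2
S1 = np.abs(Ex)**2 - np.abs(Ey)**2
S2 = 2.0 * np.real(Ex * np.conj(Ey))
S3 = 2.0 * np.imag(Ex * np.conj(Ey))
```

The field correlation shows up in the first-moment Stokes variables (here a nonzero mean S_2), which per the abstract is all the single-point statistics depend on.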
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Subodh; Singh, Ram Kishor, E-mail: ram007kishor@gmail.com; Sharma, R. P.
Terahertz (THz) generation by beating of two co-axial Gaussian laser beams, propagating in ripple density plasma, has been studied when both ponderomotive and relativistic nonlinearities are operative. When the two lasers co-propagate in rippled density plasma, electrons acquire a nonlinear velocity at beat frequency in the direction transverse to the direction of propagation. This nonlinear oscillatory velocity couples with the density ripple to generate a nonlinear current, which in turn generates THz radiation at the difference frequency. The necessary phase matching condition is provided by the density ripple. Relativistic ponderomotive focusing of the two lasers and its effects on yield of the generated THz amplitude have been discussed. Numerical results show that conversion efficiency of the order of 10^-3 can be achieved in the terahertz radiation generation with relativistic ponderomotive focusing.
Steady-state distributions of probability fluxes on complex networks
NASA Astrophysics Data System (ADS)
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. An additional transition, called hereafter a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. The other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are also studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.
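The gate construction can be mimicked on a small ring network: biasing one edge breaks detailed balance, and at stationarity the net probability flux is the same on every edge of the ring. A sketch with illustrative rates (not the paper's tree-like networks):

```python
import numpy as np

# Rates w[i, j]: transition rate from state j to state i on a 4-state ring.
n = 4
w = np.zeros((n, n))
for i in range(n):
    w[(i + 1) % n, i] = 1.0        # forward step i -> i+1
    w[(i - 1) % n, i] = 1.0        # backward step i -> i-1
w[1, 0] = 3.0                      # the "gate": external force biases 0 -> 1

# Master-equation generator; the stationary density p solves L p = 0.
L = w - np.diag(w.sum(axis=0))
p = np.linalg.svd(L)[2][-1]        # null vector of L
p = np.abs(p) / np.abs(p).sum()    # normalize to a probability vector

# Net stationary probability flux on each ring edge i -> i+1:
J = np.array([w[(i + 1) % n, i] * p[i] - w[i, (i + 1) % n] * p[(i + 1) % n]
              for i in range(n)])
```

With the biased gate the common flux J is strictly positive; setting the gate rate back to 1 restores detailed balance and J = 0.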
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
NASA Astrophysics Data System (ADS)
Wacks, Daniel; Konstantinou, Ilias; Chakraborty, Nilanjan
2018-04-01
The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable.
An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring
NASA Astrophysics Data System (ADS)
Li, J. Y.; Kitanidis, P. K.
2013-12-01
Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
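The classical stochastic (perturbed-observation) EnKF analysis step referenced above can be sketched in a few lines; dimensions, covariances, and the observation below are illustrative placeholders, not reservoir quantities:

```python
import numpy as np

rng = np.random.default_rng(7)
Ne, nx = 50, 3                              # ensemble size, state dimension
X = rng.standard_normal((nx, Ne)) \
    + np.array([[5.0], [2.0], [-1.0]])      # forecast ensemble (toy prior)

H = np.array([[1.0, 0.0, 0.0]])             # observe the first state component
R = np.array([[0.25]])                      # observation-error covariance
y = np.array([6.0])                         # the (synthetic) observation

# Ensemble anomalies and sample forecast covariance:
A = X - X.mean(axis=1, keepdims=True)
Pf = A @ A.T / (Ne - 1)

# Kalman gain and stochastic (perturbed-observation) analysis update:
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
Y = y[:, None] + np.sqrt(R[0, 0]) * rng.standard_normal((1, Ne))
Xa = X + K @ (Y - H @ X)
```

The sample covariance Pf replaces the explicit Jacobian and covariance propagation, which is exactly the computational shortcut the abstract credits to ensemble methods; the iterative variants discussed there repeat such updates on relinearized states.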
A Distant, X-Ray Luminous Cluster of Galaxies at Redshift 0.83
NASA Technical Reports Server (NTRS)
Donahue, Megan
1999-01-01
We have observed the most distant (z = 0.829) cluster of galaxies in the Einstein Extended Medium Sensitivity Survey (EMSS) with the ASCA and ROSAT satellites. We find an X-ray temperature of 12.3 (+3.1, -2.2) keV for this cluster, and the ROSAT map reveals significant substructure. The high temperature of MS1054-0321 is consistent both with its approximate velocity dispersion, based on the redshifts of 12 cluster members we have obtained at the Keck and Canada-France-Hawaii telescopes, and with its weak lensing signature. The X-ray temperature of this cluster implies a virial mass of approximately 7.4 x 10^14/h solar masses if the mean matter density in the universe equals the critical value (Omega_0 = 1), or larger if Omega_0 < 1. Finding such a hot, massive cluster in the EMSS is extremely improbable if clusters grew from Gaussian perturbations in an Omega_0 = 1 universe. Combining the assumptions that Omega_0 = 1 and that the initial perturbations were Gaussian with the observed X-ray temperature function at low redshift, we show that the probability of this cluster occurring in the volume sampled by the EMSS is less than a few times 10^-5. Nor is MS1054-0321 the only hot cluster at high redshift; the only two other z > 0.5 EMSS clusters already observed with ASCA also have temperatures exceeding 8 keV. Assuming again that the initial perturbations were Gaussian and Omega_0 = 1, we find that each one is improbable at the < 10^-2 level. These observations, along with the fact that the luminosities and temperatures of the high-z clusters all agree with the low-z L_X-T_X relation, argue strongly that Omega_0 < 1. Otherwise, the initial perturbations must be non-Gaussian, if these clusters' temperatures do indeed reflect their gravitational potentials.
Frequency modulation television analysis: Threshold impulse analysis. [with computer program
NASA Technical Reports Server (NTRS)
Hodge, W. H.
1973-01-01
A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
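The distribution of such products is easy to probe by Monte Carlo alongside the closed-form Meijer G / Fox H expressions; for example, by symmetry the CDF of the product at zero is 1/2, and for independent standard Gaussians the product has unit variance. A sketch (N = 3 and the sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 3, 200_000                   # N Gaussians per product
prod = rng.standard_normal((n_samples, N)).prod(axis=1)

# Empirical CDF of the product at a point t:
def ecdf(x, t):
    return float(np.mean(x <= t))

F0 = ecdf(prod, 0.0)                        # symmetry about zero gives F(0) = 1/2
```

Such empirical CDFs are a convenient check on the low-order truncations of the power-log series expansion described in the abstract.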
Multiple scattering and the density distribution of a Cs MOT.
Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J
2005-11-28
Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.
Gaussian windows: A tool for exploring multivariate data
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1990-01-01
Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
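The weighted mean and covariance seen through a Gaussian window can be computed directly; in the sketch below the window center and shape matrix are illustrative choices, and the data are synthetic standard-normal points:

```python
import numpy as np

def gaussian_window_stats(data, mu, Sigma):
    """Weighted sample mean/covariance of data (n x d) viewed through a
    Gaussian window centered at mu with shape matrix Sigma."""
    d = data - mu
    Sinv = np.linalg.inv(Sigma)
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Sinv, d))  # Gaussian weights
    w /= w.sum()
    mean = w @ data
    c = data - mean
    cov = (c * w[:, None]).T @ c
    return mean, cov

rng = np.random.default_rng(1)
data = rng.standard_normal((5000, 2))
mean, cov = gaussian_window_stats(data, mu=np.zeros(2), Sigma=4.0 * np.eye(2))
```

For N(0, I) data viewed through this window, the weighted density is itself Gaussian with variance 1/(1 + 1/4) = 0.8 per coordinate, which the weighted covariance recovers; comparing such local estimates across window placements is the exploratory procedure the abstract describes.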
Tracer diffusion in a sea of polymers with binding zones: mobile vs. frozen traps.
Samanta, Nairhita; Chakrabarti, Rajarshi
2016-10-19
We use molecular dynamics simulations to investigate the tracer diffusion in a sea of polymers with specific binding zones for the tracer. These binding zones act as traps. Our simulations show that the tracer can undergo normal yet non-Gaussian diffusion under certain circumstances, e.g., when the polymers with traps are frozen in space and the volume fraction and the binding strength of the traps are moderate. In this case, as the tracer moves, it experiences a heterogeneous environment and exhibits confined continuous time random walk (CTRW) like motion resulting in a non-Gaussian behavior. Also the long time dynamics becomes subdiffusive as the number or the binding strength of the traps increases. However, if the polymers are mobile then the tracer dynamics is Gaussian but could be normal or subdiffusive depending on the number and the binding strength of the traps. In addition, with increasing binding strength and number of polymer traps, the probability of the tracer being trapped increases. On the other hand, removing the binding zones does not result in trapping, even at comparatively high crowding. Our simulations also show that the trapping probability increases with the increasing size of the tracer and for a bigger tracer with the frozen polymer background the dynamics is only weakly non-Gaussian but highly subdiffusive. Our observations are in the same spirit as found in many recent experiments on tracer diffusion in polymeric materials and question the validity of using Gaussian theory to describe diffusion in a crowded environment in general.
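The normal-yet-non-Gaussian diffusion described above is commonly quantified by the one-dimensional non-Gaussian parameter alpha_2; a sketch comparing Gaussian steps with a two-state mobility mixture, a minimal stand-in for the heterogeneous trap environment (mixture parameters are assumptions, not simulation values):

```python
import numpy as np

def non_gaussian_parameter(dx):
    """1D non-Gaussian parameter alpha_2 = <dx^4> / (3 <dx^2>^2) - 1."""
    return float(np.mean(dx**4) / (3.0 * np.mean(dx**2) ** 2) - 1.0)

rng = np.random.default_rng(2)
# Gaussian increments (ordinary Brownian steps): alpha_2 ~ 0.
gauss_steps = rng.standard_normal(100_000)
a2_gauss = non_gaussian_parameter(gauss_steps)

# Two-state mobility mixture (fast/slow environments): alpha_2 > 0.
sigma = np.where(rng.random(100_000) < 0.5, 0.3, 2.0)
mixed_steps = sigma * rng.standard_normal(100_000)
a2_mixed = non_gaussian_parameter(mixed_steps)
```

A positive alpha_2 with a linear mean-square displacement is the signature of the normal but non-Gaussian regime reported for the frozen-trap case.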
Upscaling anomalous reactive kinetics (A+B-->C) from pore scale Lagrangian velocity analysis
NASA Astrophysics Data System (ADS)
De Anna, P.; Tartakovsky, A. M.; Le Borgne, T.; Dentz, M.
2011-12-01
Natural flow fields in porous media display a complex spatio-temporal organization due to heterogeneous geological structures at different scales. This multiscale disorder implies anomalous dispersion, mixing and reaction kinetics (Berkowitz et al. RG 2006, Tartakovsky PRE 2010). Here, we focus on the upscaling of anomalous kinetics arising from pore scale, non-Gaussian and correlated, velocity distributions. We consider reactive front simulations, where a component A displaces a component B that initially saturates the porous domain. The reactive component C is produced at the dispersive front located at the interface between the A and B domains. The simulations are performed with the smoothed particle hydrodynamics (SPH) method. As the mixing zone grows, the total mass of C produced increases with time. The scaling of this evolution with time differs from that which would be obtained from the homogeneous advection-dispersion-reaction equation. This anomalous kinetics is related to the spatial structure of the reactive mixture and its evolution with time under the combined action of advective and diffusive processes. We discuss the different scaling regimes that arise depending on the dominant process governing mixing. In order to upscale these processes, we analyze the Lagrangian velocity properties, which are characterized by non-Gaussian distributions and long-range temporal correlations. The main origin of these properties is the existence of very low velocity regions where solute particles can remain trapped for a long time. Another source of strong correlation is the channeling of flow in localized high-velocity regions, which creates finger-like structures in the concentration field. We show the spatially Markovian, and temporally non-Markovian, nature of the Lagrangian velocity field. An upscaled model can therefore be defined as a correlated continuous time random walk (Le Borgne et al. PRL 2008).
A key feature of this model is the definition of a transition probability density for Lagrangian velocities across a characteristic correlation distance. We quantify this transition probability density from pore scale simulations and use it in the effective stochastic model. In this framework, we investigate the ability of this effective model to represent correctly dispersion and mixing.
Rubinson, K A
1992-01-01
The underlying principles of the kinetics and equilibrium of a solitary sodium channel in the steady state are examined. Both the open and closed kinetics are postulated to result from round-trip excursions from a transition region that separates the openable and closed forms. Exponential behavior of the kinetics can have origins different from those in small-molecule systems. These differences suggest that the probability density functions (PDFs) that describe the time dependences of the open and closed forms arise from a distribution of rate constants. The distribution is likely to arise from a thermal modulation of the channel structure, and this provides a physical basis for the following three-variable equation: [formula; see text] Here, A0 is a scaling term, k is the mean rate constant, and sigma quantifies the Gaussian spread for the contributions of a range of effective rate constants. The maximum contribution is made by k, with faster and slower rates contributing less. (When sigma, the standard deviation of the spread, goes to zero, then p(t) = A0 e^(-kt).) The equation is applied to the single-channel steady-state probability density functions for batrachotoxin-treated sodium channels (Keller et al., 1986, J. Gen. Physiol. 88:1-23). The following characteristics are found: (a) The data for both open and closed forms of the channel are fit well with the above equation, which represents a Gaussian distribution of first-order rate processes. (b) The simple relationship [formula; see text] holds for the mean effective rate constants. Or, equivalently stated, the values of P open calculated from the k values closely agree with the P open values found directly from the PDF data. (c) In agreement with the known behavior of voltage-dependent rate constants, the voltage dependences of the mean effective rate constants for the opening and closing of the channel are equal and opposite over the voltage range studied.
That is, [formula; see text] "Bursts" are related to the well-known cage effect of solution chemistry. PMID:1312365
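The Gaussian-spread-of-rates form described above is easy to evaluate numerically. The sketch below is an illustrative reconstruction, not the paper's code; it assumes the mean rate kbar is large compared to sigma so the k grid stays positive, and as sigma goes to zero it recovers the single exponential A0 e^(-kt):

```python
import numpy as np

def pdf_gaussian_rates(t, A0, kbar, sigma, nk=4001):
    """Dwell-time PDF from a Gaussian spread of first-order rate
    constants: p(t) = A0 * integral g(k) exp(-k t) dk, with g a
    Gaussian of mean kbar and standard deviation sigma."""
    k = np.linspace(kbar - 6.0 * sigma, kbar + 6.0 * sigma, nk)
    dk = k[1] - k[0]
    g = np.exp(-0.5 * ((k - kbar) / sigma) ** 2)
    g /= g.sum() * dk                    # normalize on the grid
    return A0 * (np.exp(-np.outer(t, k)) * g).sum(axis=1) * dk

t = np.linspace(0.0, 5.0, 100)
p = pdf_gaussian_rates(t, A0=1.0, kbar=2.0, sigma=0.05)
# for small sigma this stays close to the single exponential exp(-kbar*t)
```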
Statistics of the geomagnetic secular variation for the past 5Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, this model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for the declination and inclination that would be observed at any site on the Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling specific properties of a general description to be tested. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Joint resonant CMB power spectrum and bispectrum estimation
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin
2016-02-01
We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
Statistics of the geomagnetic secular variation for the past 5 m.y
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1988-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, this model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for the declination and inclination that would be observed at any site on the Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling specific properties of a general description to be tested. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
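As a concrete illustration of a dimension-independent, function-space sampler of the kind this family generalizes, here is a minimal preconditioned Crank-Nicolson (pCN) step on a toy Gaussian target. This is a sketch under stated assumptions (identity prior covariance, illustrative likelihood), not the DILI implementation:

```python
import numpy as np

def pcn_step(x, log_like, beta, rng):
    """One preconditioned Crank-Nicolson Metropolis step for a target
    with N(0, I) Gaussian reference (prior) times exp(log_like)."""
    prop = np.sqrt(1.0 - beta ** 2) * x + beta * rng.standard_normal(x.shape)
    # the acceptance ratio involves only the likelihood, which is why
    # the step remains well defined as the discretization is refined
    if np.log(rng.uniform()) < log_like(prop) - log_like(x):
        return prop, True
    return x, False

# toy likelihood centred at 1 in every coordinate (posterior N(0.5, 0.5 I))
log_like = lambda x: -0.5 * np.sum((x - 1.0) ** 2)
rng = np.random.default_rng(5)
x, accepted = np.zeros(50), 0
for _ in range(5000):
    x, a = pcn_step(x, log_like, 0.3, rng)
    accepted += a
```

DILI replaces the isotropic scaling beta with operator weights that treat likelihood-informed directions differently from the prior-dominated complement.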
NASA Astrophysics Data System (ADS)
Christodoulidi, Helen; Bountis, Tassos; Tsallis, Constantino; Drossos, Lambros
2016-12-01
In the present work we study the Fermi-Pasta-Ulam (FPU) β-model involving long-range interactions (LRI) in both the quadratic and quartic potentials, by introducing two independent exponents α1 and α2 respectively, which make the forces decay with distance r. Our results demonstrate that weak chaos, in the sense of decreasing Lyapunov exponents, and q-Gaussian probability density functions (pdfs) of sums of the momenta occur only when long-range interactions are included in the quartic part. More importantly, for 0 ≤ α2 < 1, we obtain extrapolated values q ≡ q∞ > 1 as N → ∞, suggesting that these pdfs persist in that limit. On the other hand, when long-range interactions are imposed only on the quadratic part, strong chaos and purely Gaussian pdfs are always obtained for the momenta. We have also focused on similar pdfs for the particle energies and have obtained q_E-exponentials (with q_E > 1) when the quartic-term interactions are long-ranged; otherwise we get the standard Boltzmann-Gibbs weight, with q = 1. The values of q_E coincide, within small discrepancies, with the values of q obtained from the momentum distributions.
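For reference, the q-Gaussian form underlying these pdfs can be sketched in a few lines (unnormalized, with the standard convention that it reduces to a Gaussian as q → 1; the function name is illustrative):

```python
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian [1 - (1-q)*beta*x^2]_+^(1/(1-q)),
    which reduces to exp(-beta*x^2) as q -> 1.  For q > 1 the base
    is always positive and the tails are fatter than Gaussian."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * x ** 2)
    base = np.maximum(1.0 - (1.0 - q) * beta * x ** 2, 0.0)
    return base ** (1.0 / (1.0 - q))
```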
Anomaly detection of microstructural defects in continuous fiber reinforced composites
NASA Astrophysics Data System (ADS)
Bricker, Stephen; Simmons, J. P.; Przybyla, Craig; Hardie, Russell
2015-03-01
Ceramic matrix composites (CMC) with continuous fiber reinforcements have the potential to enable the next generation of high speed hypersonic vehicles and/or significant improvements in gas turbine engine performance due to their exhibited toughness when subjected to high mechanical loads at extreme temperatures (2200 °F and above). Reinforced fiber composites (RFC) provide increased fracture toughness, crack growth resistance, and strength, though little is known about how stochastic variation and imperfections in the material affect material properties. In this work, tools are developed for quantifying anomalies within the microstructure at several scales. The detection and characterization of anomalous microstructure is a critical step in linking production techniques to properties, as well as in accurate material simulation and property prediction for the integrated computational materials engineering (ICME) of RFC based components. It is desired to find statistical outliers for any number of material characteristics such as fibers, fiber coatings, and pores. Here, fiber orientation, or `velocity', and `velocity' gradient are developed and examined for anomalous behavior. Categorizing anomalous behavior in the CMC is approached by multivariate Gaussian mixture modeling. A Gaussian mixture is employed to estimate the probability density function (PDF) of the features in question, and anomalies are classified by their likelihood of belonging to the statistical normal behavior for that feature.
Gaussian noise and time-reversal symmetry in nonequilibrium Langevin models.
Vainstein, M H; Rubí, J M
2007-03-01
We show that in driven systems the Gaussian nature of the fluctuating force and time reversibility are equivalent properties. This result together with the potential condition of the external force drastically restricts the form of the probability distribution function, which can be shown to satisfy time-independent relations. We have corroborated this feature by explicitly analyzing a model for the stretching of a polymer and a model for a suspension of noninteracting Brownian particles in steady flow.
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Campbell, G. S.; Ganzer, V. M.; Joppa, R. G.
1974-01-01
A method is described for generating time histories which model the frequency content and certain non-Gaussian probability characteristics of atmospheric turbulence, including the large gusts and patchy nature of turbulence. Methods for generating these time histories using either analog or digital computation are described. A STOL airplane was programmed into a 6-degree-of-freedom flight simulator, and turbulence time histories from several atmospheric turbulence models were introduced. The pilots' reactions are described.
A path integral approach to the Hodgkin-Huxley model
NASA Astrophysics Data System (ADS)
Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando
2017-11-01
To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the sodium and potassium voltage-dependent channels that open or close their gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise with parameter q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but instead by considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
Non-Gaussian noise-weakened stability in a foraging colony system with time delay
NASA Astrophysics Data System (ADS)
Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong
2018-02-01
In this paper, the dynamical properties of a foraging colony system with time delay and non-Gaussian noise were investigated. Using the delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and the normalized correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce a transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves. Numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity; this maximum is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it; namely, the time delay and the departure from Gaussian noise play opposite roles in the ART and NCF.
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
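A quick empirical check of the Gaussian-copula hypothesis can be sketched as follows: map each marginal to normal scores through its ranks, then measure the correlation, which is invariant under monotone transformations of the marginals. The synthetic data and names below are illustrative, not the paper's density field:

```python
import numpy as np
from scipy.stats import norm, pearsonr

def normal_scores(v):
    """Map a sample to standard-normal scores via its empirical ranks."""
    ranks = np.argsort(np.argsort(v)) + 1
    return norm.ppf(ranks / (len(v) + 1.0))

# synthetic pair with a Gaussian copula (rho = 0.7) but strongly
# non-Gaussian marginals
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=5000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3

# the normal-scores correlation recovers the copula parameter,
# unaffected by the monotone marginal transformations
rho = pearsonr(normal_scores(x), normal_scores(y))[0]
```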
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all-digital phase locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements and their functions and the phase lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error states, taken modulo 2π, form a finite-state Markov chain, which enables the calculation of steady-state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
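The GMR step can be illustrated with a hand-specified two-component joint mixture (all parameters below are hypothetical, not fitted streamflow values): conditioning each Gaussian on the covariate gives a predictive mixture whose weights are the component responsibilities at x.

```python
import numpy as np

# hypothetical two-component joint mixture over (x, y)
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 1.0],              # (mu_x, mu_y) of component 1
                  [4.0, -1.0]])            # (mu_x, mu_y) of component 2
covs = np.array([[[1.0, 0.6], [0.6, 1.0]],
                 [[1.0, -0.4], [-0.4, 1.0]]])

def gmr_mean(x):
    """Conditional mean E[y | x]: responsibility-weighted sum of the
    per-component conditional means mu_y + (c_xy / c_xx) * (x - mu_x)."""
    resp = np.array([w * np.exp(-0.5 * (x - m[0]) ** 2 / c[0, 0])
                     / np.sqrt(c[0, 0])
                     for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum()
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (x - m[0])
                     for m, c in zip(means, covs)])
    return float(resp @ cond)
```

In the paper's setting the mixture is fitted per HMM state and x stacks the antecedent flows at the local and surrounding stations; the full conditional mixture (not just its mean) supplies the probabilistic forecast.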
The large-scale gravitational bias from the quasi-linear regime.
NASA Astrophysics Data System (ADS)
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_{p,q} defined by <δ^p(x_1) δ^q(x_2)>_c = C_{p,q} <δ^2(x)>^(p+q-2) <δ(x_1) δ(x_2)>, where δ(x) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large-separation approximation in an explicit numerical Monte Carlo integration of the C_{2,1} parameter as a function of |x_1 - x_2|. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
NASA Technical Reports Server (NTRS)
Falls, L. W.
1975-01-01
Vandenberg Air Force Base (AFB), California, wind component statistics are presented for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.
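Under the Gaussian model, each tabulated percentile is just a normal quantile of the monthly component-wind statistics. A sketch with made-up monthly parameters (the mean and standard deviation below are illustrative, not values from the report):

```python
import numpy as np
from scipy.stats import norm

# hypothetical monthly mean and standard deviation of one wind component
mean_wind, sd_wind = 5.0, 8.0                  # m/s, illustrative only

# a subset of the report's percentile levels; 0.135% and 99.865%
# correspond to the -3 sigma and +3 sigma points of the normal model
levels = [0.135, 2.28, 15.9, 50.0, 84.1, 97.72, 99.865]
quantiles = [mean_wind + sd_wind * norm.ppf(p / 100.0) for p in levels]
```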
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
De-blending deep Herschel surveys: A multi-wavelength approach
NASA Astrophysics Data System (ADS)
Pearson, W. J.; Wang, L.; van der Tak, F. F. S.; Hurley, P. D.; Burgarella, D.; Oliver, S. J.
2017-07-01
Aims: Cosmological surveys in the far-infrared are known to suffer from confusion. The Bayesian de-blending tool, XID+, currently provides one of the best ways to de-confuse deep Herschel SPIRE images, using a flat flux density prior. The aim of this work is to demonstrate that existing multi-wavelength data sets can be exploited to improve XID+ by providing an informed prior, resulting in more accurate and precise extracted flux densities. Methods: Photometric data for galaxies in the COSMOS field were used to constrain spectral energy distributions (SEDs) using the fitting tool CIGALE. These SEDs were used to create Gaussian prior estimates in the SPIRE bands for XID+. The multi-wavelength photometry and the extracted SPIRE flux densities were run through CIGALE again to allow us to compare the performance of the two priors. Inferred ALMA flux densities (F_infer^ALMA), at 870 μm and 1250 μm, from the best-fitting SEDs of the second CIGALE run were compared with measured ALMA flux densities (F_meas^ALMA) as an independent performance validation. Similar validations were conducted with the SED modelling and fitting tool MAGPHYS and with modified black-body functions to test for model dependency. Results: We demonstrate a clear improvement in agreement between the flux densities extracted with XID+ and existing data at other wavelengths when using the new informed Gaussian prior over the original uninformed prior. The residuals between F_meas^ALMA and F_infer^ALMA were calculated. Expressed as a multiple of the ALMA error (σ), these residuals have a smaller standard deviation (7.95σ for the Gaussian prior compared to 12.21σ for the flat prior), a reduced mean (1.83σ compared to 3.44σ), and a reduced skew to positive values (7.97 compared to 11.50). These results were determined not to be significantly model dependent. This yields statistically more reliable SPIRE flux densities and hence statistically more reliable infrared luminosity estimates.
Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei
2018-02-01
For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single-Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of the pure-seawater images in the Fourier domain, being more stable than the gray value sequences of each background pixel in the spatial domain, are modeled as Gaussian. Next, a probability weighted matrix is built based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, the false-alarm points are removed utilizing ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
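The two RP measures compared with theory above can be computed directly from a scalar series with embedding dimension 1. A minimal sketch (illustrative names; the line of identity is kept for simplicity, whereas practical RQA implementations often exclude it):

```python
import numpy as np

def run_lengths(d):
    """Lengths of consecutive runs of True values in a 1-D boolean array."""
    lengths, cur = [], 0
    for v in d:
        if v:
            cur += 1
        elif cur:
            lengths.append(cur)
            cur = 0
    if cur:
        lengths.append(cur)
    return lengths

def recurrence_measures(x, eps, lmin=2):
    """REC and DET of a scalar series with embedding dimension 1."""
    R = np.abs(x[:, None] - x[None, :]) <= eps
    rec = R.mean()                       # recurrence rate
    total = dets = 0
    for k in range(-(len(x) - 1), len(x)):
        for L in run_lengths(np.diagonal(R, k)):
            total += L
            if L >= lmin:
                dets += L
    return rec, dets / total             # (REC, DET)
```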
Microwave backscattering theory and active remote sensing of the ocean surface
NASA Technical Reports Server (NTRS)
Brown, G. S.; Miller, L. S.
1977-01-01
The status is reviewed of electromagnetic scattering theory relative to the interpretation of microwave remote sensing data acquired from spaceborne platforms over the ocean surface. Particular emphasis is given to the assumptions which are either implicit or explicit in the theory. The multiple scale scattering theory developed during this investigation is extended to non-Gaussian surface statistics. It is shown that the important statistic for this case is the probability density function of the small-scale heights conditioned on the large-scale slopes; this dependence may explain the anisotropic scattering measurements recently obtained with the AAFE Radscat. It is noted that present surface measurements are inadequate to verify or reject the existing scattering theories. Surface measurements are recommended for qualifying sensor data from radar altimeters and scatterometers. Additional scattering investigations are suggested for imaging type radars employing synthetically generated apertures.
Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-07-01
The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
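The optimization described above can be illustrated in the simplest scalar case: a single count y ~ Poisson(exp(x)) with a Gaussian prior on x, where the lower bound has a closed form and can be maximized numerically. This is only a toy sketch of the idea (the paper treats the full operator-valued problem, and the log link and function name here are my assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def fit_variational_gaussian(y, mu0, sigma0):
    """Fit q = N(m, s^2) to the posterior of x in y ~ Poisson(exp(x)),
    x ~ N(mu0, sigma0^2), by maximizing the evidence lower bound
    (equivalently, minimizing KL(q || posterior))."""
    def neg_elbo(params):
        m, log_s = params
        s2 = np.exp(2.0 * log_s)
        # E_q[log p(y|x)] up to an additive constant (log-link Poisson)
        expected_loglik = y * m - np.exp(m + 0.5 * s2)
        # KL(q || prior) up to an additive constant
        kl_prior = ((m - mu0) ** 2 + s2) / (2.0 * sigma0 ** 2) - log_s
        return -(expected_loglik - kl_prior)
    res = minimize(neg_elbo, x0=[0.0, 0.0])
    m, log_s = res.x
    return m, np.exp(log_s)
```

Note how the optimal variance satisfies 1/s^2 = exp(m + s^2/2) + 1/sigma0^2, so the fitted Gaussian is always narrower than the prior, mirroring the covariance-penalizing Tikhonov interpretation in the abstract.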
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc ≥ 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
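The Monte Carlo Pc computation mentioned above can be sketched in its simplest Gaussian-baseline form: sample the relative position at TCA from the covariance and count the fraction of samples inside the combined hard-body sphere. (The study itself propagates equinoctial-element samples precisely to avoid Cartesian-Gaussian artifacts; this sketch keeps the linearized Cartesian assumption, and the function name is illustrative.)

```python
import numpy as np

def monte_carlo_pc(rel_mean, rel_cov, hard_body_radius, n_samples=200_000, seed=0):
    """Monte Carlo estimate of collision probability: draw relative-position
    samples at TCA from N(rel_mean, rel_cov) and count the fraction that
    falls inside the combined hard-body sphere."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(rel_mean, rel_cov, size=n_samples)
    hits = np.linalg.norm(samples, axis=1) < hard_body_radius
    return hits.mean()
```

With a miss distance of zero, unit isotropic covariance, and unit radius, the estimate should approach P(chi-squared with 3 dof < 1) ≈ 0.199, which is a convenient sanity check.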
NASA Astrophysics Data System (ADS)
K., Nirmal; A. G., Sreejith; Mathew, Joice; Sarpotdar, Mayuresh; Suresh, Ambily; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant
2016-07-01
We describe the characterization and removal of noise present in the Inertial Measurement Unit (IMU) MPU-6050, which was initially used in an attitude sensor and later in the development of a pointing system for small balloon-borne astronomical payloads. We found that the performance of the IMU degraded with time because of the accumulation of different errors. Using the Allan variance analysis method, we identified the different noise components present in the IMU and verified the results with power spectral density (PSD) analysis. We attempted to remove the high-frequency noise using smoothing filters such as the moving-average filter and the Savitzky-Golay (SG) filter. Although these filters removed some high-frequency noise, their performance was not satisfactory for our application. Using probability density analysis, we found that the random noise present in the IMU was white Gaussian in nature; hence, we used a Kalman filter to remove it, which gave good real-time performance.
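The Allan variance analysis used above can be sketched with a minimal non-overlapping estimator. The slope of log(sigma) versus log(tau) identifies the noise type: -1/2 for white noise, 0 near the bias-instability floor, +1/2 for rate random walk. (This is a generic sketch, not the authors' IMU-specific pipeline; the function name is illustrative.)

```python
import numpy as np

def allan_deviation(data, fs, m):
    """Non-overlapping Allan deviation of a rate signal sampled at fs Hz,
    for averaging time tau = m / fs."""
    data = np.asarray(data, dtype=float)
    n_bins = len(data) // m
    # Average the signal over consecutive clusters of m samples each
    bins = data[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(bins) ** 2)
    return m / fs, np.sqrt(avar)
```

For pure white noise of unit variance, the Allan deviation at averaging length m is 1/sqrt(m), which makes the estimator easy to validate.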
PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, J. Berian, E-mail: berian@berkeley.edu
2012-05-20
We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity that appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
NASA Astrophysics Data System (ADS)
Hashemzadeh, M.
2018-01-01
Self-focusing and defocusing of Gaussian laser beams in collisional inhomogeneous plasmas are investigated in the presence of various laser intensities and linear density and temperature ramps. Considering the ponderomotive force and using the momentum transfer and energy equations, the nonlinear electron density is derived. Taking into account the paraxial approximation and nonlinear electron density, a nonlinear differential equation, governing the focusing and defocusing of the laser beam, is obtained. Results show that in the absence of ramps the laser beam is focused between a minimum and a maximum value of laser intensity. For a certain value of laser intensity and initial electron density, the self-focusing process occurs in a temperature range which reaches its maximum at turning point temperature. However, the laser beam is converged in a narrow range for various amounts of initial electron density. It is indicated that the σ2 parameter and its sign can affect the self-focusing process for different values of laser intensity, initial temperature, and initial density. Finally, it is found that although the electron density ramp-down diverges the laser beam, electron density ramp-up improves the self-focusing process.
On the application of Rice's exceedance statistics to atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Chen, W. Y.
1972-01-01
Discrepancies produced by the application of Rice's exceedance statistics to atmospheric turbulence are examined. First- and second-order densities from several data sources have been measured for this purpose. Particular care was taken to select turbulence segments with a stationary mean and variance over the entire record. Results show that even for a stationary segment of turbulence, the process is still highly non-Gaussian, in spite of a Gaussian appearance for its first-order distribution. Data also indicate strongly non-Gaussian second-order distributions. It is therefore concluded that even stationary atmospheric turbulence with a normal first-order distribution cannot be considered a Gaussian process, and consequently the application of Rice's exceedance statistics should be approached with caution.
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
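The accept-reject ABC step underlying the constraints above can be sketched generically: draw parameters from the prior, forward-simulate a summary statistic, and keep draws whose statistic falls within a tolerance of the observation, never evaluating a likelihood. The summary statistic, prior, and tolerance below are placeholders, not the peak-count pipeline of the paper:

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, epsilon,
                  n_draws=100_000, seed=0):
    """Rejection ABC: accepted parameter draws approximate the posterior
    p(theta | observed_stat) without any likelihood evaluation."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) < epsilon:
            accepted.append(theta)
    return np.array(accepted)
```

A toy check: inferring the mean of a Gaussian from its sample mean recovers a posterior centered on the observed value, and the acceptance rate shrinks with epsilon, which is why the iterative (population) variants mentioned in the abstract are preferred in practice.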
The effects of velocities and lensing on moments of the Hubble diagram
NASA Astrophysics Data System (ADS)
Macaulay, E.; Davis, T. M.; Scovacricchi, D.; Bacon, D.; Collett, T.; Nichol, R. C.
2017-05-01
We consider the dispersion on the supernova distance-redshift relation due to peculiar velocities and gravitational lensing, and the sensitivity of these effects to the amplitude of the matter power spectrum. We use the Method-of-the-Moments (MeMo) lensing likelihood developed by Quartin et al., which accounts for the characteristic non-Gaussian distribution caused by lensing magnification with measurements of the first four central moments of the distribution of magnitudes. We build on the MeMo likelihood by including the effects of peculiar velocities directly into the model for the moments. In order to measure the moments from sparse numbers of supernovae, we take a new approach using Kernel density estimation to estimate the underlying probability density function of the magnitude residuals. We also describe a bootstrap re-sampling approach to estimate the data covariance matrix. We then apply the method to the joint light-curve analysis (JLA) supernova catalogue. When we impose only that the intrinsic dispersion in magnitudes is independent of redshift, we find σ _8=0.44^{+0.63}_{-0.44} at the one standard deviation level, although we note that in tests on simulations, this model tends to overestimate the magnitude of the intrinsic dispersion, and underestimate σ8. We note that the degeneracy between intrinsic dispersion and the effects of σ8 is more pronounced when lensing and velocity effects are considered simultaneously, due to a cancellation of redshift dependence when both effects are included. Keeping the model of the intrinsic dispersion fixed as a Gaussian distribution of width 0.14 mag, we find σ _8 = 1.07^{+0.50}_{-0.76}.
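The kernel density estimation step used above for sparse magnitude residuals can be sketched in one dimension: place a Gaussian of width `bandwidth` on every sample and average. Silverman's rule-of-thumb default and the function name are my assumptions, not the paper's implementation:

```python
import numpy as np

def kde_pdf(samples, grid, bandwidth=None):
    """One-dimensional Gaussian kernel density estimate evaluated on grid.

    Larger bandwidths give smoother but more biased density estimates;
    the default is Silverman's rule of thumb."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    if bandwidth is None:
        bandwidth = 1.06 * samples.std() * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth
```

Moments of the residual distribution (as needed for the MeMo likelihood) can then be taken either from the samples directly or from the smoothed density.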
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
NASA Astrophysics Data System (ADS)
Xiang, Yu; Xu, Buqing; Mišta, Ladislav; Tufarelli, Tommaso; He, Qiongyi; Adesso, Gerardo
2017-10-01
Einstein-Podolsky-Rosen (EPR) steering is an asymmetric form of correlations which is intermediate between quantum entanglement and Bell nonlocality, and can be exploited as a resource for quantum communication with one untrusted party. In particular, steering of continuous-variable Gaussian states has been extensively studied theoretically and experimentally, as a fundamental manifestation of the EPR paradox. While most of these studies focused on quadrature measurements for steering detection, two recent works revealed that there exist Gaussian states which are only steerable by suitable non-Gaussian measurements. In this paper we perform a systematic investigation of EPR steering of bipartite Gaussian states by pseudospin measurements, complementing and extending previous findings. We first derive the density-matrix elements of two-mode squeezed thermal Gaussian states in the Fock basis, which may be of independent interest. We then use such a representation to investigate steering of these states as detected by a simple nonlinear criterion, based on second moments of the correlation matrix constructed from pseudospin operators. This analysis reveals previously unexplored regimes where non-Gaussian measurements are shown to be more effective than Gaussian ones to witness steering of Gaussian states in the presence of local noise. We further consider an alternative set of pseudospin observables, whose expectation value can be expressed more compactly in terms of Wigner functions for all two-mode Gaussian states. However, according to the adopted criterion, these observables are found to be always less sensitive than conventional Gaussian observables for steering detection. Finally, we investigate continuous-variable Werner states, which are non-Gaussian mixtures of Gaussian states, and find that pseudospin measurements are always more effective than Gaussian ones to reveal their steerability. Our results provide useful insights on the role of non-Gaussian measurements in characterizing quantum correlations of Gaussian and non-Gaussian states of continuous-variable quantum systems.
STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp
2015-03-10
Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.
Finding SDSS Galaxy Clusters in 4-dimensional Color Space Using the False Discovery Rate
NASA Astrophysics Data System (ADS)
Nichol, R. C.; Miller, C. J.; Reichart, D.; Wasserman, L.; Genovese, C.; SDSS Collaboration
2000-12-01
We describe a recently developed statistical technique that provides a meaningful cut-off in probability-based decision making. We are concerned with multiple testing, where each test produces a well-defined probability (or p-value). By well-defined, we mean that the null hypothesis used to determine the p-value is fully understood and appropriate. The method is entitled False Discovery Rate (FDR) and its largest advantage over other measures is that it allows one to specify a maximal amount of acceptable error. As an example of this tool, we apply FDR to a four-dimensional clustering algorithm using SDSS data. For each galaxy (or test galaxy), we count the number of neighbors that fit within one standard deviation of a four dimensional Gaussian centered on that test galaxy. The mean and standard deviation of that Gaussian are determined from the colors and errors of the test galaxy. We then take that same Gaussian and place it on a random selection of n galaxies and make a similar count. In the limit of large n, we expect the median count around these random galaxies to represent a typical field galaxy. For every test galaxy we determine the probability (or p-value) that it is a field galaxy based on these counts. A low p-value implies that the test galaxy is in a cluster environment. Once we have a p-value for every galaxy, we use FDR to determine at what level we should make our probability cut-off. Once this cut-off is made, we have a final sample of galaxies that are cluster-like galaxies. Using FDR, we also know the maximum amount of field contamination in our cluster galaxy sample. We present our preliminary galaxy clustering results using these methods.
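The FDR cut-off described above is commonly implemented as the Benjamini-Hochberg step-up procedure: sort the p-values, find the largest k with p_(k) <= alpha * k / m, and reject the k smallest hypotheses. A minimal sketch (assuming independent or positively dependent tests; the function name is illustrative):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR control: returns a boolean rejection mask,
    guaranteeing the expected fraction of false discoveries among the
    rejections is at most alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest passing rank
        reject[order[:k + 1]] = True        # reject all smaller p-values too
    return reject
```

Applied to the per-galaxy p-values of the abstract, the mask selects the cluster-like galaxies while bounding the expected field contamination by alpha.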
MacKenzie, Donald; Spears, Taylor
2014-06-01
Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS yields a significant improvement in vehicle state accuracy and outperforms the existing filtering and smoothing methods.
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional case. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
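The one-dimensional matched filter itself is a single correlation: slide the known template over the data and normalize so the output at the true signal location estimates the signal amplitude. A minimal sketch under white Gaussian noise (the peak-PDF correction proposed in the paper is not reproduced; the function name is illustrative):

```python
import numpy as np

def matched_filter(data, template):
    """Correlate data against a known template; under additive white
    Gaussian noise this maximizes the SNR at the template location.
    Normalized so the output peak estimates the signal amplitude."""
    template = np.asarray(template, dtype=float)
    norm = np.dot(template, template)
    return np.correlate(data, template, mode='same') / norm
```

The pitfall the abstract warns about is visible here: scanning the whole output for its maximum and treating that maximum as if its position were known a priori overstates the significance, because the peak of a noise field is itself a biased statistic.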
NASA Astrophysics Data System (ADS)
Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk
2015-06-01
The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group, except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during prostate IMRT optimization, patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
Robust non-Gaussian statistics and long-range correlation of total ozone
NASA Astrophysics Data System (ADS)
Toumi, R.; Syroka, J.; Barnes, C.; Lewis, P.
2001-01-01
Three long-term total ozone time series at Camborne, Lerwick and Arosa are examined for their statistical properties. Non-Gaussian behaviour is seen for all locations. There are large interannual fluctuations in the higher moments of the probability distribution. However, only the mean for all stations and the summer standard deviation at Lerwick show significant trends. This suggests that there has been no long-term change in the stratospheric circulation, but there are decadal variations. The time series can also be characterised as scale-invariant with a Hurst exponent of about 0.8 for all three sites. The Arosa time series was found to be weakly intermittent, in agreement with the non-Gaussian characteristics of the data set.
Coronal loop seismology using damping of standing kink oscillations by mode coupling
NASA Astrophysics Data System (ADS)
Pascoe, D. J.; Goddard, C. R.; Nisticò, G.; Anfinogentov, S.; Nakariakov, V. M.
2016-05-01
Context. Kink oscillations of solar coronal loops are frequently observed to be strongly damped. The damping can be explained by mode coupling on the condition that loops have a finite inhomogeneous layer between the higher density core and lower density background. The damping rate depends on the loop density contrast ratio and inhomogeneous layer width. Aims: The theoretical description for mode coupling of kink waves has been extended to include the initial Gaussian damping regime in addition to the exponential asymptotic state. Observation of these damping regimes would provide information about the structuring of the coronal loop and so provide a seismological tool. Methods: We consider three examples of standing kink oscillations observed by the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) for which the general damping profile (Gaussian and exponential regimes) can be fitted. Determining the Gaussian and exponential damping times allows us to perform seismological inversions for the loop density contrast ratio and the inhomogeneous layer width normalised to the loop radius. The layer width and loop minor radius are found separately by comparing the observed loop intensity profile with forward modelling based on our seismological results. Results: The seismological method which allows the density contrast ratio and inhomogeneous layer width to be simultaneously determined from the kink mode damping profile has been applied to observational data for the first time. This allows the internal and external Alfvén speeds to be calculated, and estimates for the magnetic field strength can be dramatically improved using the given plasma density. Conclusions: The kink mode damping rate can be used as a powerful diagnostic tool to determine the coronal loop density profile. This information can be used for further calculations such as the magnetic field strength or phase mixing rate.
NASA Technical Reports Server (NTRS)
Isar, Aurelian
1995-01-01
The harmonic oscillator with dissipation is studied within the framework of the Lindblad theory for open quantum systems. By using the Wang-Uhlenbeck method, the Fokker-Planck equation, obtained from the master equation for the density operator, is solved for the Wigner distribution function, subject to either the Gaussian type or the delta-function type of initial conditions. The obtained Wigner functions are two-dimensional Gaussians with different widths. Then a closed expression for the density operator is extracted. The entropy of the system is subsequently calculated and its temporal behavior shows that this quantity relaxes to its equilibrium value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Daniel; Horn, Bart; /SLAC /Stanford U., Phys. Dept.
2009-06-19
We analyze a distinctive mechanism for inflation in which particle production slows down a scalar field on a steep potential, and show how it descends from angular moduli in string compactifications. The analysis of density perturbations - taking into account the integrated effect of the produced particles and their quantum fluctuations - requires somewhat new techniques that we develop. We then determine the conditions for this effect to produce sixty e-foldings of inflation with the correct amplitude of density perturbations at the Gaussian level, and show that these requirements can be straightforwardly satisfied. Finally, we estimate the amplitude of the non-Gaussianity in the power spectrum and find a significant equilateral contribution.
Numerical modeling on carbon fiber composite material in Gaussian beam laser based on ANSYS
NASA Astrophysics Data System (ADS)
Luo, Ji-jun; Hou, Su-xia; Xu, Jun; Yang, Wei-jun; Zhao, Yun-fang
2014-02-01
Based on heat transfer theory and the finite element method, a macroscopic ablation model of a surface irradiated by a Gaussian laser beam is built, and the temperature field and the development of thermal ablation are calculated and analyzed using the finite element software ANSYS. The calculations show that the ablation profile of the material differs under different irradiation conditions. The laser-irradiated surface is a curved surface rather than a flat one, with its lowest point receiving the highest power density. The results show that the higher the laser power density absorbed by the material surface, the faster the irradiated surface regresses.
Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model
NASA Astrophysics Data System (ADS)
Mankin, Romi; Paekivi, Sander
2018-01-01
The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent αc≈0.402 , which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model at internal and external noises are also discussed.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
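The exponentially modified Gaussian (EMG) appearing in the mixture models above has a closed-form density (the convolution of a Gaussian with an exponential). A minimal sketch; the parameter values are illustrative, not taken from the paper:

```python
import math

def emg_pdf(x, mu, sigma, lam):
    """Exponentially modified Gaussian density: convolution of
    N(mu, sigma^2) with an Exponential(lam) distribution."""
    arg = (mu + lam * sigma**2 - x) / (math.sqrt(2.0) * sigma)
    return (lam / 2.0) * math.exp(
        (lam / 2.0) * (2.0 * mu + lam * sigma**2 - 2.0 * x)
    ) * math.erfc(arg)

# Crude check that the density integrates to ~1 over a wide grid.
dx = 0.001
total = sum(emg_pdf(k * dx, mu=2.0, sigma=0.3, lam=1.5) * dx
            for k in range(-5000, 30000))
```

The long exponential tail toward larger x is what makes the EMG a common model for tailing chromatographic peaks.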
Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem.
Burnecki, Krzysztof; Wylomanska, Agnieszka; Chechkin, Aleksei
2015-01-01
In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or non-Gaussian stable distribution by examining its rate of convergence. The method allows one to discriminate between stable and various non-stable distributions. The test allows one to differentiate between distributions which appear the same according to the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws as well as between the stable and tempered stable, the cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with so-called L-H transition.
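The idea of examining the rate of convergence of normalized sums can be sketched numerically: for a law in the Gaussian domain of attraction, the spread of block sums scaled by sqrt(n) stabilizes, while for a heavy-tailed law with tail index below 2 it keeps growing. The block sizes and the Pareto tail index below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_scale(draw, block_sizes, n_blocks=2000):
    """Std. dev. of block sums normalized by sqrt(n): roughly flat for laws in
    the Gaussian domain of attraction, growing for heavy-tailed laws."""
    out = []
    for n in block_sizes:
        sums = draw((n_blocks, n)).sum(axis=1)
        out.append(sums.std() / np.sqrt(n))
    return out

blocks = [10, 100, 1000]
light = sum_scale(lambda s: rng.exponential(1.0, s), blocks)       # finite variance
heavy = sum_scale(lambda s: rng.pareto(1.5, s) + 1.0, blocks)      # tail index 1.5

light_growth = light[-1] / light[0]   # ~1: S_n / sqrt(n) stabilizes
heavy_growth = heavy[-1] / heavy[0]   # grows, since sums scale like n^(1/1.5)
```

Plotting these normalized spreads against n is essentially the visual test: a flat curve points to the Gaussian basin, a rising curve to a non-Gaussian stable limit.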
Potential landscape and flux field theory for turbulence and nonequilibrium fluid systems
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhang, Feng; Wang, Jin
2018-02-01
Turbulence is a paradigm for far-from-equilibrium systems without time reversal symmetry. To capture the nonequilibrium irreversible nature of turbulence and investigate its implications, we develop a potential landscape and flux field theory for turbulent flow and more general nonequilibrium fluid systems governed by stochastic Navier-Stokes equations. We find that equilibrium fluid systems with time reversibility are characterized by a detailed balance constraint that quantifies the detailed balance condition. In nonequilibrium fluid systems with nonequilibrium steady states, detailed balance breaking leads directly to a pair of interconnected consequences, namely, the non-Gaussian potential landscape and the irreversible probability flux, forming a 'nonequilibrium trinity'. The nonequilibrium trinity characterizes the nonequilibrium irreversible essence of fluid systems with intrinsic time irreversibility and is manifested in various aspects of these systems. The nonequilibrium stochastic dynamics of fluid systems including turbulence with detailed balance breaking is shown to be driven by both the non-Gaussian potential landscape gradient and the irreversible probability flux, together with the reversible convective force and the stochastic stirring force. We reveal an underlying connection of the energy flux essential for turbulence energy cascade to the irreversible probability flux and the non-Gaussian potential landscape generated by detailed balance breaking. Using the energy flux as a center of connection, we demonstrate that the four-fifths law in fully developed turbulence is a consequence and reflection of the nonequilibrium trinity. We also show how the nonequilibrium trinity can affect the scaling laws in turbulence.
Sato, Hiroshi
1984-08-01
Let μ and μ₁ be probability measures on a locally convex Hausdorff real topological linear space E. C.R. Baker [1] posed the …
Predicting structures in the Zone of Avoidance
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan
2017-11-01
The Zone of Avoidance (ZOA), whose emptiness is an artefact of our Galaxy's dust, has been challenging observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, and no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes for local Universe-like simulations in smoothed high-density peak studies. Far from generating completely `random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability distribution function for ZOA grid cells to be highly overdense is a Gaussian with a 15 per cent mean in the random case, while that of the constrained case exhibits large tails. This implies that areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, like those of the Puppis 3 cluster, show a remarkable agreement and allow us to assert the presence of the Vela supercluster, recently highlighted by observations, at about 180 h⁻¹ Mpc, right behind the thickest dust layers of our Galaxy.
Kobayashi, Amane; Sekiguchi, Yuki; Takayama, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2014-11-17
Coherent X-ray diffraction imaging (CXDI) is a lensless imaging technique that is suitable for visualizing the structures of non-crystalline particles with micrometer to sub-micrometer dimensions from material science and biology. One of the difficulties inherent to CXDI structural analyses is the reconstruction of electron density maps of specimen particles from diffraction patterns because saturated detector pixels and a beam stopper result in missing data in small-angle regions. To overcome this difficulty, the dark-field phase-retrieval (DFPR) method has been proposed. The DFPR method reconstructs electron density maps from diffraction data, which are modified by multiplying Gaussian masks with an observed diffraction pattern in the high-angle regions. In this paper, we incorporated Friedel centrosymmetry for diffraction patterns into the DFPR method to provide a constraint for the phase-retrieval calculation. A set of model simulations demonstrated that this constraint dramatically improved the probability of reconstructing correct electron density maps from diffraction patterns that were missing data in the small-angle region. In addition, the DFPR method with the constraint was applied successfully to experimentally obtained diffraction patterns with significant quantities of missing data. We also discuss this method's limitations with respect to the level of Poisson noise in X-ray detection.
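The Friedel-centrosymmetry constraint used above can be imposed on a centered diffraction pattern by averaging each pixel with its inversion partner, and the suppression of the (missing) small-angle region can be mimicked with a Gaussian mask. A minimal sketch on a synthetic pattern; the array size and mask width are illustrative, and this is not the paper's full DFPR iteration:

```python
import numpy as np

def friedel_symmetrize(pattern):
    """Average a centered diffraction pattern with its point reflection,
    enforcing I(q) = I(-q) (Friedel centrosymmetry)."""
    return 0.5 * (pattern + pattern[::-1, ::-1])

def gaussian_highpass_mask(shape, sigma):
    """Down-weight the small-angle region (saturated pixels / beam stop),
    keeping the high-angle data that DFPR works from."""
    n, m = shape
    y, x = np.ogrid[-(n // 2):(n + 1) // 2, -(m // 2):(m + 1) // 2]
    return 1.0 - np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

rng = np.random.default_rng(1)
raw = rng.poisson(50.0, size=(129, 129)).astype(float)  # odd size -> exact center pixel
masked = gaussian_highpass_mask(raw.shape, sigma=8.0) * friedel_symmetrize(raw)
```

Because both the mask and the symmetrized pattern are centrosymmetric, the product handed to the phase-retrieval loop satisfies the constraint exactly.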
Magnetism in all-carbon nanostructures with negative Gaussian curvature.
Park, Noejung; Yoon, Mina; Berber, Savas; Ihm, Jisoon; Osawa, Eiji; Tománek, David
2003-12-05
We apply the ab initio spin density functional theory to study magnetism in all-carbon nanostructures. We find that particular systems, which are related to schwarzite and contain no undercoordinated carbon atoms, carry a net magnetic moment in the ground state. We postulate that, in this and other nonalternant aromatic systems with negative Gaussian curvature, unpaired spins can be introduced by sterically protected carbon radicals.
NON-GAUSSIANITIES IN THE LOCAL CURVATURE OF THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.
Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ~1% level testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level. Also the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
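Why averaging over a velocity distribution enhances the rate can be illustrated with a toy Monte Carlo: the two-body focusing cross-section scales like 1 + v_esc²/v², so slow planetesimals contribute disproportionately. The isotropic Maxwellian and the escape speed below are assumptions for illustration, not the paper's triaxial three-body calculation:

```python
import numpy as np

rng = np.random.default_rng(2)

def accretion_rate(speeds, v_esc):
    """Two-body rate per unit (pi R^2 n): <v * (1 + v_esc^2 / v^2)>."""
    return np.mean(speeds * (1.0 + (v_esc / speeds) ** 2))

sigma_1d, v_esc = 1.0, 10.0                      # dispersion << escape speed
v = np.linalg.norm(rng.normal(0.0, sigma_1d, size=(200_000, 3)), axis=1)
v_rms = np.sqrt(np.mean(v**2))

gaussian_rate = accretion_rate(v, v_esc)                 # average over the distribution
single_speed_rate = v_rms * (1.0 + (v_esc / v_rms) ** 2)  # single rms speed for all
enhancement = gaussian_rate / single_speed_rate
```

Even this crude isotropic version gives an enhancement well above 1; the factor ~3 quoted in the abstract comes from the full treatment of eccentric, inclined orbits.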
Wigner molecules: the strong-correlation limit of the three-electron harmonium.
Cioslowski, Jerzy; Pernal, Katarzyna
2006-08-14
At the strong-correlation limit, electronic states of the three-electron harmonium atom are described by asymptotically exact wave functions given by products of distinct Slater determinants and a common Gaussian factor that involves interelectron distances and the center-of-mass position. The Slater determinants specify the angular dependence and the permutational symmetry of the wave functions. As the confinement strength becomes infinitesimally small, the states of different spin multiplicities become degenerate, their limiting energy reflecting harmonic vibrations of the electrons about their equilibrium positions. The corresponding electron densities are given by products of angular factors and a Gaussian function centered at the radius proportional to the interelectron distance at equilibrium. Thanks to the availability of both the energy and the electron density, the strong-correlation limit of the three-electron harmonium is well suited for testing of density functionals.
Efficient and robust computation of PDF features from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2009-10-01
We present a method for the estimation of various features of the tissue micro-architecture using the diffusion magnetic resonance imaging. The considered features are designed from the displacement probability density function (PDF). The estimation is based on two steps: first the approximation of the signal by a series expansion made of Gaussian-Laguerre and Spherical Harmonics functions; followed by a projection on a finite dimensional space. Besides, we propose to tackle the problem of the robustness to Rician noise corrupting in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process leading to a variational framework which is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of various features of the local tissues structure. We demonstrate the effectiveness of the method with results on both synthetic phantom and real MR datasets acquired in a clinical time-frame.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
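The tempered fractional difference mentioned above has weights given by tempered binomial coefficients, under the standard definition Δ^{α,λ} f(x) = Σ_k (−1)^k C(α,k) e^{−λk} f(x−k) (unit grid spacing assumed for simplicity):

```python
import math

def tempered_weights(alpha, lam, n_terms):
    """Weights w_k = (-1)^k * C(alpha, k) * exp(-lam * k) of the
    tempered fractional difference operator."""
    w, c = [], 1.0                       # c tracks the generalized binomial C(alpha, k)
    for k in range(n_terms):
        w.append(((-1) ** k) * c * math.exp(-lam * k))
        c *= (alpha - k) / (k + 1)       # C(alpha, k+1) = C(alpha, k) * (alpha-k)/(k+1)
    return w

alpha, lam = 0.7, 0.5
w = tempered_weights(alpha, lam, 200)
# The weights sum to (1 - e^{-lam})^alpha, the operator's symbol at zero frequency;
# the exponential tempering makes the series converge absolutely.
total = sum(w)
```

The tempering factor e^{−λk} is what truncates the power-law memory of the ordinary fractional difference, which is why these weights also underlie the numerical schemes for tempered fractional diffusion equations.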
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
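The extracted quantities (two mean velocity components, two standard deviations, and the correlation coefficient) are the first two moments of a bivariate Gaussian. A toy sketch estimating them from synthetic velocity-space samples; the simulated sample stands in for transformed LTA data, and the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "velocity space" samples from a correlated bivariate Gaussian.
mean_true = np.array([50.0, 5.0])                 # m/s, illustrative
cov_true = np.array([[4.0, 1.8],                  # sigma_u = 2, sigma_v = 1.5,
                     [1.8, 2.25]])                # correlation rho = 0.6
uv = rng.multivariate_normal(mean_true, cov_true, size=50_000)

mean_est = uv.mean(axis=0)                        # ensemble centroid -> mean velocities
cov_est = np.cov(uv.T)
sigma_u, sigma_v = np.sqrt(np.diag(cov_est))      # standard deviations
rho = cov_est[0, 1] / (sigma_u * sigma_v)         # correlation coefficient
```

The ellipse-fitting step in the paper recovers the same covariance information geometrically, from a constant-probability contour of the fitted PDF rather than from sample moments.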
Magnetic Field Fluctuations in Saturn's Magnetosphere
NASA Astrophysics Data System (ADS)
von Papen, Michael; Saur, Joachim; Alexandrova, Olga
2013-04-01
In the framework of turbulence, we analyze the statistical properties of magnetic field fluctuations measured by the Cassini spacecraft inside Saturn's plasma sheet. In the spacecraft-frame power spectra of the fluctuations we identify two power-law spectral ranges separated by a spectral break around the ion gyro-frequencies of O+ and H+. The spectral indices of the low-frequency power-law are found to be between 5/3 (for fully developed cascades) and 1 (during energy input on the corresponding scales). Above the spectral break there is a constant power-law with mean spectral index ~2.5, indicating a permanent turbulent cascade in the kinetic range. An increasingly non-Gaussian probability density with frequency indicates the build-up of intermittency. Correlations of plasma parameters with the spectral indices are examined, and it is found that the power-law slope depends on the background magnetic field strength and plasma beta.
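The spectral indices quoted above are, in practice, slopes of straight-line fits to the power spectrum in log-log space. A minimal sketch on a synthetic power-law spectrum; the frequency range, the Kolmogorov-like index 5/3, and the scatter level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def spectral_index(freq, psd):
    """Least-squares slope of log10(PSD) vs log10(f); returns the
    index s in PSD ~ f^(-s)."""
    slope, _ = np.polyfit(np.log10(freq), np.log10(psd), 1)
    return -slope

f = np.logspace(-3, 0, 200)                                      # Hz, illustrative
psd = f ** (-5.0 / 3.0) * np.exp(rng.normal(0.0, 0.2, f.size))   # noisy power law
s = spectral_index(f, psd)
```

For real spectra with a break, the same fit is applied separately below and above the break frequency to get the two indices.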
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The new mutation procedure proposed consists of considering a set of IFS which are able to generate fractal structures in a two-dimensional phase space, and use them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal in a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion on the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
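An IFS-based mutation of this kind can be sketched with the chaos game: iterate a randomly chosen affine map to draw a point from a fractal attractor, then use it as a perturbation offset. The Sierpinski-triangle maps, the centering, and the scale factor below are illustrative choices, not the paper's IFS set:

```python
import random

# Three contractive affine maps whose attractor is the Sierpinski triangle.
IFS_MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_offset(n_iter=20, rng=random):
    """Run the chaos game; the final point approximates a sample from the
    fractal attractor, roughly centered to serve as a mutation offset."""
    x, y = rng.random(), rng.random()
    for _ in range(n_iter):
        x, y = rng.choice(IFS_MAPS)(x, y)
    return x - 0.5, y - 0.25

def mutate(individual, scale=0.1, rng=random):
    """Perturb consecutive coordinate pairs with fractal offsets instead of
    Gaussian or Cauchy random numbers."""
    child = list(individual)
    for i in range(0, len(child) - 1, 2):
        dx, dy = ifs_offset(rng=rng)
        child[i] += scale * dx
        child[i + 1] += scale * dy
    return child

random.seed(5)
child = mutate([0.0, 0.0, 1.0, 1.0])
```

Compared with a Gaussian mutation, the offsets cluster on the self-similar structure of the attractor, which is the source of the different search behavior studied in the paper.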
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
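The core of the simplified SBM — averaging an exponential freezing probability over a Gaussian distribution of contact angles — can be sketched and checked against the Monte Carlo formulation it replaces. The nucleation-rate dependence J(θ) below is a hypothetical stand-in, not the classical-nucleation-theory expression used in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

mu, sig = 1.0, 0.2          # mean/std of contact angle theta (radians), illustrative
t = 10.0                    # exposure time, arbitrary units

def rate(theta):
    """Hypothetical nucleation rate, increasing as the contact angle shrinks."""
    return 0.5 * np.exp(-3.0 * theta)

# Simplified SBM: direct quadrature over the Gaussian contact-angle density.
theta = np.linspace(mu - 5 * sig, mu + 5 * sig, 2001)
dtheta = theta[1] - theta[0]
pdf = np.exp(-0.5 * ((theta - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
frozen_quad = 1.0 - np.sum(pdf * np.exp(-rate(theta) * t)) * dtheta

# Original-style SBM: Monte Carlo over particles with random contact angles.
theta_mc = rng.normal(mu, sig, 200_000)
frozen_mc = np.mean(1.0 - np.exp(-rate(theta_mc) * t))
```

Both estimates target the same frozen fraction E[1 − e^{−J(θ)t}]; the quadrature version is deterministic and far cheaper, which is the point of the simplified model.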
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
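The random phasor sums treated in the book have a classic limiting behavior: the resultant of many unit phasors with independent uniform phases has Gaussian real and imaginary parts and a Rayleigh-distributed amplitude, with E[A²] = N. A quick numerical illustration (N and the trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

N, trials = 500, 5_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
resultant = np.exp(1j * phases).sum(axis=1)      # random phasor sum per trial
amplitude = np.abs(resultant)

# For large N: E[A^2] = N, and A / sqrt(N) is Rayleigh distributed.
mean_sq = np.mean(amplitude**2) / N
```

This is the statistical mechanism behind fully developed speckle: the intensity A² is exponentially distributed about its mean.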
Performance of correlation receivers in the presence of impulse noise.
NASA Technical Reports Server (NTRS)
Moore, J. D.; Houts, R. C.
1972-01-01
An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
NASA Astrophysics Data System (ADS)
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dosage level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
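A minimal sketch of the first two sample L-moments via probability-weighted moments (an assumed estimator form, not the authors' code): l1 is a location measure and l2 an L-scale, and their behavior is more robust than ordinary moments when the noise PDF departs from Gaussian.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments, l1 (location) and l2 (L-scale), computed
    from probability-weighted moments of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1.0))  # weights (i-1)/(n-1)
    return b0, 2.0 * b1 - b0

# For Gaussian noise, l2 equals sigma / sqrt(pi), giving a reference point
# against which non-Gaussian noise patches can be compared.
rng = np.random.default_rng(0)
l1, l2 = sample_l_moments(rng.normal(0.0, 1.0, 100000))
```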
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.
Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J
2006-09-01
We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that of state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
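The Bayes rule with Gaussian-mixture class likelihoods described above can be sketched as follows. The two-dimensional synthetic features, class priors, and function names are invented for illustration and stand in for the intensity-plus-Gabor feature vectors of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical, well-separated two-class pixel features (vessel vs background).
rng = np.random.default_rng(0)
vessel = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
background = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))

# One Gaussian mixture per class models the class-conditional likelihood.
priors = {"vessel": 0.5, "background": 0.5}
models = {name: GaussianMixture(n_components=2, random_state=0).fit(data)
          for name, data in [("vessel", vessel), ("background", background)]}

def classify(x):
    """Bayes rule: pick the class maximizing log-likelihood + log-prior."""
    scores = {name: m.score_samples(x.reshape(1, -1))[0] + np.log(priors[name])
              for name, m in models.items()}
    return max(scores, key=scores.get)
```

Because the likelihoods are mixtures, the induced decision surface can be curved and multi-modal, which is the property the abstract highlights.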
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Vipin K.; Sharma, Anamika
2013-05-15
We estimate the ponderomotive force on an expanded inhomogeneous electron density profile, created in the later phase of a laser-irradiated diamond-like ultrathin foil. When ions are uniformly distributed along the plasma slab and the electron density obeys Poisson's equation with space-charge potential equal to the negative of the ponderomotive potential, φ = −φ_p = −(mc²/e)(γ − 1), where γ = (1 + |a|²)^{1/2} and |a| is the normalized local laser amplitude inside the slab, the net ponderomotive force on the slab per unit area is demonstrated analytically to be equal to the radiation pressure force for both overdense and underdense plasmas. In case the electron density is taken to be frozen as a Gaussian profile with peak density close to the relativistic critical density, the ponderomotive force has non-monotonic spatial variation and sums up over all electrons per unit area to equal the radiation pressure force at all laser intensities. The same result is obtained for the case of a Gaussian ion density profile and a self-consistent electron density profile obeying Poisson's equation with φ = −φ_p.
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited, and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in probability density function (pdf) turbulence modeling. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
NASA Astrophysics Data System (ADS)
Gao, Li
2015-07-01
We study the evolution of the distribution of consumption of individuals in the majority population in China during the period 1995-2012 and find that its probability density functions (PDFs) obey the rule P_c(x) = K(x − μ) exp[−(x − μ)²/(2σ²)]. We also find (i) that the PDFs and the individual income distribution appear to be identical, (ii) that the peaks of the PDFs of the individual consumption distribution are consistently on the low side of the PDFs of the income distribution, and (iii) that the average of the marginal propensity to consume (MPC) is large, about 0.77, indicating that in the majority population individual consumption is low and strongly dependent on income. The long right tail of the PDFs of consumption indicates that few people in China are participating in the high consumption economy, and that consumption inequality is high. After comparing the PDFs of consumption with the PDFs of income we obtain the PDFs of residual wealth during the period 1995-2012, which exhibit a Gaussian distribution. We use an agent-based kinetic wealth-exchange model (KWEM) to simulate this evolutionary process and find that this Gaussian distribution indicates a strong propensity to save rather than spend. This may be due to an anticipation of such large potential outlays as housing, education, and health care in the context of an inadequate welfare support system.
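For reference, the quoted density P_c(x) = K(x − μ) exp[−(x − μ)²/(2σ²)] integrates to one when K = 1/σ², i.e. it is a Rayleigh density shifted to μ, whose mean exceeds its mode (the long right tail noted above). A numerical check with illustrative μ and σ:

```python
import numpy as np

def consumption_pdf(x, mu, sigma):
    """P_c(x) = (x - mu)/sigma**2 * exp(-(x - mu)**2 / (2 sigma**2)) for x >= mu,
    i.e. a Rayleigh density shifted by mu, with normalization K = 1/sigma**2."""
    u = np.maximum(x - mu, 0.0)
    return (u / sigma**2) * np.exp(-u**2 / (2.0 * sigma**2))

x = np.linspace(0.0, 60.0, 600001)
dx = x[1] - x[0]
p = consumption_pdf(x, mu=2.0, sigma=3.0)
area = np.sum(p) * dx            # ~1: properly normalized
mode = x[np.argmax(p)]           # peak at mu + sigma
mean = np.sum(x * p) * dx        # mu + sigma * sqrt(pi/2), to the right of the mode
```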
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function: Gaussian, Rayleigh, or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors: a total variation based prior, an anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods are presented for the BrainWeb dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD based prior performs better than the other priors in consideration.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
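A minimal discrete version of such a multiplicative cascade (with an invented intermittency parameter, not the paper's estimator) illustrates the intermittent bursts the abstract refers to: each binary refinement multiplies every cell by an independent log-normal weight.

```python
import numpy as np

def lognormal_cascade(n_steps, lam2, rng):
    """Discrete multiplicative cascade: after n_steps binary refinements each
    of the 2**n_steps cells carries a product of n_steps independent log-normal
    weights; lam2 plays the role of the intermittency (log-variance) parameter."""
    weights = np.ones(1)
    for _ in range(n_steps):
        # mean -lam2/2 keeps the expected value of each weight equal to one
        w = np.exp(rng.normal(-lam2 / 2.0, np.sqrt(lam2), size=2 * len(weights)))
        weights = np.repeat(weights, 2) * w
    return weights

rng = np.random.default_rng(1)
gentle = lognormal_cascade(10, 0.01, rng)   # weak intermittency: near-uniform field
bursty = lognormal_cascade(10, 0.50, rng)   # strong intermittency: rare large bursts
```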
Estimating crustal heterogeneity from double-difference tomography
Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.
2006-01-01
Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. Monteiller et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using Monteiller et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
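The appeal of the hyperbolic-secant misfit can be seen from its tails: an outlier picking difference is orders of magnitude more probable under a sech density than under a Gaussian of the same variance, so a single bad pick pulls the inversion far less. A small numerical illustration (standardized densities, illustrative outlier value):

```python
import numpy as np

def sech_pdf(x):
    """Standardized hyperbolic-secant density (1/2) sech(pi x / 2): zero mean,
    unit variance, but much heavier tails than the standard normal."""
    return 0.5 / np.cosh(np.pi * x / 2.0)

def gauss_pdf(x):
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

x = np.linspace(-40.0, 40.0, 800001)
sech_area = np.sum(sech_pdf(x)) * (x[1] - x[0])   # ~1: a proper pdf
tail_ratio = sech_pdf(6.0) / gauss_pdf(6.0)       # outlier far likelier under sech
```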
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of a deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of RBM that allows application to continuous data is used: in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained from the proposed multilayer deep Gaussian-Bernoulli type RBM.
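The Gaussian-Bernoulli conditionals at the heart of such an RBM can be written in a few lines. This is a minimal single-layer sketch with unit visible variance; the paper's seven extra layers and hyperparameter tuning are not reproduced.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GaussianBernoulliRBM:
    """Minimal Gaussian-Bernoulli RBM: real-valued visible units with unit
    variance and binary hidden units (single layer only)."""
    def __init__(self, n_vis, n_hid, rng):
        self.W = 0.01 * rng.normal(size=(n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def hidden_probs(self, v):
        # p(h_j = 1 | v) = sigmoid(b_j + v . W[:, j])
        return sigmoid(self.b_hid + v @ self.W)

    def visible_mean(self, h):
        # p(v | h) is Gaussian with mean b_vis + W h and unit variance,
        # which is what lets the visible layer carry continuous traffic features
        return self.b_vis + h @ self.W.T

rng = np.random.default_rng(0)
rbm = GaussianBernoulliRBM(n_vis=4, n_hid=3, rng=rng)
p_hidden = rbm.hidden_probs(np.zeros(4))   # exactly 0.5 for a zero visible vector
```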
Charged particle dynamics in the presence of non-Gaussian Lévy electrostatic fluctuations
Del-Castillo-Negrete, Diego B.; Moradi, Sara; Anderson, Johan
2016-09-01
Full orbit dynamics of charged particles in a 3-dimensional helical magnetic field in the presence of α-stable Lévy electrostatic fluctuations and linear friction modeling collisional Coulomb drag is studied via Monte Carlo numerical simulations. The Lévy fluctuations are introduced to model the effect of non-local transport due to fractional diffusion in velocity space resulting from intermittent electrostatic turbulence. The probability distribution functions of energy, particle displacements, and Larmor radii are computed and shown to exhibit a transition from exponential decay, in the case of Gaussian fluctuations, to power law decay in the case of Lévy fluctuations. The absolute value of the power law decay exponents is linearly proportional to the Lévy index. Furthermore, the observed anomalous non-Gaussian statistics of the particles' Larmor radii (resulting from outlier transport events) indicate that, when electrostatic turbulent fluctuations exhibit non-Gaussian Lévy statistics, gyro-averaging and guiding centre approximations might face limitations and full particle orbit effects should be taken into account.
Charged particle dynamics in the presence of non-Gaussian Lévy electrostatic fluctuations
NASA Astrophysics Data System (ADS)
Moradi, Sara; del-Castillo-Negrete, Diego; Anderson, Johan
2016-09-01
Full orbit dynamics of charged particles in a 3-dimensional helical magnetic field in the presence of α-stable Lévy electrostatic fluctuations and linear friction modeling collisional Coulomb drag is studied via Monte Carlo numerical simulations. The Lévy fluctuations are introduced to model the effect of non-local transport due to fractional diffusion in velocity space resulting from intermittent electrostatic turbulence. The probability distribution functions of energy, particle displacements, and Larmor radii are computed and showed to exhibit a transition from exponential decay, in the case of Gaussian fluctuations, to power law decay in the case of Lévy fluctuations. The absolute value of the power law decay exponents is linearly proportional to the Lévy index α. The observed anomalous non-Gaussian statistics of the particles' Larmor radii (resulting from outlier transport events) indicate that, when electrostatic turbulent fluctuations exhibit non-Gaussian Lévy statistics, gyro-averaging and guiding centre approximations might face limitations and full particle orbit effects should be taken into account.
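The Gaussian-versus-Lévy tail transition described here is easy to reproduce with the Chambers-Mallows-Stuck sampler for symmetric α-stable variates; the α values below are illustrative, not those of the simulations.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates:
    alpha = 2 recovers a Gaussian, alpha < 2 gives power-law tails."""
    u = rng.uniform(-np.pi / 2.0, np.pi / 2.0, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(2)
gaussian_like = symmetric_stable(2.0, 100000, rng)   # exponential-type tails
levy = symmetric_stable(1.2, 100000, rng)            # heavy power-law tails
```

With α = 1.2 a non-negligible fraction of samples lands far in the tails (the "outlier transport events" of the abstract), while the α = 2 case essentially never does.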
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruban, V. P., E-mail: ruban@itp.ac.ru
2015-05-15
The nonlinear dynamics of an obliquely oriented wave packet on a sea surface is analyzed analytically and numerically for various initial parameters of the packet in relation to the problem of the so-called rogue waves. Within the Gaussian variational ansatz applied to the corresponding (1+2)-dimensional hyperbolic nonlinear Schrödinger equation (NLSE), a simplified Lagrangian system of differential equations is derived that describes the evolution of the coefficients of the real and imaginary quadratic forms appearing in the Gaussian. This model provides a semi-quantitative description of the process of nonlinear spatiotemporal focusing, which is one of the most probable mechanisms of rogue wave formation in random wave fields. The system of equations is integrated in quadratures, which allows one to better understand the qualitative differences between linear and nonlinear focusing regimes of a wave packet. Predictions of the Gaussian model are compared with the results of direct numerical simulation of fully nonlinear long-crested waves.
NASA Astrophysics Data System (ADS)
Cascio, David M.
1988-05-01
States of nature or observed data are often stochastically modelled as Gaussian random variables. At times it is desirable to transmit this information from a source to a destination with minimal distortion. Complicating this objective is the possible presence of an adversary attempting to disrupt this communication. In this report, solutions are provided to a class of minimax and maximin decision problems, which involve the transmission of a Gaussian random variable over a communications channel corrupted by both additive Gaussian noise and probabilistic jamming noise. The jamming noise is termed probabilistic in the sense that with nonzero probability 1-P, the jamming noise is prevented from corrupting the channel. We shall seek to obtain optimal linear encoder-decoder policies which minimize given quadratic distortion measures.
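In the jam-free special case (the jammer absent with probability one), the optimal linear policies reduce to amplitude scaling at the encoder and an LMMSE gain at the decoder. A toy numerical check, with assumed unit-variance source and channel noise and an illustrative encoder gain:

```python
import numpy as np

def lmmse_gain(a, var_x, var_n):
    """Optimal linear decoder gain for y = a*x + n with Gaussian x and n."""
    return a * var_x / (a ** 2 * var_x + var_n)

def lmmse_distortion(a, var_x, var_n):
    """Resulting mean-squared error of the linear encoder-decoder pair."""
    return var_x * var_n / (a ** 2 * var_x + var_n)

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 200000)
y = 2.0 * x + rng.normal(0.0, 1.0, 200000)   # encoder gain a = 2, AWGN channel
x_hat = lmmse_gain(2.0, 1.0, 1.0) * y        # linear decoder
empirical_mse = np.mean((x - x_hat) ** 2)    # approaches 1/(4 + 1) = 0.2
```

The minimax analysis in the report then asks how such gains should be chosen when the jamming noise corrupts the channel with nonzero probability.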
Avram, Alexandru V; Sarlls, Joelle E; Barnett, Alan S; Özarslan, Evren; Thomas, Cibu; Irfanoglu, M Okan; Hutchinson, Elizabeth; Pierpaoli, Carlo; Basser, Peter J
2016-02-15
Diffusion tensor imaging (DTI) is the most widely used method for noninvasively characterizing structural and architectural features of brain tissues. However, the assumption of a Gaussian spin displacement distribution intrinsic to DTI weakens its ability to describe intricate tissue microanatomy. Consequently, the biological interpretation of microstructural parameters, such as fractional anisotropy or mean diffusivity, is often equivocal. We evaluate the clinical feasibility of assessing brain tissue microstructure with mean apparent propagator (MAP) MRI, a powerful analytical framework that efficiently measures the probability density function (PDF) of spin displacements and quantifies useful metrics of this PDF indicative of diffusion in complex microstructure (e.g., restrictions, multiple compartments). Rotation invariant and scalar parameters computed from the MAP show consistent variation across neuroanatomical brain regions and increased ability to differentiate tissues with distinct structural and architectural features compared with DTI-derived parameters. The return-to-origin probability (RTOP) appears to reflect cellularity and restrictions better than MD, while the non-Gaussianity (NG) measures diffusion heterogeneity by comprehensively quantifying the deviation between the spin displacement PDF and its Gaussian approximation. Both RTOP and NG can be decomposed in the local anatomical frame of reference determined by the orientation of the diffusion tensor and reveal additional information complementary to DTI. The propagator anisotropy (PA) shows high tissue contrast even in deep brain nuclei and cortical gray matter and is more uniform in white matter than the FA, which drops significantly in regions containing crossing fibers.
Orientational profiles of the propagator computed analytically from the MAP MRI series coefficients allow separation of different fiber populations in regions of crossing white matter pathways, which in turn improves our ability to perform whole-brain fiber tractography. Reconstructions from subsampled data sets suggest that MAP MRI parameters can be computed from a relatively small number of DWIs acquired with high b-value and good signal-to-noise ratio in clinically achievable scan durations of less than 10 min. The neuroanatomical consistency across healthy subjects and reproducibility in test-retest experiments of MAP MRI microstructural parameters further substantiate the robustness and clinical feasibility of this technique. The MAP MRI metrics could potentially provide more sensitive clinical biomarkers with increased pathophysiological specificity compared to microstructural measures derived using conventional diffusion MRI techniques. Published by Elsevier Inc.
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
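The deconvolution idea XDGMM implements can be seen in its simplest one-component form: with known heteroscedastic Gaussian errors, the variance of the underlying distribution is the observed variance minus the mean noise variance. This is a sketch of the principle, not the package API.

```python
import numpy as np

rng = np.random.default_rng(4)
true_v = rng.normal(0.0, 1.0, 50000)          # underlying (unobserved) quantity
noise_var = rng.uniform(0.5, 1.5, 50000)      # known per-point error variances
observed = true_v + rng.normal(0.0, np.sqrt(noise_var))

# Observed variance = underlying variance + mean noise variance, so the
# deconvolved (noise-free) variance is recovered by subtraction; the full XD
# algorithm does the analogous correction for every Gaussian mixture component.
deconvolved_var = observed.var() - noise_var.mean()   # ~ 1.0
```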
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and a Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
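The claimed Gaussian behavior of the negative binomial at large scales can be checked directly: for a large number of successes r its pmf approaches a Gaussian with matching mean and variance. The parameters below are illustrative, not fitted to the catalogs.

```python
import math

def negbin_pmf(k, r, p):
    """Negative binomial pmf (k failures before the r-th success), evaluated
    in log space so that large r stays numerically stable."""
    log_pmf = (math.lgamma(k + r) - math.lgamma(k + 1) - math.lgamma(r)
               + r * math.log(1.0 - p) + k * math.log(p))
    return math.exp(log_pmf)

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Matching mean and variance: mu = r p / (1 - p), var = r p / (1 - p)^2.
r, p = 400, 0.5
mu, var = r * p / (1.0 - p), r * p / (1.0 - p) ** 2   # 400 and 800
peak_nb = negbin_pmf(int(mu), r, p)
peak_gauss = gauss(mu, mu, var)
```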
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
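The component-wise algebra behind the "mixture of GMMs is a GMM" result can be verified by sampling in the single-Gaussian case: under the linear mixing model the pixel is Gaussian with mean Σ a_j μ_j and covariance Σ a_j² C_j, and with GMM endmembers the same identity applies to each combination of components. The abundances and endmember statistics below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
a = np.array([0.3, 0.7])        # abundances (sum to one)
mu = np.array([1.0, 5.0])       # endmember means (one band, for brevity)
var = np.array([0.04, 0.09])    # endmember variances

# Linear mixing model: pixel = sum_j a_j * endmember_j, endmembers independent.
pixels = (a[0] * rng.normal(mu[0], np.sqrt(var[0]), 100000)
          + a[1] * rng.normal(mu[1], np.sqrt(var[1]), 100000))

pred_mean = a @ mu              # sum_j a_j mu_j = 3.8
pred_var = (a ** 2) @ var       # sum_j a_j^2 C_j = 0.0477
```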
Mitri, F G; Fellah, Z E A
2014-01-01
The present analysis investigates the (axial) acoustic radiation force induced by a quasi-Gaussian beam centered on an elastic and a viscoelastic (polymer-type) sphere in a nonviscous fluid. The quasi-Gaussian beam is an exact solution of the source-free Helmholtz wave equation and is characterized by an arbitrary waist w₀ and a diffraction convergence length known as the Rayleigh range z_R. Examples are found where the radiation force unexpectedly approaches closely to zero at some of the elastic sphere's resonance frequencies for kw₀ ≤ 1 (a range of particular interest for describing strongly focused or divergent beams), which may produce particle immobilization along the axial direction. Moreover, the (quasi)vanishing behavior of the radiation force is found to be correlated with conditions giving extinction of the backscattering by the quasi-Gaussian beam. Furthermore, the mechanism for the quasi-zero force is studied theoretically by analyzing the contributions of the kinetic, potential and momentum flux energy densities and their density functions. It is found that all the components vanish simultaneously at the selected ka values for the nulls. However, for a viscoelastic sphere, acoustic absorption degrades the quasi-zero radiation force. Copyright © 2013 Elsevier B.V. All rights reserved.
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
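A toy version of the surrogate-based failure-probability estimate can be sketched as follows; this uses a fixed design rather than the paper's optimal experimental design, and the limit-state function and distributions are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):
    """Toy limit-state function: g(x) < 0 marks failure (here, x > 1.5)."""
    return 1.5 - x

# Fit the GP surrogate on a small design, then estimate the failure
# probability by cheap Monte Carlo on the surrogate instead of the model.
train_x = np.linspace(-3.0, 3.0, 25).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(
    train_x, limit_state(train_x.ravel()))

rng = np.random.default_rng(6)
mc_points = rng.normal(0.0, 1.0, 200000).reshape(-1, 1)
pf = np.mean(gp.predict(mc_points) < 0.0)   # close to P(Z > 1.5) ~ 0.067
```

Here 25 surrogate evaluations replace 200,000 runs of the "expensive" model, which is the economy the abstract is after.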
Rockfall travel distances theoretical distributions
NASA Astrophysics Data System (ADS)
Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea
2017-04-01
The probability of propagation of rockfalls is a key part of hazard assessment, because it makes it possible to extrapolate the probability of propagation of rockfall either from partial data or purely theoretically. The propagation can be assumed frictional, which permits describing the propagation on average by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be modeled as a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal, or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are either based on theoretical considerations or obtained by fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
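Two of the stop-point models above can be illustrated by simulation (step counts and loss parameters are invented): summing many small independent losses gives a Gaussian runout by the central limit theorem, while multiplying them gives a right-skewed log-normal runout.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_blocks = 200, 20000
# Additive small losses -> Gaussian stop-point distribution (CLT).
additive = rng.normal(1.0, 0.2, (n_blocks, n_steps)).sum(axis=1)
# Multiplicative small losses -> log-normal distribution (CLT on the logarithm).
multiplicative = np.exp(rng.normal(0.0, 0.05, (n_blocks, n_steps)).sum(axis=1))

def skewness(x):
    """Standardized third moment: ~0 for Gaussian, positive for log-normal."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)
```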
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2006-01-01
The abilities of cloud-resolving models (CRMs) with the double-Gaussian based and the single-Gaussian based third-order closures (TOCs) to simulate shallow cumuli and their transition to deep convective clouds are compared in this study. The single-Gaussian based TOC is fully prognostic (FP), while the double-Gaussian based TOC is partially prognostic (PP). The latter predicts only three important third-order moments, while the former predicts all the third-order moments. A shallow cumulus case is simulated by single-column versions of the FP and PP TOC models. The PP TOC greatly improves the simulation of shallow cumulus over the FP TOC by producing more realistic cloud structures. Large differences between the FP and PP TOC simulations appear in the second- and third-order moments in the cloud layer, which are related mainly to the underestimation of the cloud height in the FP TOC simulation. Sensitivity experiments and analysis of the probability density functions (PDFs) used in the TOCs show that both turbulence-scale condensation and higher-order moments are important for realistic simulations of boundary-layer shallow cumuli. A shallow to deep convective cloud transition case is also simulated by the 2-D versions of the FP and PP TOC models. Both CRMs can capture the transition from shallow cumuli to deep convective clouds. The PP simulations produce more and deeper shallow cumuli than the FP simulations, but the FP simulations produce larger and wider convective clouds than the PP simulations. The temporal evolutions of cloud and precipitation are closely related to the turbulent transport, the cold pool and the cloud-scale circulation. The large amount of turbulent mixing associated with the shallow cumuli slows down the increase of the convective available potential energy and inhibits an early transition to deep convective clouds in the PP simulation.
When the deep convective clouds fully develop and the precipitation is produced, the cold pools produced by the evaporation of the precipitation are not favorable to the formation of shallow cumuli.
A Very Hot, High Redshift Cluster of Galaxies: More Trouble for Omega(0) = 1
NASA Technical Reports Server (NTRS)
Donahue, Megan; Voit, G. Mark; Gioia, Isabella; Luppino, Gerry; Hughes, John P.; Stocke, John T.
1998-01-01
We have observed the most distant (z = 0.829) cluster of galaxies in the Einstein Extended Medium Sensitivity Survey (EMSS) with the ASCA and ROSAT satellites. We find an X-ray temperature of 12.3 (+3.1/-2.2) keV for this cluster, and the ROSAT map reveals significant substructure. The high temperature of MS1054-0321 is consistent both with its approximate velocity dispersion, based on the redshifts of 12 cluster members we have obtained at the Keck and Canada-France-Hawaii telescopes, and with its weak lensing signature. The X-ray temperature of this cluster implies a virial mass of approximately 7.4 × 10^14 h^-1 M_⊙ if the mean matter density in the universe equals the critical value (Ω_0 = 1), or larger if Ω_0 is less than 1. Finding such a hot, massive cluster in the EMSS is extremely improbable if clusters grew from Gaussian perturbations in an Ω_0 = 1 universe. Combining the assumptions that Ω_0 = 1 and that the initial perturbations were Gaussian with the observed X-ray temperature function at low redshift, we show that the probability of this cluster occurring in the volume sampled by the EMSS is less than a few times 10^-5. Nor is MS1054-0321 the only hot cluster at high redshift; the only two other z > 0.5 EMSS clusters already observed with ASCA also have temperatures exceeding 8 keV. Assuming again that the initial perturbations were Gaussian and Ω_0 = 1, we find that each one is improbable at the less than 10^-2 level. These observations, along with the fact that the luminosities and temperatures of the high-z clusters all agree with the low-z L_X - T_X relation, argue strongly that Ω_0 < 1. Otherwise, the initial perturbations must be non-Gaussian, if these clusters' temperatures do indeed reflect their gravitational potentials.
Voice-onset time and buzz-onset time identification: A ROC analysis
NASA Astrophysics Data System (ADS)
Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.
2004-05-01
Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with sample degeneracy of the particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on real-world experiments and statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods. PMID:27187405
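The multivariate Student's t noise model used here can be sampled as a Gaussian scale mixture. A generic sketch (not the nGDPS implementation; the dimension, correlation and degrees of freedom are assumed values):

```python
import numpy as np

def sample_mvt(mean, cov, dof, n, rng):
    # Multivariate Student's t as a Gaussian scale mixture:
    # x = mean + z / sqrt(g), with z ~ N(0, cov) and g ~ chi2(dof)/dof
    z = rng.multivariate_normal(np.zeros(len(mean)), cov, size=n)
    g = rng.chisquare(dof, size=n) / dof
    return mean + z / np.sqrt(g)[:, None]

rng = np.random.default_rng(0)
mean = np.zeros(2)
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
x = sample_mvt(mean, cov, dof=6.0, n=200_000, rng=rng)

# For dof > 2 the true covariance is dof/(dof-2) * cov: same shape as the
# Gaussian scale matrix but with heavier tails
emp = np.cov(x.T)
print(emp[0, 0], emp[0, 1])
```

The heavier tails are what make the t-distribution robust to the outlier-prone sensor noise described in the abstract.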
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; an interpretation assuming Gaussian uncertainties will therefore inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multivariate Gaussians, in which the standard deviations (σ) and correlation coefficients (ρ) are parametrized. The 1-D likelihood contours of μ were modeled with good success, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package are also discussed. NSF International Research Experiences for Students.
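A minimal sketch of the kind of parametrized multivariate Gaussian described here, checking that sampling reproduces the parametrized correlation; the values of μ, σ and ρ are assumptions for illustration, not measured signal strengths:

```python
import numpy as np

# Hypothetical two-channel signal-strength measurement (assumed values)
mu_hat = np.array([1.1, 0.9])   # best-fit signal strengths
sigma = np.array([0.2, 0.3])    # per-channel standard deviations
rho = 0.4                       # correlation coefficient

# Covariance built from the parametrized sigma and rho
cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

# Sampling the multivariate Gaussian recovers the parametrized correlation
rng = np.random.default_rng(4)
samples = rng.multivariate_normal(mu_hat, cov, size=200_000)
emp_rho = np.corrcoef(samples.T)[0, 1]
print(emp_rho)
```

The divergence beyond 2σ noted in the abstract is exactly where such a Gaussian parametrization stops tracking the true (non-Gaussian) likelihood.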
NASA Astrophysics Data System (ADS)
Xu, Xin; Zhang, Qingsong; Muller, Richard P.; Goddard, William A.
2005-01-01
We derive here the form for the exact exchange energy density for a density that decays with Gaussian-type behavior at long range. This functional is intermediate between the B88 and the PW91 exchange functionals. Using this modified functional to match the form expected for Gaussian densities, we propose the X3LYP extended functional. We find that X3LYP significantly outperforms Becke three parameter Lee-Yang-Parr (B3LYP) for describing van der Waals and hydrogen bond interactions, while performing slightly better than B3LYP for predicting heats of formation, ionization potentials, electron affinities, proton affinities, and total atomic energies as validated with the extended G2 set of atoms and molecules. Thus X3LYP greatly enlarges the field of applications for density functional theory. In particular the success of X3LYP in describing the water dimer (with Re and De within the error bars of the most accurate determinations) makes it an excellent candidate for predicting accurate ligand-protein and ligand-DNA interactions.
Analyzing phenological extreme events over the past five decades in Germany
NASA Astrophysics Data System (ADS)
Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp
2010-05-01
As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last 5 decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence of extreme early or late phenological events. We analyse four phenological groups, "begin of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as the DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis, comparing differences in extreme onsets between three common northern conifers, and we conduct a within-species analysis with different phases of horse chestnut throughout a year. The first statistical approach fits the data to a Gaussian model using traditional statistical techniques and then analyses the extreme quantiles. The key point of this approach is the fitting of an appropriate probability density function (PDF) to the observed data and the assessment of the change of the PDF parameters in time. The full analytical description in terms of the estimated PDF for defined time steps of the observation period allows probability assessments of extreme values for, e.g., annual or decadal time steps. Related to this approach is the possibility of counting the onsets which fall into our defined extreme percentiles. The estimation of the probability of extreme events on the basis of the whole data set is in contrast to analyses with the generalized extreme value (GEV) distribution. The second approach deals with the extreme-value PDFs themselves and fits the GEV distribution to annual minima of phenological series to provide useful estimates of return levels.
For flowering and leaf unfolding phases, exceptionally early extremes have been seen since the mid-1980s, especially in the single years 1961, 1990 and 2007, whereas exceptionally late extreme events were seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts towards early extremes than spring phases. Leaf colouring phases reveal an increasing probability of late extremes. The GEV-estimated 100-year events of Picea, Pinus and Larix amount to extreme early events of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we get a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
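The second (GEV) approach can be sketched with scipy. The series below is synthetic, not the DWD data; note that scipy's genextreme models block maxima, so annual minima are negated before fitting, and the return level is negated back:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic annual minimum onset anomalies in days (negative = early); a real
# analysis would use 50+ years of DWD annual minima, e.g. for Picea
minima = -stats.genextreme.rvs(c=-0.1, loc=20.0, scale=6.0, size=55,
                               random_state=rng)

# Fit the GEV to the negated minima (= block maxima)
c, loc, scale = stats.genextreme.fit(-minima)

# 100-year return level for the minima: negate the 0.99 quantile of the maxima fit
r100 = -stats.genextreme.ppf(0.99, c, loc, scale)
print(r100)
```

The fitted return level should be at least as extreme (as early) as anything in the sample, which is what makes it useful for judging events like 1961, 1990 and 2007.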
NASA Astrophysics Data System (ADS)
Wolfsteiner, Peter; Breuer, Werner
2013-10-01
The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. Rainflow) based on a time domain signal. Alternatively, methods are available (e.g. Dirlik) that enable the estimation of load spectra directly from the power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD based methods have the enormous advantage that, if for example the signal to be assessed results from a finite element method based vibration analysis, the simulation of PSDs in the frequency domain is by far faster than the simulation of time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD based simulation of vibrations, and also of the PSD based load spectra estimation, is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation-time-intensive time domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures.
Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy of the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The linearity assumption is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution of the excitation signal also produces a non-Gaussian response, with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. the Ito calculus [6], which is usually not part of commercial codes). There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still a subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broad-banded loads.
It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As already described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the fact that the excitations on railway vehicles caused by the wheel-rail contact are highly non-Gaussian, the fast PSD analysis in the frequency domain cannot simply be combined with load spectra prediction methods designed for Gaussian PSDs.
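The Gaussian machinery underlying all PSD-based analyses can be illustrated by the standard spectral-representation method for synthesizing a Gaussian time signal from a prescribed one-sided PSD. This is a textbook construction, not the paper's decomposition scheme; the band-limited PSD and sampling rate are assumed values:

```python
import numpy as np

def gaussian_signal_from_psd(psd, fs, rng):
    # Synthesize a zero-mean Gaussian signal whose one-sided PSD (units x^2/Hz,
    # length n//2+1) is `psd`, by drawing Gaussian real/imag spectral components
    n = 2 * (len(psd) - 1)
    scale = np.sqrt(psd * fs * n / 4.0)          # per-bin amplitude target
    spec = scale * (rng.standard_normal(len(psd)) +
                    1j * rng.standard_normal(len(psd)))
    spec[0] = 0.0                                # enforce zero mean
    spec[-1] = spec[-1].real                     # Nyquist bin must be real
    return np.fft.irfft(spec, n)

fs = 1024.0
freqs = np.fft.rfftfreq(2**14, d=1.0 / fs)
psd = np.where((freqs > 10) & (freqs < 100), 1e-3, 0.0)   # band-limited target

rng = np.random.default_rng(3)
x = gaussian_signal_from_psd(psd, fs, rng)

# The signal variance should match the integral of the one-sided PSD,
# here about 1e-3 * 90 Hz = 0.09
print(np.var(x))
```

By construction the result is exactly Gaussian, which is precisely the limitation the paper's decomposition into Gaussian parts is designed to work around.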
Song, Jong-Won; Hirao, Kimihiko
2015-10-14
Since the advent of the hybrid functional in 1993, it has become a central quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified by the resulting increased performance for orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14-fold acceleration in a C diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
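That a short sum of Gaussians can mimic an error-function attenuation is plausible because erfc(x) is an exact continuous mixture of Gaussians. A least-squares sketch of a two-Gaussian fit to the complementary error function; the range-separation parameter ω and the fitted form are illustrative assumptions, not the published operator:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

omega = 0.33                       # assumed range-separation parameter (1/bohr)
r = np.linspace(0.0, 20.0, 400)
target = erfc(omega * r)           # short-range attenuation factor

def two_gauss(r, c1, a1, c2, a2):
    # hypothetical two-Gaussian surrogate for the erfc attenuation
    return c1 * np.exp(-a1 * r**2) + c2 * np.exp(-a2 * r**2)

p0 = [0.5, 0.02, 0.5, 0.2]
popt, _ = curve_fit(two_gauss, r, target, p0=p0, bounds=(0.0, np.inf))
err = np.max(np.abs(two_gauss(r, *popt) - target))
print(err)
```

Gaussian attenuation pays off in electronic-structure codes because products of Gaussians remain Gaussian, so two-electron integrals stay analytic and cheap.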
NASA Astrophysics Data System (ADS)
Bagchi, Debarshee; Tsallis, Constantino
2017-04-01
The relaxation to equilibrium of two long-range-interacting Fermi-Pasta-Ulam-like models (β type) in thermal contact is studied numerically. These systems, with different sizes and energy densities, are coupled to each other by a few thermal contacts, which are short-range harmonic springs. Using the kinetic definition of temperature, we compute the time evolution of the temperature and energy density of the two systems. Eventually, for times t > t_eq, the temperature and energy density of the coupled system equilibrate to values consistent with standard Boltzmann-Gibbs thermostatistics. The equilibration time t_eq depends on the system size N as t_eq ∼ N^γ, where γ ≃ 1.8. We compute the velocity distribution P(v) of the oscillators of the two systems during the relaxation process. We find that P(v) is non-Gaussian and is remarkably close to a q-Gaussian distribution for all times before thermal equilibrium is reached. During the relaxation process we observe q > 1, while close to t = t_eq the value of q converges to unity and P(v) approaches a Gaussian. Thus the relaxation phenomenon in long-range systems connected by a thermal contact can be generically described as a crossover from q-statistics to Boltzmann-Gibbs statistics.
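For reference, the q-Gaussian form mentioned here reduces to an ordinary Gaussian as q → 1 and has heavier tails for q > 1. A small sketch of the (unnormalized) density, with β = 1 as an assumed scale:

```python
import numpy as np

def q_gaussian(v, q, beta):
    # Unnormalized q-Gaussian: [1 - (1-q) beta v^2]_+^(1/(1-q));
    # for q < 1 the support is compact, for q -> 1 it tends to exp(-beta v^2)
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * v**2)
    base = 1.0 - (1.0 - q) * beta * v**2
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

v = np.linspace(-3.0, 3.0, 601)
gauss = np.exp(-v**2)
near = q_gaussian(v, q=1.001, beta=1.0)    # should be nearly Gaussian
heavy = q_gaussian(v, q=1.5, beta=1.0)     # heavier tails, as observed pre-equilibrium

print(np.max(np.abs(near - gauss)))
print(heavy[-1] > gauss[-1])
```

The crossover reported in the abstract corresponds to the fitted q drifting from values above 1 down to 1 as t approaches t_eq.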
On the formation age of the first planetary system
NASA Astrophysics Data System (ADS)
Hara, T.; Kunitomo, S.; Shigeyasu, M.; Kajiura, D.
2008-05-01
Recently, extremely metal-poor stars, which must have formed just after the Pop III objects, have been observed in the Galactic halo. On the other hand, the first gas clouds of mass 10^6 M_⊙ are expected to have formed at z ≈ 10, 20, and 30 for 1σ, 2σ and 3σ perturbations, where the density perturbations are assumed to follow the standard ΛCDM cosmology. Usually the distribution of density perturbation amplitudes is approximated as Gaussian, where σ denotes the standard deviation. If we could apply this Gaussian distribution to extremely small probabilities, gas clouds would form at z ≈ 40, 60, and 80 for 4σ, 6σ, and 8σ perturbations, whose probabilities are approximately 3 × 10^-5, 10^-9, and 10^-15, respectively. Within our universe there are about 10^16 (≈ 10^22 M_⊙ / 10^6 M_⊙) clouds of mass 10^6 M_⊙, so the first gas clouds must have formed around z ≈ 80, where the time is ≈ 20 Myr (≈ 13.7/(1+z)^(3/2) Gyr). Even within our galaxy there are 10^5 (≈ 10^11 M_⊙ / 10^6 M_⊙) clouds, so the first gas clouds within our galaxy must have formed around z ≈ 40, where the time is ≈ 54 Myr. The evolution time of a massive star (≈ 10^2 M_⊙) is ≈ 3 Myr, and the explosion of the massive supernova distributes metals within a cloud. The damping time of the supernova shock wave in the adiabatic and isothermal eras is several Myr, and stars of the second generation (Pop II) form within a free-fall time of ≈ 20 Myr. Even if the gas cloud is metal poor, there is considerable possibility of forming planets around such stars. The first planetary systems could therefore have formed within 6 × 10^7 years after the Big Bang in the universe; even in our galaxy, the first planetary systems could have formed within 1.7 × 10^8 years. If the abundance of heavy elements such as Fe is small compared to that of C, N and O, the planets must be ones in which the rock fraction is small. It will be interesting to await observations of planets around metal-poor stars. For the panspermia theory, the origin of life could be expected in such systems.
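The quoted tail probabilities follow directly from the one-sided Gaussian exceedance formula P(X > nσ) = erfc(n/√2)/2; a quick check:

```python
from math import erfc, sqrt

# One-sided Gaussian exceedance probability for n-sigma perturbations
p = {n: 0.5 * erfc(n / sqrt(2.0)) for n in (4, 6, 8)}
for n, prob in p.items():
    print(f"{n} sigma: {prob:.2e}")
```

This reproduces the abstract's estimates of about 3 × 10^-5, 10^-9 and 10^-15, which, multiplied by the ≈ 10^16 (universe) or 10^5 (galaxy) available clouds, give the expected formation redshifts.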