Sample records for Gaussian random variable

  1. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
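The heavy central peak of such product distributions is easy to verify numerically. Below is a minimal Monte Carlo sketch (not the paper's Meijer G / Fox H computation): the empirical CDF of a product of N standard Gaussians is symmetric about zero and concentrates much more mass near the origin than a single Gaussian does. Sample sizes and N are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 3, 200_000

# Product of N independent standard Gaussian variables
prod = rng.standard_normal((n_samples, N)).prod(axis=1)

# The distribution is symmetric about zero, so the empirical CDF at 0 is ~1/2,
# and the density is sharply peaked there for N >= 2
ecdf_at_0 = (prod <= 0.0).mean()
frac_near_0 = (np.abs(prod) < 0.1).mean()   # far larger than ~0.08 for one Gaussian
```

For small arguments this empirical CDF is where the paper's low-order power-log series expansion applies.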

  2. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fits the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
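A concrete instance of this representation is the Student-t distribution, which is a q-Gaussian with q = (nu + 3)/(nu + 1) in one common convention. The sketch below (illustrative parameter values, not the paper's general construction) builds it from two independent gamma variables with the same unit scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 100_000, 5.0   # illustrative degrees of freedom

# Two independent gamma variables with the same (unit) scale parameter
g1 = rng.gamma(shape=0.5, scale=1.0, size=n)       # Z^2/2 ~ Gamma(1/2) for Z ~ N(0,1)
g2 = rng.gamma(shape=nu / 2.0, scale=1.0, size=n)  # chi^2_nu / 2 ~ Gamma(nu/2)
sign = rng.choice([-1.0, 1.0], size=n)

# Student-t with nu degrees of freedom, written purely as a function
# of the two gammas: t = sign * sqrt(nu * g1 / g2)
t = sign * np.sqrt(nu * g1 / g2)
```

The sample variance should approach nu/(nu - 2), the Student-t variance.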

  3. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
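The compound construction, in which a random intensity sets the variance of the Gaussian field statistics, can be sketched with a toy prior. The exponential intensity below is illustrative (not the paper's measured distribution); it yields a Laplace-like marginal with kurtosis 6 instead of the Gaussian value 3.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

# Intensity (the variance of the field) treated as a random variable;
# an exponential prior is an illustrative choice
intensity = rng.exponential(1.0, n)
field = rng.standard_normal(n) * np.sqrt(intensity)

# Fourth-to-second moment ratio: 3 for a Gaussian, 6 for this mixture (Laplace)
kurt_field = (field**4).mean() / (field**2).mean() ** 2
```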

  4. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
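The paper's point — that a sum of suitable positive variables stays close to a log-normal long before the Gaussian limit is reached — can be illustrated numerically. The lognormal summands and parameter values below are illustrative assumptions, not the paper's analytical example.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sums, n_terms = 50_000, 10

# Sums of positive, broadly distributed summands
s = rng.lognormal(mean=0.0, sigma=1.5, size=(n_sums, n_terms)).sum(axis=1)

def skewness(v):
    c = v - v.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

skew_s = skewness(s)             # stays strongly positive: far from Gaussian
skew_log_s = skewness(np.log(s)) # much closer to zero: roughly log-normal
```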

  5. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2 / ((1/n) Σ_{j=1}^{n} x_j^2), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ~ sqrt((4 − w)/w) for w ∈ [0, 4] and equals zero outside of this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
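A small Monte Carlo sketch of the GUE (β = 2) case follows; matrix size and trial count are illustrative, and the statistic w is scale-invariant, so the normalization of the matrix entries does not matter. Its ensemble mean is 1 by construction, and in the Marčenko-Pastur limit essentially all mass lies in [0, 4].

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 2000
w = np.empty(trials)

for k in range(trials):
    # GUE matrix: Hermitian with complex Gaussian entries
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2.0
    x = np.linalg.eigvalsh(H)
    j = rng.integers(n)                  # a randomly chosen (unordered) eigenvalue
    w[k] = x[j] ** 2 / (x**2).mean()     # Schmidt-like variable; scale-invariant
```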

  6. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been devoted to characterizing the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolution characteristics of the small-scale variability. The estimation procedure is computationally feasible owing to the use of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy in the small-scale ionospheric electric field variability than Gaussian models do.
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  7. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one- and two-dimensional synthetic test cases.
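The qualitative signature that motivates sub-Gaussian models — increments whose tails grow heavier as the lag shrinks — can be reproduced with a much simpler stand-in. The sketch below is not the authors' GSG algorithm: an AR(1) Gaussian process modulated pointwise by an iid lognormal factor is an illustrative assumption that shows the same effect in one dimension.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 200_000, 0.99

# Strongly correlated stationary AR(1) Gaussian process G
g = np.empty(n)
g[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
for i in range(1, n):
    g[i] = rho * g[i - 1] + eps[i]

# Pointwise random modulation U (iid lognormal here, an illustrative choice):
# Y = U * G is a crude sub-Gaussian-style field
u = rng.lognormal(0.0, 0.5, n)
y = u * g

def kurtosis(v):
    c = v - v.mean()
    return (c**4).mean() / (c**2).mean() ** 2

d_small = y[1:] - y[:-1]      # increments at small lag: tails dominated by U
d_large = y[500:] - y[:-500]  # increments at large lag: G essentially decorrelated
```

The small-lag increments come out markedly more leptokurtic than the large-lag ones, mimicking the scaling behavior the GSG model captures rigorously.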

  8. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion makes it possible to build the model even in the case of large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).

  9. Distillation of squeezing from non-Gaussian quantum states.

    PubMed

    Heersink, J; Marquardt, Ch; Dong, R; Filip, R; Lorenz, S; Leuchs, G; Andersen, U L

    2006-06-30

    We show that single copy distillation of squeezing from continuous variable non-Gaussian states is possible using linear optics and conditional homodyne detection. A specific non-Gaussian noise source, corresponding to a random linear displacement, is investigated experimentally. Conditioning the signal on a tap measurement, we observe probabilistic recovery of squeezing.

  10. Detection of nonlinear transfer functions by the use of Gaussian statistics

    NASA Technical Reports Server (NTRS)

    Sheppard, J. G.

    1972-01-01

    The possibility of using on-line signal statistics to detect electronic equipment nonlinearities is discussed. The results of an investigation using Gaussian statistics are presented, and a nonlinearity test that uses ratios of the moments of a Gaussian random variable is developed and discussed. An outline for further investigation is presented.
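The moment-ratio idea can be sketched directly: for a zero-mean Gaussian the fourth-to-second moment ratio equals 3, a linear transfer function preserves it, and a nonlinearity distorts it. The cubic nonlinearity and gains below are illustrative, not the report's test design.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(200_000)   # Gaussian input to the device under test

def moment_ratio(v):
    # E[v^4] / E[v^2]^2; equals 3 for any zero-mean Gaussian
    return (v**4).mean() / (v**2).mean() ** 2

linear_out = 2.0 * x               # a linear transfer function preserves the ratio
nonlinear_out = x + 0.5 * x**3     # a cubic nonlinearity inflates it sharply

r_lin = moment_ratio(linear_out)
r_nl = moment_ratio(nonlinear_out)
```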

  11. Worst case encoder-decoder policies for a communication system in the presence of an unknown probabilistic jammer

    NASA Astrophysics Data System (ADS)

    Cascio, David M.

    1988-05-01

    States of nature or observed data are often stochastically modelled as Gaussian random variables. At times it is desirable to transmit this information from a source to a destination with minimal distortion. Complicating this objective is the possible presence of an adversary attempting to disrupt this communication. In this report, solutions are provided to a class of minimax and maximin decision problems, which involve the transmission of a Gaussian random variable over a communications channel corrupted by both additive Gaussian noise and probabilistic jamming noise. The jamming noise is termed probabilistic in the sense that with nonzero probability 1-P, the jamming noise is prevented from corrupting the channel. We shall seek to obtain optimal linear encoder-decoder policies which minimize given quadratic distortion measures.
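The jam-free baseline of this setup has a classical closed form: a linear encoder scaled to the power budget followed by an LMMSE (Wiener) decoder achieves distortion σ_x²σ_n²/(P + σ_n²). The sketch below shows only this no-jammer case, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
sigma2_x, P, sigma2_n = 1.0, 4.0, 1.0   # source variance, power budget, channel noise

x = rng.normal(0.0, np.sqrt(sigma2_x), n)          # Gaussian source
a = np.sqrt(P / sigma2_x)                          # linear encoder meeting the power constraint
y = a * x + rng.normal(0.0, np.sqrt(sigma2_n), n)  # additive Gaussian channel (no jammer)

b = a * sigma2_x / (a**2 * sigma2_x + sigma2_n)    # LMMSE (Wiener) decoder gain
mse = ((x - b * y) ** 2).mean()

mse_theory = sigma2_x * sigma2_n / (P + sigma2_n)  # = 0.2 for these numbers
```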

  12. Rightfulness of Summation Cut-Offs in the Albedo Problem with Gaussian Fluctuations of the Density of Scatterers

    NASA Astrophysics Data System (ADS)

    Selim, M. M.; Bezák, V.

    2003-06-01

    The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function ε(x). Then the optical length X = ∫_0^L ε(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T²(X), R²(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a `modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the m.s.r. of X is sufficiently small in comparison with the mean value X̄, the standard Gaussian averaging functions well provided that the summation in the series representing the variables T^(m−j)(X) R^j(X) (m = 1, 2, …; j = 1, …, m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.

  13. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  14. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. 
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  15. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.

  16. Solute Concentration at a Pumping Well in Non-Gaussian Random Aquifers under Time-Varying Operational Schedules

    NASA Astrophysics Data System (ADS)

    Libera, A.; de Barros, F.; Riva, M.; Guadagnini, A.

    2016-12-01

    Managing contaminated groundwater systems is an arduous task for multiple reasons. First, subsurface hydraulic properties are heterogeneous and the high costs associated with site characterization leads to data scarcity (therefore, model predictions are uncertain). Second, it is common for water agencies to schedule groundwater extraction through a temporal sequence of pumping rates to maximize the benefits to anthropogenic activities and minimize the environmental footprint of the withdrawal operations. The temporal variability in pumping rates and aquifer heterogeneity affect dilution rates of contaminant plumes and chemical concentration breakthrough curves (BTCs) at the well. While contaminant transport under steady-state pumping is widely studied, the manner in which a given time-varying pumping schedule affects contaminant plume behavior is tackled only marginally. At the same time, most studies focus on the impact of Gaussian random hydraulic conductivity (K) fields on transport. Here, we systematically analyze the significance of the random space function (RSF) model characterizing K in the presence of distinct pumping operations on the uncertainty of the concentration BTC at the operating well. We juxtapose Monte Carlo based numerical results associated with two models: (a) a recently proposed Generalized Sub-Gaussian model which allows capturing non-Gaussian statistical scaling features of RSFs such as hydraulic conductivity, and (b) the commonly used Gaussian field approximation. Our novel results include an appraisal of the coupled effect of (a) the model employed to depict the random spatial variability of K and (b) transient flow regime, as induced by a temporally varying pumping schedule, on the concentration BTC at the operating well. We systematically quantify the sensitivity of the uncertainty in the contaminant BTC to the RSF model adopted for K (non-Gaussian or Gaussian) in the presence of diverse well pumping schedules. 
These results help determine the conditions under which either of these two key factors prevails over the other.

  17. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
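One of the book's classic topics, the random phasor sum, is easy to demonstrate: summing many unit phasors with independent uniform phases yields a circular complex Gaussian field, so the intensity becomes approximately exponential (fully developed speckle). Phasor and trial counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n_phasors, n_trials = 100, 50_000

# Sum of many unit-amplitude phasors with independent uniform phases
phases = rng.uniform(0.0, 2.0 * np.pi, (n_trials, n_phasors))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_phasors)

# By the central limit theorem the real/imaginary parts tend to independent
# Gaussians, so the intensity |field|^2 is approximately exponential with
# mean 1 and variance 1 (fully developed speckle)
intensity = np.abs(field) ** 2
```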

  18. Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.

    PubMed

    Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark

    2015-09-01

    "Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.

  19. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, and explore them on one- and two-dimensional synthetic test cases.

  20. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange interaction J_ij and random magnetic field h_i) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, which can assume positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
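Sampling the correlated pair (J_ij, h_i) from a joint Gaussian with correlation coefficient ρ is straightforward; the sketch below assumes unit variances and a particular ρ purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n, rho = 100_000, -0.4           # illustrative correlation; unit variances assumed
cov = np.array([[1.0, rho],
                [rho, 1.0]])

# Joint Gaussian draws of (J_ij, h_i) pairs with correlation rho
J, h = rng.multivariate_normal(np.zeros(2), cov, size=n).T
sample_rho = np.corrcoef(J, h)[0, 1]
```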

  1. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is proposed. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude possible negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at the different degrees of polarization.
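The effect of the two averaging choices can be sketched on the simplest transmission law T = exp(−X) with Gaussian optical length X: the Gaussian average has the closed form exp(−μ + σ²/2), while the "modified" (truncated-at-zero) average discards the unphysical negative optical lengths and is therefore smaller. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
mu, sigma, n = 0.5, 0.5, 200_000   # illustrative mean/spread of the optical length X

X = rng.normal(mu, sigma, n)
T_gauss = np.exp(-X).mean()            # ensemble-averaged transmission <exp(-X)>
T_exact = np.exp(-mu + sigma**2 / 2)   # closed form for Gaussian X

# "Modified" Gaussian averaging: keep only physical (positive) optical lengths
T_trunc = np.exp(-X[X > 0.0]).mean()
```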

  2. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low-dimension and exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of the principal component analysis that is optimal for Gaussian mixed signals and of the independent component analysis (ICA), optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is the independent subspace analysis (ISA) that looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D), not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars of easier post-processing. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested into three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian non-separable triad. Finally the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model, applied to the Northern Hemispheric winter. 
We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also present in the QG model but with weaker expression than dyads, owing to the imposed shape and dimension. The study emphasizes the existence of dyadic and triadic nonlinear teleconnections.
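The projection-pursuit rationale — rotate whitened data to maximize a non-Gaussianity contrast — can be sketched in two dimensions. The toy below is not the authors' ISA/negentropy method: it mixes two independent non-Gaussian sources by a rotation and recovers the mixing angle by scanning rotations with a squared-excess-kurtosis contrast, all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 20_000

# Two independent, unit-variance non-Gaussian sources:
# uniform (sub-Gaussian) and Laplace (super-Gaussian)
s = np.vstack([rng.uniform(-1.0, 1.0, n), rng.laplace(0.0, 1.0, n)])
s = (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

def R(t):  # 2-D rotation matrix
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

theta_mix = 0.6
x = R(theta_mix) @ s          # rotational mixing of white sources stays white

def contrast(v):
    # squared excess kurtosis: a simple non-Gaussianity contrast function
    k = (v**4).mean() / (v**2).mean() ** 2 - 3.0
    return k * k

# Projection pursuit: scan candidate un-mixing rotations and keep the one
# that maximizes the total non-Gaussianity of the rotated components
angles = np.linspace(0.0, np.pi / 2, 181)
scores = [sum(contrast(p) for p in R(-t) @ x) for t in angles]
theta_hat = angles[int(np.argmax(scores))]
```

The recovered angle should sit near the true mixing angle (up to the usual ICA sign/permutation ambiguities, which fall outside the scanned quadrant here).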

  3. Thermodynamical Limit for Correlated Gaussian Random Energy Models

    NASA Astrophysics Data System (ADS)

    Contucci, P.; Esposti, M. Degli; Giardinà, C.; Graffi, S.

    Let {EΣ(N)}Σ∈ΣN be a family of |ΣN| = 2^N centered unit Gaussian random variables defined by the covariance matrix CN of elements cN(Σ,τ) := Av(EΣ(N)Eτ(N)), and the corresponding random Hamiltonian. Then the quenched thermodynamical limit exists if, for every decomposition N = N1 + N2 and all pairs (Σ,τ) ∈ ΣN × ΣN, NcN(Σ,τ) ≤ N1cN1(π1(Σ),π1(τ)) + N2cN2(π2(Σ),π2(τ)), where πk(Σ), k = 1,2, are the projections of Σ ∈ ΣN into ΣNk. The condition is explicitly verified for the Sherrington-Kirkpatrick, the even p-spin, the Derrida REM and the Derrida-Gardner GREM models.
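    Reading the condition as the bound N·cN(Σ,τ) ≤ N1·cN1(π1(Σ),π1(τ)) + N2·cN2(π2(Σ),π2(τ)), it can be spot-checked exhaustively for the Sherrington-Kirkpatrick covariance cN(Σ,τ) = ((1/N)Σ·τ)², where it follows from the convexity of x ↦ x². The script below is only an illustrative check, not the paper's proof:

```python
import itertools
import numpy as np

def c_sk(sigma, tau):
    """Sherrington-Kirkpatrick covariance: squared overlap of two spin configs."""
    n = len(sigma)
    return (np.dot(sigma, tau) / n) ** 2

N1, N2 = 3, 2
N = N1 + N2
spins = [np.array(s) for s in itertools.product([-1, 1], repeat=N)]

# Check N*c_N <= N1*c_N1 + N2*c_N2 over all 2^N x 2^N pairs (pi_k = restriction).
ok = all(
    N * c_sk(s, t) <= N1 * c_sk(s[:N1], t[:N1]) + N2 * c_sk(s[N1:], t[N1:]) + 1e-12
    for s in spins for t in spins
)
print(ok)  # True: Jensen's inequality applied to the squared overlap
```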

  4. Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1992-01-01

    Growth rates are calculated for a protoplanet on a circular orbit, embedded in a disk of planetesimals with a triaxial Gaussian velocity dispersion and uniform surface density. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those variables in the disk with locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.

  5. Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue

    NASA Astrophysics Data System (ADS)

    Kree, P.; Soize, C.

    The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.

  6. Spatio-temporal modelling of wind speed variations and extremes in the Caribbean and the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Rychlik, Igor; Mao, Wengang

    2018-02-01

    The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.

  7. A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales

    NASA Astrophysics Data System (ADS)

    Elliott, Frank W.; Majda, Andrew J.

    1995-03-01

    A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
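    The plane-wave superposition idea (though not the authors' wavelet Monte Carlo refinement) can be sketched as follows: sample random directions, phases, Gaussian amplitudes, and power-law-distributed wavenumber moduli, and sum the resulting one-dimensional waves. All parameter values are illustrative:

```python
# Sketch: a 2-D Gaussian-like random field built from M random plane waves,
# f(x) = sqrt(2/M) * sum_j a_j * cos(k_j . x + phi_j), with |k_j| sampled so the
# energy roughly follows a power law ~ k^(-alpha) between kmin and kmax.
import numpy as np

rng = np.random.default_rng(1)
M, alpha = 512, 5.0 / 3.0                 # number of waves, Kolmogorov-like exponent
kmin, kmax = 1.0, 64.0

# Inverse-CDF sampling of wavenumber moduli with density ~ k^(-alpha).
u = rng.uniform(size=M)
p = 1.0 - alpha
k = (kmin**p + u * (kmax**p - kmin**p)) ** (1.0 / p)
theta = rng.uniform(0, 2 * np.pi, M)      # isotropic directions
kvec = np.c_[k * np.cos(theta), k * np.sin(theta)]
phi = rng.uniform(0, 2 * np.pi, M)
a = rng.normal(size=M)                    # Gaussian amplitudes

def field(x):
    """Evaluate one realization of the random field at points x, shape (n, 2)."""
    return np.sqrt(2.0 / M) * (a * np.cos(x @ kvec.T + phi)).sum(axis=1)

x = rng.uniform(0, 1, size=(4000, 2))
f = field(x)                              # approximately Gaussian, unit variance
```

    By the central limit theorem the sum over many independent waves is approximately Gaussian at each point; the paper's contribution is the wavelet treatment of each one-dimensional wave, which this sketch omits.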

  8. Continuous-variable phase estimation with unitary and random linear disturbance

    NASA Astrophysics Data System (ADS)

    Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.

    2014-10-01

    We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, nout. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.

  9. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.

  10. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  11. Universal quantum computation with temporal-mode bilayer square lattices

    NASA Astrophysics Data System (ADS)

    Alexander, Rafael N.; Yokoyama, Shota; Furusawa, Akira; Menicucci, Nicolas C.

    2018-03-01

    We propose an experimental design for universal continuous-variable quantum computation that incorporates recent innovations in linear-optics-based continuous-variable cluster state generation and cubic-phase gate teleportation. The first ingredient is a protocol for generating the bilayer-square-lattice cluster state (a universal resource state) with temporal modes of light. With this state, measurement-based implementation of Gaussian unitary gates requires only homodyne detection. Second, we describe a measurement device that implements an adaptive cubic-phase gate, up to a random phase-space displacement. It requires a two-step sequence of homodyne measurements and consumes a (non-Gaussian) cubic-phase state.

  12. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
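    The ZMNL step can be sketched as follows: a Gaussian time history x with the desired spectral colour is passed through g = F⁻¹∘Φ (target inverse cdf composed with the standard normal cdf), so that y = g(x) has the target marginal, here an exponential chosen for illustration. The iteration that restores the target spectrum after the distortion introduced by g is omitted:

```python
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(2)
# A band-limited Gaussian time history: white noise through a low-pass filter.
b, a = signal.butter(4, 0.2)
x = signal.lfilter(b, a, rng.normal(size=20000))
x = (x - x.mean()) / x.std()

# Zero-memory nonlinearity g = Ftarget^{-1} o Phi maps the marginal to the target.
u = stats.norm.cdf(x)            # probability-integral transform: uniform marginal
y = stats.expon.ppf(u)           # target exponential marginal (skewed, non-Gaussian)
```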

  13. Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1991-01-01

    The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.

  14. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061

  15. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

    I present a parametric, bijective transformation to generate heavy-tailed versions of arbitrary random variables. The tail behavior of this heavy-tailed Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For Gaussian X it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
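    For the Gaussian special case (Tukey's h), the transform and its Lambert W inverse can be sketched directly; δ = 0.2 is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.special import lambertw

rng = np.random.default_rng(3)
delta = 0.2
x = rng.normal(size=100_000)                 # Gaussian input X
y = x * np.exp(0.5 * delta * x**2)           # heavy-tailed Y (Tukey's h for Gaussian X)

# Gaussianize: the principal branch of Lambert W inverts the transform exactly,
# since delta*y^2 = (delta*x^2) * exp(delta*x^2).
x_back = np.sign(y) * np.sqrt(np.real(lambertw(delta * y**2)) / delta)
```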

  16. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional, or vector-valued, observations. The assumption of a Gaussian distribution is replaced with a multivariate Gaussian distribution for the data vectors and the residual-term variables in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the newly created covariate variables. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.

  17. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of the Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts the spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory, (2) the non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.

  18. Random diffusivity from stochastic equations: comparison of two models for Brownian yet non-Gaussian diffusion

    NASA Astrophysics Data System (ADS)

    Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf

    2018-04-01

    A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
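    A minimal superstatistical sketch of the random-diffusivity idea: each trajectory draws its own diffusivity D from an exponential (gamma) distribution, a special case of the generalised Gamma class named in the abstract; the ensemble MSD then remains linear in time while the displacement distribution is Laplace-like rather than Gaussian. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_traj, n_steps, dt = 20000, 100, 0.01
D = rng.gamma(shape=1.0, scale=1.0, size=n_traj)        # exponential diffusivities
steps = rng.normal(size=(n_traj, n_steps)) * np.sqrt(2 * D[:, None] * dt)
x = steps.cumsum(axis=1)                                 # one row per trajectory

t = dt * np.arange(1, n_steps + 1)
msd = (x**2).mean(axis=0)                                # ensemble MSD, ~ 2*E[D]*t

# Excess kurtosis of the final displacements; 0 for a Gaussian ensemble,
# 3 for the Laplace law implied by exponentially distributed D.
xf = x[:, -1]
kurt = (xf**4).mean() / (xf**2).mean() ** 2 - 3.0
```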

  19. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formulism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources for combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  20. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty for the geometry of rigid parts is expressed as uniform random variables, while the uncertainty for the material properties of flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
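    The sampling side of such a non-intrusive scheme can be sketched with SciPy's Latin Hypercube sampler: stratified samples drive a uniform geometry parameter and a Gaussian material parameter through a toy response (a cantilever deflection standing in for the multibody solve). The PC-expansion post-processing is omitted, and all names and numbers are hypothetical:

```python
import numpy as np
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=5)
u = sampler.random(n=1000)                        # stratified samples in [0, 1)^2
length = qmc.scale(u[:, [0]], 0.9, 1.1).ravel()   # uniform geometry: rod length [m]
E_mod = norm.ppf(u[:, 1], loc=200e9, scale=10e9)  # Gaussian material: Young's modulus [Pa]

def tip_deflection(L, E, F=100.0, I=1e-8):
    """Toy response: cantilever tip deflection F*L^3 / (3*E*I)."""
    return F * L**3 / (3 * E * I)

d = tip_deflection(length, E_mod)                 # propagated response samples
mean, std = d.mean(), d.std()
```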

  1. Gaussian random bridges and a geometric model for information equilibrium

    NASA Astrophysics Data System (ADS)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
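    In the spirit of the anticipative representation (a random variable plus a Gaussian bridge vanishing at both ends), a discretised toy information process can be sketched as below; the notation and parameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 1000
t = np.linspace(0, T, n + 1)
dW = rng.normal(scale=np.sqrt(T / n), size=n)
W = np.r_[0.0, dW.cumsum()]                 # Brownian motion on [0, T]
bridge = W - (t / T) * W[-1]                # Brownian bridge, zero at both endpoints
X = rng.normal(loc=2.0)                     # the signal random variable
xi = (t / T) * X + bridge                   # noisy information process: xi_T = X
```

    The process starts at 0 and reveals X exactly at time T, with the bridge playing the role of noise that dies out at the horizon.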

  2. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630nm domain, for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. Continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  3. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are therefore best described stochastically. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position regarding optimal generation management. The utility revenue functions are highly dependent on the underlying consumer demand models. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed, stochastically and statistically, the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presented a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between dependent (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
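    A toy version of a multivariate-Gaussian demand-to-revenue model, with a Monte Carlo check against the analytic mean; all tariffs, demand levels, and correlation parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
hours, tariff, fixed = 24, 0.15, 50.0        # $/kWh and $/day, hypothetical
mu = 30 + 10 * np.sin(np.arange(hours) * 2 * np.pi / 24)   # mean hourly demand [kWh]
rho, sigma = 0.8, 4.0
# AR(1)-style correlation between hours gives a valid covariance matrix.
cov = sigma**2 * rho ** np.abs(np.subtract.outer(np.arange(hours), np.arange(hours)))

demand = rng.multivariate_normal(mu, cov, size=100000)     # sampled daily profiles
revenue = fixed + tariff * demand.sum(axis=1)

analytic_mean = fixed + tariff * mu.sum()    # exact: revenue is linear in the Gaussian
mc_mean = revenue.mean()                     # Monte Carlo estimate of the same quantity
```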

  4. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are therefore best described stochastically. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position regarding optimal generation management. The utility revenue functions are highly dependent on the underlying consumer demand models. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed, stochastically and statistically, the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presented a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between dependent (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.

  5. Breaking Gaussian incompatibility on continuous variable quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi; Kiukas, Jukka, E-mail: jukka.kiukas@aber.ac.uk; Schultz, Jussi, E-mail: jussi.schultz@gmail.com

    2015-08-15

    We characterise Gaussian quantum channels that are Gaussian incompatibility breaking, that is, transform every set of Gaussian measurements into a set obtainable from a joint Gaussian observable via Gaussian postprocessing. Such channels represent local noise which renders measurements useless for Gaussian EPR-steering, providing the appropriate generalisation of entanglement breaking channels for this scenario. Understanding the structure of Gaussian incompatibility breaking channels contributes to the resource theory of noisy continuous variable quantum information protocols.

  6. The use of the multi-cumulant tensor analysis for the algorithmic optimisation of investment portfolios

    NASA Astrophysics Data System (ADS)

    Domino, Krzysztof

    2017-02-01

    Cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, and share price returns are a good example of such data. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (the percentage share returns of many companies). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Note that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is a novel idea. It can be expected that the algorithm would be useful in financial data analysis on a worldwide scale, as well as in the analysis of other types of non-Gaussian distributed data.

  7. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
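
    A minimal sketch of why the heavier Student-t tails confer robustness: the probability of a 5-unit outlier under a t distribution with 3 degrees of freedom is orders of magnitude larger than under a Gaussian, so such a point distorts a t-based mixture component far less. SciPy is assumed here purely for the density functions; the degrees of freedom are an illustrative choice:

    ```python
    from scipy import stats

    # Two-sided tail probability of a point 5 standardised units out.
    p_gauss = 2 * stats.norm.sf(5.0)
    p_t3 = 2 * stats.t.sf(5.0, df=3)

    print(f"Gaussian tail: {p_gauss:.2e}")
    print(f"Student-t(3) tail: {p_t3:.2e}")
    print(f"ratio: {p_t3 / p_gauss:.0f}")
    ```

    Because the t component assigns a non-negligible likelihood to such points, the E-step responsibilities are not dominated by a handful of outliers, which is the mechanism behind the robust clustering described above.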

  8. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model, we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in statistical mechanics, we introduce an entropic measure of network complexity. We prove that it is invariant under network isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  9. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME) method, has been extended to solve nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, the head h is decomposed as a perturbation expansion series Σh^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of lnKS. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_{i1,i2,...,im}. A series of numerical tests in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes, and requires much less computational effort than the traditional Monte Carlo simulation technique.
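
    The first step of the KLME approach, expanding a random field in terms of orthogonal Gaussian standard random variables weighted by the eigenpairs of its covariance, can be sketched numerically. The grid, variance and correlation length below are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    # Minimal discrete Karhunen-Loeve sketch: 1-D grid, exponential covariance.
    n, sigma2, corr_len = 200, 1.0, 0.3
    x = np.linspace(0.0, 1.0, n)
    C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

    # Eigen-decomposition of the covariance: C f_i = lambda_i f_i.
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Truncated KL expansion: lnK(x) = <lnK> + sum_i sqrt(lambda_i) f_i(x) xi_i,
    # where the xi_i are orthogonal standard Gaussian random variables.
    m = 20
    rng = np.random.default_rng(1)
    xi = rng.standard_normal(m)
    lnK = 0.0 + eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)

    # The retained modes should capture most of the total variance (trace of C).
    captured = eigvals[:m].sum() / eigvals.sum()
    print(f"variance captured by {m} modes: {captured:.1%}")
    ```

    The rapid eigenvalue decay is what makes the truncated expansion, and hence the KLME moment equations, so much cheaper than Monte Carlo over the full field.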

  10. General immunity and superadditivity of two-way Gaussian quantum cryptography.

    PubMed

    Ottaviani, Carlo; Pirandola, Stefano

    2016-03-01

    We consider two-way continuous-variable quantum key distribution, studying its security against general eavesdropping strategies. Assuming the asymptotic limit of many signals exchanged, we prove that two-way Gaussian protocols are immune to coherent attacks. More precisely we show the general superadditivity of the two-way security thresholds, which are proven to be higher than the corresponding one-way counterparts in all cases. We perform the security analysis first reducing the general eavesdropping to a two-mode coherent Gaussian attack, and then showing that the superadditivity is achieved by exploiting the random on/off switching of the two-way quantum communication. This allows the parties to choose the appropriate communication instances to prepare the key, accordingly to the tomography of the quantum channel. The random opening and closing of the circuit represents, in fact, an additional degree of freedom allowing the parties to convert, a posteriori, the two-mode correlations of the eavesdropping into noise. The eavesdropper is assumed to have no access to the on/off switching and, indeed, cannot adapt her attack. We explicitly prove that this mechanism enhances the security performance, no matter if the eavesdropper performs collective or coherent attacks.

  11. General immunity and superadditivity of two-way Gaussian quantum cryptography

    PubMed Central

    Ottaviani, Carlo; Pirandola, Stefano

    2016-01-01

    We consider two-way continuous-variable quantum key distribution, studying its security against general eavesdropping strategies. Assuming the asymptotic limit of many signals exchanged, we prove that two-way Gaussian protocols are immune to coherent attacks. More precisely we show the general superadditivity of the two-way security thresholds, which are proven to be higher than the corresponding one-way counterparts in all cases. We perform the security analysis first reducing the general eavesdropping to a two-mode coherent Gaussian attack, and then showing that the superadditivity is achieved by exploiting the random on/off switching of the two-way quantum communication. This allows the parties to choose the appropriate communication instances to prepare the key, accordingly to the tomography of the quantum channel. The random opening and closing of the circuit represents, in fact, an additional degree of freedom allowing the parties to convert, a posteriori, the two-mode correlations of the eavesdropping into noise. The eavesdropper is assumed to have no access to the on/off switching and, indeed, cannot adapt her attack. We explicitly prove that this mechanism enhances the security performance, no matter if the eavesdropper performs collective or coherent attacks. PMID:26928053

  12. Poly-Gaussian model of randomly rough surface in rarefied gas flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksenova, Olga A.; Khalidov, Iskander A.

    2014-12-09

    Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.

  13. Super-resolving random-Gaussian apodized photon sieve.

    PubMed

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The random distribution in dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities on the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the central maxima width is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.

  14. Quantum key distribution using basis encoding of Gaussian-modulated coherent states

    NASA Astrophysics Data System (ADS)

    Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua

    2018-04-01

    Continuous-variable quantum key distribution (CVQKD) has been demonstrated to be viable for practical secure quantum cryptography. However, its performance is strongly restricted by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol that encodes the secret keys on the random choices of two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Consequently, the proposed scheme can tolerate much higher excess noise and enables a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can work alternatively to significantly strengthen the performance of the known Gaussian-modulated CVQKD protocol and serve as a multiplier for practical secure quantum cryptography with continuous variables.

  15. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, and then propose an improved importance sampling algorithm, called linear Gaussian importance sampling (LGIS), for general hybrid models. LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in the Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. Performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
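
    The likelihood weighting idea reviewed above can be sketched on a toy hybrid network with one discrete parent and one continuous evidence node; the network and its parameters are invented for illustration and are unrelated to the models tested in the paper:

    ```python
    import math
    import random

    # Toy hybrid network A -> X:
    #   A ~ Bernoulli(0.3)  (discrete),  X | A ~ Normal(mu[A], 1)  (continuous).
    # Likelihood weighting estimates P(A=1 | X=2.0): sample the non-evidence
    # variable from its prior, weight each sample by the evidence likelihood.
    random.seed(0)
    mu = {0: 0.0, 1: 2.0}

    def normal_pdf(x, m, s=1.0):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    evidence_x = 2.0
    num = den = 0.0
    for _ in range(100_000):
        a = 1 if random.random() < 0.3 else 0  # sample A from its prior
        w = normal_pdf(evidence_x, mu[a])      # weight by likelihood of evidence
        num += w * a
        den += w

    posterior = num / den
    print(f"P(A=1 | X=2.0) ~ {posterior:.3f}")
    ```

    The exact posterior here is 0.3 e^2 / (0.3 e^2 + 0.7) ~ 0.76; the estimate converges to it. LGIS improves on this baseline by adaptively fitting the importance function instead of sampling from the prior.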

  16. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
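
    The gamma-mixed Poisson construction described above can be checked in a few lines: mixing the Poisson rate over a gamma random effect yields the negative binomial, whose variance exceeds its mean. The shape and scale below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000
    shape, scale = 2.0, 1.5  # gamma random effect; mean rate = shape * scale = 3.0

    rates = rng.gamma(shape, scale, size=n)  # patient-specific random rate
    mixed = rng.poisson(rates)               # count given the rate: negative binomial
    pure = rng.poisson(3.0, size=n)          # pure Poisson with the same mean

    print(f"mixed: mean={mixed.mean():.2f}, var={mixed.var():.2f}")
    print(f"pure:  mean={pure.mean():.2f}, var={pure.var():.2f}")
    ```

    The mixed counts have variance mean + shape * scale^2 = 7.5 against a mean of 3, while the pure Poisson has variance equal to its mean; this excess is the overdispersion the paper models.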

  17. Probabilistic solutions of nonlinear oscillators excited by combined colored and white noise excitations

    NASA Astrophysics Data System (ADS)

    Siu-Siu, Guo; Qingxuan, Shi

    2017-03-01

    In this paper, single-degree-of-freedom (SDOF) systems subjected to combined Gaussian white noise and Gaussian/non-Gaussian colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations, with four coupled first-order differential equations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer program become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. The Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) for the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions, especially in the tail regions of the PDFs. Moreover, the statistical parameter of mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis. We hope the present work provides insight into the investigation of structures under random loadings.

  18. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    NASA Astrophysics Data System (ADS)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat tails and high peaks compared to the Gaussian distribution. In this paper, we explain how observable statistical distributions in the macroscopic world could be related to randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian, or very close to Gaussian, subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three-and-a-half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat tails and high peaks relative to the Gaussian distribution in stock and commodity prices and in many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  19. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results and conclusions are provided.

  20. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  1. Analysis of randomly time varying systems by gaussian closure technique

    NASA Astrophysics Data System (ADS)

    Dash, P. K.; Iyengar, R. N.

    1982-07-01

    The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.

  2. Optimality of Gaussian attacks in continuous-variable quantum cryptography.

    PubMed

    Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio

    2006-11-10

    We analyze the asymptotic security of the family of Gaussian-modulated quantum key distribution protocols for continuous-variable systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.

  3. Bayesian Lagrangian Data Assimilation and Drifter Deployment Strategies

    NASA Astrophysics Data System (ADS)

    Dutt, A.; Lermusiaux, P. F. J.

    2017-12-01

    Ocean currents transport a variety of natural (e.g. water masses, phytoplankton, zooplankton, sediments, etc.) and man-made materials and other objects (e.g. pollutants, floating debris, search and rescue, etc.). Lagrangian Coherent Structures (LCSs) or the most influential/persistent material lines in a flow, provide a robust approach to characterize such Lagrangian transports and organize classic trajectories. Using the flow-map stochastic advection and a dynamically-orthogonal decomposition, we develop uncertainty prediction schemes for both Eulerian and Lagrangian variables. We then extend our Bayesian Gaussian Mixture Model (GMM)-DO filter to a joint Eulerian-Lagrangian Bayesian data assimilation scheme. The resulting nonlinear filter allows the simultaneous non-Gaussian estimation of Eulerian variables (e.g. velocity, temperature, salinity, etc.) and Lagrangian variables (e.g. drifter/float positions, trajectories, LCSs, etc.). Its results are showcased using a double-gyre flow with a random frequency, a stochastic flow past a cylinder, and realistic ocean examples. We further show how our Bayesian mutual information and adaptive sampling equations provide a rigorous efficient methodology to plan optimal drifter deployment strategies and predict the optimal times, locations, and types of measurements to be collected.

  4. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to the random vorticity equations associated with the stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain pathwise continuous dependence of the solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  5. Investigating Einstein-Podolsky-Rosen steering of continuous-variable bipartite states by non-Gaussian pseudospin measurements

    NASA Astrophysics Data System (ADS)

    Xiang, Yu; Xu, Buqing; Mišta, Ladislav; Tufarelli, Tommaso; He, Qiongyi; Adesso, Gerardo

    2017-10-01

    Einstein-Podolsky-Rosen (EPR) steering is an asymmetric form of correlations which is intermediate between quantum entanglement and Bell nonlocality, and can be exploited as a resource for quantum communication with one untrusted party. In particular, steering of continuous-variable Gaussian states has been extensively studied theoretically and experimentally, as a fundamental manifestation of the EPR paradox. While most of these studies focused on quadrature measurements for steering detection, two recent works revealed that there exist Gaussian states which are only steerable by suitable non-Gaussian measurements. In this paper we perform a systematic investigation of EPR steering of bipartite Gaussian states by pseudospin measurements, complementing and extending previous findings. We first derive the density-matrix elements of two-mode squeezed thermal Gaussian states in the Fock basis, which may be of independent interest. We then use such a representation to investigate steering of these states as detected by a simple nonlinear criterion, based on second moments of the correlation matrix constructed from pseudospin operators. This analysis reveals previously unexplored regimes where non-Gaussian measurements are shown to be more effective than Gaussian ones to witness steering of Gaussian states in the presence of local noise. We further consider an alternative set of pseudospin observables, whose expectation value can be expressed more compactly in terms of Wigner functions for all two-mode Gaussian states. However, according to the adopted criterion, these observables are found to be always less sensitive than conventional Gaussian observables for steering detection. Finally, we investigate continuous-variable Werner states, which are non-Gaussian mixtures of Gaussian states, and find that pseudospin measurements are always more effective than Gaussian ones to reveal their steerability. 
Our results provide useful insights on the role of non-Gaussian measurements in characterizing quantum correlations of Gaussian and non-Gaussian states of continuous-variable quantum systems.

  6. MAI statistics estimation and analysis in a DS-CDMA system

    NASA Astrophysics Data System (ADS)

    Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.

    2018-05-01

    A primary limitation of Direct Sequence Code Division Multiple Access (DS-CDMA) link performance and system capacity is multiple access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works have assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterize the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both the variance and the mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate this by computer simulations.
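
    A rough numerical sketch of the Gaussian approximation for MAI (illustrative parameters, not the system model of the paper): summing the despread interference contributions of many users with random +/-1 chips yields an empirical kurtosis close to the Gaussian value of 3:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    K, N, trials = 30, 64, 50_000  # users, chips per bit, Monte Carlo trials

    # Interference from the K-1 other users: data bit times the normalized
    # chip correlation. The correlation of N random +/-1 chips is modelled as
    # (2*Binomial(N, 0.5) - N) / N, which has mean 0 and variance 1/N.
    bits = rng.choice([-1, 1], size=(trials, K - 1))
    corr = (2 * rng.binomial(N, 0.5, size=(trials, K - 1)) - N) / N
    mai = (bits * corr).sum(axis=1)

    # Sample kurtosis near 3 indicates near-Gaussian behaviour (CLT over users).
    z = (mai - mai.mean()) / mai.std()
    kurtosis = np.mean(z**4)
    print(f"MAI mean={mai.mean():.3f}, var={mai.var():.4f}, kurtosis={kurtosis:.2f}")
    ```

    The MAI variance here is (K-1)/N, and the kurtosis settles near 3 as the number of users grows, which is the central-limit reasoning behind the Gaussian approximation the paper examines.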

  7. Lognormal Assimilation of Water Vapor in a WRF-GSI Cycled System

    NASA Astrophysics Data System (ADS)

    Fletcher, S. J.; Kliewer, A.; Jones, A. S.; Forsythe, J. M.

    2015-12-01

    Recent publications have shown the viability of both detecting a lognormally-distributed signal for water vapor mixing ratio and the improved quality of satellite retrievals in a 1DVAR mixed lognormal-Gaussian assimilation scheme over a Gaussian-only system. This mixed scheme is incorporated into the Gridpoint Statistical Interpolation (GSI) assimilation scheme with the goal of improving forecasts from the Weather Research and Forecasting (WRF) Model in a cycled system. Results are presented of the impact of treating water vapor as a lognormal random variable. Included in the analysis are: 1) the evolution of Tropical Storm Chris from 2006, and 2) an analysis of a "Pineapple Express" water vapor event from 2005 where a lognormal signal has been previously detected.

  8. The Significance of an Excess in a Counting Experiment: Assessing the Impact of Systematic Uncertainties and the Case with a Gaussian Background

    NASA Astrophysics Data System (ADS)

    Vianello, Giacomo

    2018-05-01

    Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (“on” measurement) is contrasted with a background-only observation free of the effect (“off” measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables. In this paper we study three other cases: (i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, (ii) the case where the background estimate b in the off measurement has an additional systematic uncertainty, and (iii) the case where b is a Gaussian random variable instead of a Poisson random variable. The latter case applies when b comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use a formula that is only valid when b is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short gamma-ray bursts and of new X-ray or γ-ray sources. All the techniques presented in this paper are made available in a Python code that is ready to use.
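
    For reference, the widely used Li & Ma formula for the Poisson on/off case, alongside the naive Gaussian-background formula whose limited validity motivates the paper. The counts below are illustrative:

    ```python
    import math

    def li_ma_significance(n_on, n_off, alpha):
        """Significance for Poisson on/off measurements (Li & Ma 1983, Eq. 17).
        alpha is the ratio of on-source to off-source exposure."""
        term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
        term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
        return math.sqrt(2.0 * (term_on + term_off))

    def gaussian_bkg_significance(n_on, b, sigma_b):
        """Naive formula often used when the background estimate b is Gaussian
        with uncertainty sigma_b; as the abstract notes, it is only valid for
        large b and small sigma_b, and the paper derives a general replacement."""
        return (n_on - b) / math.sqrt(b + sigma_b**2)

    # Example: 130 on-source counts, 500 off-source counts with alpha = 0.2
    # (expected background b = alpha * n_off = 100, i.e. an excess of 30).
    s_li_ma = li_ma_significance(130, 500, 0.2)
    s_gauss = gaussian_bkg_significance(130, 100.0, 5.0)
    print(f"Li & Ma: {s_li_ma:.2f} sigma, naive Gaussian-background: {s_gauss:.2f} sigma")
    ```

    Note how the two estimates already disagree at moderate counts; quantifying and correcting this discrepancy across all regimes is the subject of the paper.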

  9. Joint simulation of stationary grade and non-stationary rock type for quantifying geological uncertainty in a copper deposit

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad; Emery, Xavier

    2017-12-01

    In mineral resources evaluation, the joint simulation of a quantitative variable, such as a metal grade, and a categorical variable, such as a rock type, is challenging when one wants to reproduce spatial trends of the rock type domains, a feature that makes a stationarity assumption questionable. To address this problem, this work presents methodological and practical proposals for jointly simulating a grade and a rock type, when the former is represented by the transform of a stationary Gaussian random field and the latter is obtained by truncating an intrinsic random field of order k with Gaussian generalized increments. The proposals concern both the inference of the model parameters and the construction of realizations conditioned to existing data. The main difficulty is the identification of the spatial correlation structure, for which a semi-automated algorithm is designed, based on a least squares fitting of the data-to-data indicator covariances and grade-indicator cross-covariances. The proposed models and algorithms are applied to jointly simulate the copper grade and the rock type in a Chilean porphyry copper deposit. The results show their ability to reproduce the gradual transitions of the grade when crossing a rock type boundary, as well as the spatial zonation of the rock type.

  10. Speckle lithography for fabricating Gaussian, quasi-random 2D structures and black silicon structures.

    PubMed

    Bingi, Jayachandra; Murukeshan, Vadakke Matham

    2015-12-18

    Laser speckle pattern is a granular structure formed due to random coherent wavelet interference and generally considered as noise in optical systems including photolithography. Contrary to this, in this paper, we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, radial distribution function (RDF) and fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness makes this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance the light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random media based devices.
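
    The statistics that make speckle usable as a lithographic tool can be reproduced numerically: propagating a randomly phased circular aperture to the far field gives a fully developed speckle pattern whose intensity contrast (std/mean) is close to 1. Grid and aperture sizes below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, radius = 512, 60

    # Circular aperture with a uniformly random phase at each point,
    # modelling coherent light scattered by a rough diffuser.
    y, x = np.indices((n, n)) - n // 2
    aperture = (x**2 + y**2 <= radius**2).astype(float)
    field = aperture * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))

    # Far-field (Fraunhofer) intensity via a 2-D FFT.
    speckle = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    speckle /= speckle.mean()

    # Fully developed speckle intensity follows a negative-exponential law,
    # so the contrast std/mean should be close to 1.
    contrast = speckle.std() / speckle.mean()
    print(f"speckle contrast: {contrast:.2f}")
    ```

    Controlling the speckle grain size (via the aperture) and the exposure statistics is, in essence, what the proposed lithography technique exploits to write predictable Gaussian random structures.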

  11. Central Limit Theorems for Linear Statistics of Heavy Tailed Random Matrices

    NASA Astrophysics Data System (ADS)

    Benaych-Georges, Florent; Guionnet, Alice; Male, Camille

    2014-07-01

    We show central limit theorems (CLTs) for the linear statistics of symmetric matrices with independent heavy-tailed entries, including entries in the domain of attraction of α-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdős–Rényi graphs. For the second model, we also prove a central limit theorem for the moments of its empirical eigenvalue distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is that of the classical CLT for independent random variables.

  12. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N)-phase diagram, convex optimization (l1 minimization subject to y = Ax) typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property, with the same phase transition location, holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X, for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588

  13. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
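    The discrete-increment substitution mentioned above can be sketched for a scalar Ornstein-Uhlenbeck equation (a stand-in for the paper's Langevin setting; step size and coefficients are arbitrary): replacing each Gaussian increment by a two-point variable with matching first two moments leaves weak-sense statistics essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_ou(rng, n_paths, T=1.0, h=0.01, x0=1.0, sigma=0.5, discrete=False):
    """Euler-Maruyama for dX = -X dt + sigma dW.

    If discrete=True, each Gaussian increment sqrt(h)*N(0,1) is replaced by a
    two-point variable +/- sqrt(h), which matches its first two moments and is
    sufficient for weak (distribution-level) approximation.
    """
    n_steps = int(round(T / h))
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        if discrete:
            dw = np.sqrt(h) * rng.choice([-1.0, 1.0], size=n_paths)
        else:
            dw = np.sqrt(h) * rng.standard_normal(n_paths)
        x = x - x * h + sigma * dw
    return x

T, sigma, x0 = 1.0, 0.5, 1.0
# Exact second moment of the OU process at time T.
exact = x0 ** 2 * np.exp(-2 * T) + sigma ** 2 / 2 * (1 - np.exp(-2 * T))
results = {}
for discrete in (False, True):
    results[discrete] = float(np.mean(euler_ou(rng, 200_000, discrete=discrete) ** 2))
    print(f"discrete={discrete}: E[X_T^2] ~ {results[discrete]:.4f} (exact {exact:.4f})")
```

    Discrete increments are cheaper to sample than Gaussians, which is one source of the efficiency gain the paper reports for MLMC.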

  15. Assessment of DPOAE test-retest difference curves via hierarchical Gaussian processes.

    PubMed

    Bao, Junshu; Hanson, Timothy; McMillan, Garnett P; Knight, Kristin

    2017-03-01

    Distortion product otoacoustic emissions (DPOAE) testing is a promising alternative to behavioral hearing tests and auditory brainstem response testing of pediatric cancer patients. The central goal of this study is to assess whether significant changes in the DPOAE frequency/emissions curve (DP-gram) occur in pediatric patients in a test-retest scenario. This is accomplished through the construction of normal reference charts, or credible regions, that DP-gram differences lie in, as well as contour probabilities that measure how abnormal (or in a certain sense rare) a test-retest difference is. A challenge is that the data were collected over varying frequencies, at different time points from baseline, and on possibly one or both ears. A hierarchical structural equation Gaussian process model is proposed to handle the different sources of correlation in the emissions measurements, wherein both subject-specific random effects and variance components governing the smoothness and variability of each child's Gaussian process are coupled together. © 2016, The International Biometric Society.

  16. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386

  17. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
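    A minimal sketch of the generation step, assuming inverse-CDF (and, for the normal deviate, Box-Muller) transforms of computer-generated uniforms; the shape and scale values are illustrative, not those of the report:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)

# Inverse-CDF transforms turning uniform random numbers into non-Gaussian
# load amplitudes; the shape/scale values here are hypothetical.
weibull = (-np.log1p(-u1)) ** (1 / 2.0)                  # Weibull, shape k=2, scale 1
exponential = -np.log1p(-u1)                             # exponential, rate 1
z = np.sqrt(-2 * np.log1p(-u1)) * np.cos(2 * np.pi * u2) # Box-Muller normal deviate
lognormal = np.exp(0.0 + 0.25 * z)                       # log-normal load history

def count_peaks(history):
    """Peaks of a discrete load history: samples larger than both neighbours."""
    return int(np.sum((history[1:-1] > history[:-2]) &
                      (history[1:-1] > history[2:])))

for name, h in [("Weibull", weibull), ("exponential", exponential),
                ("log-normal", lognormal)]:
    print(f"{name:11s}: mean={h.mean():.3f}, peaks={count_peaks(h)}")
```

    Counting local maxima of the synthetic history is a crude stand-in for the peak-statistics analysis described in the abstract.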

  18. Simple proof that Gaussian attacks are optimal among collective attacks against continuous-variable quantum key distribution with a Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grangier, Philippe (Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex)

    2010-06-15

    In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.

  19. Blind Deconvolution Method of Image Deblurring Using Convergence of Variance

    DTIC Science & Technology

    2011-03-24

    random variable x is [9] f_X(x) = (1/(√(2π) σ)) e^(−(x−m)²/(2σ²)), −∞ < x < ∞, σ > 0 (6) where m is the mean and σ² is the variance. ... Figure 1: Gaussian distribution ... of the MAP Estimation algorithm when N was set to 50. The APEX method is not without its own difficulties when dealing with astronomical data

  20. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in Shortwave Radiative Transfer: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.

    2001-12-01

    Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to "standard" diffusion theory, where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical-physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the "1−g" rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the one-parameter family of symmetric Lévy-stable distributions, because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now [(1−g)τ]^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as "anomalous" diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2. Lévy transport theory has been successfully used in the statistical-physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time series and, most recently, self-regulating ecological systems. 
We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM radiation parameterization.
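    For readers who want to experiment with Lévy-stable step distributions, a common sampler is the Chambers-Mallows-Stuck transform; this sketch (our addition, not from the abstract) contrasts the α = 2 Gaussian limit with a heavy-tailed α = 1.5 case:

```python
import numpy as np

rng = np.random.default_rng(4)

def symmetric_stable(rng, alpha, size):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables
    (0 < alpha <= 2, alpha != 1; alpha = 2 reduces to a Gaussian)."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))

n = 100_000
gauss = symmetric_stable(rng, 2.0, n) / np.sqrt(2)   # alpha = 2 gives N(0, 2)
levy = symmetric_stable(rng, 1.5, n)                 # heavy-tailed steps

for name, s in [("Gaussian (a=2.0)", gauss), ("Levy (a=1.5)", levy)]:
    print(f"{name}: median |step| = {np.median(np.abs(s)):.3f}, "
          f"max |step| = {np.abs(s).max():.1f}")
```

    The rare, enormous steps of the α < 2 case are what replace the Gaussian random walk's diffusive spread with the anomalous, Lévy-type spread discussed above.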

  1. Rupture Propagation for Stochastic Fault Models

    NASA Astrophysics Data System (ADS)

    Favreau, P.; Lavallee, D.; Archuleta, R.

    2003-12-01

    The inversion of strong-motion data of large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate in the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based on Cauchy and Gaussian random variables, respectively. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite-difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is a slip-dependent friction law. The simulations show that the propagation of the rupture front is more complex, incoherent, or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.
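    The qualitative difference between the two prestress models can be illustrated with a toy 1-D field (the kernel smoothing and all values are hypothetical; the study itself fits its models to the Imperial Valley slip distribution):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256

def prestress_field(noise, scale=8):
    """Correlate white noise by Gaussian-kernel smoothing (illustrative only)."""
    x = np.arange(-3 * scale, 3 * scale + 1)
    kernel = np.exp(-0.5 * (x / scale) ** 2)
    return np.convolve(noise, kernel, mode="same")

gauss_field = prestress_field(rng.standard_normal(n))
cauchy_field = prestress_field(rng.standard_cauchy(n))

# Heavy Cauchy tails tend to concentrate the stress in a few dominant
# asperities, consistent with the more incoherent rupture reported above.
shares = {}
for name, f in [("Gaussian", gauss_field), ("Cauchy", cauchy_field)]:
    shares[name] = float(np.sort(np.abs(f))[-5:].sum() / np.abs(f).sum())
    print(f"{name:8s}: top-5 cells carry {100 * shares[name]:.1f}% of |prestress|")
```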

  2. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    PubMed

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2016-01-01

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
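    An ex-Gaussian response-time model treats each RT as a Gaussian stage plus an independent exponential tail. A sketch of sampling and a method-of-moments fit (parameter values are hypothetical, and the study's actual fitting procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated reaction times (seconds): Gaussian stage + exponential tail.
mu, sigma, tau = 0.40, 0.05, 0.15          # hypothetical ex-Gaussian parameters
rt = rng.normal(mu, sigma, 50_000) + rng.exponential(tau, 50_000)

# Method-of-moments fit, using the ex-Gaussian moment identities:
#   mean = mu + tau,  var = sigma^2 + tau^2,  skew = 2 tau^3 / var^(3/2)
m, v = rt.mean(), rt.var()
skew = np.mean((rt - m) ** 3) / v ** 1.5
tau_hat = v ** 0.5 * (skew / 2) ** (1 / 3)
sigma_hat = np.sqrt(max(v - tau_hat ** 2, 0.0))
mu_hat = m - tau_hat

print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}, tau={tau_hat:.3f}")
```

    In this decomposition, sigma captures the Gaussian response variability that the SRD account ties to activation state, while tau captures the non-Gaussian tail of occasional very slow responses.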

  3. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of the data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
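    For context, the classical computation that the claimed speedup targets is the O(n³) kernel-matrix solve of ordinary GP regression; a compact numpy sketch (kernel choice and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(a, b, ell=0.5):
    """Squared-exponential covariance kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Noisy training data from a hidden function.
x_train = np.linspace(0, 2 * np.pi, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(x_train.size)
x_test = np.array([np.pi / 2, np.pi])

# Standard GP posterior mean and covariance; the linear solves against the
# n x n kernel matrix are the expensive step a quantum subroutine would target.
noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
K_s = rbf(x_test, x_train)
mean = K_s @ np.linalg.solve(K, y_train)
cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)

print("posterior mean:", np.round(mean, 2))   # near sin(pi/2)=1 and sin(pi)=0
```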

  5. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  6. Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.

    2018-07-01

    Geostatistical analysis has been introduced over half a century ago to allow quantifying seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.

  7. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  8. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
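    The second stage, adding sensors where model uncertainty is largest, can be sketched with a greedy maximum-posterior-variance rule under a GP with an RBF kernel (synthetic covariates here; the authors' pipeline uses LiDAR-derived variables and a Gaussian mixture first stage):

```python
import numpy as np

rng = np.random.default_rng(8)

# Candidate sites described by (elevation, slope, aspect)-like covariates,
# standardized; these values are synthetic stand-ins for the LiDAR rasters.
sites = rng.standard_normal((300, 3))

def rbf(a, b, ell=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def greedy_placement(sites, k, noise=1e-3):
    """Add sensors one by one at the site of maximum GP posterior variance
    (a common surrogate for the uncertainty-raster step in the abstract)."""
    chosen = [0]
    K = rbf(sites, sites)
    for _ in range(k - 1):
        S = np.array(chosen)
        Kss = K[np.ix_(S, S)] + noise * np.eye(S.size)
        Ks = K[:, S]
        explained = np.einsum("ij,ij->i", Ks, np.linalg.solve(Kss, Ks.T).T)
        post_var = K.diagonal() - explained
        post_var[S] = -np.inf                 # never reuse a chosen site
        chosen.append(int(np.argmax(post_var)))
    return chosen

sensors = greedy_placement(sites, 10)
print("selected site indices:", sensors)
```

    Each added sensor suppresses posterior variance in its covariate neighbourhood, so the greedy rule naturally spreads the network across the space of independent variables.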

  9. Chemical Distances for Percolation of Planar Gaussian Free Fields and Critical Random Walk Loop Soups

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Li, Li

    2018-05-01

    We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.

  11. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    The conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions can fail. To solve this problem, a novel algorithm is proposed by taking the following four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, removing the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. The application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
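    The delayed-measurement model in its simplest (one-step, equal-probability) special case can be sketched as follows; the paper's contribution is precisely to go beyond this case to arbitrary-step delays with unequal Bernoulli probabilities:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200

# True scalar state and its noisy measurement.
state = np.cumsum(rng.standard_normal(n))        # random-walk "trajectory"
measurement = state + 0.1 * rng.standard_normal(n)

# One-step random delay: with probability p the sensor delivers the previous
# measurement instead of the current one, modelled by a Bernoulli gamma_k:
#   z_k = (1 - gamma_k) * y_k + gamma_k * y_{k-1}
p = 0.3
gamma = rng.binomial(1, p, n)
gamma[0] = 0                                     # the first sample cannot be delayed
z = (1 - gamma) * measurement + gamma * np.roll(measurement, 1)

delayed = int(gamma.sum())
print(f"{delayed} of {n} measurements arrived one step late")
```

    A filter that ignores gamma treats z as if it were y and accumulates bias; reformulating the predicted measurement and covariance around this model is what the proposed filter does.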

  12. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous-variable Quantum Key Distribution (QKD) at low Signal-to-Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. 
Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits which is 75.8% of maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
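    The two-bit quantization and mutual-information calculation can be mimicked numerically; in this sketch the thresholds and the Monte Carlo estimate are ours, not the paper's, but it reproduces the qualitative picture that the quantized mutual information stays below the Gaussian-channel limit of about 0.29 bits at -3 dB:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 500_000

# Alice's Gaussian samples and Bob's noisy copy at SNR = -3 dB
# (noise variance twice the signal variance).
x = rng.standard_normal(n)
y = x + np.sqrt(2.0) * rng.standard_normal(n)

def two_bit(v, t):
    """MSB = sign bit, second bit = |v| above threshold t; symbols 0..3."""
    return 2 * (v > 0).astype(int) + (np.abs(v) > t).astype(int)

def mutual_information(a, b):
    """Empirical mutual information (bits) between two 4-symbol sequences."""
    joint = np.zeros((4, 4))
    np.add.at(joint, (a, b), 1.0)
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

# Scan Alice's magnitude threshold (Bob's is scaled by his larger std) and
# keep the value maximizing the mutual information, as the abstract suggests.
best = max((mutual_information(two_bit(x, t), two_bit(y, t * np.sqrt(3))), t)
           for t in np.linspace(0.2, 1.5, 14))
print(f"best 2-bit MI = {best[0]:.3f} bits at threshold {best[1]:.2f}")
```

    The estimate sits between the sign-bit-only information and the unquantized channel capacity 0.5*log2(1 + 0.5) = 0.2925 bits, consistent with the 75.8% figure quoted above.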

  13. Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2006-03-01

    We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communications and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis yields that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneous continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.

  14. Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio

    NASA Astrophysics Data System (ADS)

    Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.

    2017-12-01

    Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian distribution, and moreover a lognormal distribution. Previously, statistical testing procedures, which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test that incorporated the results of the former tests, were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real time. The analyzed 1-degree-resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate the necessity for a Data Assimilation (DA) system to be able to properly use the lognormally distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
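    The Jarque-Bera statistic used by the study is simple enough to compute directly; a sliding-window sketch on synthetic data (the window length and the Gaussian-to-lognormal series are our choices, not the GFS data):

```python
import numpy as np

rng = np.random.default_rng(11)

def jarque_bera(x):
    """JB statistic n/6 * (S^2 + (K-3)^2/4); ~ chi^2(2) under Gaussianity."""
    n = x.size
    z = x - x.mean()
    s = np.mean(z ** 3) / np.mean(z ** 2) ** 1.5      # sample skewness
    k = np.mean(z ** 4) / np.mean(z ** 2) ** 2        # sample kurtosis
    return n / 6 * (s ** 2 + (k - 3) ** 2 / 4)

# A mixing-ratio-like series that drifts from Gaussian to lognormal.
series = np.concatenate([rng.standard_normal(2000),
                         rng.lognormal(0.0, 0.8, 2000)])

# Sliding-window testing: flag windows whose JB statistic exceeds the
# chi^2(2) critical value 5.99 (alpha = 0.05).
window = 500
flags = [bool(jarque_bera(series[i:i + window]) > 5.99)
         for i in range(0, series.size - window + 1, window)]
print("non-Gaussian windows:", flags)
```

    The lognormal half of the series is flagged immediately, illustrating how a window-by-window test can track when a variable leaves the Gaussian regime in near real time.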

  15. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
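    The core transform is easy to sketch (a minimal illustration of Gaussian random projection, not the paper's implementation; dimensions and seeds are arbitrary assumptions):

```python
# Sketch of random-projection template protection: project a feature vector
# with an i.i.d. Gaussian matrix. A different seed reissues a new template
# (changeability), while distances are approximately preserved.
import numpy as np

def random_projection(x, k, seed):
    """Project the d-dimensional vector x to k dimensions via a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # i.i.d. N(0, 1/k) entries preserve squared distances in expectation.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    return R @ x

rng = np.random.default_rng(42)
x = rng.normal(size=512)
y = rng.normal(size=512)

# Same seed = same projection matrix: distances approximately preserved.
tx = random_projection(x, 64, seed=1)
ty = random_projection(y, 64, seed=1)
print(np.linalg.norm(x - y), np.linalg.norm(tx - ty))

# A different seed reissues an uncorrelated template (changeability).
tx_new = random_projection(x, 64, seed=2)
```

    The distance-preservation property is the Johnson-Lindenstrauss flavor of argument that underlies the similarity-preserving analysis; the paper's vector translation step is not modeled here.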

  16. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 Planck data and demonstrate the high level of statistical anisotropy and departure from Gaussianity of these maps.

  17. Integration of quantum key distribution and private classical communication through continuous variable

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.

  18. Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Centre for Quantum Computation, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA; Serafini, Alessio

    2006-03-15

    We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communication and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis shows that pure, symmetric states allow for promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneously the continuous-variable counterparts of the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.

  19. Random wandering of laser beams with orbital angular momentum during propagation through atmospheric turbulence.

    PubMed

    Aksenov, Valerii P; Kolosov, Valeriy V; Pogutsa, Cheslav E

    2014-06-10

    The propagation of laser beams having orbital angular momenta (OAM) in the turbulent atmosphere is studied numerically. The variance of random wandering of these beams is investigated with the use of the Monte Carlo technique. It is found that, among various types of vortex laser beams, such as the Laguerre-Gaussian (LG) beam, modified Bessel-Gaussian beam, and hypergeometric Gaussian beam, having identical initial effective radii and OAM, the LG beam occupying the largest effective volume in space is the most stable one.

  20. Operational quantification of continuous-variable correlations.

    PubMed

    Rodó, Carles; Adesso, Gerardo; Sanpera, Anna

    2008-03-21

    We quantify correlations (quantum and/or classical) between two continuous-variable modes as the maximal number of correlated bits extracted via local quadrature measurements. On Gaussian states, such "bit quadrature correlations" majorize entanglement, reducing to an entanglement monotone for pure states. For non-Gaussian states, such as photonic Bell states, photon-subtracted states, and mixtures of Gaussian states, the bit correlations are shown to be a monotonic function of the negativity. This quantification yields a feasible, operational way to measure non-Gaussian entanglement in current experiments by means of direct homodyne detection, without a complete state tomography.

  1. Quantum Teamwork for Unconditional Multiparty Communication with Gaussian States

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Adesso, Gerardo; Xie, Changde; Peng, Kunchi

    2009-08-01

    We demonstrate the capability of continuous variable Gaussian states to communicate multipartite quantum information. A quantum teamwork protocol is presented according to which an arbitrary possibly entangled multimode state can be faithfully teleported between two teams each comprising many cooperative users. We prove that N-mode Gaussian weighted graph states exist for arbitrary N that enable unconditional quantum teamwork implementations for any arrangement of the teams. These perfect continuous variable maximally multipartite entangled resources are typical among pure Gaussian states and are unaffected by the entanglement frustration occurring in multiqubit states.

  2. On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.

    2011-01-01

    This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings were found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks, was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violation of, the Central Limit Theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the duration of the system impulse response is much greater than the mean spacing between peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. The effect of a spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than a higher level of damping.
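    The Central Limit Theorem effect can be illustrated numerically (a sketch under our own assumptions: a single-degree-of-freedom linear oscillator and a spike-type high-kurtosis loading, not the paper's beam model or loading definitions):

```python
# Sketch: a high-kurtosis loading filtered by a lightly damped linear system.
# When many loading peaks fall within one impulse-response duration, the
# response kurtosis drops toward the Gaussian value (excess kurtosis 0).
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
n = 100_000
load = rng.normal(size=n)                     # Gaussian background
spikes = rng.random(n) < 0.01                 # steady rate of extreme peaks
load[spikes] += 8.0 * rng.choice([-1.0, 1.0], size=spikes.sum())

# Single-DOF oscillator: 5 Hz natural frequency, 2% damping, 1 kHz sampling.
wn, zeta, dt = 2 * np.pi * 5.0, 0.02, 1e-3
num_d, den_d, _ = signal.cont2discrete(([wn**2], [1.0, 2 * zeta * wn, wn**2]), dt)
resp = signal.lfilter(np.squeeze(num_d), den_d, load)

print("loading excess kurtosis: ", stats.kurtosis(load))
print("response excess kurtosis:", stats.kurtosis(resp))
```

    With these assumed parameters the impulse response spans many loading peaks, so the linear response kurtosis is far below that of the loading, echoing the paper's observation for steady peak rates.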

  3. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models for large-scale structure are examined: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  4. Transport of Charged Particles in Turbulent Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.

    2017-12-01

    Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One of the important aspects of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we will particularly focus on calculating diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against results of detailed numerical experiments using Monte Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating random magnetic fields to study charged particle propagation using numerical simulation. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are discussed.
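    The first ("usual") generation method mentioned above can be sketched for a 1-D periodic field (our illustration; the -5/3 slope is an assumed Kolmogorov-like choice, and real applications use 3-D vector fields):

```python
# Sketch of the usual synthesis route: Gaussian random Fourier amplitudes
# with a prescribed power-law spectrum, shown here for a 1-D periodic field.
# Such a field is Gaussian by construction, so reproducing intermittency
# requires the extra (non-Gaussian) modeling the abstract alludes to.
import numpy as np

def powerlaw_field(n, slope=-5.0 / 3.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n) * n                # integer wavenumbers 0..n/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (slope / 2.0)          # |F(k)|^2 ~ k^slope, no DC power
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    field = np.fft.irfft(amp * phases, n)
    return field / field.std()                # normalize to unit variance

b = powerlaw_field(1024)
```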

  5. Atomoxetine could improve intra-individual variability in drug-naïve adults with attention-deficit/hyperactivity disorder comparably with methylphenidate: A head-to-head randomized clinical trial.

    PubMed

    Ni, Hsing-Chang; Hwang Gu, Shoou-Lian; Lin, Hsiang-Yuan; Lin, Yu-Ju; Yang, Li-Kuang; Huang, Hui-Chun; Gau, Susan Shur-Fen

    2016-05-01

    Intra-individual variability in reaction time (IIV-RT) is common in individuals with attention-deficit/hyperactivity disorder (ADHD). It can be improved by stimulants. However, the effects of atomoxetine on IIV-RT are inconclusive. We aimed to investigate the effects of atomoxetine on IIV-RT and directly compared its efficacy with methylphenidate in adults with ADHD. An 8-10 week, open-label, head-to-head, randomized clinical trial was conducted in 52 drug-naïve adults with ADHD, who were randomly assigned to two treatment groups: immediate-release methylphenidate (n=26) thrice daily (10-20 mg per dose) and atomoxetine once daily (n=26) (0.5-1.2 mg/kg/day). IIV-RT, derived from the Conners' continuous performance test (CCPT), was represented by the Gaussian (reaction time standard error, RTSE) and ex-Gaussian models (sigma and tau). Other neuropsychological functions, including response errors and mean reaction time, were also measured. Participants received CCPT assessments at baseline and at week 8-10 (60.4±6.3 days). We found comparable improvements in CCPT performance between the immediate-release methylphenidate- and atomoxetine-treated groups. Both medications significantly improved IIV-RT in terms of reducing tau values, with comparable efficacy. In addition, both medications significantly improved inhibitory control by reducing commission errors. Our results provide evidence that atomoxetine can improve IIV-RT and inhibitory control, with efficacy comparable to that of immediate-release methylphenidate, in drug-naïve adults with ADHD. The shared and unique mechanisms underpinning these medication effects on IIV-RT await further investigation.

  6. Non-Gaussian Methods for Causal Structure Learning.

    PubMed

    Shimizu, Shohei

    2018-05-22

    Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences, including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because these methods explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are often obtained, which means that the data distribution may contain more information than is captured by the covariance matrix. Thus, many new methods have recently been proposed for using the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables under assumptions, even in the presence of unobserved common causes. This feature is in contrast to other approaches. A simulated example is also provided.
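    The identifiability idea behind LiNGAM can be demonstrated with a toy example (our own sketch; real LiNGAM implementations use ICA- or likelihood-based statistics, and the `dependence` score below is a crude hypothetical stand-in):

```python
# Toy demonstration of the LiNGAM principle: with linear relations and
# non-Gaussian (here uniform) disturbances, the regression residual is
# independent of the regressor only in the true causal direction.
# `dependence` is a crude hypothetical score, not LiNGAM's actual statistic.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.uniform(-1, 1, n)              # non-Gaussian cause
y = 0.8 * x + rng.uniform(-1, 1, n)    # linear effect with non-Gaussian noise

def dependence(a, b):
    """Residual of b regressed on a, scored by correlation of squares."""
    r = b - (np.cov(a, b)[0, 1] / np.var(a)) * a
    return abs(np.corrcoef(a ** 2, r ** 2)[0, 1])

score_xy = dependence(x, y)   # correct direction x -> y: near zero
score_yx = dependence(y, x)   # wrong direction y -> x: clearly nonzero
print(score_xy, score_yx)
```

    With Gaussian disturbances both scores would vanish and the direction would be unidentifiable, which is exactly the limitation of covariance-only methods described above.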

  7. Coexistence of unlimited bipartite and genuine multipartite entanglement: Promiscuous quantum correlations arising from discrete to continuous-variable systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; CNR-INFM Coherentia , Naples; Grup d'Informacio Quantica, Universitat Autonoma de Barcelona, E-08193 Bellaterra

    2007-08-15

    Quantum mechanics imposes 'monogamy' constraints on the sharing of entanglement. We show that, despite these limitations, entanglement can be fully 'promiscuous', i.e., simultaneously present in unlimited two-body and many-body forms in states living in an infinite-dimensional Hilbert space. Monogamy just bounds the divergence rate of the various entanglement contributions. This is demonstrated in simple families of N-mode (N≥4) Gaussian states of light fields or atomic ensembles, which therefore enable infinitely more freedom in the distribution of information, as opposed to systems of individual qubits. Such a finding is of importance for the quantification, understanding, and potential exploitation of shared quantum correlations in continuous variable systems. We discuss how promiscuity gradually arises when considering simple families of discrete variable states, with increasing Hilbert space dimension towards the continuous variable limit. Such models are somewhat analogous to Gaussian states with asymptotically diverging, but finite, squeezing. In this respect, we find that non-Gaussian states (which in general are more entangled than Gaussian states) also exhibit the interesting feature that their entanglement is more shareable: in the non-Gaussian multipartite arena, unlimited promiscuity can already be achieved among three entangled parties, while this is impossible for Gaussian, even infinitely squeezed, states.

  8. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. But the computation of the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For general matrices, two methods are given to compute the spark: the method of directly searching and the method of dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method had higher efficiency than direct searching, especially for matrices with nearly as many rows as columns.
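    The direct-search method can be sketched as a subset search over columns (an assumed minimal implementation, not the paper's algorithm, and exponential-time in the worst case, consistent with the NP-hardness noted above):

```python
# Direct search for the spark: the size of the smallest set of linearly
# dependent columns. Exponential in the worst case, as NP-hardness suggests.
import itertools
import numpy as np

def spark(A):
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # A set of k columns is dependent iff its submatrix has rank < k.
            if np.linalg.matrix_rank(A[:, cols]) < k:
                return k
    return n + 1  # every column subset independent (possible only if n <= m)

# For an m x n Gaussian random matrix with m < n, spark = m + 1 with probability 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))
s = spark(A)
print(s)  # m + 1 = 5 for this matrix
```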

  9. Stochastic Modeling of CO2 Migrations and Chemical Reactions in Deep Saline Formations

    NASA Astrophysics Data System (ADS)

    Ni, C.; Lee, I.; Lin, C.

    2013-12-01

    Carbon capture and storage (CCS) has been recognized as a feasible technology that can significantly reduce anthropogenic CO2 emissions from large point sources. CO2 injection in geological formations is one of the options to permanently store the captured CO2. Based on this concept, a large number of target formations have been identified and intensively investigated with different types of techniques, such as hydrogeophysical experiments or numerical simulations. Numerical simulations of CO2 migration in saline formations have recently gathered much attention because a number of models are available for this purpose and potential sites exist in many countries. The lower part of the Cholan Formation (CF) near Changhua Coastal Industrial Park (CCIP) in west central Taiwan was identified as the largest potential site for CO2 sequestration. The top elevations of the CF in this area vary from 1300 to 1700 m below sea level. Laboratory experiments showed that the permeability of the CF is 10^-14 to 10^-12 m^2. Over the years, offshore seismic surveys and limited onshore borehole logs have provided information for the simulation of CO2 migration in the CF, although the original investigations might not have focused on CO2 sequestration. In this study we modify the TOUGHREACT model to consider the small-scale heterogeneity in the target formation and the cap rock of the upper CF. A Monte Carlo Simulation (MCS) approach based on the TOUGHREACT model is employed to quantify the effect of small-scale heterogeneity on the CO2 migration and hydrochemical reactions in the CF. We assume that the small-scale variability of permeability in the CF can be described with a known Gaussian distribution. Therefore, a Gaussian-type random field generator such as Sequential Gaussian Simulation (SGSIM) in the Geostatistical Software Library (GSLIB) can be used to provide the random permeability realizations for the MCS. A variety of statistical parameters, such as the variances and correlation lengths in a Gaussian covariance model, are varied in the MCS, and the uncertainty of the CO2 and other chemical concentrations is evaluated based on 144 random realizations. In this study a constant injection rate of 100 Mt/year of supercritical CO2 is applied at the bottom of the CF. The continuous injection time is 20 years, and the uncertainty results are evaluated at 100 years. Compared with the case without small-scale variability, simulation results show that the CO2 plume sizes in the horizontal direction increase from tens of meters to hundreds of meters when the variances of the small-scale variability are varied from 1.0 to 4.0. The changes in correlation lengths (i.e., from 100 m, 200 m, to 400 m) make a small contribution to the size increases of the CO2 plumes. Other uncertainties of chemical concentrations show behaviors similar to the CO2 plume patterns.
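    The correlated Gaussian realizations feeding such a Monte Carlo run can be sketched with a dense Cholesky factorization (our minimal stand-in for the SGSIM/GSLIB workflow; the grid size and parameter values are illustrative only):

```python
# Sketch: unconditional realizations of a correlated Gaussian field (e.g.
# log-permeability) drawn from a Gaussian covariance model via Cholesky.
# A minimal stand-in for SGSIM/GSLIB; grid and parameters are illustrative.
import numpy as np

def gaussian_cov_realizations(nx, ny, dx, variance, corr_len, n_real, seed=0):
    rng = np.random.default_rng(seed)
    X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    C = variance * np.exp(-d2 / corr_len**2)             # Gaussian covariance model
    L = np.linalg.cholesky(C + 1e-8 * np.eye(len(pts)))  # jitter for stability
    z = rng.normal(size=(len(pts), n_real))
    return (L @ z).T.reshape(n_real, ny, nx)

# Four realizations on a 20 x 20 grid, 10 m spacing, variance 1, range 100 m.
fields = gaussian_cov_realizations(20, 20, 10.0, 1.0, 100.0, n_real=4)
print(fields.shape)
```

    Dense Cholesky scales poorly with grid size, which is one reason sequential simulators such as SGSIM are preferred for field-scale grids.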

  10. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
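    A minimal version of the two-step construction reads as follows (our sketch; the filter width and exponential target marginal are arbitrary choices, and the pointwise transform perturbs the imposed spectrum somewhat, which a careful application must account for):

```python
# Sketch of the two-step simulation: (1) color white Gaussian noise in the
# Fourier domain to impose a power spectrum, (2) map pointwise through the
# Gaussian CDF and a target inverse CDF to impose a non-Gaussian marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 256
white = rng.normal(size=(n, n))

# Step 1: impose an isotropic Gaussian-shaped low-pass power spectrum.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
H = np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * H))
colored /= colored.std()

# Step 2: Gaussian marginal -> uniform -> desired (here exponential) marginal.
u = stats.norm.cdf(colored)
field = stats.expon.ppf(u)
```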

  11. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  12. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    NASA Astrophysics Data System (ADS)

    Tsilifis, Panagiotis; Ghanem, Roger G.

    2017-07-01

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  13. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecasts (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at the Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
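    The meta-Gaussian construction for a single predictor can be sketched via the empirical normal quantile transform (our illustration with synthetic data; the gamma predictand and additive noise model are assumptions, not the paper's temperature data):

```python
# Hedged sketch of the meta-Gaussian idea: map forecast and observation
# samples to standard normal scores via the empirical normal quantile
# transform (NQT), then apply the normal-linear Bayesian update there.
import numpy as np
from scipy import stats

def nqt(sample):
    """Empirical normal quantile transform of a 1-D sample."""
    ranks = stats.rankdata(sample)
    return stats.norm.ppf(ranks / (len(sample) + 1))

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.5, size=300)   # skewed (non-Gaussian) predictand
fcst = obs + rng.normal(0, 1.0, size=300)         # imperfect deterministic forecast

z_obs, z_fcst = nqt(obs), nqt(fcst)
rho = np.corrcoef(z_obs, z_fcst)[0, 1]
# Normal-linear posterior in transformed space:
#   Z_obs | Z_fcst = z  ~  N(rho * z, 1 - rho^2)
post_mean = rho * z_fcst
post_var = 1.0 - rho ** 2
```

    Mapping the posterior quantiles back through the inverse NQT would yield the probabilistic forecast of the original (non-Gaussian) predictand.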

  14. Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data

    USGS Publications Warehouse

    Wikle, C.K.; Royle, J. Andrew

    2005-01-01

    Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.

  15. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which give the numbers of images in the corresponding lens systems.

  16. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress-range moment, E[R^m], is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress-range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
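    The fact that the range moment depends only on the extrema can be illustrated directly (our sketch: adjacent-extrema ranges of a simulated narrow-band Gaussian record, not Larsen's ARMA synthesis or full rainflow counting; all parameters are assumptions):

```python
# Sketch: extrema of a simulated narrow-band Gaussian record, and a simple
# stress-range moment E[R^m] formed from successive peak-to-valley ranges.
# (Adjacent-extrema ranges only, not full rainflow counting.)
import numpy as np

rng = np.random.default_rng(0)
n, n_comp = 10_000, 200
t = np.arange(n)
# Narrow-band Gaussian process: many cosines with frequencies near 0.05.
freqs = 0.05 + 0.005 * rng.standard_normal(n_comp)
phases = rng.uniform(0, 2 * np.pi, n_comp)
x = np.sqrt(2.0 / n_comp) * np.cos(2 * np.pi * np.outer(t, freqs) + phases).sum(axis=1)

# Extrema: interior points where the discrete slope changes sign.
dx = np.diff(x)
ext = x[1:-1][np.sign(dx[:-1]) != np.sign(dx[1:])]

ranges = np.abs(np.diff(ext))    # successive peak-to-valley stress ranges
m = 3
ERm = np.mean(ranges ** m)       # range moment estimate
print(len(ext), ERm)
```

    Any simulator that reproduces the statistics of `ext` alone, such as the autoregressive extrema model above, yields the same E[R^m] without synthesizing the full time history.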

  17. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, the ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when the ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow-direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow-direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.

  18. Testing for a Signal with Unknown Location and Scale in a Stationary Gaussian Random Field

    DTIC Science & Technology

    1994-01-07

    Secondary 60D05, 52A22. Key words and phrases: Euler characteristic, integral geometry, image analysis, Gaussian fields, volume of tubes.

  19. Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-10-07

    We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.

  20. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  1. Modeling and statistical analysis of non-Gaussian random fields with heavy-tailed distributions.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi; Nakhlband, Abbas

    2017-04-01

    In this paper, we investigate and develop an alternative approach to the numerical analysis and characterization of random fluctuations with the heavy-tailed probability distribution function (PDF), such as turbulent heat flow and solar flare fluctuations. We identify the heavy-tailed random fluctuations based on the scaling properties of the tail exponent of the PDF, power-law growth of qth order correlation function, and the self-similar properties of the contour lines in two-dimensional random fields. Moreover, this work leads to a substitution for the fractional Edwards-Wilkinson (EW) equation that works in the presence of μ-stable Lévy noise. Our proposed model explains the configuration dynamics of the systems with heavy-tailed correlated random fluctuations. We also present an alternative solution to the fractional EW equation in the presence of μ-stable Lévy noise in the steady state, which is implemented numerically, using the μ-stable fractional Lévy motion. Based on the analysis of the self-similar properties of contour loops, we numerically show that the scaling properties of contour loop ensembles can qualitatively and quantitatively distinguish non-Gaussian random fields from Gaussian random fluctuations.
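
    The μ-stable Lévy noise invoked above can be sampled directly. The following is a minimal sketch (not the authors' code) of the Chambers-Mallows-Stuck generator for symmetric α-stable variates; the function name, seed, and sample sizes are illustrative.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck
    method (skewness beta = 0, unit scale)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # random angle
    w = rng.exponential(1.0, size)                # exponential mixing weight
    if alpha == 1.0:
        return np.tan(u)                          # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
x = symmetric_stable(1.5, 100_000, rng)  # heavy-tailed driving noise
g = symmetric_stable(2.0, 100_000, rng)  # alpha = 2 recovers a Gaussian of variance 2
```

    For alpha < 2 the variance of x is infinite, which is what makes the driven field non-Gaussian; the alpha = 2 case collapses to ordinary Gaussian noise.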

  2. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    NASA Astrophysics Data System (ADS)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  3. Closer look at time averages of the logistic map at the edge of chaos

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian

    2009-05-01

    The probability distribution of sums of iterates of the logistic map at the edge of chaos has recently been shown [U. Tirnakli, Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which, under appropriate constraints, maximizes the nonadditive entropy S_q, the basis of nonextensive statistical mechanics. This analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular its central part. This is important in view of a recent q-generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate, for the logistic map with a parameter in a small vicinity of the critical point, under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region, and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large body of already available analytical and numerical evidence that the edge of chaos is well described in terms of the entropy S_q and its associated concepts.
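
    The sum-of-iterates construction tested above is easy to reproduce. The sketch below assumes the map in the form x_{t+1} = 1 - a·x_t² with the Feigenbaum accumulation value a_c ≈ 1.401155189, and illustrative ensemble sizes; it returns only the rescaled sums whose histogram would be compared against a q-Gaussian.

```python
import numpy as np

A_C = 1.401155189  # edge-of-chaos (Feigenbaum accumulation) parameter

def rescaled_sums(a=A_C, n_iter=2**13, n_init=2000, seed=1):
    """Centred, unit-variance sums of n_iter logistic-map iterates
    over an ensemble of random initial conditions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_init)
    total = np.zeros(n_init)
    for _ in range(n_iter):
        x = 1.0 - a * x * x      # logistic map, centred form
        total += x
    total -= total.mean()        # centre the ensemble of sums
    return total / total.std()   # rescale to unit variance

y = rescaled_sums()
```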

  4. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    PubMed

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit to accommodate general structures as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.

  5. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
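
    The risk trade-off of a variables plan can be checked empirically in the same Monte Carlo spirit. The sketch below assumes a hypothetical one-sided normal-variables plan: accept the lot when the sample mean plus k times the sample standard deviation falls below the upper specification limit. The values of n, k, and the limit are made-up illustrations, not the published plans the paper tests.

```python
import numpy as np

def accept_prob(mu, sigma, usl, n=30, k=2.0, trials=20_000, seed=0):
    """Monte Carlo acceptance probability of a one-sided variables plan:
    accept when sample mean + k * sample std <= USL."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, sigma, size=(trials, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    return float(np.mean(xbar + k * s <= usl))

p_good = accept_prob(mu=0.0, sigma=1.0, usl=3.0)  # conforming lot
p_bad = accept_prob(mu=1.5, sigma=1.0, usl=3.0)   # high-defect lot
```

    Sweeping mu (or the lot fraction defective) traces out the operating characteristic curve from which the Type I and Type II risks of the plan are read off.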

  6. Weakly anomalous diffusion with non-Gaussian propagators

    NASA Astrophysics Data System (ADS)

    Cressoni, J. C.; Viswanathan, G. M.; Ferreira, A. S.; da Silva, M. A. A.

    2012-08-01

    A poorly understood phenomenon seen in complex systems is diffusion characterized by Hurst exponent H≈1/2 but with non-Gaussian statistics. Motivated by such empirical findings, we report an exact analytical solution for a non-Markovian random walk model that gives rise to weakly anomalous diffusion with H=1/2 but with a non-Gaussian propagator.

  7. Response measurement by laser Doppler vibrometry in vibration qualification tests with non-Gaussian random excitation

    NASA Astrophysics Data System (ADS)

    Troncossi, M.; Di Sante, R.; Rivola, A.

    2016-10-01

    In the field of vibration qualification testing, random excitations are typically imposed on the tested system in terms of a power spectral density (PSD) profile. This is the one of the most popular ways to control the shaker or slip table for durability tests. However, these excitations (and the corresponding system responses) exhibit a Gaussian probability distribution, whereas not all real-life excitations are Gaussian, causing the response to be also non-Gaussian. In order to introduce non-Gaussian peaks, a further parameter, i.e., kurtosis, has to be controlled in addition to the PSD. However, depending on the specimen behaviour and input signal characteristics, the use of non-Gaussian excitations with high kurtosis and a given PSD does not automatically imply a non-Gaussian stress response. For an experimental investigation of these coupled features, suitable measurement methods need to be developed in order to estimate the stress amplitude response at critical failure locations and consequently evaluate the input signals most representative for real-life, non-Gaussian excitations. In this paper, a simple test rig with a notched cantilevered specimen was developed to measure the response and examine the kurtosis values in the case of stationary Gaussian, stationary non-Gaussian, and burst non-Gaussian excitation signals. The laser Doppler vibrometry technique was used in this type of test for the first time, in order to estimate the specimen stress amplitude response as proportional to the differential displacement measured at the notch section ends. A method based on the use of measurements using accelerometers to correct for the occasional signal dropouts occurring during the experiment is described. The results demonstrate the ability of the test procedure to evaluate the output signal features and therefore to select the most appropriate input signal for the fatigue test.
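
    The kurtosis parameter discussed above is the standard fourth-moment statistic. A minimal sketch, using a Laplace signal as a stand-in for a high-kurtosis excitation (an assumption for illustration, not the test-rig signals):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (Pearson definition; equals 3 for a Gaussian signal)."""
    xc = x - x.mean()
    return np.mean(xc**4) / np.mean(xc**2) ** 2

rng = np.random.default_rng(42)
gaussian = rng.normal(size=200_000)      # stationary Gaussian excitation
leptokurtic = rng.laplace(size=200_000)  # stand-in high-kurtosis excitation

k_g = kurtosis(gaussian)     # close to 3
k_l = kurtosis(leptokurtic)  # close to 6 for a Laplace signal
```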

  8. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in SW-RT: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldyrev, S.; Barker, H.

    2001-12-01

    2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R³ and also by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ² times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2⁻. 
Lévy transport theory has been successfully used in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM RT parameterization.

  9. Practical limitation for continuous-variable quantum cryptography using coherent States.

    PubMed

    Namiki, Ryo; Hirano, Takuya

    2004-03-19

    In this Letter, first, we investigate the security of a continuous-variable quantum cryptographic scheme with a postselection process against individual beam splitting attack. It is shown that the scheme can be secure in the presence of the transmission loss owing to the postselection. Second, we provide a loss limit for continuous-variable quantum cryptography using coherent states taking into account excess Gaussian noise on quadrature distribution. Since the excess noise is reduced by the loss mechanism, a realistic intercept-resend attack which makes a Gaussian mixture of coherent states gives a loss limit in the presence of any excess Gaussian noise.

  10. The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.

    PubMed

    de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J

    2017-05-16

    Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
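
    The Fourier construction studied above can be sketched in one dimension: power-law moduli with independent uniform random phases yield a self-affine Gaussian profile. The profile length and spectral exponent below are illustrative, not the paper's parameters.

```python
import numpy as np

def gaussian_profile(n=4096, hurst=0.7, seed=3):
    """Self-affine Gaussian profile from Fourier synthesis:
    |h(q)| ~ q^-(hurst + 1/2) with uniform random phases."""
    rng = np.random.default_rng(seed)
    q = np.fft.rfftfreq(n)
    amp = np.zeros_like(q)
    amp[1:] = q[1:] ** -(hurst + 0.5)   # q = 0 mode left at zero: zero-mean profile
    phase = rng.uniform(0.0, 2.0 * np.pi, q.size)
    h = np.fft.irfft(amp * np.exp(1j * phase), n)
    return h / h.std()                  # normalise to unit variance

h = gaussian_profile()
```

    Replacing the deterministic amplitudes with random moduli of the same mean power (keeping the phases uniform) is the kind of perturbation whose irrelevance for the Hurst exponent the paper establishes.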

  11. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.

  12. Continuous-variable quantum cryptography is secure against non-Gaussian attacks.

    PubMed

    Grosshans, Frédéric; Cerf, Nicolas J

    2004-01-30

    A general study of arbitrary finite-size coherent attacks against continuous-variable quantum cryptographic schemes is presented. It is shown that, if the size of the blocks that can be coherently attacked by an eavesdropper is fixed and much smaller than the key size, then the optimal attack for a given signal-to-noise ratio in the transmission line is an individual Gaussian attack. Consequently, non-Gaussian coherent attacks do not need to be considered in the security analysis of such quantum cryptosystems.

  13. Statistics of Stokes variables for correlated Gaussian fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliyahu, D.

    1994-09-01

    The joint and marginal probability distribution functions of the Stokes variables are derived for correlated Gaussian fields [an extension of D. Eliyahu, Phys. Rev. E 47, 2881 (1993)]. The statistics depend only on the first-moment (averaged) Stokes variables and have a universal form for S_1, S_2, and S_3. The statistics of the variables describing the Cartesian coordinates of the Poincaré sphere are given also.

  14. Non-Gaussian operations on bosonic modes of light: Photon-added Gaussian channels

    NASA Astrophysics Data System (ADS)

    Sabapathy, Krishna Kumar; Winter, Andreas

    2017-06-01

    We present a framework for studying bosonic non-Gaussian channels of continuous-variable systems. Our emphasis is on a class of channels that we call photon-added Gaussian channels, which are experimentally viable with current quantum-optical technologies. A strong motivation for considering these channels is the fact that it is compulsory to go beyond the Gaussian domain for numerous tasks in continuous-variable quantum information processing such as entanglement distillation from Gaussian states and universal quantum computation. The single-mode photon-added channels we consider are obtained by using two-mode beam splitters and squeezing operators with photon addition applied to the ancilla ports giving rise to families of non-Gaussian channels. For each such channel, we derive its operator-sum representation, indispensable in the present context. We observe that these channels are Fock preserving (coherence nongenerating). We then report two examples of activation using our scheme of photon addition, that of quantum-optical nonclassicality at outputs of channels that would otherwise output only classical states and of both the quantum and private communication capacities, hinting at far-reaching applications for quantum-optical communication. Further, we see that noisy Gaussian channels can be expressed as a convex mixture of these non-Gaussian channels. We also present other physical and information-theoretic properties of these channels.

  15. Continuous-variable quantum teleportation with non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, F. (Dipartimento di Fisica, Università degli Studi di Salerno, Via S. Allende, I-84081 Baronissi; CNR-INFM Coherentia, Napoli, Italy; CNISM Unità di Salerno; INFN Sezione di Napoli, Gruppo Collegato di Salerno, Baronissi)

    2007-08-15

    We investigate continuous variable quantum teleportation using non-Gaussian states of the radiation field as entangled resources. We compare the performance of different classes of degaussified resources, including two-mode photon-added and two-mode photon-subtracted squeezed states. We then introduce a class of two-mode squeezed Bell-like states with one-parameter dependence for optimization. These states interpolate between and include as subcases different classes of degaussified resources. We show that optimized squeezed Bell-like resources yield a remarkable improvement in the fidelity of teleportation both for coherent and nonclassical input states. The investigation reveals that the optimal non-Gaussian resources for continuous variable teleportation are those that most closely realize the simultaneous maximization of the content of entanglement, the degree of affinity with the two-mode squeezed vacuum, and the suitably measured amount of non-Gaussianity.

  16. Signal and noise extraction from analog memory elements for neuromorphic computing.

    PubMed

    Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T

    2018-05-29

    Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology to realize ideal NVM devices for neuromorphic computing.

  17. Quantum correlations for bipartite continuous-variable systems

    NASA Astrophysics Data System (ADS)

    Ma, Ruifen; Hou, Jinchuan; Qi, Xiaofei; Wang, Yangyang

    2018-04-01

    Two quantum correlations Q and Q_P for (m+n)-mode continuous-variable systems are introduced in terms of the average distance between the reduced states under local Gaussian positive operator-valued measurements, and analytical formulas of these quantum correlations for bipartite Gaussian states are provided. It is shown that product states do not contain these quantum correlations, and conversely, all (m+n)-mode Gaussian states with zero quantum correlations are product states. Generally, Q ≥ Q_P, but for the symmetric two-mode squeezed thermal states, these quantum correlations are the same and a computable formula is given. In addition, Q is compared with the Gaussian geometric discord for symmetric squeezed thermal states.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
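
    The transform described above can be sketched directly: synthesize a Gaussian history with the specified ASD, then push it through a monotone CDF-matching map. The band-limited spectrum, the rank-based empirical CDF, and the Laplace target pdf below are illustrative assumptions, not the specifics of the reported implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 65_536

# 1) Gaussian time history with a specified (here band-limited) ASD.
spec = np.zeros(n // 2 + 1)
spec[10:200] = 1.0                              # flat pass band: illustrative ASD
phase = rng.uniform(0.0, 2.0 * np.pi, spec.size)
x = np.fft.irfft(spec * np.exp(1j * phase), n)  # Gaussian-distributed waveform

# 2) Monotone map to the target pdf via CDF matching:
#    u = empirical CDF value of each sample, y = target inverse CDF (Laplace here).
u = (np.argsort(np.argsort(x)) + 0.5) / n
y = -np.sign(u - 0.5) * np.log(1.0 - 2.0 * np.abs(u - 0.5))
```

    Because the map is monotone, the zero crossings and the ordering of peaks of x survive in y, so the ASD is only mildly distorted while the marginal pdf becomes Laplace (kurtosis near 6).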

  19. Mechanical properties of 3D printed warped membranes

    NASA Astrophysics Data System (ADS)

    Kosmrlj, Andrej; Xiao, Kechao; Weaver, James C.; Vlassak, Joost J.; Nelson, David R.

    2015-03-01

    We explore how a frozen background metric affects the mechanical properties of solid planar membranes. Our focus is a special class of ``warped membranes'' with a preferred random height profile characterized by random Gaussian variables h(q) in Fourier space with zero mean and variance ⟨|h(q)|²⟩ ∝ q^(-m). It has been shown theoretically that in the linear response regime, this quenched random disorder increases the effective bending rigidity, while the Young's and shear moduli are reduced. Compared to flat plates of the same thickness t, the bending rigidity of warped membranes is increased by a factor h_v/t while the in-plane elastic moduli are reduced by t/h_v, where h_v = √⟨|h(x)|²⟩ describes the frozen height fluctuations. Interestingly, h_v is system size dependent for warped membranes characterized by m > 2. We present experimental tests of these predictions, using warped membranes prepared via high resolution 3D printing.

  20. Random scalar fields and hyperuniformity

    NASA Astrophysics Data System (ADS)

    Ma, Zheng; Torquato, Salvatore

    2017-06-01

    Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.

  1. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
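
    The deterministic-scaling baseline that the paper randomizes is easy to see numerically: deterministically scaled sums of i.i.d. variables approach a Gaussian, while deterministically centred maxima approach a Gumbel law. The exponential ensemble and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 50_000
samples = rng.exponential(1.0, size=(reps, n))  # i.i.d. ensembles

# Aggregate statistics: scaled sums converge to a standard Gaussian
# (each exponential term has mean 1 and variance 1).
sums = (samples.sum(axis=1) - n) / np.sqrt(n)

# Extreme statistics: centred maxima converge to a Gumbel(0, 1) law
# with mean gamma ≈ 0.5772 and variance pi**2 / 6.
maxima = samples.max(axis=1) - np.log(n)
```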

  2. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  3. Virial expansion for almost diagonal random matrices

    NASA Astrophysics Data System (ADS)

    Yevtushenko, Oleg; Kravtsov, Vladimir E.

    2003-08-01

Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_{i≥j} is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_{ii}|^2⟩ ~ 1 and ⟨|H_{i

  4. Gaussian Process Regression for Uncertainty Estimation on Ecosystem Data

    NASA Astrophysics Data System (ADS)

    Menzer, O.; Moffat, A.; Lasslop, G.; Reichstein, M.

    2011-12-01

    The flow of carbon between terrestrial ecosystems and the atmosphere is mainly driven by nonlinear, complex and time-lagged processes. Understanding the associated ecosystem responses and climatic feedbacks is a key challenge regarding climate change questions such as increasing atmospheric CO2 levels. Usually, the underlying relationships are implemented in models as prescribed functions which interlink numerous meteorological, radiative and gas exchange variables. In contrast, supervised Machine Learning algorithms, such as Artificial Neural Networks or Gaussian Processes, allow for an insight into the relationships directly from a data perspective. Micrometeorological, high resolution measurements at flux towers of the FLUXNET observational network are an essential tool for obtaining quantifications of the ecosystem variables, as they continuously record e.g. CO2 exchange, solar radiation and air temperature. In order to facilitate the investigation of the interactions and feedbacks between these variables, several challenging data properties need to be taken into account: noisy, multidimensional and incomplete (Moffat, Accepted). The task of estimating uncertainties in such micrometeorological measurements can be addressed by Gaussian Processes (GPs), a modern nonparametric method for nonlinear regression. The GP approach has recently been shown to be a powerful modeling tool, regardless of the input dimensionality, the degree of nonlinearity and the noise level (Rasmussen and Williams, 2006). Heteroscedastic Gaussian Processes (HGPs) are a specialized GP method for data with a varying, inhomogeneous noise variance (Goldberg et al., 1998; Kersting et al., 2007), as usually observed in CO2 flux measurements (Richardson et al., 2006). 
Here, we show, through an evaluation of HGP performance in several artificial experiments and a comparison to existing nonlinear regression methods, that their outstanding ability is to capture measurement noise levels while concurrently providing reasonable data fits under relatively few assumptions. On the basis of incomplete, half-hourly measured ecosystem data, a HGP was trained to model NEP (Net Ecosystem Production) with only two drivers, PPFD (Photosynthetic Photon Flux Density) and air temperature. Time information was added to account for the autocorrelation in the flux measurements. Provided with a gap-filled meteorological time series, NEP and the corresponding random error estimates can then be predicted empirically at high temporal resolution. We report uncertainties in annual sums of CO2 exchange at two flux tower sites in Hainich, Germany and Hesse, France. Similar noise patterns but different magnitudes between sites were detected, with annual random error estimates of +/- 14.1 g C m^-2 yr^-1 and +/- 23.5 g C m^-2 yr^-1, respectively, for the year 2001. Existing models calculate uncertainties by evaluating the standard deviation of the model residuals. A comparison to the methods of Reichstein et al. (2005) and Lasslop et al. (2008) showed confidence both in the predictive uncertainties and the annual sums modeled with the HGP approach.
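The underlying GP machinery (Rasmussen and Williams, 2006) fits in a few lines of linear algebra. Below is a homoscedastic sketch with a fixed noise level and illustrative squared-exponential kernel parameters; an HGP additionally models the noise variance itself as a function of the inputs:

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance between 1D input vectors a and b."""
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

def gp_predict(x, y, xs, noise=0.1):
    """GP posterior mean and predictive variance at test inputs xs.
    Homoscedastic noise for simplicity; a heteroscedastic GP would place
    a second GP on the noise level itself."""
    K = rbf(x, x) + noise**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(x, xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(xs, xs)) - np.sum(v**2, axis=0) + noise**2
    return mean, var

rng = np.random.default_rng(1)
x = np.linspace(0, 6, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)   # noisy synthetic "flux" data
xs = np.array([1.5, 3.0])
mean, var = gp_predict(x, y, xs)
```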

  5. Entropy of level-cut random Gaussian structures at different volume fractions

    NASA Astrophysics Data System (ADS)

    Marčelja, Stjepan

    2017-10-01

    Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of different phases, which is determined by the selection of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the cases of strongly coupled systems, the dependence of the entropy of level-cut structures on molar fractions of the constituents scales with the simple ideal noninteracting system formula. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
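The dependence of the volume fraction on the cutting level comes from the Gaussian marginal: for a standard Gaussian field g, cutting at level α produces a phase of volume fraction φ = P[g > α], regardless of the covariance. A small sketch (illustrative target fraction; bisection stands in for an inverse-CDF routine):

```python
import numpy as np
from math import erf

def cut_level(phi):
    """Cutting level alpha such that P[g > alpha] = phi for a standard
    Gaussian field g (bisection on the Gaussian tail probability)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        tail = 0.5 * (1 - erf(mid / np.sqrt(2)))
        if tail > phi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
g = rng.standard_normal(10**6)   # marginal values of a Gaussian field
alpha = cut_level(0.3)           # target 30% volume fraction
phase = g > alpha                # level-cut two-phase structure
frac = phase.mean()
```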

  6. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    NASA Astrophysics Data System (ADS)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, when the walker was half its present age, with a standard deviation σt that grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
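The model described above can be simulated directly: at step t the walker recalls the step it took at a random earlier time drawn from a Gaussian centered at t/2 with standard deviation growing linearly in t, and repeats it with some probability. A minimal sketch (the persistence probability p and the relative width are illustrative parameters, not values from the paper):

```python
import numpy as np

def gaussian_memory_walk(steps=2000, p=0.8, width=0.1, seed=0):
    """Non-Markovian walk with a Gaussian memory profile: at time t the
    walker recalls the step taken at a time drawn from N(t/2, (width*t)^2)
    and repeats it with probability p, otherwise reverses it."""
    rng = np.random.default_rng(seed)
    sigma = [int(rng.choice([-1, 1]))]   # first step is unbiased
    x = [sigma[0]]
    for t in range(1, steps):
        tau = int(np.clip(rng.normal(t / 2, width * t), 0, t - 1))
        step = sigma[tau] if rng.random() < p else -sigma[tau]
        sigma.append(step)
        x.append(x[-1] + step)
    return np.array(x)

x = gaussian_memory_walk()
```

The mean-squared displacement of an ensemble of such walks then reveals the diffusive/superdiffusive regimes discussed in the abstract.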

  7. Connections between Graphical Gaussian Models and Factor Analysis

    ERIC Educational Resources Information Center

    Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.

    2010-01-01

    Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…

  8. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    NASA Astrophysics Data System (ADS)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. 
In the second (spherical) model, it cuts off at a temporal range. Having fitted the model, multiple realisations were generated; the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed and realisations of the random component of the model generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of the measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of January, April, July and October 2011, to elucidate daily to weekly variations and to keep the numerical analysis computationally tractable. Work remains to be undertaken to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
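The simulation recipe in the last paragraph (specify a covariance with exponentially decaying temporal correlation, Cholesky-factorise it, and pre-multiply a vector of iid standard Gaussian variables by the lower triangular factor before adding the fixed effect) can be sketched as follows; all parameter values are illustrative, not estimates from the buoy data:

```python
import numpy as np

def simulate_series(n=365, mean_level=2.0, amp=0.5, period=365,
                    sill=0.3, range_=20.0, nugget=0.05, seed=0):
    """One realisation of a linear mixed model: periodic fixed effect plus
    a temporally correlated random effect (exponential covariance) plus
    iid noise, simulated via Cholesky factorisation."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    mean = mean_level + amp * np.sin(2 * np.pi * t / period)  # fixed effect
    lag = np.abs(t[:, None] - t[None, :])
    C = sill * np.exp(-lag / range_)           # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    correlated = L @ rng.standard_normal(n)    # correlated random effect
    iid = np.sqrt(nugget) * rng.standard_normal(n)
    return mean + correlated + iid

series = simulate_series()   # e.g. a year of synthetic daily wave heights
```

A Box-Cox back-transform of `series` would complete the recipe when the target variable was transformed before fitting.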

  9. Robustifying blind image deblurring methods by simple filters

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing

    2016-07-01

The state-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to build a robust BID framework that can robustify existing BID methods against high-level Gaussian noise and/or non-Gaussian noise. Experiments on images corrupted by Gaussian noise, impulse noise (salt-and-pepper and random-valued noise), and mixed Gaussian-impulse noise, as well as on a real-world blurry and noisy image, show that the proposed method estimates sharper kernels and better images, and does so faster, than other methods.
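As a sketch of why simple filters help: a median filter removes salt-and-pepper impulses almost entirely while roughly preserving structure, so applying it before kernel estimation keeps impulse noise from corrupting the deblurring. A minimal illustration (pure NumPy, 3x3 window; this is a generic pre-filtering demo, not the authors' specific pipeline):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edge-padded), the kind of simple filter used to
    suppress impulse noise before blind kernel estimation."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    shifts = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1              # 10% salt-and-pepper
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
filtered = median_filter3(noisy)
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(filtered - clean).mean()
```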

  10. Pedagogical introduction to the entropy of entanglement for Gaussian states

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.

    2018-05-01

    In quantum information theory, the entropy of entanglement is a standard measure of bipartite entanglement between two partitions of a composite system. For a particular class of continuous variable quantum states, the Gaussian states, the entropy of entanglement can be expressed elegantly in terms of symplectic eigenvalues, elements that characterise a Gaussian state and depend on the correlations of the canonical variables. We give a rigorous step-by-step derivation of this result and provide physical insights, together with an example that can be useful in practice for calculations.
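The computation referred to above is short in practice: the symplectic eigenvalues of a covariance matrix σ are the moduli of the eigenvalues of iΩσ, and for a pure two-mode state the entropy of entanglement is the bosonic entropy function g applied to the symplectic eigenvalue of the reduced one-mode covariance matrix. A sketch for the two-mode squeezed vacuum (vacuum variance normalized to 1):

```python
import numpy as np

def symplectic_eigs(sigma):
    """Symplectic eigenvalues: moduli of the eigenvalues of i*Omega*sigma
    (each appears twice). Quadrature ordering is (x1, p1, x2, p2)."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0, 1], [-1, 0]]))
    ev = np.linalg.eigvals(1j * omega @ sigma)
    return np.sort(np.abs(ev))[::2]          # drop the duplicates

def g(x):
    """Bosonic entropy function of a symplectic eigenvalue (vacuum = 1)."""
    if x <= 1:
        return 0.0
    return ((x + 1) / 2) * np.log((x + 1) / 2) \
         - ((x - 1) / 2) * np.log((x - 1) / 2)

r = 0.5                                      # squeezing parameter
c, s = np.cosh(2 * r), np.sinh(2 * r)
sigma = np.array([[c, 0,  s, 0],
                  [0, c,  0, -s],
                  [s, 0,  c, 0],
                  [0, -s, 0, c]])            # two-mode squeezed vacuum
sigma_A = sigma[:2, :2]                      # reduced single-mode state
nu = symplectic_eigs(sigma_A)[0]             # equals cosh(2r)
entropy = g(nu)                              # entropy of entanglement
```

For this state the result reproduces the known closed form S = cosh²r ln cosh²r − sinh²r ln sinh²r.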

  11. Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.

    PubMed

    García-Patrón, Raúl; Cerf, Nicolas J

    2006-11-10

    A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].

  12. Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex

    2011-04-15

    In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.

  13. Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States.

    PubMed

    Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi

    2017-06-09

    Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.

  14. Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States

    NASA Astrophysics Data System (ADS)

    Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi

    2017-06-01

    Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.

  15. Resource theory of non-Gaussian operations

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.

    2018-05-01

    Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.

  16. Quantum entanglement beyond Gaussian criteria

    PubMed Central

    Gomes, R. M.; Salles, A.; Toscano, F.; Souto Ribeiro, P. H.; Walborn, S. P.

    2009-01-01

    Most of the attention given to continuous variable systems for quantum information processing has traditionally been focused on Gaussian states. However, non-Gaussianity is an essential requirement for universal quantum computation and entanglement distillation, and can improve the efficiency of other quantum information tasks. Here we report the experimental observation of genuine non-Gaussian entanglement using spatially entangled photon pairs. The quantum correlations are invisible to all second-order tests, which identify only Gaussian entanglement, and are revealed only under application of a higher-order entanglement criterion. Thus, the photons exhibit a variety of entanglement that cannot be reproduced by Gaussian states. PMID:19995963

  17. Quantum entanglement beyond Gaussian criteria.

    PubMed

    Gomes, R M; Salles, A; Toscano, F; Souto Ribeiro, P H; Walborn, S P

    2009-12-22

    Most of the attention given to continuous variable systems for quantum information processing has traditionally been focused on Gaussian states. However, non-Gaussianity is an essential requirement for universal quantum computation and entanglement distillation, and can improve the efficiency of other quantum information tasks. Here we report the experimental observation of genuine non-Gaussian entanglement using spatially entangled photon pairs. The quantum correlations are invisible to all second-order tests, which identify only Gaussian entanglement, and are revealed only under application of a higher-order entanglement criterion. Thus, the photons exhibit a variety of entanglement that cannot be reproduced by Gaussian states.

  18. Local Gaussian operations can enhance continuous-variable entanglement distillation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Shengli; Loock, Peter van; Institute of Theoretical Physics I, Universitaet Erlangen-Nuernberg, Staudtstrasse 7/B2, DE-91058 Erlangen

    2011-12-15

    Entanglement distillation is a fundamental building block in long-distance quantum communication. Though known to be useless on their own for distilling Gaussian entangled states, local Gaussian operations may still help to improve non-Gaussian entanglement distillation schemes. Here we show that by applying local squeezing operations both the performance and the efficiency of existing distillation protocols can be enhanced. We find that such an enhancement through local Gaussian unitaries can be obtained even when the initially shared Gaussian entangled states are mixed, as, for instance, after their distribution through a lossy-fiber communication channel.

  19. Probabilistic homogenization of random composite with ellipsoidal particle reinforcement by the iterative stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Sokołowski, Damian; Kamiński, Marcin

    2018-01-01

This study proposes a framework for the determination of basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying Young modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young modulus affects the effective stiffness tensor components and their PDFs (Probability Density Functions).
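The response-function step can be sketched as follows: a short series of deterministic solves at trial values of the random input is fitted with a polynomial by least squares, and the Gaussian input is then propagated through that polynomial (here by Monte Carlo). The closed-form `effective_modulus` below is a hypothetical stand-in for the ABAQUS RVE solve, with illustrative particle stiffness and volume fraction:

```python
import numpy as np

def effective_modulus(E_matrix):
    """Hypothetical stand-in for the deterministic FEM homogenization solve;
    in the ISFEM workflow each value would come from an RVE computation.
    Voigt-type mixture rule with illustrative constants."""
    E_particle = 80.0          # hypothetical stiff inclusion modulus
    phi = 0.2                  # hypothetical particle volume fraction
    return (1 - phi) * E_matrix + phi * E_particle

# Series of deterministic solves at trial values of the random input ...
E_trials = np.linspace(1.0, 5.0, 9)
responses = np.array([effective_modulus(E) for E in E_trials])
# ... recovered as a polynomial response function by least squares.
coeffs = np.polyfit(E_trials, responses, deg=2)

# Monte Carlo propagation of the Gaussian input through the response function.
rng = np.random.default_rng(0)
E_samples = rng.normal(3.0, 0.3, 100000)   # Gaussian matrix Young modulus
eff = np.polyval(coeffs, E_samples)
mean_eff, std_eff = eff.mean(), eff.std()
```

A perturbation-based (stochastic Taylor) estimate built from `coeffs` could then be compared against these Monte Carlo moments, as in the ISFEM.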

  20. Long-distance continuous-variable quantum key distribution with a Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien

    2011-12-15

We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.

  1. Analysis of speckle and material properties in laider tracer

    NASA Astrophysics Data System (ADS)

    Ross, Jacob W.; Rigling, Brian D.; Watson, Edward A.

    2017-04-01

The SAL simulation tool Laider Tracer models speckle: the random variation in intensity of an incident light beam across a rough surface. Within Laider Tracer, the speckle field is modeled as a 2-D array of jointly Gaussian random variables projected via ray tracing onto the scene of interest. Originally, all materials in Laider Tracer were treated as ideal diffuse scatterers, for which the far-field return is computed using the Lambertian Bidirectional Reflectance Distribution Function (BRDF). As presented here, we implement material properties in Laider Tracer via the Non-conventional Exploitation Factors Data System, a database of properties for thousands of different materials sampled at various wavelengths and incident angles. We verify the intensity behavior as a function of incident angle after material properties are added to the simulation.

  2. Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.

    PubMed

    da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L

    2014-01-01

In multicellular organisms, cell motility is central to all morphogenetic processes, tissue maintenance, wound healing and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes in random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories was analyzed by building velocity and turn angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from a normal to a super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst animal cells supposedly migrating through Lévy walks, some can instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
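q-Gaussian deviates for such a random walk model can be drawn with the generalized Box-Muller method of Thistleton et al., which replaces the logarithm in the classical transform with the Tsallis q-logarithm (valid for q < 3). A minimal sketch:

```python
import numpy as np

def log_q(x, q):
    """Tsallis q-logarithm (reduces to the natural log for q = 1)."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size, seed=0):
    """Generalized Box-Muller sampler for q-Gaussian deviates:
    the log in the classical transform is replaced by log_q with the
    auxiliary index q' = (1 + q) / (3 - q)."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(size), rng.random(size)
    qp = (1.0 + q) / (3.0 - q)
    return np.sqrt(-2.0 * log_q(u1, qp)) * np.cos(2.0 * np.pi * u2)

z1 = q_gaussian(1.0, 200000)   # recovers the ordinary Gaussian
z2 = q_gaussian(1.5, 200000)   # heavy-tailed q-Gaussian
```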

  3. On Nonlinear Functionals of Random Spherical Eigenfunctions

    NASA Astrophysics Data System (ADS)

    Marinucci, Domenico; Wigman, Igor

    2014-05-01

    We prove central limit theorems and Stein-like bounds for the asymptotic behaviour of nonlinear functionals of spherical Gaussian eigenfunctions. Our investigation combines asymptotic analysis of higher order moments for Legendre polynomials and, in addition, recent results on Malliavin calculus and total variation bounds for Gaussian subordinated fields. We discuss applications to geometric functionals like the defect and invariant statistics, e.g., polyspectra of isotropic spherical random fields. Both of these have relevance for applications, especially in an astrophysical environment.

  4. Gaussian measures of entanglement versus negativities: Ordering of two-mode Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; INFN Sezione di Napoli-Gruppo Collegato di Salerno, Via S. Allende, 84081 Baronissi, SA

    2005-09-15

We study the entanglement of general (pure or mixed) two-mode Gaussian states of continuous-variable systems by comparing the two available classes of computable measures of entanglement: entropy-inspired Gaussian convex-roof measures and positive partial transposition-inspired measures (negativity and logarithmic negativity). We first review the formalism of Gaussian measures of entanglement, adopting the framework introduced in M. M. Wolf et al., Phys. Rev. A 69, 052320 (2004), where the Gaussian entanglement of formation was defined. We compute explicitly Gaussian measures of entanglement for two important families of nonsymmetric two-mode Gaussian states: namely, the states of extremal (maximal and minimal) negativities at fixed global and local purities, introduced in G. Adesso et al., Phys. Rev. Lett. 92, 087901 (2004). This analysis allows us to compare the different orderings induced on the set of entangled two-mode Gaussian states by the negativities and by the Gaussian measures of entanglement. We find that in a certain range of values of the global and local purities (characterizing the covariance matrix of the corresponding extremal states), states of minimum negativity can have more Gaussian entanglement of formation than states of maximum negativity. Consequently, Gaussian measures and negativities are definitely inequivalent measures of entanglement on nonsymmetric two-mode Gaussian states, even when restricted to a class of extremal states. On the other hand, the two families of entanglement measures are completely equivalent on symmetric states, for which the Gaussian entanglement of formation coincides with the true entanglement of formation. Finally, we show that the inequivalence between the two families of continuous-variable entanglement measures is somehow limited. Namely, we rigorously prove that, at fixed negativities, the Gaussian measures of entanglement are bounded from below. Moreover, we provide some strong evidence suggesting that they are bounded from above as well.
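The negativity side of such comparisons is directly computable from a two-mode covariance matrix: the logarithmic negativity is E_N = max(0, −ln ν̃₋), where ν̃₋ is the smallest symplectic eigenvalue of the partially transposed state, obtained from the local symplectic invariants. A sketch for the two-mode squeezed vacuum (vacuum variance normalized to 1), for which E_N = 2r analytically:

```python
import numpy as np

def log_negativity(sigma):
    """Logarithmic negativity of a two-mode Gaussian state from its 4x4
    covariance matrix (vacuum = 1, quadrature ordering (x1, p1, x2, p2)).
    Uses the symplectic invariant of the partial transpose:
    Delta~ = det A + det B - 2 det C."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    delta_pt = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu_minus = np.sqrt(
        (delta_pt - np.sqrt(delta_pt**2 - 4 * np.linalg.det(sigma))) / 2)
    return max(0.0, -np.log(nu_minus))

r = 0.7                                    # squeezing parameter
c, s = np.cosh(2 * r), np.sinh(2 * r)
sigma = np.array([[c, 0,  s, 0],
                  [0, c,  0, -s],
                  [s, 0,  c, 0],
                  [0, -s, 0, c]])          # two-mode squeezed vacuum
EN = log_negativity(sigma)                 # equals 2r for this pure state
```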

  5. Implementing of lognormal humidity and cloud-related control variables for the NCEP GSI hybrid EnVAR Assimilation scheme.

    NASA Astrophysics Data System (ADS)

    Fletcher, S. J.; Kleist, D.; Ide, K.

    2017-12-01

As the resolution of operational global numerical weather prediction systems approaches the meso-scale, the assumption of Gaussianity for the errors at these scales may not be valid. Moreover, synoptic variables that are positive definite in behavior, for example humidity, cannot be optimally analyzed with a Gaussian error structure, where the increment could force the full field to go negative. In this presentation we present the initial work of implementing a mixed Gaussian-lognormal approximation for the temperature and moisture variables in both the ensemble and variational components of the NCEP GSI hybrid EnVAR. We shall also lay the foundation for the implementation of the lognormal approximation for cloud-related control variables, to allow for a possibly more consistent assimilation of cloudy radiances.

  6. Modeling of Bacillus spores: Inactivation and Outgrowth

    DTIC Science & Technology

    2011-03-01

there has to be a suitable amount of repair enzymes viable to accomplish this. Let E_e^r(t) be the enzyme concentration for the spore population...was drawn from a Gaussian fitness distribution with mean E_0 and variance σ_{E_0}^2, f(E_0) = (2πσ_{E_0}^2)^{-1/2} exp(-(E_0 - Ē_0)^2/(2σ_{E_0}^2)), σ_{E_0} > 0. ...As time progresses, the evolution of the distribution of E_e^r(t) will approach the kill threshold. Since E_0 is a random variable, E_e^r(t) is also a

  7. Numerical Schemes for Dynamically Orthogonal Equations of Stochastic Fluid and Ocean Flows

    DTIC Science & Technology

    2011-11-03

    stages of the simulation (see §5.1). Also, because the pdf is discrete, we calculate the moments using the biased estimator $C_{Y_i Y_j} \approx \frac{1}{q} \sum_r Y_{r,i} Y_{r,j}$ …independent random variables. For problems that require large p (e.g. non-Gaussian) and large s (e.g. large ocean or fluid simulations), the number of… $Sc = \hat{\nu}/\hat{K}$ is the Schmidt number, which is the ratio of kinematic viscosity $\hat{\nu}$ to molecular diffusivity $\hat{K}$ for the density field, $\hat{g}' = \hat{g}\,(\hat{\rho}_{\max} - \hat{\rho}_{\min}$
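The biased second-moment estimator quoted in this snippet can be sketched as follows; the realisation count q, coefficient count s, and unit-variance zero-mean coefficients are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
q, s = 5000, 3              # q Monte Carlo realisations, s stochastic coefficients
Y = rng.normal(size=(q, s))  # zero-mean coefficients Y_{r,i}

# Biased estimator of the second moments: C_{Y_i Y_j} ~ (1/q) * sum_r Y_{r,i} Y_{r,j}
# (divides by q rather than q - 1, and assumes zero-mean coefficients)
C = Y.T @ Y / q

print(C)  # close to the identity for independent unit-variance coefficients
```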

  8. Neyman Pearson detection of K-distributed random variables

    NASA Astrophysics Data System (ADS)

    Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.

    2010-04-01

    In this paper, a new detection method for sonar imagery in K-distributed background clutter is developed. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, probability of false alarm, and correct classification rates for various bottom clutter scenarios.
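The K-distributed clutter referenced above is commonly simulated through its compound representation (a gamma-distributed texture modulating Rayleigh speckle); a minimal sketch with an assumed shape parameter and no spatial correlation, unlike the correlated model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
nu = 1.5       # shape parameter controlling clutter spikiness (assumed value)
n = 100_000

# Compound representation: unit-mean gamma texture modulating unit-power Rayleigh speckle
texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n)
speckle = rng.rayleigh(scale=np.sqrt(0.5), size=n)
amplitude = np.sqrt(texture) * speckle   # K-distributed amplitude

# Spikier than Rayleigh clutter: the normalised intensity moment
# E[I^2]/E[I]^2 equals 2(1 + 1/nu), versus 2 for pure Rayleigh speckle
intensity = amplitude**2
print(intensity.mean(), (intensity**2).mean() / intensity.mean() ** 2)
```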

  9. Calibration of a universal indicated turbulence system

    NASA Technical Reports Server (NTRS)

    Chapin, W. G.

    1977-01-01

    Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.

  10. Gaussian entanglement revisited

    NASA Astrophysics Data System (ADS)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
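For two-mode states, the PPT checks discussed above reduce to computing the symplectic eigenvalues of the partially transposed covariance matrix; a minimal sketch for a two-mode squeezed vacuum, with vacuum-normalised units and the squeezing value assumed:

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Moduli of the eigenvalues of i*Omega*sigma (each value appears twice)."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    eig = np.linalg.eigvals(1j * omega @ sigma)
    return np.sort(np.abs(eig))[::2]   # keep one copy of each +/- pair

r = 0.8                                 # squeezing parameter (assumed value)
c, s = np.cosh(2 * r), np.sinh(2 * r)
# Covariance matrix of a two-mode squeezed vacuum, ordering (x1, p1, x2, p2)
sigma = np.array([[c, 0, s, 0],
                  [0, c, 0, -s],
                  [s, 0, c, 0],
                  [0, -s, 0, c]])

# Partial transposition of mode 2 flips the sign of p2
pt = np.diag([1.0, 1.0, 1.0, -1.0])
sigma_pt = pt @ sigma @ pt

nu = symplectic_eigenvalues(sigma)      # both equal 1: the state is pure
nu_pt = symplectic_eigenvalues(sigma_pt)
print(nu, nu_pt)  # smallest PT eigenvalue e^{-2r} < 1, so the state is entangled
```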

  11. Fatigue assessment of vibrating rail vehicle bogie components under non-Gaussian random excitations using power spectral densities

    NASA Astrophysics Data System (ADS)

    Wolfsteiner, Peter; Breuer, Werner

    2013-10-01

    The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. rainflow) applied to a time domain signal. Alternatively, methods are available (e.g. Dirlik) that estimate load spectra directly from the power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal itself is then not necessary. These PSD-based methods have the enormous advantage that, if for example the signal to be assessed results from a finite element vibration analysis, simulating PSDs in the frequency domain is by far faster than simulating time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD-based simulation of vibrations, and also of the PSD-based load spectra estimation, is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases, usually only computation-intensive time domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially, it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without resorting to inaccurate standard procedures.
    Furthermore, the basic intention is to design a general, integrated method that is not just able to analyse a single load case for a small time interval, but can generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the accuracy necessary for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal also produces a non-Gaussian response, with statistical properties different from those of the excitations. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. the Ito calculus [6], which is usually not part of commercial codes). There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broadbanded loads. 
    It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the fact that the excitations on railway vehicles caused by the wheel-rail contact are highly non-Gaussian, the fast PSD analysis in the frequency domain cannot simply be combined with load spectra prediction methods for PSDs.
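The Gaussian limitation discussed above can be screened for on measured load signals with the excess kurtosis, which vanishes for Gaussian signals; a minimal sketch, with an amplitude-modulated synthetic load standing in for real non-Gaussian measurements:

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (zero for a Gaussian signal)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(3)
gaussian_load = rng.normal(size=200_000)

# Amplitude-modulated Gaussian noise: a simple leptokurtic (non-Gaussian) load
modulation = rng.gamma(shape=2.0, scale=0.5, size=200_000)
non_gaussian_load = np.sqrt(modulation) * gaussian_load

print(excess_kurtosis(gaussian_load))      # near 0
print(excess_kurtosis(non_gaussian_load))  # clearly positive
```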

  12. Inflation in random Gaussian landscapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki

    2017-05-01

    We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
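A one-dimensional random Gaussian landscape of the kind studied above can be realised by filtering white noise in Fourier space; a minimal sketch with an assumed grid and a Gaussian correlation function (an illustration, not the authors' numerical technique):

```python
import numpy as np

rng = np.random.default_rng(4)
n, dx = 4096, 0.05   # grid size and spacing (assumed values)
corr_len = 1.0       # correlation length of the Gaussian ensemble (assumed)

# Filter white noise so the field has a Gaussian correlation function
# C(x) ~ exp(-x^2 / (2 corr_len^2)); the filter is the sqrt of the power spectrum.
k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
amplitude = np.exp(-((k * corr_len) ** 2) / 4.0)
noise = np.fft.fft(rng.normal(size=n))
potential = np.real(np.fft.ifft(amplitude * noise))
potential /= potential.std()   # normalise to unit variance

# Count local minima, candidate vacua of the random landscape
minima = np.sum((potential[1:-1] < potential[:-2]) & (potential[1:-1] < potential[2:]))
print(minima)
```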

  13. Realistic continuous-variable quantum teleportation with non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, F.; De Siena, S.

    2010-01-15

    We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.

  14. Monogamy inequality for distributed gaussian entanglement.

    PubMed

    Hiroshima, Tohya; Adesso, Gerardo; Illuminati, Fabrizio

    2007-02-02

    We show that for all n-mode Gaussian states of continuous variable systems, the entanglement shared among n parties exhibits the fundamental monogamy property. The monogamy inequality is proven by introducing the Gaussian tangle, an entanglement monotone under Gaussian local operations and classical communication, which is defined in terms of the squared negativity in complete analogy with the case of n-qubit systems. Our results elucidate the structure of quantum correlations in many-body harmonic lattice systems.
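With the Gaussian tangle τ_G defined through the squared negativity, the monogamy constraint described above takes a CKW-type form (the notation below is assumed, chosen by analogy with the n-qubit case):

```latex
% Gaussian tangle: for pure states, tau_G is the squared negativity.
% CKW-type monogamy inequality for an n-mode Gaussian state,
% party A versus the remaining parties B_1, ..., B_{n-1}:
\tau_G\bigl(A \mid B_1 \cdots B_{n-1}\bigr) \;\ge\; \sum_{k=1}^{n-1} \tau_G\bigl(A \mid B_k\bigr)
```

The left-hand side is the tangle between A and the rest of the system taken as a whole; the inequality bounds the total pairwise entanglement A can share with the individual modes.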

  15. Surrogacy Assessment Using Principal Stratification and a Gaussian Copula Model

    PubMed Central

    Taylor, J.M.G.; Elliott, M.R.

    2014-01-01

    In clinical trials, a surrogate outcome (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Many methods of surrogacy validation rely on models for the conditional distribution of T given Z and S. However, S is a post-randomization variable, and unobserved, simultaneous predictors of S and T may exist, resulting in a non-causal interpretation. Frangakis and Rubin [1] developed the concept of principal surrogacy, stratifying on the joint distribution of the surrogate marker under treatment and control to assess the association between the causal effects of treatment on the marker and the causal effects of treatment on the clinical outcome. Working within the principal surrogacy framework, we address the scenario of an ordinal categorical variable as a surrogate for a censored failure time true endpoint. A Gaussian copula model is used to model the joint distribution of the potential outcomes of T, given the potential outcomes of S. Because the proposed model cannot be fully identified from the data, we use a Bayesian estimation approach with prior distributions consistent with reasonable assumptions in the surrogacy assessment setting. The method is applied to data from a colorectal cancer clinical trial, previously analyzed by Burzykowski et al. [2]. PMID:24947559
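A Gaussian copula of the kind used above couples two margins through a latent bivariate normal; a minimal sampling sketch with an assumed correlation, assumed margins (an exponential failure time and a 4-level ordinal surrogate), and assumed cut-points, not the paper's Bayesian model:

```python
import math
import numpy as np

def gaussian_copula_pairs(rho, n, rng):
    """Draw n pairs of uniforms coupled by a Gaussian copula with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    # The standard normal CDF, applied elementwise, maps the margins to U(0, 1)
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

rng = np.random.default_rng(5)
u = gaussian_copula_pairs(rho=0.7, n=50_000, rng=rng)

# Transform the uniform margins to the desired margins (assumed for illustration):
t_true = -np.log(1.0 - u[:, 0])                      # Exp(1) failure time
s_ordinal = np.digitize(u[:, 1], [0.25, 0.5, 0.75])  # ordinal categories 0..3

print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # positive dependence from the copula
```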

  16. Surrogacy assessment using principal stratification and a Gaussian copula model.

    PubMed

    Conlon, Asc; Taylor, Jmg; Elliott, M R

    2017-02-01

    In clinical trials, a surrogate outcome (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Many methods of surrogacy validation rely on models for the conditional distribution of T given Z and S. However, S is a post-randomization variable, and unobserved, simultaneous predictors of S and T may exist, resulting in a non-causal interpretation. Frangakis and Rubin developed the concept of principal surrogacy, stratifying on the joint distribution of the surrogate marker under treatment and control to assess the association between the causal effects of treatment on the marker and the causal effects of treatment on the clinical outcome. Working within the principal surrogacy framework, we address the scenario of an ordinal categorical variable as a surrogate for a censored failure time true endpoint. A Gaussian copula model is used to model the joint distribution of the potential outcomes of T, given the potential outcomes of S. Because the proposed model cannot be fully identified from the data, we use a Bayesian estimation approach with prior distributions consistent with reasonable assumptions in the surrogacy assessment setting. The method is applied to data from a colorectal cancer clinical trial, previously analyzed by Burzykowski et al.

  17. Secure Continuous Variable Teleportation and Einstein-Podolsky-Rosen Steering

    NASA Astrophysics Data System (ADS)

    He, Qiongyi; Rosales-Zárate, Laura; Adesso, Gerardo; Reid, Margaret D.

    2015-10-01

    We investigate the resources needed for secure teleportation of coherent states. We extend continuous variable teleportation to include quantum teleamplification protocols that allow nonunity classical gains and a preamplification or postattenuation of the coherent state. We show that, for arbitrary Gaussian protocols and a significant class of Gaussian resources, two-way steering is required to achieve a teleportation fidelity beyond the no-cloning threshold. This provides an operational connection between Gaussian steerability and secure teleportation. We present practical recipes suggesting that heralded noiseless preamplification may enable high-fidelity heralded teleportation, using minimally entangled yet steerable resources.

  18. Experimental implementation of non-Gaussian attacks on a continuous-variable quantum-key-distribution system.

    PubMed

    Lodewyck, Jérôme; Debuisschert, Thierry; García-Patrón, Raúl; Tualle-Brouri, Rosa; Cerf, Nicolas J; Grangier, Philippe

    2007-01-19

    An intercept-resend attack on a continuous-variable quantum-key-distribution protocol is investigated experimentally. By varying the interception fraction, one can implement a family of attacks where the eavesdropper totally controls the channel parameters. In general, such attacks add excess noise in the channel, and may also result in non-Gaussian output distributions. We implement and characterize the measurements needed to detect these attacks, and evaluate experimentally the information rates available to the legitimate users and the eavesdropper. The results are consistent with the optimality of Gaussian attacks resulting from the security proofs.

  19. A Heavy Tailed Expectation Maximization Hidden Markov Random Field Model with Applications to Segmentation of MRI

    PubMed Central

    Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego

    2017-01-01

    A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF), modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, it yields results similar to the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
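The heavy-tailed α-stable variates used above in place of Gaussians can be sampled with the Chambers-Mallows-Stuck transform; a minimal sketch for the symmetric case (β = 0) with an assumed stability index:

```python
import numpy as np

def symmetric_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates (beta = 0)."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(6)
x_stable = symmetric_alpha_stable(alpha=1.5, size=100_000, rng=rng)  # alpha assumed
x_gauss = rng.normal(size=100_000)

# Heavy tails: extreme intensities are far more frequent than under a Gaussian
print(np.mean(np.abs(x_stable) > 5), np.mean(np.abs(x_gauss) > 5))
```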

  20. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. 
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
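The overdispersion issue motivating the GAMLSS distributions above can be seen by comparing a Poisson with a negative binomial of the same mean; a minimal sketch with an assumed mean count (an illustration, not the article's GAMLSS fits):

```python
import numpy as np

rng = np.random.default_rng(7)
mean_count = 30.0   # hypothetical mean count, loosely inventory-sized

# Poisson: variance equals the mean (equidispersion)
poisson = rng.poisson(lam=mean_count, size=100_000)

# Negative binomial with the same mean but variance mu + mu^2 / r (overdispersed)
r = 5.0
p = r / (r + mean_count)
negbin = rng.negative_binomial(n=r, p=p, size=100_000)

# Variance-to-mean ratio: ~1 for Poisson, well above 1 for the negative binomial
print(poisson.var() / poisson.mean(), negbin.var() / negbin.mean())
```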

  1. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. 
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  2. Quantum steering of Gaussian states via non-Gaussian measurements

    NASA Astrophysics Data System (ADS)

    Ji, Se-Wan; Lee, Jaehak; Park, Jiyong; Nha, Hyunchul

    2016-07-01

    Quantum steering—a strong correlation to be verified even when one party or its measuring device is fully untrusted—not only provides a profound insight into quantum physics but also offers a crucial basis for practical applications. For continuous-variable (CV) systems, Gaussian states among others have been extensively studied; however, such studies have mostly been confined to Gaussian measurements. While fulfilment of the Gaussian criterion is sufficient to detect CV steering, whether it is also necessary for Gaussian states is a question of fundamental importance in many contexts. This critically questions the validity of characterizations established only under Gaussian measurements, such as the quantification of steering and the monogamy relations. Here, we introduce a formalism based on local uncertainty relations of non-Gaussian measurements, which is shown to manifest quantum steering of some Gaussian states that the Gaussian criterion fails to detect. To this aim, we look into Gaussian states of practical relevance, i.e. two-mode squeezed states under a lossy and an amplifying Gaussian channel. Our finding significantly modifies the characteristics of Gaussian-state steering so far established, such as monogamy relations and one-way steering under Gaussian measurements, thus opening a new direction for critical studies beyond the Gaussian regime.

  3. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  4. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
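GUE matrices as used above can be sampled by symmetrising a complex Gaussian matrix; a minimal two-point-measurement work sketch for a two-level system, with an assumed initial Hamiltonian, temperature, and entry normalisation (not the paper's analytical derivation):

```python
import numpy as np

def sample_gue(dim, rng):
    """Draw a Hermitian matrix from a Gaussian unitary ensemble."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2.0

rng = np.random.default_rng(8)

# Sudden quench from a fixed initial Hamiltonian H0 to a random GUE matrix H1,
# starting from a thermal state of H0 at an assumed inverse temperature beta.
h0_energies = np.array([-0.5, 0.5])
beta = 1.0
pop = np.exp(-beta * h0_energies)
pop /= pop.sum()                        # thermal occupation of H0 eigenstates

works = []
for _ in range(20_000):
    h1 = sample_gue(2, rng)
    e1, v1 = np.linalg.eigh(h1)
    i = rng.choice(2, p=pop)            # first energy measurement (H0 basis)
    overlap = np.abs(v1[i, :]) ** 2     # transition probabilities to H1 eigenstates
    f = rng.choice(2, p=overlap)        # second energy measurement (H1 basis)
    works.append(e1[f] - h0_energies[i])

works = np.asarray(works)
print(works.mean(), works.std())
```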

  5. Mean First Passage Time and Stochastic Resonance in a Transcriptional Regulatory System with Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning

    The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR is smaller than in the Gaussian case for heavy-tailed noise, and larger for bounded noise. These observations can be explained by the heavy-tailed noise easing random transitions.

  6. Extremality of Gaussian quantum states.

    PubMed

    Wolf, Michael M; Giedke, Geza; Cirac, J Ignacio

    2006-03-03

    We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.

  7. Random bursts determine dynamics of active filaments.

    PubMed

    Weber, Christoph A; Suzuki, Ryo; Schaller, Volker; Aranson, Igor S; Bausch, Andreas R; Frey, Erwin

    2015-08-25

    Constituents of living or synthetic active matter have access to a local energy supply that serves to keep the system out of thermal equilibrium. The statistical properties of such fluctuating active systems differ from those of their equilibrium counterparts. Using the actin filament gliding assay as a model, we studied how nonthermal distributions emerge in active matter. We found that the basic mechanism involves the interplay between local and random injection of energy, acting as an analog of a thermal heat bath, and nonequilibrium energy dissipation processes associated with sudden jump-like changes in the system's dynamic variables. We show here how such a mechanism leads to a nonthermal distribution of filament curvatures with a non-Gaussian shape. The experimental curvature statistics and filament relaxation dynamics are reproduced quantitatively by stochastic computer simulations and a simple kinetic model.

  8. Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape.

    PubMed

    Kaleta, Kamil; Lőrinczi, József

    2016-02-01

    The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, and foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring upon a combined change in the characteristics of the process and in the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what the mechanism of the spatial decay of a ground state is.

  9. Random bursts determine dynamics of active filaments

    PubMed Central

    Weber, Christoph A.; Suzuki, Ryo; Schaller, Volker; Aranson, Igor S.; Bausch, Andreas R.; Frey, Erwin

    2015-01-01

    Constituents of living or synthetic active matter have access to a local energy supply that serves to keep the system out of thermal equilibrium. The statistical properties of such fluctuating active systems differ from those of their equilibrium counterparts. Using the actin filament gliding assay as a model, we studied how nonthermal distributions emerge in active matter. We found that the basic mechanism involves the interplay between local and random injection of energy, acting as an analog of a thermal heat bath, and nonequilibrium energy dissipation processes associated with sudden jump-like changes in the system’s dynamic variables. We show here how such a mechanism leads to a nonthermal distribution of filament curvatures with a non-Gaussian shape. The experimental curvature statistics and filament relaxation dynamics are reproduced quantitatively by stochastic computer simulations and a simple kinetic model. PMID:26261319

  10. Evaluation of the non-Gaussianity of two-mode entangled states over a bosonic memory channel via cumulant theory and quadrature detection

    NASA Astrophysics Data System (ADS)

    Xiang, Shao-Hua; Wen, Wei; Zhao, Yu-Jing; Song, Ke-Hui

    2018-04-01

    We study the properties of the cumulants of multimode boson operators and introduce the phase-averaged quadrature cumulants as the measure of the non-Gaussianity of multimode quantum states. Using this measure, we investigate the non-Gaussianity of two classes of two-mode non-Gaussian states: photon-number entangled states and entangled coherent states traveling in a bosonic memory quantum channel. We show that such a channel can skew the distribution of two-mode quadrature variables, giving rise to a strongly non-Gaussian correlation. In addition, we provide a criterion to determine whether the distributions of these states are super- or sub-Gaussian.

  11. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
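As context for these distributional results, the sample estimate of Cronbach's alpha can be computed directly from a subjects-by-items score matrix. This is a minimal sketch; the function name and data layout are our own, not from the paper:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a subjects-by-items score matrix
    (each row holds one subject's item scores)."""
    k = len(rows[0])
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly consistent items (second item = first + 1) give alpha = 1.
scores = [[1, 2], [2, 3], [3, 4], [4, 5]]
alpha = cronbach_alpha(scores)
```

The exact and approximate distribution functions in the paper describe the sampling variability of this statistic under a Gaussian model.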

  12. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
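A toy Monte Carlo illustrates the core of the beam-filling problem: for a nonlinear (here convex, exponential) brightness-temperature-to-rain-rate relation, retrieving from the footprint-averaged brightness temperature is biased relative to the average of pointwise retrievals. The relation and the numbers below are illustrative assumptions, not the instrument's actual physics:

```python
import math
import random

random.seed(0)

# Illustrative convex retrieval: rain rate grows exponentially with
# brightness temperature (a stand-in, not a real sensor relation).
def retrieve(tb):
    return math.exp(0.1 * (tb - 250.0))

# Heterogeneous footprint: Gaussian brightness temperatures, so the
# pointwise rain rates are lognormal.
tbs = [random.gauss(250.0, 10.0) for _ in range(100_000)]

mean_of_retrievals = sum(map(retrieve, tbs)) / len(tbs)   # true footprint mean
retrieval_of_mean = retrieve(sum(tbs) / len(tbs))         # naive beam-filled value
```

By Jensen's inequality the naive estimate is biased low for a convex relation; the gap is the beam-filling bias the paper quantifies for various field statistics.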

  13. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100%. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
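The method-of-moments (Matheron) variogram estimator discussed above can be sketched in a few lines. The binning scheme and names are our own simplifications of the standard estimator:

```python
from collections import defaultdict

def mom_variogram(points, values, bin_width):
    """Matheron's method-of-moments estimator: for each separation-distance
    bin, half the mean squared difference over all point pairs in that bin.
    Returns {bin_index: semivariance}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            h = (dx * dx + dy * dy) ** 0.5
            b = int(h / bin_width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    return {b: sums[b] / (2.0 * counts[b]) for b in sorted(counts)}

# A spatially constant field has zero semivariance at every lag.
gamma = mom_variogram([(0, 0), (1, 0), (0, 1), (1, 1)], [3.0] * 4, 1.0)
```

Because each bin average is a mean of squared differences, a single heavy outlier inflates every bin it touches, which is why the study finds this estimator needs larger samples than residual maximum likelihood.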

  14. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
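The comparison at the heart of the algorithm can be illustrated with a two-dimensional delay embedding, where the covariance eigenvalues have a closed form. This is a toy sketch with our own names; the paper works in general dimension and adds a statistical test:

```python
import math
import random

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 covariance matrix [[a, b], [b, c]]."""
    m, d = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    return m + d, m - d

def embedded_cov_eigs(x):
    """Covariance eigenvalues of the 2-dimensional delay embedding of x."""
    pairs = list(zip(x, x[1:]))
    n = len(pairs)
    mu0 = sum(p[0] for p in pairs) / n
    mu1 = sum(p[1] for p in pairs) / n
    a = sum((p[0] - mu0) ** 2 for p in pairs) / n
    c = sum((p[1] - mu1) ** 2 for p in pairs) / n
    b = sum((p[0] - mu0) * (p[1] - mu1) for p in pairs) / n
    return eig2(a, b, c)

random.seed(1)
sine = [math.sin(0.05 * t) for t in range(2000)]
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]

s_hi, s_lo = embedded_cov_eigs(sine)    # strongly correlated coordinates
n_hi, n_lo = embedded_cov_eigs(noise)   # nearly isotropic cloud
```

A deterministic signal concentrates variance in few eigendirections, while the Gaussian reference process spreads it evenly; the paper turns this contrast into a dimension estimate with a validity probability.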

  15. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
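The generative step described above, a gaussian local variable multiplied by a probabilistically assigned mixer, can be sketched as follows. The discrete mixer set and uniform assignment are illustrative assumptions, not the paper's learned model:

```python
import random

random.seed(0)

def gsm_sample(n, mixers):
    """Draw n filter-response samples: a standard Gaussian 'local' variable
    multiplied by a mixer chosen uniformly at random from a discrete set."""
    return [random.choice(mixers) * random.gauss(0.0, 1.0) for _ in range(n)]

samples = gsm_sample(100_000, mixers=[0.5, 1.0, 4.0])

# The mixture has heavier tails than a single Gaussian with the same
# variance: the normalized fourth moment exceeds the Gaussian value of 3.
m2 = sum(s * s for s in samples) / len(samples)
m4 = sum(s ** 4 for s in samples) / len(samples)
```

This leptokurtic behavior is the "key bottom-up statistical characteristic" of filter responses that Gaussian scale mixture models are built to capture.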

  16. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
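A minimal Monte Carlo version of this computation (not the author's algorithm, which approximates these statistics more cheaply) draws three zero-mean Gaussian components and tabulates the magnitude statistics:

```python
import math
import random

random.seed(0)

def dv_magnitude_stats(sigmas, n=100_000):
    """Monte Carlo estimate of the mean and standard deviation of |Delta v|
    for a zero-mean Gaussian vector with per-axis standard deviations."""
    mags = [math.hypot(random.gauss(0.0, sigmas[0]),
                       random.gauss(0.0, sigmas[1]),
                       random.gauss(0.0, sigmas[2])) for _ in range(n)]
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / (n - 1)
    return mean, math.sqrt(var)

# Sanity check: with equal sigmas, |Delta v| is Maxwell distributed,
# whose mean is 2 * sigma * sqrt(2 / pi).
mean, sd = dv_magnitude_stats([1.0, 1.0, 1.0])
```

The unequal-sigma case, where no simple closed form exists, is exactly where the paper's approximations earn their keep.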

  17. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
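The parameterization described above, independent complex Gaussian Fourier coefficients with variances set by the power spectral density, transformed back to the time domain, can be sketched directly. We use a direct trigonometric sum for clarity (the paper uses the FFT for speed), and the normalization is schematic:

```python
import math
import random

random.seed(0)

def simulate_series(psd, n):
    """Draw Fourier coefficients as independent complex Gaussians with
    variance set by psd(frequency), then sum them into a real series."""
    series = [0.0] * n
    for k in range(1, n // 2):
        amp = math.sqrt(psd(k / n))
        re, im = random.gauss(0.0, amp), random.gauss(0.0, amp)
        for t in range(n):
            phase = 2.0 * math.pi * k * t / n
            series[t] += re * math.cos(phase) - im * math.sin(phase)
    return series

# Example: a power-law ("red") PSD of the kind used for AGN variability.
x = simulate_series(lambda f: f ** -2.0, n=256)
```

Fitting proceeds in reverse: the coefficients become free parameters constrained by the observed data, with the PSD model compared in a Bayesian framework.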

  18. Continuous variable quantum cryptography using coherent states.

    PubMed

    Grosshans, Frédéric; Grangier, Philippe

    2002-02-04

    We propose several methods for quantum key distribution (QKD) based on the generation and transmission of random distributions of coherent or squeezed states, and we show that they are secure against individual eavesdropping attacks. These protocols require that the transmission of the optical line between Alice and Bob is larger than 50%, but they do not rely on "sub-shot-noise" features such as squeezing. Their security is a direct consequence of the no-cloning theorem, which limits the signal-to-noise ratio of possible quantum measurements on the transmission line. Our approach can also be used for evaluating various QKD protocols using light with Gaussian statistics.

  19. Digital image analysis of a turbulent flame

    NASA Astrophysics Data System (ADS)

    Zucherman, L.; Kawall, J. G.; Keffer, J. F.

    1988-01-01

    Digital image analysis of cine pictures of an unconfined rich premixed turbulent flame has been used to determine structural characteristics of the turbulent/non-turbulent interface of the flame. The results, comprising various moments of the interface position, probability density functions and correlation functions, establish that the instantaneous flame-interface position is essentially a Gaussian random variable with a superimposed quasi-periodical component. The latter is ascribable to a pulsation caused by the convection and the stretching of ring vortices present within the flame. To a first approximation, the flame can be considered similar to a three-dimensional axisymmetric turbulent jet, with superimposed ring vortices, in which combustion occurs.

  20. Bivariate-t distribution for transition matrix elements in Breit-Wigner to Gaussian domains of interacting particle systems.

    PubMed

    Kota, V K B; Chavda, N D; Sahu, R

    2006-04-01

    Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength lambda exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by lambda = lambda_c and lambda = lambda_F, respectively (lambda_F > lambda_c). For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with lambda > lambda_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends from the Gaussian regime down into the BW regime, to lambda = lambda_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.

  1. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  2. Separation of the low-frequency atmospheric variability into non-Gaussian multidimensional sources by Independent Subspace Analysis

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Ribeiro, Andreia

    2016-04-01

    An efficient nonlinear method of statistical source separation of space-distributed non-Gaussian data is proposed. The method relies on so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated, and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique builds on projection pursuit, which seeks data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources; sources are thus sought that match certain nonlinear data structures. The maximized contrast function is built so as to minimize the mean square of the residuals of certain nonlinear regressions. The resulting residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent, and quasi-Gaussian, an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large-scale circulation indices are nonlinearly correlated. 
The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the cluster centroids of the joint probability density function tend to be located. This favors a better splitting of the QG3 model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading non-Gaussian dyad of the model is associated with a positive correlation between 1) the squared anomaly of the extratropical jet stream and 2) the meridional meandering of the jet stream. Triadic sources obtained from maximized third-order cross-cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, which are only possible thanks to joint non-Gaussianity. Such triadic synergies are accounted for by an information-theoretic measure: the Interaction Information. The dominant triad of the model occurs between anomalies of 1) the pressure at the North Pole, 2) the jet-stream intensity at the eastern North American boundary, and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.

  3. Quantification of Gaussian quantum steering.

    PubMed

    Kogias, Ioannis; Lee, Antony R; Ragy, Sammy; Adesso, Gerardo

    2015-02-13

    Einstein-Podolsky-Rosen steering incarnates a useful nonclassical correlation which sits between entanglement and Bell nonlocality. While a number of qualitative steering criteria exist, very little has been achieved for what concerns quantifying steerability. We introduce a computable measure of steering for arbitrary bipartite Gaussian states of continuous variable systems. For two-mode Gaussian states, the measure reduces to a form of coherent information, which is proven never to exceed entanglement, and to reduce to it on pure states. We provide an operational connection between our measure and the key rate in one-sided device-independent quantum key distribution. We further prove that Peres' conjecture holds in its stronger form within the fully Gaussian regime: namely, steering bound entangled Gaussian states by Gaussian measurements is impossible.

  4. Polynomial approximation of non-Gaussian unitaries by counting one photon at a time

    NASA Astrophysics Data System (ADS)

    Arzani, Francesco; Treps, Nicolas; Ferrini, Giulia

    2017-05-01

    In quantum computation with continuous-variable systems, quantum advantage can only be achieved if some non-Gaussian resource is available. Yet, non-Gaussian unitary evolutions and measurements suited for computation are challenging to realize in the laboratory. We propose and analyze two methods to apply a polynomial approximation of any unitary operator diagonal in the amplitude quadrature representation, including non-Gaussian operators, to an unknown input state. Our protocols use as a primary non-Gaussian resource a single-photon counter. We use the fidelity of the transformation with the target one on Fock and coherent states to assess the quality of the approximate gate.

  5. Drop Spreading with Random Viscosity

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Jensen, Oliver

    2016-11-01

    Airway mucus acts as a barrier to protect the lung. However, as a biological material, its physical properties are known imperfectly and can be spatially heterogeneous. In this study we assess the impact of these uncertainties on the rate of spreading of a drop (representing an inhaled aerosol) over a mucus film. We model the film as Newtonian, having a viscosity that depends linearly on the concentration of a passive solute (a crude proxy for mucin proteins). Given an initial random solute (and hence viscosity) distribution, described as a Gaussian random field with a given correlation structure, we seek to quantify the uncertainties in outcomes as the drop spreads. Using lubrication theory, we describe the spreading of the drop in terms of a system of coupled nonlinear PDEs governing the evolution of film height and the vertically-averaged solute concentration. We perform Monte Carlo simulations to predict the variability in the drop centre location and width (1D) or area (2D). We show how simulation results are well described (at much lower computational cost) by a low-order model using a weak disorder expansion. Our results show, for example, how variability in the drop location is a non-monotonic function of the solute correlation length. Engineering and Physical Sciences Research Council.

  6. Continuous-variable quantum key distribution with Gaussian source noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yujie; Peng Xiang; Yang Jian

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  7. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples (assuming the Gaussian variable is zero mean, which is de facto the case) tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. Fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics is used extensively throughout the development, from generation of the seed vector in PM to analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.

  8. A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"

    NASA Astrophysics Data System (ADS)

    Jansen, Malte F.

    2017-02-01

    This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.

  9. Gaussian maximally multipartite-entangled states

    NASA Astrophysics Data System (ADS)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n = 2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n ≤ 7.

  10. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    PubMed

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on the childhood malnutrition in one of the biggest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and the malnutrition problem in children less than 5 years of age using the 2003 Demographic and Health survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators in which the 3 measurements of the malnutrition status of children are assumed as indicators for the latent variable "nutritional status".

  11. A Bayesian, generalized frailty model for comet assays.

    PubMed

    Ghebretinsae, Aklilu Habteab; Faes, Christel; Molenberghs, Geert; De Boeck, Marlies; Geys, Helena

    2013-05-01

    This paper proposes a flexible modeling approach for so-called comet assay data regularly encountered in preclinical research. While such data consist of non-Gaussian outcomes in a multilevel hierarchical structure, traditional analyses typically completely or partly ignore this hierarchical nature by summarizing measurements within a cluster. Non-Gaussian outcomes are often modeled using exponential family models. This is true not only for binary and count data, but also, for example, for time-to-event outcomes. Two important reasons for extending this family are (1) the possible occurrence of overdispersion, meaning that the variability in the data may not be adequately described by the models, which often exhibit a prescribed mean-variance link, and (2) the accommodation of a hierarchical structure in the data, owing to clustering in the data. The first issue is dealt with through so-called overdispersion models. Clustering is often accommodated through the inclusion of random subject-specific effects. Though not always, one conventionally assumes such random effects to be normally distributed. In the case of time-to-event data, one encounters, for example, the gamma frailty model (Duchateau and Janssen, 2007). While both of these issues may occur simultaneously, models combining both are uncommon. Molenberghs et al. (2010) proposed a broad class of generalized linear models accommodating overdispersion and clustering through two separate sets of random effects. Here, we use this method to model data from a comet assay with a three-level hierarchical structure. Although a conjugate gamma random effect is used for the overdispersion random effect, both gamma and normal random effects are considered for the hierarchical random effect. Apart from model formulation, we place emphasis on Bayesian estimation. 
Our proposed method has an upper hand over the traditional analysis in that it (1) uses the appropriate distribution stipulated in the literature; (2) deals with the complete hierarchical nature; and (3) uses all information instead of summary measures. The fit of the model to the comet assay is compared against the background of more conventional model fits. Results indicate the toxicity of 1,2-dimethylhydrazine dihydrochloride at different dose levels (low, medium, and high).

  12. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., showing small variability in the results. PMID:27780261

  14. Sparse covariance estimation in heterogeneous samples

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  15. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainty of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure generates pseudo-random numbers for the input resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among the resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty by the Monte Carlo method is presented using specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
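
    The core mechanics of such a Monte Carlo propagation with correlated Gaussian inputs can be sketched in a few lines (the resistance values, uncertainties, correlation, and the ratio measurand below are hypothetical illustrations, not the paper's SPRT data): correlated draws come from a Cholesky-style construction, and the measurand's uncertainty is read off as the sample standard deviation.

```python
import math
import random

random.seed(1)

# Hypothetical inputs: two resistances with standard uncertainties u1, u2 and
# correlation rho, and a toy measurand W = R1 / R2 (a resistance ratio).
R1, u1 = 25.0, 0.002
R2, u2 = 99.0, 0.004
rho = 0.6

M = 200_000
w = []
for _ in range(M):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    # Correlated bivariate Gaussian draw via the 2x2 Cholesky factor.
    r1 = R1 + u1 * z1
    r2 = R2 + u2 * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    w.append(r1 / r2)

mean_w = sum(w) / M
u_w = math.sqrt(sum((x - mean_w)**2 for x in w) / (M - 1))
print(mean_w, u_w)
```

    For this linearizable toy case the Monte Carlo standard uncertainty agrees with the GUM law-of-propagation result, which is exactly the kind of validation the abstract describes.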

  16. Non-Gaussianity of Low Frequency Heart Rate Variability and Sympathetic Activation: Lack of Increases in Multiple System Atrophy and Parkinson Disease

    PubMed Central

    Kiyono, Ken; Hayano, Junichiro; Kwak, Shin; Watanabe, Eiichi; Yamamoto, Yoshiharu

    2012-01-01

    The correlates of indices of long-term ambulatory heart rate variability (HRV) of the autonomic nervous system have not been completely understood. In this study, we evaluated conventional HRV indices, obtained from the daytime (12:00–18:00) Holter recording, and a recently proposed non-Gaussianity index (λ; Kiyono et al., 2008) in 12 patients with multiple system atrophy (MSA) and 10 patients with Parkinson disease (PD), known to have varying degrees of cardiac vagal and sympathetic dysfunction. Compared with the age-matched healthy control group, the MSA patients showed significantly decreased HRV, most probably reflecting impaired vagal heart rate control, but the PD patients did not show such reduced variability. In both MSA and PD patients, the low-to-high frequency (LF/HF) ratio and the short-term fractal exponent α1, suggested to reflect the sympathovagal balance, were significantly decreased, as observed in congestive heart failure (CHF) patients with sympathetic overdrive. In contrast, the analysis of the non-Gaussianity index λ showed that a marked increase in intermittent and non-Gaussian HRV observed in the CHF patients was not observed in the MSA and PD patients with sympathetic dysfunction. These findings provide additional evidence for the relation between the non-Gaussian intermittency of HRV and increased sympathetic activity. PMID:22371705

  17. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation that can take into account any number of factors. This new result opens up the possibility of enriching the equation by inserting new factors as scientific knowledge increases. The adjective "living" refers to this continuous enrichment of the Drake equation, which is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is turned into the product of seven positive random variables; we call this "the statistical Drake equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of statistics: in loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e., normal) random variable. This is called the Lyapunov or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. The mean value, standard deviation, mode, median and all the moments of this lognormal N can then be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation become seven independent positive random variables, and the probability distribution of each may be arbitrary; the CLT in the Lyapunov or Lindeberg form (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor, which is both physically realistic and practically very useful. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N; this distance thus becomes a new random variable, and we derive the relevant probability density function, apparently previously unknown (dubbed the "Maccone distribution" by Paul Davies). Finally, the Data Enrichment Principle: any positive number of random variables in the statistical Drake equation is compatible with the CLT, so our generalization allows many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. We call this capability to make room for future factors the "Data Enrichment Principle" and regard it as the key to more profound future results in astrobiology and SETI.
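    The lognormal claim is easy to check numerically. A minimal sketch follows (the factor distribution is chosen arbitrarily for illustration, and the factors are i.i.d. only for brevity; the Lyapunov/Lindeberg forms invoked above do not require identical distributions): taking logs turns the seven-factor product into a sum, so the skewness of log N should shrink toward the Gaussian value of zero.

```python
import math
import random

random.seed(2)

# Sketch: the Drake product N = f1*f2*...*f7 of independent positive random
# variables. log N is a sum of logs, so by the CLT it tends toward Gaussian,
# i.e. N tends toward lognormal.
M = 50_000

def skewness(xs):
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m)**2 for x in xs) / len(xs))
    return sum(((x - m) / sd)**3 for x in xs) / len(xs)

one_factor_logs = [math.log(random.uniform(0.1, 10)) for _ in range(M)]
drake_logs = [sum(math.log(random.uniform(0.1, 10)) for _ in range(7))
              for _ in range(M)]

# Summing seven log-factors pulls the skewness toward 0 (the Gaussian limit).
s1 = skewness(one_factor_logs)
s7 = skewness(drake_logs)
print(round(s1, 2), round(s7, 2))
```

    With more factors added under the Data Enrichment Principle, the same mechanism pushes log N still closer to Gaussian.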

  18. Entropy factor for randomness quantification in neuronal data.

    PubMed

    Rajdl, K; Lansky, P; Kostal, L

    2017-11-01

    A novel measure of neural spike train randomness, an entropy factor, is proposed. It is based on the Shannon entropy of the number of spikes in a time window and can be seen as an analogy to the Fano factor. Theoretical properties of the new measure are studied for equilibrium renewal processes and further illustrated on gamma and inverse Gaussian probability distributions of interspike intervals. Finally, the entropy factor is evaluated from the experimental records of spontaneous activity in macaque primary visual cortex and compared to its theoretical behavior deduced for the renewal process models. Both theoretical and experimental results show substantial differences between the Fano and entropy factors. Rather paradoxically, an increase in the variability of spike count is often accompanied by an increase in its predictability, as evidenced by the entropy factor. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
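
    The two ingredients, the Fano factor and the Shannon entropy of the windowed spike count, can be sketched for renewal spike trains (this is an illustration of the quantities involved, not the paper's exact entropy-factor normalization, and the Erlang-2 train below is a hypothetical example of a gamma interspike-interval model):

```python
import math
import random
from collections import Counter

random.seed(3)

# Spike counts in a window for a renewal process defined by its ISI sampler.
def spike_counts(isi_sampler, window=10.0, trials=20000):
    counts = []
    for _ in range(trials):
        t, n = isi_sampler(), 0
        while t < window:
            n += 1
            t += isi_sampler()
        counts.append(n)
    return counts

def fano(counts):                    # variance-to-mean ratio of the counts
    m = sum(counts) / len(counts)
    v = sum((c - m)**2 for c in counts) / len(counts)
    return v / m

def count_entropy(counts):           # Shannon entropy of the counts, in bits
    n = len(counts)
    return -sum((k / n) * math.log2(k / n) for k in Counter(counts).values())

poisson = spike_counts(lambda: random.expovariate(1.0))           # rate 1
erlang2 = spike_counts(lambda: random.gammavariate(2, 0.5))       # same rate

fano_p, fano_e = fano(poisson), fano(erlang2)
ent_p, ent_e = count_entropy(poisson), count_entropy(erlang2)
print(fano_p, fano_e)
print(ent_p, ent_e)
```

    The more regular gamma train has both a lower Fano factor and a lower count entropy here; the paper's point is that across models and data the two measures need not move together.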

  19. Analysis of Flow and Transport in non-Gaussian Heterogeneous Formations Using a Generalized Sub-Gaussian Model

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Neuman, S. P.

    2016-12-01

    Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y, with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensively by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
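
    The subordination mechanism Y = U G can be illustrated with a minimal sketch (i.i.d. samples rather than the paper's spatially correlated field, and a lognormal subordinator chosen purely as an example): multiplying a Gaussian by an independent random scale produces a symmetric marginal with a sharper peak and heavier tails than a Gaussian, visible as positive excess kurtosis.

```python
import random

random.seed(4)

# Minimal GSG-style marginal: G standard Gaussian, U a nonnegative random
# scale independent of G (lognormal here, as an illustrative choice).
M = 100_000
y = [random.lognormvariate(0, 0.5) * random.gauss(0, 1) for _ in range(M)]

def excess_kurtosis(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m)**2 for x in xs) / len(xs)
    return sum((x - m)**4 for x in xs) / len(xs) / v**2 - 3.0

# Exactly 0 for a Gaussian; positive here because of the random variance.
k_ex = excess_kurtosis(y)
print(round(k_ex, 2))
```

    In the full GSG model the subordinator varies in space, which is what produces the lag-dependent increment statistics the abstract describes.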

  20. Increased intra-individual reaction time variability in attention-deficit/hyperactivity disorder across response inhibition tasks with different cognitive demands.

    PubMed

    Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H

    2009-10-01

    One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses, which divide variability into normal and exponential components, and Fast Fourier transform (FFT), which allows detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that, for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
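
    The ex-Gaussian RT model used in such analyses is the sum of a Gaussian component (mu, sigma) and an independent exponential component (mean tau). A minimal sketch with hypothetical parameters shows a simple method-of-moments recovery (an alternative to the maximum-likelihood fits typically used): the exponential part carries all of the skewness, since the third central moment equals 2·tau³.

```python
import math
import random

random.seed(5)

# Hypothetical RT parameters, in seconds (not from the study's data).
mu, sigma, tau = 0.45, 0.05, 0.15
M = 200_000
rt = [random.gauss(mu, sigma) + random.expovariate(1 / tau) for _ in range(M)]

# Method-of-moments recovery: m3_central = 2*tau**3, var = sigma**2 + tau**2,
# mean = mu + tau.
m = sum(rt) / M
var = sum((x - m)**2 for x in rt) / M
m3 = sum((x - m)**3 for x in rt) / M
tau_hat = (m3 / 2) ** (1 / 3)
sigma_hat = math.sqrt(var - tau_hat**2)
mu_hat = m - tau_hat
print(round(mu_hat, 3), round(sigma_hat, 3), round(tau_hat, 3))
```

    Elevated sigma then corresponds to a wider Gaussian core (response preparation noise), elevated tau to a heavier exponential tail (infrequent very slow responses).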

  1. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the range 0.3 ≤ k ≤ 1.0 Mpc-1, and can be even ~200 times larger at k ~ 5 Mpc-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by current and upcoming EoR 21-cm experiments.

  2. Emergence of Multiscaling in a Random-Force Stirred Fluid

    NASA Astrophysics Data System (ADS)

    Yakhot, Victor; Donzis, Diego

    2017-07-01

    We consider the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as a first appearance of anomalous scaling of normalized moments of velocity derivatives (dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that, due to multiscaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different "Reynolds numbers" reflecting a multitude of anomalous scaling exponents. The theoretically predicted transition disappears at Rλ≤3 . The developed theory is in quantitative agreement with the outcome of large-scale numerical simulations.

  3. Characterizing CDOM Spectral Variability Across Diverse Regions and Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Grunert, Brice K.; Mouw, Colleen B.; Ciochetto, Audrey B.

    2018-01-01

    Satellite remote sensing of colored dissolved organic matter (CDOM) has focused on CDOM absorption (aCDOM) at a reference wavelength, as its magnitude provides insight into the underwater light field and large-scale biogeochemical processes. CDOM spectral slope, SCDOM, has been treated as a constant or semiconstant parameter in satellite retrievals of aCDOM despite significant regional and temporal variabilities. SCDOM and other optical metrics provide insights into CDOM composition, processing, food web dynamics, and carbon cycling. To date, much of this work relies on fluorescence techniques or aCDOM in spectral ranges unavailable to current and planned satellite sensors (e.g., <300 nm). In preparation for anticipated future hyperspectral satellite missions, we take the first step here of exploring global variability in SCDOM and fit deviations in the aCDOM spectra using the recently proposed Gaussian decomposition method. From this, we investigate if global variability in retrieved SCDOM and Gaussian components is significant and regionally distinct. We iteratively decreased the spectral range considered and analyzed the number, location, and magnitude of fitted Gaussian components to understand if a reduced spectral range impacts information obtained within a common spectral window. We compared the fitted slope from the Gaussian decomposition method to absorption-based indices that indicate CDOM composition to determine the ability of satellite-derived slope to inform the analysis and modeling of large-scale biogeochemical processes. Finally, we present implications of the observed variability for remote sensing of CDOM characteristics via SCDOM.

  4. Dynamical transition for a particle in a squared Gaussian potential

    NASA Astrophysics Data System (ADS)

    Touya, C.; Dean, D. S.

    2007-02-01

    We study the problem of a Brownian particle diffusing in finite dimensions in a potential given by ψ = φ²/2, where φ is a Gaussian random field. Exact results for the diffusion constant in the high temperature phase are given in one and two dimensions, and it is shown to vanish in a power-law fashion at the dynamical transition temperature. Our results are confronted with numerical simulations in which the Gaussian field is constructed, in a standard way, as a sum over random Fourier modes. We show that when the number of Fourier modes is finite the low temperature diffusion constant becomes non-zero and has an Arrhenius form. Thus we have a simple model with a fully understood finite size scaling theory for the dynamical transition. In addition we analyse the nature of the anomalous diffusion in the low temperature regime and show that the anomalous exponent agrees with that predicted by a trap model.
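
    The random-Fourier-mode construction mentioned above can be sketched in one dimension (the Gaussian spectral density and the specific mode count below are illustrative assumptions): with uniform random phases, φ(x) = √(2/M) Σ_j cos(k_j x + θ_j) is approximately a stationary unit-variance Gaussian field.

```python
import math
import random

random.seed(6)

# Random-Fourier-mode field: wavenumbers sampled from a (Gaussian) spectrum,
# phases uniform on [0, 2*pi).
M_modes = 256
ks = [random.gauss(0, 1) for _ in range(M_modes)]
thetas = [random.uniform(0, 2 * math.pi) for _ in range(M_modes)]

def phi(x):
    return math.sqrt(2 / M_modes) * sum(
        math.cos(k * x + t) for k, t in zip(ks, thetas))

# Across well-separated points, phi should have mean ~0 and variance ~1; the
# squared-Gaussian potential of the paper is then psi(x) = phi(x)**2 / 2.
xs = [random.uniform(-1000, 1000) for _ in range(4000)]
vals = [phi(x) for x in xs]
mean_val = sum(vals) / len(vals)
var_val = sum((v - mean_val)**2 for v in vals) / len(vals)
print(round(mean_val, 2), round(var_val, 2))
```

    The finite-M effects the abstract discusses (a non-zero Arrhenius-form diffusion constant at low temperature) arise precisely because such a finite sum is only approximately Gaussian.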

  5. Scattering of Gaussian Beams by Disordered Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.

    2016-01-01

    A frequently observed characteristic of electromagnetic scattering by a disordered particulate medium is the absence of pronounced speckles in angular patterns of the scattered light. It is known that such diffuse speckle-free scattering patterns can be caused by averaging over randomly changing particle positions and/or over a finite spectral range. To get further insight into the possible physical causes of the absence of speckles, we use the numerically exact superposition T-matrix solver of the Maxwell equations and analyze the scattering of plane-wave and Gaussian beams by representative multi-sphere groups. We show that phase and amplitude variations across an incident Gaussian beam do not serve to extinguish the pronounced speckle pattern typical of plane-wave illumination of a fixed multi-particle group. Averaging over random particle positions and/or over a finite spectral range is still required to generate the classical diffuse speckle-free regime.

  6. Theoretical analysis of non-Gaussian heterogeneity effects on subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Guadagnini, Alberto; Neuman, Shlomo P.

    2017-04-01

    Much of the stochastic groundwater literature is devoted to the analysis of flow and transport in Gaussian or multi-Gaussian log hydraulic conductivity (or transmissivity) fields, Y(x)=ln\\func K(x) (x being a position vector), characterized by one or (less frequently) a multiplicity of spatial correlation scales. Yet Y and many other variables and their (spatial or temporal) increments, ΔY, are known to be generally non-Gaussian. One common manifestation of non-Gaussianity is that whereas frequency distributions of Y often exhibit mild peaks and light tails, those of increments ΔY are generally symmetric with peaks that grow sharper, and tails that become heavier, as separation scale or lag between pairs of Y values decreases. A statistical model that captures these disparate, scale-dependent distributions of Y and ΔY in a unified and consistent manner has been recently proposed by us. This new "generalized sub-Gaussian (GSG)" model has the form Y(x)=U(x)G(x) where G(x) is (generally, but not necessarily) a multiscale Gaussian random field and U(x) is a nonnegative subordinator independent of G. The purpose of this paper is to explore analytically, in an elementary manner, lead-order effects that non-Gaussian heterogeneity described by the GSG model have on the stochastic description of flow and transport. Recognizing that perturbation expansion of hydraulic conductivity K=eY diverges when Y is sub-Gaussian, we render the expansion convergent by truncating Y's domain of definition. We then demonstrate theoretically and illustrate by way of numerical examples that, as the domain of truncation expands, (a) the variance of truncated Y (denoted by Yt) approaches that of Y and (b) the pdf (and thereby moments) of Yt increments approach those of Y increments and, as a consequence, the variogram of Yt approaches that of Y. 
This in turn guarantees that perturbing Kt=etY to second order in σYt (the standard deviation of Yt) yields results which approach those we obtain upon perturbing K=eY to second order in σY even as the corresponding series diverges. Our analysis is rendered mathematically tractable by considering mean-uniform steady state flow in an unbounded, two-dimensional domain of mildly heterogeneous Y with a single-scale function G having an isotropic exponential covariance. Results consist of expressions for (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity, and advective particle displacement and (b) analogues of preasymptotic as well as asymptotic Fickian dispersion coefficients. We compare these theoretically and graphically with corresponding expressions developed in the literature for Gaussian Y. We find the former to differ from the latter by a factor k = /2 ( <> denoting ensemble expectation) and the GSG covariance of longitudinal velocity to contain an additional nugget term depending on this same factor. In the limit as Y becomes Gaussian, k reduces to one and the nugget term drops out.

  7. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Repeat-until-success cubic phase gate for universal continuous-variable quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Kevin; Pooser, Raphael; Siopsis, George

    2015-03-24

    To achieve universal quantum computation using continuous variables, one needs to jump out of the set of Gaussian operations and have a non-Gaussian element, such as the cubic phase gate. However, such a gate is currently very difficult to implement in practice. Here we introduce an experimentally viable "repeat-until-success" approach to generating the cubic phase gate, achieved using sequential photon subtractions and Gaussian operations. We find that our scheme offers benefits in terms of the expected time until success, as well as the fact that we do not require any complex off-line resource state, although we do require a primitive quantum memory.

  9. Continuous-variable measurement-device-independent quantum key distribution with photon subtraction

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua

    2018-04-01

    It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented with existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, precisely making up for the shortcoming of the original CV-MDI-QKD protocol, and that the one-photon subtraction operation has the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.

  10. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
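
    The paper's central point can be reproduced with a minimal simulation (illustrative smoothness, sample sizes, and trajectory length, not the paper's datasets): generate two groups of smooth 1D Gaussian trajectories with no true difference, run a pointwise 0D two-sample t-test at α = 0.05, and count how often "significance" appears somewhere along the curve.

```python
import math
import random

random.seed(7)

Q, N, SIMS = 101, 10, 200
T_CRIT = 2.101        # two-tailed 0.05 critical value for df = 2*N - 2 = 18

def smooth_trajectory():
    w = [random.gauss(0, 1) for _ in range(Q + 20)]
    # Moving-average smoothing mimics the 1D correlation of real kinematics.
    return [sum(w[i:i + 21]) / 21 for i in range(Q)]

false_positives = 0
for _ in range(SIMS):
    a = [smooth_trajectory() for _ in range(N)]
    b = [smooth_trajectory() for _ in range(N)]
    for q in range(Q):
        xa, xb = [t[q] for t in a], [t[q] for t in b]
        ma, mb = sum(xa) / N, sum(xb) / N
        va = sum((x - ma)**2 for x in xa) / (N - 1)
        vb = sum((x - mb)**2 for x in xb) / (N - 1)
        t_stat = (ma - mb) / math.sqrt((va + vb) / N)
        if abs(t_stat) > T_CRIT:   # "significant" somewhere along the curve
            false_positives += 1
            break

fp_rate = false_positives / SIMS
print(fp_rate)   # typically well above the nominal 0.05
```

    The exact rate depends on the trajectory smoothness, which is what random field theory corrects for in the 1D methods the paper recommends.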

  11. Connectivity ranking of heterogeneous random conductivity models

    NASA Astrophysics Data System (ADS)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

    To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be closely correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte Carlo approach and thereby enabling the computation of its uncertainty. The results show the impact of the geostatistical parameters on the connectivity of each group of random fields and make it possible to rank the fields according to their minimum hydraulic resistance.
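
    One plausible formalization of the graph-based computation (an assumption for illustration, not necessarily the paper's exact functional: cell resistance taken as 1/K, least-resistance path from the left face to the right face) is a multi-source Dijkstra search on the conductivity lattice:

```python
import heapq
import random

random.seed(8)

# A lognormal conductivity lattice and its cell resistances 1/K.
NX, NY = 40, 20
K = [[random.lognormvariate(0, 1.0) for _ in range(NX)] for _ in range(NY)]
res = [[1.0 / K[j][i] for i in range(NX)] for j in range(NY)]

def min_hydraulic_resistance():
    # Multi-source Dijkstra: a path may start anywhere on the left face.
    dist = {(j, 0): res[j][0] for j in range(NY)}
    pq = [(d, j, 0) for (j, _), d in dist.items()]
    heapq.heapify(pq)
    best = float("inf")
    while pq:
        d, j, i = heapq.heappop(pq)
        if d > dist.get((j, i), float("inf")):
            continue                      # stale queue entry
        if i == NX - 1:
            best = min(best, d)           # reached the right face
            continue
        for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nj, ni = j + dj, i + di
            if 0 <= nj < NY and 0 <= ni < NX:
                nd = d + res[nj][ni]
                if nd < dist.get((nj, ni), float("inf")):
                    dist[(nj, ni)] = nd
                    heapq.heappush(pq, (nd, nj, ni))
    return best

r_min = min_hydraulic_resistance()
# Any single horizontal row gives an upper bound on the minimum resistance.
r_row0 = sum(res[0][i] for i in range(NX))
print(r_min, r_row0)
```

    Repeating this over many field realizations gives the Monte Carlo distribution of the connectivity measure, which is what allows the ranking with uncertainty described in the abstract.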

  12. A qualitative assessment of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
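
    A toy version of the construction (i.i.d. samples rather than correlated processes, and unit variances chosen for illustration) shows why the model is useful: x = g1·g2 + g3 has a symmetric but non-Gaussian marginal, with positive excess kurtosis coming entirely from the product (amplitude-modulated) term.

```python
import random

random.seed(9)

# x = g1*g2 + g3: product of two Gaussians plus an independent Gaussian.
M = 200_000
x = [random.gauss(0, 1) * random.gauss(0, 1) + random.gauss(0, 1)
     for _ in range(M)]

m = sum(x) / M
v = sum((s - m)**2 for s in x) / M          # Var = 1*1 + 1 = 2
kurt_excess = sum((s - m)**4 for s in x) / M / v**2 - 3.0
print(round(m, 2), round(v, 2), round(kurt_excess, 2))
```

    The heavy-tailed product term mimics intermittent gusts, while in the full model the third process supplies the slowly varying random mean value discussed in the abstract.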

  13. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bits-per-sample accuracy, which may be needed at very low SNR conditions where the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.

  14. Understanding Past Population Dynamics: Bayesian Coalescent-Based Modeling with Covariates

    PubMed Central

    Gill, Mandev S.; Lemey, Philippe; Bennett, Shannon N.; Biek, Roman; Suchard, Marc A.

    2016-01-01

    Effective population size characterizes the genetic variability in a population and is a parameter of paramount importance in population genetics and evolutionary biology. Kingman’s coalescent process enables inference of past population dynamics directly from molecular sequence data, and researchers have developed a number of flexible coalescent-based models for Bayesian nonparametric estimation of the effective population size as a function of time. Major goals of demographic reconstruction include identifying driving factors of effective population size, and understanding the association between the effective population size and such factors. Building upon Bayesian nonparametric coalescent-based approaches, we introduce a flexible framework that incorporates time-varying covariates that exploit Gaussian Markov random fields to achieve temporal smoothing of effective population size trajectories. To approximate the posterior distribution, we adapt efficient Markov chain Monte Carlo algorithms designed for highly structured Gaussian models. Incorporating covariates into the demographic inference framework enables the modeling of associations between the effective population size and covariates while accounting for uncertainty in population histories. Furthermore, it can lead to more precise estimates of population dynamics. We apply our model to four examples. We reconstruct the demographic history of raccoon rabies in North America and find a significant association with the spatiotemporal spread of the outbreak. Next, we examine the effective population size trajectory of the DENV-4 virus in Puerto Rico along with viral isolate count data and find similar cyclic patterns. We compare the population history of the HIV-1 CRF02_AG clade in Cameroon with HIV incidence and prevalence data and find that the effective population size is more reflective of incidence rate. 
Finally, we explore the hypothesis that the population dynamics of musk ox during the Late Quaternary period were related to climate change. [Coalescent; effective population size; Gaussian Markov random fields; phylodynamics; phylogenetics; population genetics.] PMID:27368344
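
    The temporal smoothing through Gaussian Markov random fields mentioned above can be illustrated with the precision matrix of a first-order random walk, a standard GMRF smoothing prior. This is a generic sketch, not the authors' exact model:

```python
import numpy as np

def rw1_precision(n, tau):
    """Precision matrix of a first-order random-walk GMRF: it penalizes
    squared differences between neighboring time points, so sampled
    trajectories are temporally smooth; tau sets the smoothing strength."""
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
    return tau * D.T @ D

Q = rw1_precision(5, tau=2.0)
```

Each interior point is conditionally coupled only to its two neighbors, which is what makes the highly structured MCMC algorithms mentioned above efficient.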

  15. On the evaluation of derivatives of Gaussian integrals

    NASA Technical Reports Server (NTRS)

    Helgaker, Trygve; Taylor, Peter R.

    1992-01-01

    We show that by a suitable change of variables, the derivatives of molecular integrals over Gaussian-type functions required for analytic energy derivatives can be evaluated with significantly less computational effort than current formulations. The reduction in effort increases with the order of differentiation.

  16. Entanglement-distillation attack on continuous-variable quantum key distribution in a turbulent atmospheric channel

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan

    2017-08-01

    The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.

  17. Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar

    NASA Technical Reports Server (NTRS)

    Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow log-Poisson statistics, the passive scalar increments follow statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.

  18. Gaussification and entanglement distillation of continuous-variable systems: a unifying picture.

    PubMed

    Campbell, Earl T; Eisert, Jens

    2012-01-13

    Distillation of entanglement using only Gaussian operations is an important primitive in quantum communication, quantum repeater architectures, and distributed quantum computing. Existing distillation protocols for continuous degrees of freedom are only known to converge to a Gaussian state when measurements yield precisely the vacuum outcome. In sharp contrast, non-Gaussian states can be deterministically converted into Gaussian states while preserving their second moments, albeit by usually reducing their degree of entanglement. In this work-based on a novel instance of a noncommutative central limit theorem-we introduce a picture general enough to encompass the known protocols leading to Gaussian states, and new classes of protocols including multipartite distillation. This gives the experimental option of balancing the merits of success probability against entanglement produced.

  19. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  20. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States.

    PubMed

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-03

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  1. Random medium model for cusping of plane waves.

    PubMed

    Li, Jia; Korotkova, Olga

    2017-09-01

    We introduce a model for a three-dimensional (3D) Schell-type stationary medium whose degree of the potential's correlation satisfies the Fractional Multi-Gaussian (FMG) function. Compared with the scattered profile produced by the Gaussian Schell-model (GSM) medium, the Fractional Multi-Gaussian Schell-model (FMGSM) medium gives rise to a sharp concave intensity apex in the scattered field. This implies that the FMGSM medium also accounts for a larger power in the bucket (PIB) than the Gaussian's in the forward scattering direction, hence being a better candidate than the GSM medium for generating highly focused (cusp-like) scattered profiles in the far zone. Compared to other mathematical models for the medium's correlation function that can produce similar cusped scattered profiles, the FMG function offers unprecedented tractability, being a weighted superposition of Gaussian functions. Our results provide useful applications to energy counter problems and particle manipulation by weakly scattered fields.
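
    A correlation function built as a weighted superposition of Gaussians can be sketched with the standard multi-Gaussian (Schell-model) form with alternating binomial weights. This is a generic illustration of the construction, not necessarily the paper's exact FMG definition:

```python
import numpy as np
from math import comb

def multi_gaussian_degree(r, delta, M):
    """Multi-Gaussian correlation: an alternating, binomial-weighted sum of
    Gaussians of widths sqrt(m)*delta, normalized to 1 at r = 0. Larger M
    flattens the central profile relative to a single Gaussian."""
    weights = [(-1) ** (m - 1) * comb(M, m) / m for m in range(1, M + 1)]
    c = 1.0 / sum(weights)
    return c * sum(w * np.exp(-r**2 / (2 * m * delta**2))
                   for m, w in zip(range(1, M + 1), weights))
```

For M = 1 this reduces to a single Gaussian, which is the tractability advantage the abstract refers to: every term stays analytically Gaussian.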

  2. Landmark-free statistical analysis of the shape of plant leaves.

    PubMed

    Laga, Hamid; Kurtek, Sebastian; Srivastava, Anuj; Miklavcic, Stanley J

    2014-12-21

    The shapes of plant leaves are important features to biologists, as they can help in distinguishing plant species, measuring their health, analyzing their growth patterns, and understanding relations between various species. Most of the methods developed in the past focus on comparing the shape of individual leaves using either descriptors or finite sets of landmarks. However, descriptor-based representations are not invertible, and thus it is often hard to map descriptor variability into shape variability. On the other hand, landmark-based techniques require automatic detection and registration of the landmarks, which is very challenging in the case of plant leaves that exhibit high variability within and across species. In this paper, we propose a statistical model based on the Square Root Velocity Function (SRVF) representation and the Riemannian elastic metric of Srivastava et al. (2011) to model the observed continuous variability in the shape of plant leaves. We treat plant species as random variables on a non-linear shape manifold, so that statistical summaries, such as means and covariances, can be computed. One can then study the principal modes of variation and characterize the observed shapes using probability density models, such as Gaussians or mixtures of Gaussians. We demonstrate the use of this statistical model for (1) efficient classification of individual leaves, (2) the exploration of the space of plant leaf shapes, which is important in the study of population-specific variations, and (3) comparing entire plant species, which is fundamental to the study of evolutionary relationships in plants. Our approach does not require descriptors or landmarks but automatically solves for the optimal registration that aligns a pair of shapes. We evaluate the performance of the proposed framework on publicly available benchmarks such as the Flavia, Swedish, and ImageCLEF2011 plant leaf datasets.

  3. On the identification of Dragon Kings among extreme-valued outliers

    NASA Astrophysics Data System (ADS)

    Riva, M.; Neuman, S. P.; Guadagnini, A.

    2013-07-01

    Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.

  4. Quantification and scaling of multipartite entanglement in continuous variable systems.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-11-26

    We present a theoretical method to determine the multipartite entanglement between different partitions of multimode, fully or partially symmetric Gaussian states of continuous variable systems. For such states, we determine the exact expression of the logarithmic negativity and show that it coincides with that of equivalent two-mode Gaussian states. Exploiting this reduction, we demonstrate the scaling of the multipartite entanglement with the number of modes and its reliable experimental estimate by direct measurements of the global and local purities.
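
    For the simplest case of two modes, the logarithmic negativity of a Gaussian state follows from its covariance matrix through the smallest symplectic eigenvalue of the partial transpose. The sketch below (natural-log convention, vacuum variance normalized to 1) checks the textbook formula on a two-mode squeezed vacuum; it illustrates the quantity discussed above, not the paper's multimode reduction:

```python
import numpy as np

def log_negativity_two_mode(cm):
    """Logarithmic negativity of a two-mode Gaussian state with 4x4
    covariance matrix cm = [[A, C], [C.T, B]]: E_N = max(0, -ln nu), where
    nu is the smallest symplectic eigenvalue of the partial transpose."""
    A, B, C = cm[:2, :2], cm[2:, 2:], cm[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu = np.sqrt((delta - np.sqrt(delta**2 - 4 * np.linalg.det(cm))) / 2)
    return max(0.0, -np.log(nu))

# Two-mode squeezed vacuum with squeezing r: E_N = 2r (in nats).
r = 1.0
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
cm = np.block([[ch * np.eye(2), sh * np.diag([1, -1])],
               [sh * np.diag([1, -1]), ch * np.eye(2)]])
en = log_negativity_two_mode(cm)
```

The paper's result is that for (partially) symmetric multimode states the multipartite entanglement reduces to exactly this two-mode computation on equivalent states.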

  5. Time reversibility of intracranial human EEG recordings in mesial temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    van der Heyden, M. J.; Diks, C.; Pijn, J. P. M.; Velis, D. N.

    1996-02-01

    Intracranial electroencephalograms from patients suffering from mesial temporal lobe epilepsy were tested for time reversibility. If the recorded time series is irreversible, the input of the recording system cannot be a realisation of a linear Gaussian random process. We confirmed experimentally that the measurement equipment did not introduce irreversibility in the recorded output when the input was a realisation of a linear Gaussian random process. In general, the non-seizure recordings are reversible, whereas the seizure recordings are irreversible. These results suggest that time reversibility is a useful property for the characterisation of human intracranial EEG recordings in mesial temporal lobe epilepsy.
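
    A minimal irreversibility statistic of the kind this test relies on is the skewness of lagged differences, which vanishes in expectation for any time-reversible series, and hence for realisations of a linear Gaussian random process. This is a generic sketch on synthetic data, not the authors' EEG test:

```python
import numpy as np

rng = np.random.default_rng(1)

def asymmetry(x, lag=1):
    """Skewness of lagged differences: near zero for a time-reversible
    series (e.g. a linear Gaussian process), nonzero for irreversible ones."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

n = 200_000
# Linear Gaussian AR(1) process: reversible, statistic near 0.
g = np.empty(n)
g[0] = 0.0
eps = rng.standard_normal(n)
for t in range(1, n):
    g[t] = 0.8 * g[t - 1] + eps[t]

# Irreversible series: slow rise, fast fall (sawtooth plus noise).
s = (np.arange(n) % 50) / 50.0 + 0.01 * rng.standard_normal(n)
```

The sawtooth's asymmetric rise/fall is picked up strongly by the statistic, while the AR(1) realisation is indistinguishable from reversible, mirroring the seizure vs. non-seizure contrast reported above.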

  6. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  7. Mathematic model analysis of Gaussian beam propagation through an arbitrary thickness random phase screen.

    PubMed

    Tian, Yuzhen; Guo, Jin; Wang, Rui; Wang, Tingfeng

    2011-09-12

    In order to research the statistical properties of Gaussian beam propagation through an arbitrary thickness random phase screen for adaptive optics and laser communication application in the laboratory, we establish mathematic models of statistical quantities, which are based on the Rytov method and the thin phase screen model, involved in the propagation process. And the analytic results are developed for an arbitrary thickness phase screen based on the Kolmogorov power spectrum. The comparison between the arbitrary thickness phase screen and the thin phase screen shows that it is more suitable for our results to describe the generalized case, especially the scintillation index.

  8. Cramer-Rao Bound for Gaussian Random Processes and Applications to Radar Processing of Atmospheric Signals

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod

    1993-01-01

    Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRB's are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high SNR limits for CRB's are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal to noise ratio and moderate spectral width.
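
    The exact CRB computation for a zero-mean Gaussian random process rests on the standard Fisher-information formula F_ij = (1/2) tr(C⁻¹ ∂C/∂θ_i C⁻¹ ∂C/∂θ_j). A generic numerical sketch, checked on the textbook case of estimating the power of white noise:

```python
import numpy as np

def gaussian_process_crb(cov, dcov):
    """CRB for the parameters of a zero-mean Gaussian random process:
    F_ij = 0.5 * tr(C^-1 dC/dtheta_i C^-1 dC/dtheta_j); returns F^-1."""
    Cinv = np.linalg.inv(cov)
    k = len(dcov)
    F = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            F[i, j] = 0.5 * np.trace(Cinv @ dcov[i] @ Cinv @ dcov[j])
    return np.linalg.inv(F)

# Toy check: N i.i.d. samples with unknown power P -> var(P_hat) >= 2P^2/N.
N, P = 16, 3.0
C = P * np.eye(N)
dC = [np.eye(N)]          # derivative of C with respect to P
crb = gaussian_process_crb(C, dC)
```

For Doppler signals the same formula applies with C built from the Gaussian spectral model, parameterized by mean frequency, signal power, and spectral width.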

  9. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
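
    The first-order grid point neighborhood structure used here corresponds to a GMRF precision matrix on a lattice, with each grid cell conditionally coupled to its horizontal and vertical neighbors. A small generic sketch of that structure (not the paper's full test statistic):

```python
import numpy as np

def grid_gmrf_precision(nr, nc, tau):
    """Precision matrix of a GMRF on an nr x nc grid with a first-order
    neighborhood: Q[i, i] = tau * (number of neighbors) and
    Q[i, j] = -tau for horizontally or vertically adjacent cells."""
    n = nr * nc
    Q = np.zeros((n, n))
    for r in range(nr):
        for c in range(nc):
            i = r * nc + c
            for dr, dc in ((1, 0), (0, 1)):      # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < nr and cc < nc:
                    j = rr * nc + cc
                    Q[i, i] += tau
                    Q[j, j] += tau
                    Q[i, j] -= tau
                    Q[j, i] -= tau
    return Q

Q = grid_gmrf_precision(3, 3, tau=1.0)
```

The sparsity of Q is what makes evaluating the Gaussian likelihood of gridded fields tractable compared with a dense spatial covariance.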

  10. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  11. A novel approach to assess the treatment response using Gaussian random field in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Mengdie; Guo, Ning; Hu, Guangshu

    2016-02-15

    Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing early treatment response, especially before any anatomically apparent changes after treatment, becomes an urgent need in the clinic. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random field (GRF) to provide a statistically more meaningful scale to evaluate therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on the approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero mean unit variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128² pixels) with lesions of various diameters (2–6 mm), multiple tumor-to-background contrasts (3–10), and different changes in intensity (6.25%–30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs.
A formula for estimating the false positive rate (FPR) was developed for the proposed therapy response assessment, utilizing a local-average method based on the random field. The accuracy of the estimation was validated in terms of Euler distance and correlation coefficient. Results: It is shown that the performance of therapy response assessment is significantly improved by the introduction of variance, with a higher area under the curve (97.3%) than SUVmean (91.4%) and SUVmax (82.0%). In addition, the FPR estimation serves as a good prediction for the specificity of the proposed method, consistent with the simulation outcome, with a correlation coefficient of ∼1. Conclusions: In this work, the authors developed a method to evaluate therapy response from PET images, which were modeled as a Gaussian random field. The digital phantom simulations demonstrated that the proposed method achieved a large reduction in statistical variability through incorporating knowledge of the variance of the original Gaussian random field. The proposed method has the potential to enable prediction of early treatment response and shows promise for application to clinical practice. In future work, the authors will report on the robustness of the estimation theory for application to clinical practice of therapy response evaluation, which pertains to binary discrimination tasks at a fixed location in the image, such as detection of small and weak lesions.
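
    The normalization step described in the Methods can be sketched as follows: under the null hypothesis of no response, standardizing the post-therapy image by the modeled mean and per-pixel noise standard deviation yields an approximately zero-mean, unit-variance Gaussian random field. The noise model here is a hypothetical stand-in for the Fisher-information-based variance estimate used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical noise model: each pixel of the reconstructed image is
# Gaussian around the true uptake mu with a known standard deviation sigma
# (the paper derives sigma per pixel from the Fisher information matrix).
shape, mu, sigma = (128, 128), 10.0, 0.5
post_null = mu + sigma * rng.standard_normal(shape)  # no therapy response

# Under the null hypothesis, the normalized image is approximately a
# zero-mean, unit-variance Gaussian random field.
z = (post_null - mu) / sigma
```

Thresholding such a z-map, rather than raw SUV changes, is what gives the method its statistically meaningful scale.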

  12. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    PubMed

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
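
    The surrogate construction rests on the classic phase-randomization idea: keep the Fourier magnitudes (hence the power spectrum) and randomize the phases, which destroys higher-order correlations. A one-dimensional sketch of that basic step (the paper's version is scale-dependent and on the sphere):

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_surrogate(x):
    """Surrogate preserving the power spectrum of x while fully randomizing
    the Fourier phases, destroying higher-order correlations."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                  # DC bin must stay real
    if len(x) % 2 == 0:
        phases[-1] = 0.0             # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

x = rng.standard_normal(1024) ** 3   # strongly non-Gaussian series
s = phase_surrogate(x)
```

Any statistic that differs between the data and an ensemble of such surrogates then signals non-Gaussianity beyond the power spectrum.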

  13. Subharmonic response of a single-degree-of-freedom nonlinear vibro-impact system to a narrow-band random excitation.

    PubMed

    Haiwu, Rong; Wang, Xiangdong; Xu, Wei; Fang, Tong

    2009-08-01

    The subharmonic response of a single-degree-of-freedom nonlinear vibro-impact oscillator with a one-sided barrier to narrow-band random excitation is investigated. The narrow-band random excitation used here is a filtered Gaussian white noise. The analysis is based on a special Zhuravlev transformation, which reduces the system to one without impacts, or velocity jumps, thereby permitting the application of asymptotic averaging over the "fast" variables. The averaged stochastic equations are solved exactly by the method of moments for the mean-square response amplitude in the case of a linear system with zero offset. A perturbation-based moment closure scheme is proposed, and the formula for the mean-square amplitude is obtained approximately for the case of a linear system with nonzero offset. The perturbation-based moment closure scheme is used once again to obtain the algebraic equation for the mean-square response amplitude in the nonlinear case. The effects of damping, detuning, nonlinear intensity, bandwidth, and magnitudes of random excitations are analyzed. The theoretical analyses are verified by numerical results. Theoretical analyses and numerical simulations show that the peak amplitudes may be strongly reduced at large detunings or large nonlinear intensity.

  14. Different operational meanings of continuous variable Gaussian entanglement criteria and Bell inequalities

    NASA Astrophysics Data System (ADS)

    Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.

    2014-07-01

    Entanglement, one of the most intriguing aspects of quantum mechanics, manifests itself in different features of quantum states. For this reason, different criteria can be used for verifying entanglement. In this paper we review some of the entanglement criteria cast for continuous variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states. We also present the experimental measurement of a particular realization of the Bell operator over continuous variable entangled states produced by a sub-threshold type-II optical parametric oscillator (OPO).

  15. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction

    NASA Astrophysics Data System (ADS)

    Leverrier, Anthony

    2017-05-01

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_n as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  16. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction.

    PubMed

    Leverrier, Anthony

    2017-05-19

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_{n} as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  17. Characterization of collective Gaussian attacks and security of coherent-state quantum cryptography.

    PubMed

    Pirandola, Stefano; Braunstein, Samuel L; Lloyd, Seth

    2008-11-14

    We provide a simple description of the most general collective Gaussian attack in continuous-variable quantum cryptography. In the scenario of such general attacks, we analyze the asymptotic secret-key rates which are achievable with coherent states, joint measurements of the quadratures and one-way classical communication.

  18. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
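The CLT argument above is easy to check numerically. The sketch below (with illustrative factor means and uniform spreads, not the paper's values) forms the product of seven independent positive random variables and verifies that its logarithm is close to Gaussian, i.e., that N is approximately lognormal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Seven positive factors, each uniform around its own mean value
# (illustrative numbers only, not the paper's); N is their product.
means = np.array([3.5e11, 0.5, 2.0, 0.3, 0.2, 0.1, 1e-7])
half_widths = 0.5 * means  # uniform on [0.5*mean, 1.5*mean]
factors = rng.uniform(means - half_widths, means + half_widths,
                      size=(100_000, 7))
N = factors.prod(axis=1)

# log N is a sum of independent terms, so by the CLT it should be
# close to Gaussian; equivalently, N is approximately lognormal.
logN = np.log(N)
skew = np.mean(((logN - logN.mean()) / logN.std()) ** 3)
print(f"skewness of log N: {skew:.3f}")  # near 0 for a Gaussian
```

With seven factors the residual skewness is already small; adding more factors (the "Data Enrichment Principle") only improves the Gaussian approximation of log N.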

  19. Entanglement and purity of two-mode Gaussian states in noisy channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serafini, Alessio; Illuminati, Fabrizio; De Siena, Silvio

    2004-02-01

    We study the evolution of purity, entanglement, and total correlations of general two-mode continuous variable Gaussian states in arbitrary uncorrelated Gaussian environments. The time evolution of purity, von Neumann entropy, logarithmic negativity, and mutual information is analyzed for a wide range of initial conditions. In general, we find that a local squeezing of the bath leads to a faster degradation of purity and entanglement, while it can help to preserve the mutual information between the modes.

  20. Transfer of non-Gaussian quantum states of mechanical oscillator to light

    NASA Astrophysics Data System (ADS)

    Filip, Radim; Rakhubovsky, Andrey A.

    2015-11-01

    Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.

  1. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.
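A generative sketch may make the error model concrete. All parameter values below are hypothetical: each of three methods reports a polynomial (here linear) bias in the true value plus a random error, with the errors of methods 0 and 1 drawn jointly so that they are correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three methods measure 500 unknown true values t.
t = rng.uniform(0.0, 10.0, 500)

# Linear bias per method: y_m = b_m0 + b_m1 * t + e_m.
bias = np.array([[0.2, 1.05],
                 [-0.5, 0.95],
                 [0.1, 1.00]])  # [offset, slope] for each method

# Random errors drawn jointly: methods 0 and 1 rely on a similar
# principle, so their errors are correlated; method 2 is independent.
cov = np.array([[1.0, 0.7, 0.0],
                [0.7, 1.0, 0.0],
                [0.0, 0.0, 0.5]])
e = rng.multivariate_normal(np.zeros(3), cov, size=500)

y = bias[:, 0] + bias[:, 1] * t[:, None] + e  # measurements, shape (500, 3)
```

The framework's task is the inverse problem: inferring `bias`, `cov`, and the true values `t` from `y` alone via MCMC, without a reference.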

  2. Integral momenta of vortex Bessel-Gaussian beams in turbulent atmosphere.

    PubMed

    Lukin, Igor P

    2016-04-20

The orbital angular momentum of vortex Bessel-Gaussian beams propagating in turbulent atmosphere is studied theoretically. The field of an optical beam is determined through the solution of the paraxial wave equation for a randomly inhomogeneous medium with fluctuations of the refractive index of the turbulent atmosphere. Peculiarities in the behavior of the total power of the vortex Bessel-Gaussian beam at the receiver (or transmitter) are examined. The dependence of the total power of the vortex Bessel-Gaussian beam on optical beam parameters, namely, the transverse wave number of optical radiation, amplitude factor radius, and, especially, topological charge of the optical beam, is analyzed in detail. It turns out that the mean value of the orbital angular momentum of the vortex Bessel-Gaussian beam remains constant during propagation in the turbulent atmosphere. It is shown that the variance of fluctuations of the orbital angular momentum of the vortex Bessel-Gaussian beam propagating in turbulent atmosphere, calculated with the "mean-intensity" approximation, is identically equal to zero. Thus, it can be stated with confidence that the variance of fluctuations of the orbital angular momentum of the vortex Bessel-Gaussian beam in turbulent atmosphere is not large.

  3. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  4. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggest that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
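For a Gaussian density the Renyi entropy has a closed form, which makes the two special cases above explicit. A small sketch (in nats, zero-mean case; the numerical check is an illustration, not the paper's estimator):

```python
import numpy as np

def renyi_entropy_gaussian(q, sigma):
    """Renyi entropy of order q of N(0, sigma^2), in nats."""
    if q == 1.0:  # Shannon limit: 0.5 * ln(2*pi*e*sigma^2)
        return 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    return 0.5 * np.log(2 * np.pi * sigma**2) + np.log(q) / (2 * (q - 1))

sigma = 1.3
h1 = renyi_entropy_gaussian(1.0, sigma)  # differential (Shannon) entropy
h2 = renyi_entropy_gaussian(2.0, sigma)  # quadratic entropy = 0.5*ln(4*pi*sigma^2)

# Monte Carlo check of the quadratic entropy: H_2 = -ln E[p(X)].
rng = np.random.default_rng(1)
x = rng.normal(0.0, sigma, 200_000)
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
h2_mc = -np.log(p.mean())
```

Since the Gaussian maximizes entropy at fixed variance, the gap between an estimated entropy rate and these Gaussian values can serve as a Gaussianity measure.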

  5. Robustness analysis of superpixel algorithms to image blur, additive Gaussian noise, and impulse noise

    NASA Astrophysics Data System (ADS)

    Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming

    2017-11-01

Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures.
In conclusion, to solve real-world problems effectively, more robust superpixel algorithms must be developed.

  6. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  7. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
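The recursive idea can be illustrated with a deliberately simplified two-level sketch: fit a GP to many cheap low-fidelity samples, then fit a second GP to the discrepancy between the few high-fidelity samples and the level-one prediction. The kernel, the toy functions, and the unit inter-level scaling are all assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def gp_predict(X, y, Xs, length=0.3, sig=1.0, jitter=1e-6):
    """Posterior mean of a zero-mean GP with a squared-exponential kernel."""
    def k(a, b):
        return sig**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    alpha = np.linalg.solve(k(X, X) + jitter * np.eye(len(X)), y)
    return k(Xs, X) @ alpha

# Toy fidelity hierarchy (assumed functions): f_lo is a cheap, biased
# surrogate of the expensive f_hi.
f_hi = lambda x: np.sin(8 * x) * x
f_lo = lambda x: 0.8 * np.sin(8 * x) * x + 0.3

X_lo = np.linspace(0.0, 1.0, 25)  # many cheap samples
X_hi = np.linspace(0.0, 1.0, 6)   # few expensive samples
Xs = np.linspace(0.0, 1.0, 200)   # prediction grid

# Level 1: GP on the low-fidelity data.
mu_lo_hi = gp_predict(X_lo, f_lo(X_lo), X_hi)
mu_lo_s = gp_predict(X_lo, f_lo(X_lo), Xs)

# Level 2: GP on the inter-fidelity discrepancy (scale factor assumed 1).
mu_hi = mu_lo_s + gp_predict(X_hi, f_hi(X_hi) - mu_lo_hi, Xs)

err_multi = np.max(np.abs(mu_hi - f_hi(Xs)))
err_single = np.max(np.abs(gp_predict(X_hi, f_hi(X_hi), Xs) - f_hi(Xs)))
```

Because the discrepancy has a much smaller amplitude than the high-fidelity function itself, the six expensive samples go further at level 2 than they would alone.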

  8. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
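The Fourier synthesis approach can be sketched as follows: prescribe a flat amplitude spectrum, draw uniformly random phases, and sum the harmonics; by the central limit theorem the resulting sequence is approximately Gaussian, and the flat spectrum makes it white. The normalization and harmonic count below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 601                    # sequence length, as in the study
m = n // 2                 # number of harmonics (an illustrative choice)

# Flat amplitude spectrum, uniformly random phases.
k = np.arange(1, m + 1)[:, None]          # harmonic index
t = np.arange(n)[None, :] / n             # normalized time
phases = rng.uniform(0.0, 2.0 * np.pi, (m, 1))
x = np.sqrt(2.0 / m) * np.sum(np.cos(2 * np.pi * k * t + phases), axis=0)

print(f"mean = {x.mean():.6f}, variance = {x.var():.6f}")
```

For this construction the sample mean is exactly zero and the sample variance exactly one (up to rounding), since each harmonic sums to zero over the 601 points; Gaussianity and whiteness hold only approximately, which is the prescribed-spectrum consistency the paper exploits.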

  9. An Effective Post-Filtering Framework for 3-D PET Image Denoising Based on Noise and Sensitivity Characteristics

    NASA Astrophysics Data System (ADS)

    Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom

    2015-02-01

    Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
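The mean-variance conversion step can be illustrated with a generalized Anscombe-type variance-stabilizing transform: if the noise variance grows linearly with the voxel mean, var = a*mean + b, then f(x) = (2/a)*sqrt(a*x + b) maps the data to approximately unit-variance, approximately Gaussian noise. The transform and parameter values below are a standard textbook sketch, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy voxel data obeying the linear mean-variance law: var = a*mean + b
# (a and b are assumed values; in practice they are estimated from data).
a, b = 2.0, 1.0
mean = rng.uniform(20.0, 200.0, 100_000)
noisy = mean + rng.normal(0.0, np.sqrt(a * mean + b))

# Variance-stabilizing transform: after f, the noise strength is
# approximately constant (unit variance) regardless of the voxel mean.
f = lambda x: (2.0 / a) * np.sqrt(np.maximum(a * x + b, 0.0))
residual = f(noisy) - f(mean)
print(f"stabilized noise variance: {residual.var():.3f}")
```

After such a stabilization, a denoiser designed for homogeneous Gaussian noise can be applied and the result mapped back through the inverse transform.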

  10. Propagation properties of cylindrical sinc Gaussian beam

    NASA Astrophysics Data System (ADS)

    Eyyuboğlu, Halil T.; Bayraktar, Mert

    2016-09-01

    We investigate the propagation properties of cylindrical sinc Gaussian beam in turbulent atmosphere. Since an analytic solution is hardly derivable, the study is carried out with the aid of random phase screens. Evolutions of the beam intensity profile, beam size and kurtosis parameter are analysed. It is found that on the source plane, cylindrical sinc Gaussian beam has a dark hollow appearance, where the side lobes also start to emerge with increase in width parameter and Gaussian source size. During propagation, beams with small width and Gaussian source size exhibit off-axis behaviour, losing the dark hollow shape, accumulating the intensity asymmetrically on one side, whereas those with large width and Gaussian source size retain dark hollow appearance even at long propagation distances. It is seen that the beams with large widths expand more in beam size than the ones with small widths. The structure constant values chosen do not seem to alter this situation. The kurtosis parameters of the beams having small widths are seen to be larger than the ones with the small widths. Again the choice of the structure constant does not change this trend.

  11. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise that consists of large isolated events with known or unknown distribution, also needs to be explicitly taken into account. However, conventional sparse domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude and non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model, in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. The random and erratic noise is distinguished by using a data-adaptive parameter in the presented method, where random noise is described by mean square, while the erratic noise is downweighted through a damped weight. Different from conventional sparse domain thresholding filters, definition of the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Lastly, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate the erratic noise without damaging useful signal when compared with conventional denoising approaches based on the LS criterion.
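The core iteration can be sketched generically: minimizing a Huber misfit reduces, via iteratively reweighted least squares, to a sequence of ordinary LS problems in which residuals larger than a threshold are damped. This toy regression shows the mechanism, not the seismic sparsity-promoting filter itself:

```python
import numpy as np

def huber_irls(A, y, delta=1.0, n_iter=30):
    """Minimize the Huber misfit of y - A @ x by iteratively reweighted LS.

    Residuals with |r| <= delta keep unit weight (Gaussian regime);
    larger residuals get the damped weight delta/|r| (erratic regime).
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # plain LS starting point
    for _ in range(n_iter):
        r = y - A @ x
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

# Toy data: Gaussian noise plus a few huge erratic spikes.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.05 * rng.normal(size=200)
y[::25] += 50.0  # erratic, non-Gaussian outliers

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_huber = huber_irls(A, y, delta=0.1)
```

The plain LS solution is pulled far off by the spikes, while the reweighted iteration effectively ignores them, which is the advantage the abstract describes.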

  13. Continuous-variable quantum key distribution protocols over noisy channels.

    PubMed

    García-Patrón, Raúl; Cerf, Nicolas J

    2009-04-03

    A continuous-variable quantum key distribution protocol based on squeezed states and heterodyne detection is introduced and shown to attain higher secret key rates over a noisy line than any other one-way Gaussian protocol. This increased resistance to channel noise can be understood as resulting from purposely adding noise to the signal that is converted into the secret key. This notion of noise-enhanced tolerance to noise also provides a better physical insight into the poorly understood discrepancies between the previously defined families of Gaussian protocols.

  14. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
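The κ-deformed exponential is exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ), which recovers exp(x) as κ → 0 but decays only as a power law for large |x|; that slower decay is what produces the fat tails. A minimal numerical check:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    x = np.asarray(x, dtype=float)
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.linspace(-5.0, 5.0, 11)

# kappa -> 0 recovers the ordinary exponential ...
err = np.max(np.abs(exp_kappa(x, 1e-6) - np.exp(x)))

# ... while for kappa > 0 the tail decays as a power law rather than
# exponentially, assigning far more probability to extreme events.
tail_ratio = exp_kappa(-50.0, 0.5) / np.exp(-50.0)
```

Deep in the tail the deformed exponential exceeds the ordinary one by many orders of magnitude, which is precisely the extra weight given to extreme price changes.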

  15. Quantifying non-Markovianity of continuous-variable Gaussian dynamical maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasile, Ruggero; Maniscalco, Sabrina; Paris, Matteo G. A.

    2011-11-15

We introduce a non-Markovianity measure for continuous-variable open quantum systems based on the idea put forward in H.-P. Breuer et al. [Phys. Rev. Lett. 103, 210401 (2009)], that is, by quantifying the flow of information from the environment back to the open system. Instead of the trace distance we use here the fidelity to assess distinguishability of quantum states. We employ our measure to evaluate non-Markovianity of two paradigmatic Gaussian channels: the purely damping channel and the quantum Brownian motion channel with Ohmic environment. We consider different classes of Gaussian states and look for pairs of states maximizing the backflow of information. For coherent states we find simple analytical solutions, whereas for squeezed states we provide both exact numerical and approximate analytical solutions in the weak coupling limit.

  16. Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2018-06-01

Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes by feeding q-Gaussian-distributed inputs into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.

  17. Simulation of foulant bioparticle topography based on Gaussian process and its implications for interface behavior research

    NASA Astrophysics Data System (ADS)

    Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang

    2018-03-01

Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A review of the literature indicated a lack of effective methods for simulating randomly rough bioparticle surfaces. In this study, a new method which combines Gaussian distribution, Fourier transform, spectrum method and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and root-mean-square roughness (σ) parameters. The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The simplicity and flexibility of the new method point towards potential applications in interface behavior and membrane fouling research.
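A common spectrum-method recipe of the kind described, with Gaussian white noise shaped in the Fourier domain and all parameter values assumed for illustration, generates a rough surface with prescribed rms height σ and correlation length l:

```python
import numpy as np

def rough_surface(n=256, dx=1.0, sigma=1.0, corr_len=10.0, seed=0):
    """n x n Gaussian random surface with rms height sigma and Gaussian
    correlation function exp(-r^2 / corr_len^2), via the spectrum method."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))          # Gaussian white noise
    k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
    kx, ky = np.meshgrid(k, k)
    # Filter = square root of the Gaussian power spectrum.
    filt = np.exp(-(kx**2 + ky**2) * corr_len**2 / 8.0)
    z = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    return z * (sigma / z.std())             # enforce the requested rms

z = rough_surface()
```

The height autocorrelation of the result drops to roughly 1/e at a lag of one correlation length, so l and σ directly control the simulated topography, as in the abstract.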

  18. Radiation Transport in Random Media With Large Fluctuations

    NASA Astrophysics Data System (ADS)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space and generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
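A discrete Karhunen-Loève sketch of the cross-section model: eigendecompose the covariance matrix of the underlying Gaussian process, build truncated realizations from standard-normal coefficients, then exponentiate to obtain lognormal cross sections. All parameter values here are illustrative, not the paper's:

```python
import numpy as np

# Gaussian process g(x) on [0, L] with exponential covariance
# (illustrative length scale and variance).
n, L, corr_len, var_g = 200, 10.0, 2.0, 0.25
x = np.linspace(0.0, L, n)
C = var_g * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete Karhunen-Loeve decomposition = eigendecomposition of C.
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals = np.clip(evals[order], 0.0, None)
evecs = evecs[:, order]

# Truncated expansion with m modes and standard-normal coefficients.
m, n_real = 20, 5000
rng = np.random.default_rng(0)
xi = rng.normal(size=(m, n_real))
g = (evecs[:, :m] * np.sqrt(evals[:m])) @ xi   # shape (n, n_real)

# Lognormal cross-section realizations (memory-less transformation).
sigma_t = np.exp(g)
```

Each column of `sigma_t` is one positive cross-section realization on which a transport solve could be run; the few retained modes capture most of the process variance, which is what makes the reduced-order treatment cheap.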

  19. Capturing the dynamics of response variability in the brain in ADHD.

    PubMed

    van Belle, Janna; van Raalten, Tamar; Bos, Dienke J; Zandbelt, Bram B; Oranje, Bob; Durston, Sarah

    2015-01-01

ADHD is characterized by increased intra-individual variability in response times during the performance of cognitive tasks. However, little is known about developmental changes in intra-individual variability, and how these changes relate to cognitive performance. Twenty subjects with ADHD aged 7-24 years and 20 age-matched, typically developing controls underwent an fMRI scan while they performed a go-no-go task. We fit an ex-Gaussian distribution to the response distribution to objectively separate extremely slow responses, related to lapses of attention, from variability on fast responses. We assessed developmental changes in these intra-individual variability measures, and investigated their relation to no-go performance. Results show that the ex-Gaussian measures were better predictors of no-go performance than traditional measures of reaction time. Furthermore, we found between-group differences in the change in ex-Gaussian parameters with age, and their relation to task performance: subjects with ADHD showed age-related decreases in their variability on fast responses (sigma), but not in lapses of attention (tau), whereas control subjects showed a decrease in both measures of variability. For control subjects, but not subjects with ADHD, this age-related reduction in variability was predictive of task performance. This group difference was reflected in neural activation: for typically developing subjects, the age-related decrease in intra-individual variability on fast responses (sigma) predicted activity in the dorsal anterior cingulate gyrus (dACG), whereas for subjects with ADHD, activity in this region was related to improved no-go performance with age, but not to intra-individual variability. These data show that using more sophisticated measures of intra-individual variability makes it possible to capture dynamics of task performance, and associated neural changes, that more traditional measures cannot.
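The ex-Gaussian model is simply a Gaussian plus an independent exponential component, so sigma captures variability of fast responses and tau the slow-response tail. A sketch with hypothetical reaction-time parameters, using a method-of-moments fit rather than the maximum-likelihood fits typically used in practice:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical reaction-time parameters (ms): mu, sigma for the Gaussian
# part (fast-response variability), tau for the exponential tail (lapses).
mu, sigma, tau = 400.0, 40.0, 120.0
rt = rng.normal(mu, sigma, 50_000) + rng.exponential(tau, 50_000)

# Method-of-moments fit: mean = mu + tau, var = sigma^2 + tau^2,
# skewness = 2 * tau^3 / var^(3/2).
m, v = rt.mean(), rt.var()
skew = np.mean(((rt - m) / np.sqrt(v)) ** 3)
tau_hat = (0.5 * skew) ** (1.0 / 3.0) * np.sqrt(v)
sigma_hat = np.sqrt(max(v - tau_hat**2, 0.0))
mu_hat = m - tau_hat
```

Because tau enters only through the positive skew, the decomposition separates the slow-response tail from the symmetric fast-response variability, which is what lets sigma and tau be tracked independently across development.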

  1. Stable Lévy motion with inverse Gaussian subordinator

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Gajda, J.

    2017-09-01

    In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well-known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with an inverse Gaussian process. The NIG process has found many interesting applications, especially in the description of financial data. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional-order moments, and asymptotic tail behavior. We show the connection of the process with continuous time random walks. Further, the governing fractional partial differential equation for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of the sample mean square displacement, the main tool in the detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by the inverse Gaussian subordinator, we propose a step-by-step procedure for parameter estimation. At the end, we show how the examined process can be useful in modeling financial time series.
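The subordination construction can be sketched numerically. As a simple stand-in for the paper's stable parent process, the snippet below builds the NIG special case the abstract mentions: a drifted Brownian motion evaluated at inverse Gaussian random times. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20000
m, lam = 1.0, 2.0          # subordinator increment mean and shape (illustrative)
theta, sig = 0.1, 0.5      # drift and volatility of the parent Brownian motion

# Inverse Gaussian subordinator increments: IG(mean=m, shape=lam).
# scipy's invgauss(mu, scale) has mean mu*scale and shape scale,
# so we pass mu = m/lam and scale = lam.
tau = stats.invgauss.rvs(m / lam, scale=lam, size=n, random_state=rng)

# Subordinated increments: the parent process run for the random time tau.
dx = theta * tau + sig * np.sqrt(tau) * rng.standard_normal(n)
x = np.cumsum(dx)

print(tau.mean(), dx.mean())
```

Replacing the Gaussian parent increments with stable ones would give the stable Lévy motion studied in the paper; the time-change mechanics are identical.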

  2. Non-Markovian dynamics of single- and two-qubit systems interacting with Gaussian and non-Gaussian fluctuating transverse environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Matteo A. C., E-mail: matteo.rossi@unimi.it; Paris, Matteo G. A., E-mail: matteo.paris@fisica.unimi.it; CNISM, Unità Milano Statale, I-20133 Milano

    2016-01-14

    We address the interaction of single- and two-qubit systems with an external transverse fluctuating field and analyze in detail the dynamical decoherence induced by Gaussian noise and random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e., for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state dependent) regions of the parameter space where the two noises lead to very similar dynamics. We show that the effects of RTN and of Gaussian noise are different, i.e., the spectrum alone is not enough to summarize the noise effects, but the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other one.

  3. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distribution of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
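The second change of variables mentioned above (spherical coordinates on the unit sphere) can be sketched in a few lines: the filter updates unconstrained angles, while the implied weights are automatically positive and sum to one. The angle values below are arbitrary, chosen only to show the mapping.

```python
import numpy as np

def sphere_to_simplex(phi):
    """Map k-1 angles to k sum-to-one weights via squared spherical
    coordinates: p1 = cos^2(phi1), p2 = sin^2(phi1) cos^2(phi2), ...
    The identity cos^2 + sin^2 = 1 guarantees the weights sum to one."""
    p = []
    s = 1.0
    for a in phi:
        p.append(s * np.cos(a) ** 2)
        s *= np.sin(a) ** 2
    p.append(s)
    return np.array(p)

# Three grazing-preference weights from two unconstrained angles.
w = sphere_to_simplex([0.7, 1.1])
print(w, w.sum())
```

An ensemble filter can then perturb and update the angles freely; every ensemble member maps back to a valid preference vector.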

  4. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with potential effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without family history of ADHD) and (b) represent a phenotypic correlate of previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. To test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.

  5. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, ∼a⁻¹, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  6. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression for the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
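The core of the instrumental-variable argument can be shown with a toy simulation: when a covariate is correlated with the unobserved effect, ordinary least squares is biased, while an instrument that shifts the covariate but not the effect recovers the true coefficient. The coefficients below are invented for illustration; this is not the paper's model or its Chilean data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000

# Endogeneity: covariate x is correlated with the unobserved effect u,
# so regressing y on x confounds beta with the effect of u.
z = rng.standard_normal(n)                 # instrument: moves x, independent of u
u = rng.standard_normal(n)                 # unobserved random effect
x = 0.8 * z + 0.6 * u + 0.2 * rng.standard_normal(n)
beta = 1.5                                 # true coefficient (assumed)
y = beta * x + u + 0.1 * rng.standard_normal(n)

# Simple-regression OLS slope vs. the IV (Wald) estimator.
beta_ols = np.cov(x, y)[0, 1] / np.var(x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(beta_ols, beta_iv)
```

Because cov(z, u) = 0, the IV ratio isolates beta, whereas the OLS slope absorbs the x-u correlation and overshoots.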

  7. A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.

    PubMed

    Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José

    2016-08-01

    Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named the similarity-based GRF, is motivated by disease mapping data in situations where the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and where similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.

  8. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    NASA Astrophysics Data System (ADS)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). 
We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
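The binary (Ising-type) level of the scheme above can be sketched as a conditional Metropolis simulation on a grid: spins interact with nearest neighbours, while grid cells holding sample values are pinned so the simulation respects them. Grid size, coupling, and the conditioning points are all invented for illustration; the authors' cost-function matching of sample statistics is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
J, beta = 1.0, 0.8                        # coupling and inverse temperature (illustrative)
spins = rng.choice([-1, 1], size=(n, n))

# "Sample" locations whose discretized values the simulation must respect.
obs = {(5, 5): 1, (20, 10): -1, (15, 25): 1}
for (i, j), v in obs.items():
    spins[i, j] = v

def local_field(s, i, j):
    # Sum of the four nearest neighbours, periodic boundaries.
    return s[(i - 1) % n, j] + s[(i + 1) % n, j] + s[i, (j - 1) % n] + s[i, (j + 1) % n]

# Metropolis sweeps for H = -J * sum of neighbour products, skipping pinned sites.
for _ in range(20 * n * n):
    i, j = rng.integers(n, size=2)
    if (i, j) in obs:
        continue
    dE = 2 * J * spins[i, j] * local_field(spins, i, j)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1

ok = all(spins[i, j] == v for (i, j), v in obs.items())
print(ok)
```

The multilevel scheme in the record stacks several such binary layers, passing information from coarser to finer thresholds.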

  9. Intra-individual reaction time variability based on ex-Gaussian distribution as a potential endophenotype for attention-deficit/hyperactivity disorder.

    PubMed

    Lin, H-Y; Hwang-Gu, S-L; Gau, S S-F

    2015-07-01

    Intra-individual variability in reaction time (IIV-RT), defined by standard deviation of RT (RTSD), is considered as an endophenotype for attention-deficit/hyperactivity disorder (ADHD). Ex-Gaussian distributions of RT, rather than RTSD, could better characterize moment-to-moment fluctuations in neuropsychological performance. However, data of response variability based on ex-Gaussian parameters as an endophenotypic candidate for ADHD are lacking. We assessed 411 adolescents with clinically diagnosed ADHD based on the DSM-IV-TR criteria as probands, 138 unaffected siblings, and 138 healthy controls. The output parameters, mu, sigma, and tau, of an ex-Gaussian RT distribution were derived from the Conners' continuous performance test. Multi-level models controlling for sex, age, comorbidity, and use of methylphenidate were applied. Compared with unaffected siblings and controls, ADHD probands had elevated sigma value, omissions, commissions, and mean RT. Unaffected siblings formed an intermediate group in-between probands and controls in terms of tau value and RTSD. There was no between-group difference in mu value. Conforming to a context-dependent nature, unaffected siblings still had an intermediate tau value in-between probands and controls across different interstimulus intervals. Our findings suggest IIV-RT represented by tau may be a potential endophenotype for inquiry into genetic underpinnings of ADHD in the context of heterogeneity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
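The square-root rule can be checked numerically. Under the "must get very close" assumption, a searcher sampling from density q needs on average about 1/(2·eps·q(x)) trials to land within eps of a target at x, so the expected trial count is proportional to the integral of p/q. The grid and target below are illustrative.

```python
import numpy as np

# Target location x ~ p (standard Gaussian), discretized on a grid.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2)
p /= p.sum() * dx

def cost(q):
    q = q / (q.sum() * dx)          # normalize the candidate search density
    return np.sum(p / q) * dx       # proportional to the expected trial count

naive = cost(p)                     # search straight from the target density
opt = cost(np.sqrt(p))              # the square-root rule from the abstract

print(naive, opt)
```

By Cauchy-Schwarz, q ∝ √p minimizes the integral of p/q, and the optimal cost equals (∫√p dx)² ≈ 5.01 here, versus 20 (the domain length) for searching from p itself, which confirms the claim that the √p searcher, a Gaussian with standard deviation √2 times larger, wins.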

  11. The statistical mechanics of relativistic orbits around a massive black hole

    NASA Astrophysics Data System (ADS)

    Bar-Or, Ben; Alexander, Tal

    2014-12-01

    Stars around a massive black hole (MBH) move on nearly fixed Keplerian orbits, in a centrally-dominated potential. The random fluctuations of the discrete stellar background cause small potential perturbations, which accelerate the evolution of orbital angular momentum by resonant relaxation. This drives many phenomena near MBHs, such as extreme mass-ratio gravitational wave inspirals, the warping of accretion disks, and the formation of exotic stellar populations. We present here a formal statistical mechanics framework to analyze such systems, where the background potential is described as a correlated Gaussian noise. We derive the leading order, phase-averaged 3D stochastic Hamiltonian equations of motion, for evolving the orbital elements of a test star, and obtain the effective Fokker-Planck equation for a general correlated Gaussian noise, for evolving the stellar distribution function. We show that the evolution of angular momentum depends critically on the temporal smoothness of the background potential fluctuations. Smooth noise has a maximal variability frequency ν_max. We show that in the presence of such noise, the evolution of the normalized angular momentum j = √(1 − e²) of a relativistic test star, undergoing Schwarzschild (in-plane) general relativistic precession with frequency ν_GR/j², is exponentially suppressed for j < j_b, where ν_GR/j_b² ∼ ν_max, due to the adiabatic invariance of the precession against the slowly varying random background torques. This results in an effective Schwarzschild-precession-induced barrier in angular momentum. When j_b is large enough, this barrier can have significant dynamical implications for processes near the MBH.

  12. Linear Space-Variant Image Restoration of Photon-Limited Images

    DTIC Science & Technology

    1978-03-01

    levels of performance of the wavefront sensor. The parameter represents the residual rms wavefront error (measurement noise plus fitting error)...known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are Gaussian. In the...regime of photon-limited imaging, the noise is non-Gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear

  13. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. 
The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
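The derivative-of-a-GP idea can be illustrated with a toy stand-in: plain GP regression on noisy "sea level" observations followed by numerical differentiation of the posterior mean. This is not the paper's errors-in-variables integrated-GP model (which places the prior on the rate itself and handles temporal uncertainty); the curve, kernel, and hyperparameters are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy observations of a smooth "sea level" curve.
t = np.linspace(0, 10, 60)
y = 0.3 * t + np.sin(t) + 0.05 * rng.standard_normal(t.size)

# Plain GP regression with an RBF kernel (illustrative hyperparameters).
ell, s2, noise = 1.0, 1.0, 0.05**2

def k(a, b):
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = k(t, t) + noise * np.eye(t.size)
alpha = np.linalg.solve(K, y)

# Posterior mean on a dense grid, then its numerical derivative as the rate.
tg = np.linspace(0.5, 9.5, 200)
mean = k(tg, t) @ alpha
rate = np.gradient(mean, tg)

true_rate = 0.3 + np.cos(tg)
print(np.max(np.abs(rate - true_rate)))
```

The integrated-GP formulation in the record achieves the same goal more cleanly, since integrating the rate prior gives closed-form joint distributions for the level and its derivative.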

  14. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
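The truncated-Gaussian-plus-delta density can be sketched for a scalar: all Gaussian mass below the bound becomes a point mass at the bound, so the probability of exactly hitting the constraint (e.g. zero sea-ice concentration) is genuinely nonzero. The mean and variance values are invented for illustration.

```python
import numpy as np
from scipy import stats

# A N(mu, sig^2) state estimate for a quantity constrained to be >= 0.
mu, sig = 0.3, 0.5                          # illustrative values

# Point mass at the truncation: all Gaussian mass below 0 moves to 0,
# so P(x = 0) = Phi(-mu/sig) is strictly positive.
w0 = stats.norm.cdf(0, mu, sig)

# Mean of the mixture: the delta at 0 contributes nothing, the rest is a
# truncated normal on [0, inf) (truncnorm takes standardized bounds).
a = (0 - mu) / sig
mean = (1 - w0) * stats.truncnorm.mean(a, np.inf, loc=mu, scale=sig)

print(w0, mean)
```

Note the mixture mean exceeds mu: clipping the negative tail shifts the expectation upward, which is exactly the effect a constrained analysis step must account for.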

  15. Eigenvalues of Random Matrices with Isotropic Gaussian Noise and the Design of Diffusion Tensor Imaging Experiments.

    PubMed

    Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J

    2017-01-01

    Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
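The eigenvalue repulsion described above is easy to reproduce: perturb a spherically symmetric (degenerate) mean tensor with isotropic symmetric Gaussian noise and look at the gaps between sorted eigenvalues. The matrix size matches the DTI case (m = 3), but the noise level and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def sym_noise(m, eps, size):
    # Isotropic Gaussian noise on symmetric m x m matrices.
    g = rng.standard_normal((size, m, m)) * eps
    return (g + np.swapaxes(g, 1, 2)) / 2

m, eps, n = 3, 0.05, 4000
D_mean = np.eye(m)                               # repeated eigenvalues (spherical tensor)
evals = np.linalg.eigvalsh(D_mean + sym_noise(m, eps, n))

# Repulsion: gaps between neighbouring eigenvalues avoid zero far more
# strongly than independent Gaussian perturbations would.
gaps = np.diff(np.sort(evals, axis=1), axis=1)
print(gaps.mean(), np.mean(gaps < 0.005))
```

The small fraction of near-zero gaps, despite the mean tensor being exactly degenerate, is the repulsion effect; for a mean with well-separated eigenvalues the perturbed eigenvalues would instead be approximately independent Gaussians.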

  17. Quantum key distribution using continuous-variable non-Gaussian states

    NASA Astrophysics Data System (ADS)

    Borelli, L. F. M.; Aguiar, L. S.; Roversi, J. A.; Vidiella-Barranco, A.

    2016-02-01

    In this work, we present a quantum key distribution protocol using continuous-variable non-Gaussian states, homodyne detection and post-selection. The employed signal states are the photon added then subtracted coherent states (PASCS) in which one photon is added and subsequently one photon is subtracted from the field. We analyze the performance of our protocol, compared with a coherent state-based protocol, for two different attacks that could be carried out by the eavesdropper (Eve). We calculate the secret key rate transmission in a lossy line for a superior channel (beam-splitter) attack, and we show that we may increase the secret key generation rate by using the non-Gaussian PASCS rather than coherent states. We also consider the simultaneous quadrature measurement (intercept-resend) attack, and we show that the efficiency of Eve's attack is substantially reduced if PASCS are used as signal states.

  18. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable Planets for Man (1964). The statistical generalization of the original, and by now too simplistic, Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D.
    The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964), and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this statistical Fermi paradox is highly innovative and fruitful for future research.
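The lognormal claim at the heart of the SEH follows from applying the CLT to the logarithm of the product, and it can be checked numerically. The snippet mimics the paper's numerical setup in spirit only: ten independent uniform factors with invented means and spreads, not Dole's actual values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Product of ten independent positive factors, each uniform around its own
# mean. The log of the product is a sum of ten independent terms, so by
# the CLT it should be close to Gaussian, i.e. the product close to lognormal.
n, k = 200000, 10
factors = rng.uniform(0.5, 1.5, size=(n, k))   # illustrative ranges
prod = factors.prod(axis=1)

logs = np.log(prod)
sk, ku = stats.skew(logs), stats.kurtosis(logs)
print(sk, ku)
```

Skewness and excess kurtosis of the log-product are already small with only ten factors, which is why the lognormal approximation works for the habitable-planet count despite each factor being far from Gaussian.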

  19. Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We give a criterion to preclude arbitrage under the risk-neutral pricing measure. As an application, an interest rate derivative with a general payoff functional is priced under this measure.

  20. Gaussian geometric discord in terms of Hellinger distance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suciu, Serban, E-mail: serban.suciu@theory.nipne.ro; Isar, Aurelian

    2015-12-07

    In the framework of the theory of open systems based on completely positive quantum dynamical semigroups, we address the quantification of general non-classical correlations in Gaussian states of continuous variable systems from a geometric perspective. We give a description of the Gaussian geometric discord by using the Hellinger distance as a measure for quantum correlations between two non-interacting non-resonant bosonic modes embedded in a thermal environment. We evaluate the Gaussian geometric discord by taking two-mode squeezed thermal states as initial states of the system and show that it has finite values between 0 and 1 and that it decays asymptotically to zero in time under the effect of the thermal bath.

  1. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  2. Working memory and intraindividual variability as neurocognitive indicators in ADHD: examining competing model predictions.

    PubMed

    Kofler, Michael J; Alderson, R Matt; Raiker, Joseph S; Bolden, Jennifer; Sarver, Dustin E; Rapport, Mark D

    2014-05-01

    The current study examined competing predictions of the default mode, cognitive neuroenergetic, and functional working memory models of attention-deficit/hyperactivity disorder (ADHD) regarding the relation between neurocognitive impairments in working memory and intraindividual variability. Twenty-two children with ADHD and 15 typically developing children were assessed on multiple tasks measuring intraindividual reaction time (RT) variability (ex-Gaussian: tau, sigma) and central executive (CE) working memory. Latent factor scores based on multiple, counterbalanced tasks were created for each construct of interest (CE, tau, sigma) to reflect reliable variance associated with each construct and remove task-specific, test-retest, and random error. Bias-corrected, bootstrapped mediation analyses revealed that CE working memory accounted for 88% to 100% of ADHD-related RT variability across models, and between-group differences in RT variability were no longer detectable after accounting for the mediating role of CE working memory. In contrast, RT variability accounted for 10% to 29% of between-group differences in CE working memory, and large magnitude CE working memory deficits remained after accounting for this partial mediation. Statistical comparison of effect size estimates across models suggests directionality of effects, such that the mediation effects of CE working memory on RT variability were significantly greater than the mediation effects of RT variability on CE working memory. The current findings question the role of RT variability as a primary neurocognitive indicator in ADHD and suggest that ADHD-related RT variability may be secondary to underlying deficits in CE working memory.

  3. Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2015-11-01

    A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of 3D fast Fourier transform (FFT), which does not scale well for RF bigger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RF on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes bigger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RF with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
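    RAFT itself replaces the 3D FFT with a lower-order 1D FFT plus a projection step; the toy below only illustrates the underlying turning-band idea, using a single random cosine per band (a randomized spectral sample) instead of a full FFT-generated line process. All parameter values are our own illustration.

```python
import math
import random

random.seed(4)

n_bands, grid = 64, 16  # number of turning bands, 2D grid side length

def line_process():
    """A 1D process along one band: a single random cosine (spectral sample)."""
    freq = random.gauss(0.0, 1.0)            # frequency drawn from the spectrum
    phase = random.uniform(0.0, 2.0 * math.pi)
    return lambda t: math.sqrt(2.0) * math.cos(freq * t + phase)

# Each band is a randomly oriented line carrying an independent 1D process.
bands = []
for _ in range(n_bands):
    theta = random.uniform(0.0, math.pi)
    bands.append((math.cos(theta), math.sin(theta), line_process()))

# Every grid point takes the sum, over bands, of the line-process value at
# its projection onto that band; 1/sqrt(n_bands) keeps unit variance.
field = [[sum(p(x * ux + y * uy) for ux, uy, p in bands) / math.sqrt(n_bands)
          for y in range(grid)] for x in range(grid)]

vals = [v for row in field for v in row]
m = sum(vals) / len(vals)
v = sum((x - m) ** 2 for x in vals) / len(vals)
print(round(v, 2))  # roughly unit variance by construction
```

Because each point only needs projections onto a modest number of bands, the method needs no 3D FFT and extends naturally to non-regular meshes, which is the property RAFT exploits.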

  4. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.

  5. Cluster mass inference via random field theory.

    PubMed

    Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D

    2009-01-01

    Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.

  6. Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane

    NASA Astrophysics Data System (ADS)

    He, W.; Song, H.; Su, Y.; Geng, L.; Ackerson, B. J.; Peng, H. B.; Tong, P.

    2016-05-01

    The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows the Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics for normal Brownian diffusion. From a careful analysis of a large volume of the protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow-active remodelling of the underlying cortical actin network.
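    The contrast between Gaussian and exponential-tailed displacement statistics is easy to reproduce. The sketch below is our own illustration (not the paper's model): one standard route to exponential tails is a fluctuating local environment, modelled here as a "diffusing diffusivity" in which each step is Gaussian given its variance but the variance itself is exponentially distributed, so the marginal displacement follows a Laplace distribution.

```python
import math
import random

random.seed(3)

N = 100000
# Ordinary Brownian steps: fixed variance, Gaussian statistics.
gaussian_steps = [random.gauss(0.0, 1.0) for _ in range(N)]
# Heterogeneous steps: exponentially distributed variance -> Laplace marginal.
hetero_steps = [random.gauss(0.0, math.sqrt(random.expovariate(1.0)))
                for _ in range(N)]

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for a Gaussian, 3 for a Laplace distribution."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    k = sum((x - m) ** 4 for x in xs) / n / v ** 2
    return k - 3.0

print(round(excess_kurtosis(gaussian_steps), 2))  # near 0
print(round(excess_kurtosis(hetero_steps), 2))    # near 3: heavy, exponential tails
```

A normalized histogram of `hetero_steps` would show the straight-line (exponential) tail on a semi-log plot that the experiment reports for the receptor displacements.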

  7. Recurrence plots of discrete-time Gaussian stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick

    2016-09-01

    We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC) (ii) the percent determinism (DET) and (iii) RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and threshold radius used to construct the RP.
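    The quantities involved can be computed directly from their definitions. The following sketch (illustrative parameters, not the paper's analytical derivation) builds the recurrence plot of an AR(1) Gaussian process with embedding dimension 1 and evaluates the recurrence rate (REC) and percent determinism (DET).

```python
import random

random.seed(1)

phi, n, eps = 0.5, 300, 0.5  # AR(1) coefficient, series length, RP threshold

# Simulate the AR(1) process x[t] = phi*x[t-1] + Gaussian noise.
x = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

# Recurrence matrix: R[i][j] = 1 iff |x[i] - x[j]| < eps (embedding dim = 1).
R = [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)] for i in range(n)]

rec = sum(map(sum, R)) / float(n * n)  # recurrence rate (REC)

# DET: fraction of recurrence points on a diagonal line of length >= 2
# (the main diagonal i == j is included here, the simplest convention).
on_diag, total = 0, 0
for i in range(n):
    for j in range(n):
        if R[i][j]:
            total += 1
            prev_pt = i > 0 and j > 0 and R[i - 1][j - 1]
            next_pt = i < n - 1 and j < n - 1 and R[i + 1][j + 1]
            if prev_pt or next_pt:
                on_diag += 1
det = on_diag / total

print(round(rec, 2), round(det, 2))
```

Running such estimates over many realizations and data lengths, and comparing with the theoretical REC and DET values, is exactly the kind of quality check the abstract describes.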

  8. The Effect of a Non-Gaussian Random Loading on High-Cycle Fatigue of a Thermally Post-Buckled Structure

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Behnke, Marlana N.; Przekop, Adam

    2010-01-01

    High-cycle fatigue of an elastic-plastic beam structure under the combined action of thermal and high-intensity non-Gaussian acoustic loadings is considered. Such loadings can be highly damaging when snap-through motion occurs between thermally post-buckled equilibria. The simulated non-Gaussian loadings investigated have a range of skewness and kurtosis typical of turbulent boundary layer pressure fluctuations in the vicinity of forward facing steps. Further, the duration and steadiness of high excursion peaks is comparable to that found in such turbulent boundary layer data. Response and fatigue life estimates are found to be insensitive to the loading distribution, with the minor exception of cases involving plastic deformation. In contrast, the fatigue life estimate was found to be highly affected by a different type of non-Gaussian loading having bursts of high excursion peaks.

  9. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model yields a CV R² of 0.81, reflecting higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
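    The core Gaussian-process step can be sketched in a few lines. This is a minimal 1D illustration under our own assumptions (a squared-exponential kernel, three made-up observation sites, near-noiseless data), not the paper's hierarchical PM2.5 model: the posterior mean at a new site is k*ᵀK⁻¹y.

```python
import math

def kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1D locations."""
    return var * math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0]   # hypothetical observation sites
ys = [0.0, 1.0, 0.5]   # hypothetical observed responses
noise = 1e-6           # tiny jitter on the diagonal for numerical stability

K = [[kernel(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(xs)] for i, a in enumerate(xs)]
alpha = solve(K, ys)   # alpha = K^{-1} y

def predict(x_star):
    """GP posterior mean at a new location."""
    return sum(kernel(x_star, a) * w for a, w in zip(xs, alpha))

print(round(predict(1.0), 3))  # ≈ 1.0: interpolates the nearly noise-free data
```

In the paper's setting the kernel acts on spatial coordinates and the GP is embedded in a Bayesian hierarchy with AOD as a covariate; the algebra of the prediction step is the same.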

  10. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  11. Crossover between the Gaussian orthogonal ensemble, the Gaussian unitary ensemble, and Poissonian statistics.

    PubMed

    Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter

    2017-11-01

    Until now, analytical formulas for the level spacing distribution function have been derived within random matrix theory only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), and the statistics of a Gaussian unitary ensemble (GUE). We investigate arbitrary crossovers in the triangle between all three statistics. To this aim we propose a corresponding formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we prove that these crossovers are described reasonably well. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics in dependence on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure, and comparing the results with the proposed formula, allows us to distinguish between regular and chaotic behavior as well as between existent or broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
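    The qualitative difference between the pure statistics is easy to see numerically. The sketch below is our own illustration: spacings of 2×2 GOE matrices follow the Wigner surmise p(s) = (π/2)·s·exp(−πs²/4), which vanishes at s = 0 (level repulsion), while Poissonian levels give p(s) = exp(−s), which does not.

```python
import math
import random

random.seed(2)

def goe_spacing():
    """Eigenvalue spacing of a 2x2 real symmetric Gaussian (GOE) matrix."""
    a = random.gauss(0.0, 1.0)
    b = random.gauss(0.0, 1.0)
    c = random.gauss(0.0, 1.0 / math.sqrt(2.0))   # off-diagonal element
    # Gap between the eigenvalues of [[a, c], [c, b]].
    return math.sqrt((a - b) ** 2 + 4.0 * c * c)

N = 20000
goe = [goe_spacing() for _ in range(N)]
mean = sum(goe) / N
goe = [s / mean for s in goe]                     # rescale to unit mean spacing

poisson = [random.expovariate(1.0) for _ in range(N)]  # already unit mean

# Level repulsion: small spacings are strongly suppressed for GOE only.
frac_goe = sum(1 for s in goe if s < 0.1) / N
frac_poisson = sum(1 for s in poisson if s < 0.1) / N
print(frac_goe < frac_poisson)  # True
```

A two-parameter interpolating formula of the kind the paper proposes would reproduce these two histograms as limiting cases and move continuously between them.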

  12. Estimation of representative elementary volume for DNAPL saturation and DNAPL-water interfacial areas in 2D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun

    2017-06-01

    Representative elementary volume (REV) is important for determining properties of porous media and of the contaminants migrating through them, especially dense nonaqueous phase liquids (DNAPLs) in the subsurface environment. In this study, an experiment on the long-term migration of a commonly used DNAPL, perchloroethylene (PCE), is performed in a two-dimensional (2D) sandbox where several system variables, including porosity, PCE saturation (Soil) and PCE-water interfacial area (AOW), are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a criterion of relative gradient error (εgi), and results indicate that the frequency of minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both minimum Soil-REV and minimum AOW-REV sizes change their shapes from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of minimum Soil-REV size reveals a linear positive correlation, while the frequency of minimum AOW-REV size tends to a Gaussian distribution in the range of 2.0 mm-7.0 mm and shows a peak value at 13.0 mm-14.0 mm. This study will facilitate the quantification of REVs for material and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.

  13. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a two-valued (zero-one) covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out with Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
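    MCMC inference for a scale parameter can be illustrated with a much simpler sampler than BUGS/ARS. The toy below (entirely our own: random-walk Metropolis, Gaussian data, flat prior on log σ, illustrative values) shows the basic loop of proposing, accepting by the posterior ratio, and averaging the post-burn-in chain.

```python
import math
import random

random.seed(8)

# Synthetic data with true scale sigma = 2.0.
data = [random.gauss(0.0, 2.0) for _ in range(200)]

def log_post(log_sigma):
    """Log-posterior of log(sigma) under a flat prior on log(sigma)."""
    s2 = math.exp(2.0 * log_sigma)
    return -0.5 * len(data) * math.log(s2) - sum(x * x for x in data) / (2.0 * s2)

chain, cur = [], math.log(1.0)   # start deliberately away from the truth
cur_lp = log_post(cur)
for _ in range(5000):
    prop = cur + random.gauss(0.0, 0.1)          # random-walk proposal
    prop_lp = log_post(prop)
    if math.log(random.random()) < prop_lp - cur_lp:  # Metropolis accept step
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

# Discard burn-in, then summarize the posterior of sigma itself.
post_mean_sigma = sum(math.exp(c) for c in chain[1000:]) / len(chain[1000:])
print(round(post_mean_sigma, 1))  # close to the data's true scale of 2.0
```

BUGS replaces this generic random walk with Gibbs updates and ARS draws, which is why the log-concavity of the conditional pdf in the scale parameter matters there.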

  14. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
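    The Parzen-window step admits a compact sketch. This is an assumption-level illustration (not the paper's controller): the error density is estimated with a Gaussian kernel, and Renyi's quadratic entropy is evaluated through the "information potential", the double sum of kernel evaluations over error pairs.

```python
import math
import random

random.seed(5)

def information_potential(errors, sigma):
    """V = (1/n^2) sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j), the Parzen-based
    estimate of the integral of the squared error density."""
    n = len(errors)
    c = 1.0 / (math.sqrt(2.0 * math.pi) * (math.sqrt(2.0) * sigma))
    s = 0.0
    for ei in errors:
        for ej in errors:
            s += c * math.exp(-((ei - ej) ** 2) / (4.0 * sigma ** 2))
    return s / (n * n)

def quadratic_entropy(errors, sigma=0.5):
    """Renyi's quadratic entropy estimate: H2 = -log V."""
    return -math.log(information_potential(errors, sigma))

tight = [random.gauss(0.0, 0.2) for _ in range(200)]  # concentrated errors
loose = [random.gauss(0.0, 2.0) for _ in range(200)]  # spread-out errors

# Minimizing error entropy concentrates the error density:
print(quadratic_entropy(tight) < quadratic_entropy(loose))  # True
```

A controller that descends this entropy with respect to its parameters therefore drives the tracking-error distribution to be as concentrated as possible, without assuming Gaussian errors.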

  15. Using an iterative eigensolver to compute vibrational energies with phase-space localized basis functions.

    PubMed

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.

  16. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity.

    PubMed

    Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin

    2014-01-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poissonian. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each generation, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
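    The mismatch that motivates the iteration is easy to demonstrate for a single neuron. The sketch below (illustrative parameters, not the paper's network) simulates a leaky integrate-and-fire neuron driven by Gaussian white-noise current; in this drift-dominated regime the output interspike intervals are clearly non-Poissonian, with a coefficient of variation well below 1.

```python
import math
import random

random.seed(6)

dt, mu, sigma = 0.01, 1.5, 0.5   # time step, mean drive, noise strength
v_th, v_reset = 1.0, 0.0         # threshold and reset (membrane time constant = 1)

v, t, t_last, isis = 0.0, 0.0, 0.0, []
while len(isis) < 2000:
    t += dt
    # Euler-Maruyama step for dv = (mu - v) dt + sigma dW.
    v += (mu - v) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if v >= v_th:
        isis.append(t - t_last)  # record the interspike interval
        t_last, v = t, v_reset

mean_isi = sum(isis) / len(isis)
cv = (sum((x - mean_isi) ** 2 for x in isis) / len(isis)) ** 0.5 / mean_isi
print(round(cv, 2))  # below 1, i.e. clearly non-Poissonian output
```

Because the output is this far from Poissonian, feeding Poisson surrogates back in as "network input" is inconsistent, and the self-consistent iteration over generations is needed to find input and output spectra that agree.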

  17. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is the development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first works in the spatial domain by essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second method consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0 - 10 Hz). This occurs partially through S-wave refraction, but mostly through P-S and Rg-S wave conversions. This work was performed under the auspices of the U.S. 
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344

  18. Modeling and Simulation of Linear and Nonlinear MEMS Scale Electromagnetic Energy Harvesters for Random Vibration Environments

    PubMed Central

    Sassani, Farrokh

    2014-01-01

    The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063

  19. Non-Gaussian microwave background fluctuations from nonlinear gravitational effects

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.; Kunstatter, G. (Editor)

    1991-01-01

    Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) satellite makes a detection of the cosmic microwave background temperature anisotropy ΔT_CMB/T_CMB. Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small, so it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(φ1, φ2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in ΔT_CMB/T_CMB maps are suppressed compared to the Gaussian case. The fluctuations are essentially scale-invariant.

  20. State estimation and prediction using clustered particle filters.

    PubMed

    Lee, Yoonsang; Majda, Andrew J

    2016-12-20

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
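    For orientation, here is a minimal bootstrap particle filter on a scalar nonlinear model. This is the standard filter that the clustered variant is designed to improve on, not the clustered method itself; the model and all parameter values are our own illustration.

```python
import math
import random

random.seed(7)

def propagate(x):
    """Nonlinear state transition with Gaussian process noise."""
    return 0.5 * x + 2.0 * math.sin(x) + random.gauss(0.0, 0.5)

n_particles, n_steps, obs_sd = 500, 30, 0.3

# Simulate a "true" trajectory and noisy observations of it.
truth, obs = 0.1, []
for _ in range(n_steps):
    truth = propagate(truth)
    obs.append(truth + random.gauss(0.0, obs_sd))

particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
errors = []
for y in obs:
    particles = [propagate(p) for p in particles]              # predict
    weights = [math.exp(-(p - y) ** 2 / (2.0 * obs_sd ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]                     # normalize
    estimate = sum(p * w for p, w in zip(particles, weights))  # posterior mean
    errors.append(abs(estimate - y))                           # error vs observation
    particles = random.choices(particles, weights=weights, k=n_particles)  # resample

mean_err = sum(errors) / len(errors)
print(round(mean_err, 2))
```

In high dimensions the weights of such a filter collapse onto a few particles; the clustered filter counters this with coarse-grained localization (each observation reweights only nearby state variables) and particle adjustment.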

  2. Scale-invariant puddles in graphene: Geometric properties of electron-hole distribution at the Dirac point.

    PubMed

    Najafi, M N; Nezhadhaghighi, M Ghasemi

    2017-03-01

    We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interaction and random charged impurity in zero gate voltage. We provide detailed analysis on the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner (SLE) evolution, we numerically demonstrate that the ungated graphene has conformal invariance and the random zero-charge density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.

  3. Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection

    PubMed Central

    Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.

    2013-01-01

    We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670
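The single-site Gibbs updates used for the latent binary indicator map can be sketched for a plain Ising field. This is a toy illustration; the paper's sampler also couples in the GMRF coefficients and the data likelihood, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_ising(n=16, beta=0.4, sweeps=200):
    """Single-site Gibbs sampler for an n x n Ising field with periodic boundaries."""
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
                      + s[i, (j - 1) % n] + s[i, (j + 1) % n])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))  # P(s_ij = +1 | neighbours)
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

field = gibbs_ising()
```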

  4. The partition function of the Bures ensemble as the τ-function of BKP and DKP hierarchies: continuous and discrete

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Li, Shi-Hao

    2017-07-01

    The relationship between matrix integrals and integrable systems was revealed more than 20 years ago. As is known, matrix integrals over a Gaussian ensemble used in random matrix theory could act as the τ-function of several hierarchies of integrable systems. In this article, we will show that the time-dependent partition function of the Bures ensemble, whose measure has many interesting geometric properties, could act as the τ-function of BKP and DKP hierarchies. In addition, if discrete time variables are introduced, then this partition function could act as the τ-function of discrete BKP and DKP hierarchies. In particular, there are some links between the partition function of the Bures ensemble and Toda-type equations.

  5. Gaussian-based techniques for quantum propagation from the time-dependent variational principle: Formulation in terms of trajectories of coupled classical and quantum variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalashilin, Dmitrii V.; Burghardt, Irene

    2008-08-28

    In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.

  6. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
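The counting field is easy to explore numerically: sample Haar unitaries via QR of a complex Ginibre matrix (with the standard phase correction) and count eigenangles in an arc. The matrix size, arc width, and trial count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    """Haar-distributed unitary via QR of a complex Ginibre matrix, with phase fix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def count_in_arc(n=64, half_width=0.5, trials=200):
    """Counts of eigenangles in (-half_width, half_width) for each sampled matrix."""
    counts = []
    for _ in range(trials):
        ang = np.angle(np.linalg.eigvals(haar_unitary(n)))
        counts.append(np.sum(np.abs(ang) < half_width))
    return np.array(counts)

c = count_in_arc()
# The mean count is near n * (2 * half_width) / (2 * pi); the fluctuations
# around it form the random field studied in the paper.
```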

  7. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
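For orientation, the unweighted kernel PCA that WKPCA generalizes can be sketched with an RBF kernel and a double-centred Gram matrix. Data and kernel width are illustrative; the weighting of snapshots is the paper's extension and is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)

def kernel_pca(X, n_comp=2, gamma=0.2):
    """Unweighted kernel PCA with an RBF kernel and double-centred Gram matrix."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]            # leading components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

X = rng.normal(size=(80, 5))   # stand-in for snapshot realizations
Z = kernel_pca(X)              # feature-space coordinates of each snapshot
```

WKPCA would additionally attach a weight to each row of the Gram matrix before the eigendecomposition, so that snapshot significance levels enter the learned features.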

  8. Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes

    NASA Astrophysics Data System (ADS)

    Bravi, B.; Sollich, P.

    2017-06-01

    We consider the problem of a subnetwork of observed nodes embedded into a larger bulk of unknown (i.e. hidden) nodes, where the aim is to infer these hidden states given information about the subnetwork dynamics. The biochemical networks underlying many cellular and metabolic processes are important realizations of such a scenario as typically one is interested in reconstructing the time evolution of unobserved chemical concentrations starting from the experimentally more accessible ones. We present an application to this problem of a novel dynamical mean field approximation, the extended Plefka expansion, which is based on a path integral description of the stochastic dynamics. As a paradigmatic model we study the stochastic linear dynamics of continuous degrees of freedom interacting via random Gaussian couplings. The resulting joint distribution is known to be Gaussian and this allows us to fully characterize the posterior statistics of the hidden nodes. In particular the equal-time hidden-to-hidden variance—conditioned on observations—gives the expected error at each node when the hidden time courses are predicted based on the observations. We assess the accuracy of the extended Plefka expansion in predicting these single node variances as well as error correlations over time, focussing on the role of the system size and the number of observed nodes.

  9. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfying. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.
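A GMRF prior of the kind used here as a regularization operator can be sketched in 1-D with a second-difference precision matrix. The 70-point grid mirrors the 1 cm resolution along the 70-cm probe, while the precision and jitter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_gmrf_1d(n=70, precision=100.0, jitter=1e-4):
    """Draw from a 1-D GMRF whose precision matrix penalizes second differences."""
    D = np.diff(np.eye(n), n=2, axis=0)            # (n-2) x n second-difference operator
    Q = precision * D.T @ D + jitter * np.eye(n)   # jitter makes Q positive definite
    L = np.linalg.cholesky(Q)
    return np.linalg.solve(L.T, rng.normal(size=n))  # x ~ N(0, Q^{-1})

profile = sample_gmrf_1d()   # a smooth random profile, one value per cm
```

Larger `precision` yields smoother prior draws, which is exactly the regularizing effect the prior has on the inferred moisture profile.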

  10. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.

  11. Neural substrates of behavioral variability in attention deficit hyperactivity disorder: based on ex-Gaussian reaction time distribution and diffusion spectrum imaging tractography.

    PubMed

    Lin, H-Y; Gau, S S-F; Huang-Gu, S L; Shang, C-Y; Wu, Y-H; Tseng, W-Y I

    2014-06-01

    Increased intra-individual variability (IIV) in reaction time (RT) across various tasks is one ubiquitous neuropsychological finding in attention deficit hyperactivity disorder (ADHD). However, the neurobiological underpinnings of IIV in individuals with ADHD have not yet been fully delineated. The ex-Gaussian distribution has been shown to capture IIV in RT. The authors explored the three parameters [μ (mu), σ (sigma), τ (tau)] of an ex-Gaussian RT distribution derived from the Conners' continuous performance test (CCPT) and their correlations with the microstructural integrity of the frontostriatal-caudate tracts and the cingulum bundles. We assessed 28 youths with ADHD (8-17 years; 25 males) and 28 age-, sex-, IQ- and handedness-matched typically developing (TD) youths using the CCPT, Wechsler Intelligence Scale for Children, 3rd edition and magnetic resonance imaging (MRI). Microstructural integrity, indexed by generalized fractional anisotropy (GFA), was measured by diffusion spectrum imaging tractography on a 3-T MRI system. Youths with ADHD had larger σ (s.d. of Gaussian distribution) and τ (mean of exponential distribution) and reduced GFA in four bilateral frontostriatal tracts. With increased inter-stimulus intervals of CCPT, the magnitude of greater τ in ADHD than TD increased. In ADHD youths, the cingulum bundles and frontostriatal integrity were associated with three ex-Gaussian parameters and with μ (mean of Gaussian distribution) and τ, respectively; while only frontostriatal GFA was associated with μ and τ in TD youths. Our findings suggest the crucial role of the integrity of the cingulum bundles in accounting for IIV in ADHD. Involvement of different brain systems in mediating IIV may relate to a distinctive pathophysiological processing and/or adaptive compensatory mechanism.
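The ex-Gaussian model itself is simple to simulate and fit: an RT is a Gaussian (μ, σ) plus an independent exponential (mean τ), so the third central moment equals 2τ³ and method-of-moments estimates follow directly. This is a sketch with illustrative parameter values, not the fitting procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_exgaussian(mu, sigma, tau, size):
    """Ex-Gaussian RT: Gaussian(mu, sigma) plus an independent Exponential(mean tau)."""
    return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

def fit_moments(rt):
    """Method-of-moments estimates of (mu, sigma, tau)."""
    m = rt.mean()
    v = rt.var()
    m3 = np.mean((rt - m) ** 3)              # third central moment = 2 * tau^3
    tau = np.sign(m3) * (abs(m3) / 2.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(v - tau ** 2, 1e-12))
    return m - tau, sigma, tau

rt = sample_exgaussian(0.40, 0.05, 0.15, 200_000)   # seconds, illustrative values
mu_hat, sigma_hat, tau_hat = fit_moments(rt)
```

τ captures the slow exponential tail of the RT distribution, which is why it is the parameter most sensitive to attentional lapses.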

  12. Fokker-Planck equation for the non-Markovian Brownian motion in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Das, Joydip; Mondal, Shrabani; Bag, Bidhan Chandra

    2017-10-01

    In the present study, we have proposed the Fokker-Planck equation in a simple way for a Langevin equation of motion having ordinary derivative (OD), the Gaussian random force and a generalized frictional memory kernel. The equation may be associated with or without conservative force field from harmonic potential. We extend this method for a charged Brownian particle in the presence of a magnetic field. Thus, the present method is applicable for a Langevin equation of motion with OD, the Gaussian colored thermal noise and any kind of linear force field that may be conservative or not. It is also simple to apply this method for the colored Gaussian noise that is not related to the damping strength.
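A minimal numerical illustration of Langevin dynamics driven by colored Gaussian noise: an overdamped particle in a harmonic well forced by an Ornstein-Uhlenbeck process, integrated by Euler-Maruyama. All parameters are illustrative, and the sketch omits the magnetic field and general memory kernel treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def langevin_colored(k=1.0, gamma=1.0, D=1.0, tau_c=0.5, dt=1e-3, steps=400_000):
    """Overdamped particle in a harmonic well driven by Ornstein-Uhlenbeck
    (exponentially correlated Gaussian) noise, Euler-Maruyama integration."""
    xi = rng.normal(size=steps) * np.sqrt(dt)
    x, eta = 0.0, 0.0
    traj = np.empty(steps)
    for i in range(steps):
        eta += -(eta / tau_c) * dt + (np.sqrt(2.0 * D) / tau_c) * xi[i]  # coloured noise
        x += (-k * x + eta) / gamma * dt                                 # particle
        traj[i] = x
    return traj

traj = langevin_colored()
# Stationary variance approaches D / (gamma * k * (1 + k * tau_c / gamma)), about 0.67 here.
```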

  14. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  15. Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.

    2009-12-01

    Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
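The equivalence is easy to verify numerically for a Gaussian VAR: compute Granger causality from regression residual variances and transfer entropy from Gaussian conditional variances (Schur complements of the covariance matrix); in nats, F = 2·TE. A sketch with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(7)

# Bivariate Gaussian VAR(1) in which y drives x (coefficients are illustrative).
n = 20_000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

def residual_var(target, predictors):
    """Residual variance of an OLS regression with intercept."""
    A = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ beta)

# Granger causality y -> x: log ratio of restricted to full residual variance.
gc = np.log(residual_var(x[1:], [x[:-1]]) /
            residual_var(x[1:], [x[:-1], y[:-1]]))

def cond_var(cov, rest):
    """Var(component 0 | components in `rest`) via the Schur complement."""
    S = cov[np.ix_([0] + rest, [0] + rest)]
    return S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[1:, 0])

# Transfer entropy (nats) from Gaussian conditional variances.
C = np.cov(np.column_stack([x[1:], x[:-1], y[:-1]]), rowvar=False)
te = 0.5 * np.log(cond_var(C, [1]) / cond_var(C, [1, 2]))
```

Both quantities are built from the same two conditional variances, so for jointly Gaussian processes the identity F = 2·TE holds exactly, not just asymptotically.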

  16. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.

  17. Gaussian-modulated coherent-state measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei

    2014-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.

  18. Collective attacks and unconditional security in continuous variable quantum key distribution.

    PubMed

    Grosshans, Frédéric

    2005-01-21

    We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christandl et al. to obtain unconditionally secure rates for these protocols.

  19. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
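The generative side of the model, ordinal observations obtained by thresholding a latent multivariate Gaussian, can be sketched directly. The cutpoints and latent correlation below are illustrative; estimation of the concentration matrix is the paper's contribution and is not shown:

```python
import numpy as np

rng = np.random.default_rng(8)

def ordinal_from_latent(n=50_000, rho=0.6, cuts=(-0.5, 0.5)):
    """Two ordinal variables obtained by discretizing a correlated bivariate Gaussian."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.digitize(z, cuts)   # three ordered categories per variable

obs = ordinal_from_latent()
```

Discretization attenuates but does not destroy the latent dependence, which is why the underlying Gaussian graphical model remains identifiable from the ordinal data.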

  20. Continuous-variable measurement-device-independent quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong

    2018-04-01

    The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has been recently proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from an untrusted third party, Charlie. There is still no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which is favorable to enhance the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the renovated CV-MDI QKD protocol is obtained with the transmitted data selected only by Alice. By setting appropriate parameters of the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distance. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.

  1. Direct test of the Gaussian auxiliary field ansatz in nonconserved order parameter phase ordering dynamics

    NASA Astrophysics Data System (ADS)

    Yeung, Chuck

    2018-06-01

    The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗, t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗, t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗, t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length-scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions, but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.

  2. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques to obtain a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not represent the real fission process in nature completely. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, such as mean (μCN, μL, μR) and standard deviation (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated by changing σL and σR randomly.
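A loose sketch of such a toy model, with illustrative parameters and not the authors' exact randomization scheme: particles drawn around the compound-nucleus centre are iteratively assigned to the nearer of two fragment centres and re-drawn around them, while the trapped counts (NL, NR) are tracked:

```python
import numpy as np

rng = np.random.default_rng(9)

def toy_fission(n_particles=10_000, mu_l=-1.0, mu_r=1.0, sigma=0.8, n_iter=20):
    """Particles are assigned to the nearer of two fragment centres, then re-drawn
    around that centre; the trapped counts (n_l, n_r) settle down over iterations."""
    pos = rng.normal(0.0, 1.5, n_particles)   # compound-nucleus distribution
    n_l = n_r = 0
    for _ in range(n_iter):
        left = np.abs(pos - mu_l) < np.abs(pos - mu_r)
        n_l, n_r = int(left.sum()), int((~left).sum())
        pos = np.where(left,
                       rng.normal(mu_l, sigma, n_particles),
                       rng.normal(mu_r, sigma, n_particles))
    return n_l, n_r

n_l, n_r = toy_fission()
```

With symmetric fragment Gaussians the split fluctuates around an even division; making the left and right parameters unequal skews the yield, mimicking asymmetric fission.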

  3. Porous media flux sensitivity to pore-scale geostatistics: A bottom-up approach

    NASA Astrophysics Data System (ADS)

    Di Palma, P. R.; Guyennon, N.; Heße, F.; Romano, E.

    2017-04-01

    Macroscopic properties of flow through porous media can be directly computed by solving the Navier-Stokes equations at the scales related to the actual flow processes, while considering the porous structures in an explicit way. The aim of this paper is to investigate the effects of the pore-scale spatial distribution on seepage velocity through numerical simulations of 3D fluid flow performed by the lattice Boltzmann method. To this end, we generate multiple random Gaussian fields whose spatial correlation follows an assigned semi-variogram function. The Exponential and Gaussian semi-variograms are chosen as extreme cases of correlation for short distances, and statistical properties of the resulting porous media (indicator field) are described using the Matérn covariance model, with characteristic lengths of spatial autocorrelation (pore size) varying from 2% to 13% of the linear domain. To consider the sensitivity of the modeling results to the geostatistical representativeness of the domain as well as to the adopted resolution, porous media have been generated repetitively with re-initialized random seeds, and three different resolutions have been tested for each resulting realization. The main difference among results is observed between the two adopted semi-variograms, indicating that the roughness (short-distance autocorrelation) is the property mainly affecting the flux. However, computed seepage velocities show additionally a wide variability (about three orders of magnitude) for each semi-variogram model in relation to the assigned correlation length, corresponding to pore sizes. The spatial resolution affects the results more for short correlation lengths (i.e., small pore sizes), resulting in an increasing underestimation of the seepage velocity with the decreasing correlation length. On the other hand, results show an increasing uncertainty as the correlation length approaches the domain size.
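Random fields with an assigned covariance can be generated with a simple FFT-based spectral sketch; comparing Gaussian and exponential covariance models shows the roughness difference the study highlights. The grid size and correlation length are illustrative, and this is not the generator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(10)

def grf_2d(n=128, corr_len=8.0, model="gaussian"):
    """Stationary 2-D Gaussian random field via FFT filtering of white noise."""
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    r = np.sqrt(kx ** 2 + ky ** 2)                 # periodic lag distance
    if model == "gaussian":
        cov = np.exp(-(r / corr_len) ** 2)         # smooth at the origin
    else:
        cov = np.exp(-r / corr_len)                # exponential: rough at the origin
    spec = np.maximum(np.fft.fft2(cov).real, 0.0)  # clip small negative modes
    noise = np.fft.fft2(rng.normal(size=(n, n)))
    field = np.fft.ifft2(noise * np.sqrt(spec)).real
    return (field - field.mean()) / field.std()

smooth = grf_2d(model="gaussian")
rough = grf_2d(model="exponential")
# At equal variance, increments of `rough` are larger: the exponential model is
# non-differentiable at the origin, the Gaussian model is smooth there.
```

Thresholding such a field at a chosen level produces the binary indicator (pore/solid) field whose geostatistics the study varies.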

  4. Chaos and random matrices in supersymmetric SYK

    NASA Astrophysics Data System (ADS)

    Hunter-Jones, Nicholas; Liu, Junyu

    2018-05-01

    We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
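The spectral form factor for the Wishart-Laguerre (complex Wishart) ensemble can be estimated directly: sample H = XX†, diagonalize, and average |Σ_k e^{itλ_k}|² over realizations. Matrix size, times, and sample count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

def wishart_eigs(n=32):
    """Eigenvalues of a complex Wishart (Laguerre unitary ensemble) matrix H = X X^dag."""
    X = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
    return np.linalg.eigvalsh(X @ X.conj().T)

def form_factor(t, samples=200, n=32):
    """Ensemble average of |sum_k exp(i t lambda_k)|^2."""
    vals = [abs(np.exp(1j * t * wishart_eigs(n)).sum()) ** 2 for _ in range(samples)]
    return float(np.mean(vals))

early = form_factor(0.001)   # coherent regime, close to n^2
late = form_factor(50.0)     # plateau regime, close to n
```

The decay from the coherent value toward the late-time plateau, and the shape of the curve in between, is what distinguishes ensembles (e.g. the absence of a dip noted in the abstract).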

  5. Statistical Analyses of Brain Surfaces Using Gaussian Random Fields on 2-D Manifolds

    PubMed Central

    Staib, Lawrence H.; Xu, Dongrong; Zhu, Hongtu; Peterson, Bradley S.

    2008-01-01

    Interest in the morphometric analysis of the brain and its subregions has recently intensified because growth or degeneration of the brain in health or illness affects not only the volume but also the shape of cortical and subcortical brain regions, and new image processing techniques permit detection of small and highly localized perturbations in shape or localized volume, with remarkable precision. An appropriate statistical representation of the shape of a brain region is essential, however, for detecting, localizing, and interpreting variability in its surface contour and for identifying differences in volume of the underlying tissue that produce that variability across individuals and groups of individuals. Our statistical representation of the shape of a brain region is defined by a reference region for that region and by a Gaussian random field (GRF) that is defined across the entire surface of the region. We first select a reference region from a set of segmented brain images of healthy individuals. The GRF is then estimated as the signed Euclidean distances between points on the surface of the reference region and the corresponding points on the corresponding region in images of brains that have been coregistered to the reference. Correspondences between points on these surfaces are defined through deformations of each region of a brain into the coordinate space of the reference region using the principles of fluid dynamics. The warped, coregistered region of each subject is then unwarped into its native space, simultaneously bringing into that space the map of corresponding points that was established when the surfaces of the subject and reference regions were tightly coregistered. The proposed statistical description of the shape of surface contours makes no assumptions, other than smoothness, about the shape of the region or its GRF. 
The description also allows for the detection and localization of statistically significant differences in the shapes of the surfaces across groups of subjects at both a fine and coarse scale. We demonstrate the effectiveness of these statistical methods by applying them to study differences in shape of the amygdala and hippocampus in a large sample of normal subjects and in subjects with attention deficit/hyperactivity disorder (ADHD). PMID:17243583

  6. φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations

    NASA Astrophysics Data System (ADS)

    Sornette, D.; Simonetti, P.; Andersen, J. V.

    2000-08-01

Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify over a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz, together with its subsequent developments, is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, which we illustrate in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants. 
Extensive empirical tests are presented on the foreign exchange market that satisfactorily validate the theory. For “fat tail” distributions, we show that an adequate prediction of the risks of a portfolio relies much more on the correct description of the tail structure than on the correlations between assets. For the case of asymmetric return distributions, our theory allows us to generalize the return-risk efficient frontier concept to incorporate the dimensions of large risks embedded in the tails of the asset distributions. We demonstrate that it is often possible to increase the portfolio return while decreasing the large risks as quantified by the fourth- and higher-order cumulants. Exact theoretical formulas are validated by empirical tests.
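The paper's central device, mapping non-normal returns onto Gaussian variables and reading dependence off their covariance, can be sketched with a rank-based (empirical-copula) transformation. This is a minimal numerical stand-in for the authors' analytical change of variables; the Student-t factor model and all parameters below are illustrative.

```python
import numpy as np
from statistics import NormalDist

def gaussianize(x):
    """Map a sample onto standard-normal scores via its empirical ranks
    (a numerical stand-in for the paper's analytical transformation)."""
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    u = ranks / (len(x) + 1.0)                     # empirical CDF values in (0, 1)
    return np.array([NormalDist().inv_cdf(p) for p in u])

def nonlinear_covariance(returns):
    """Covariance of the Gaussianized returns: a 'nonlinear covariance matrix'
    measuring dependence between non-normal assets."""
    g = np.column_stack([gaussianize(col) for col in returns.T])
    return np.cov(g, rowvar=False)

rng = np.random.default_rng(0)
# two fat-tailed assets coupled through a common Student-t factor (illustrative)
factor = rng.standard_t(df=3, size=5000)
returns = np.column_stack([factor + rng.standard_t(3, size=5000),
                           factor + rng.standard_t(3, size=5000)])
C = nonlinear_covariance(returns)
```

The Gaussianized variables have near-unit variance by construction, so the off-diagonal entry directly measures the (monotone) dependence that survives the fat tails.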

  7. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although many deterministic mappings could serve equally well, we focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, the perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
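In the linear-Gaussian case discussed above, the minimization-based mapping reduces to the Kalman update applied to a perturbed prior sample and perturbed data (randomized maximum likelihood). A minimal sketch with an arbitrary toy forward operator, verifying that the mapped samples reproduce the exact posterior moments; dimensions and covariances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nd, N = 3, 2, 20000
Cx = np.eye(nx)                                # prior covariance (prior mean zero)
Cd = 0.5 * np.eye(nd)                          # observation-error covariance
H = rng.normal(size=(nd, nx))                  # arbitrary linear observation operator
d = H @ rng.normal(size=nx) + rng.multivariate_normal(np.zeros(nd), Cd)

# for linear H, the minimizer of the RML cost is the Kalman update
K = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cd)
x_pr = rng.multivariate_normal(np.zeros(nx), Cx, size=N)        # prior samples
d_pert = d + rng.multivariate_normal(np.zeros(nd), Cd, size=N)  # perturbed data
samples = x_pr + (d_pert - x_pr @ H.T) @ K.T   # mapped (posterior) samples

C_post = (np.eye(nx) - K @ H) @ Cx             # exact posterior covariance
```

Because the map is linear here, it transports the prior exactly onto the posterior; with nonlinear operators the same construction is only approximate, as the abstract notes.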

  8. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
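Sequential Gaussian simulation itself can be illustrated in a few lines: visit the grid nodes along a random path, krige each node from the conditioning data plus previously simulated nodes, and draw from the resulting conditional normal. The 1-D grid, exponential covariance, and conditioning values below are illustrative, not the mine-site workflow of the paper.

```python
import numpy as np

def sgs_1d(grid, data_x, data_v, corr_len, rng):
    """Sequential Gaussian simulation of a standard-normal field on a 1-D grid,
    conditioned on data via simple kriging with an exponential covariance.
    Toy sketch: full (non-truncated) neighborhood, unit sill, zero mean."""
    cov = lambda h: np.exp(-np.abs(h) / corr_len)
    known_x, known_v = list(data_x), list(data_v)
    field = np.empty(len(grid))
    for i in rng.permutation(len(grid)):       # random visiting path
        X = np.array(known_x)
        C = cov(X[:, None] - X[None, :])       # data-data covariance
        c = cov(X - grid[i])                   # data-node covariance
        w = np.linalg.solve(C + 1e-10 * np.eye(len(X)), c)
        mean = w @ np.array(known_v)           # simple-kriging mean
        var = max(1.0 - w @ c, 0.0)            # simple-kriging variance
        field[i] = mean + np.sqrt(var) * rng.normal()
        known_x.append(grid[i])                # simulated value becomes data
        known_v.append(field[i])
    return field

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 10.0, 60)
f = sgs_1d(grid, data_x=[2.0, 7.0], data_v=[1.5, -1.0], corr_len=1.0, rng=rng)
```

Unlike kriging alone, each realization reproduces the full variance of the field while still honoring the conditioning data, which is why repeated realizations can be used to quantify spatial uncertainty.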

  9. The Topology of Large-Scale Structure in the 1.2 Jy IRAS Redshift Survey

    NASA Astrophysics Data System (ADS)

    Protogeros, Zacharias A. M.; Weinberg, David H.

    1997-11-01

We measure the topology (genus) of isodensity contour surfaces in volume-limited subsets of the 1.2 Jy IRAS redshift survey, for smoothing scales λ = 4, 7, and 12 h⁻¹ Mpc. At 12 h⁻¹ Mpc, the observed genus curve has a symmetric form similar to that predicted for a Gaussian random field. At the shorter smoothing lengths, the observed genus curve shows a modest shift in the direction of an isolated cluster or “meatball” topology. We use mock catalogs drawn from cosmological N-body simulations to investigate the systematic biases that affect topology measurements in samples of this size and to determine the full covariance matrix of the expected random errors. We incorporate the error correlations into our evaluations of theoretical models, obtaining both frequentist assessments of absolute goodness of fit and Bayesian assessments of models' relative likelihoods. We compare the observed topology of the 1.2 Jy survey to the predictions of dynamically evolved, unbiased, gravitational instability models that have Gaussian initial conditions. The model with an n = -1 power-law initial power spectrum achieves the best overall agreement with the data, though models with a low-density cold dark matter power spectrum and an n = 0 power-law spectrum are also consistent. The observed topology is inconsistent with an initially Gaussian model that has n = -2, and it is strongly inconsistent with a Voronoi foam model, which has a non-Gaussian, bubble topology.
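The Gaussian benchmark against which such observed genus curves are compared has the closed form G(ν) ∝ (1 − ν²)e^(−ν²/2), symmetric about ν = 0; shifts of the measured curve relative to this template are what diagnose "meatball" or bubble topologies. A short sketch (the amplitude is left arbitrary, since it depends on the power spectrum and smoothing scale):

```python
import numpy as np

def gaussian_genus(nu, amplitude=1.0):
    """Genus curve of a Gaussian random field versus threshold nu (in units of
    sigma): G(nu) = A * (1 - nu**2) * exp(-nu**2 / 2). The amplitude A depends
    on the power spectrum and smoothing scale, so it is left as a parameter."""
    return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

nu = np.linspace(-3.0, 3.0, 601)
g = gaussian_genus(nu)
```

Zeros at ν = ±1 and the even symmetry G(ν) = G(−ν) are the Gaussian signatures; systematic displacement of an observed curve relative to this shape is what the survey analysis quantifies.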

  10. Variability of non-Gaussian diffusion MRI and intravoxel incoherent motion (IVIM) measurements in the breast.

    PubMed

    Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori

    2018-01-01

We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0-2500 s/mm2, one number of excitations [NEX]) and five b-values (0-2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at a b-value of 0 s/mm2 [ADC0] and kurtosis [K]) parameters were estimated from the IVIM and kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions.
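One common form of the non-Gaussian diffusion model behind ADC0 and K is the kurtosis signal equation S(b) = S0·exp(−b·ADC0 + K(b·ADC0)²/6); whether this exact parameterization matches the study's fitting code is an assumption. A noiseless log-linear fit recovering illustrative tissue parameters from 16 b-values:

```python
import numpy as np

def kurtosis_signal(b, S0, ADC0, K):
    """Kurtosis diffusion model: S(b) = S0 * exp(-b*ADC0 + (K/6)*(b*ADC0)**2)."""
    return S0 * np.exp(-b * ADC0 + (K / 6.0) * (b * ADC0) ** 2)

b = np.linspace(0.0, 2500.0, 16)               # s/mm^2, 16 b-values as in the study
ADC0_true, K_true = 1.2e-3, 0.9                # illustrative values (ADC0 in mm^2/s)
S = kurtosis_signal(b, 1.0, ADC0_true, K_true)

# log-linear fit: ln S = ln S0 - ADC0*b + (K*ADC0**2/6)*b**2
c2, c1, _ = np.polyfit(b, np.log(S), deg=2)
ADC0_fit = -c1                                 # linear coefficient gives ADC0
K_fit = 6.0 * c2 / ADC0_fit**2                 # quadratic coefficient gives K
```

With noisy clinical data, the practical question studied in the abstract is how few b-values (and NEX) still give stable estimates of ADC0 and K from this kind of fit.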

  11. Variability of non-Gaussian diffusion MRI and intravoxel incoherent motion (IVIM) measurements in the breast

    PubMed Central

    Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori

    2018-01-01

We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0–2500 s/mm2, one number of excitations [NEX]) and five b-values (0–2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at a b-value of 0 s/mm2 [ADC0] and kurtosis [K]) parameters were estimated from the IVIM and kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions. PMID:29494639

  12. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
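The paper's central observation, that the inter-impulse interval distribution is nearly independent of the noise distribution, is easy to reproduce with a toy integrate-and-fire model; the threshold and the moment-matched noise parameters below are illustrative, not the paper's simulation settings.

```python
import numpy as np

def isi_intervals(noise, threshold=10.0):
    """Integrate-and-fire: accumulate noisy input and emit an impulse
    (resetting the integrator) whenever the threshold is reached.
    Returns the inter-impulse intervals in time steps."""
    v, last, intervals = 0.0, 0, []
    for t, x in enumerate(noise):
        v += x
        if v >= threshold:
            intervals.append(t - last)
            last, v = t, 0.0
    return np.array(intervals)

rng = np.random.default_rng(3)
n, mu, sd = 100_000, 1.0, 1.0                  # noise sources matched in mean and sd
inputs = {
    "gaussian": rng.normal(mu, sd, n),                                  # normokurtic
    "gamma": rng.gamma(shape=1.0, scale=1.0, size=n),                   # skewed, leptokurtic
    "uniform": rng.uniform(mu - np.sqrt(3) * sd, mu + np.sqrt(3) * sd, n),  # platykurtic
}
mean_isi = {name: isi_intervals(x).mean() for name, x in inputs.items()}
```

With matched first and second moments, the mean intervals from the three very different noise distributions agree closely, echoing the abstract's point that the interval distribution mostly forgets the shape of the causative noise.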

  13. Ionospheric scintillation by a random phase screen: Spectral approach

    NASA Technical Reports Server (NTRS)

    Rufenach, C. L.

    1975-01-01

The theory developed by Briggs and Parkin, given in terms of an anisotropic Gaussian correlation function, is extended to a spectral description specified as a continuous function of spatial wavenumber with an intrinsic outer scale as would be expected from a turbulent medium. Two spectral forms were selected for comparison: (1) a power-law variation in wavenumber with a constant three-dimensional index equal to 4, and (2) a Gaussian spectral variation. The results are applied to the F-region ionosphere with an outer-scale wavenumber of 2 per km (approximately equal to the Fresnel wavenumber) for the power-law variation, and 0.2 per km for the Gaussian spectral variation. The power-law form with a small outer-scale wavenumber is consistent with recent F-region in-situ measurements, whereas the Gaussian form is mathematically convenient and, hence, was mostly used in previous developments before the recent in-situ measurements. Some comparison with microwave scintillation in equatorial areas is made.

  14. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    PubMed

    Kang; Ih; Kim; Kim

    2000-03-01

In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy to predict the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. The comparison between measurement and prediction shows good agreement, which validates the proposed Gaussian function approach.
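The proposed approach can be sketched as replacing the uniform (random-incidence) weighting in the transmission average with a Gaussian function of the incidence angle. The sketch below uses the single-panel mass law and an illustrative 30° spread, not the paper's fitted distribution or multilayer model.

```python
import numpy as np

rho_c = 415.0                      # characteristic impedance of air [Pa*s/m]
m, f = 10.0, 1000.0                # panel surface mass [kg/m^2], frequency [Hz]
eta = 2.0 * np.pi * f * m / (2.0 * rho_c)

def tau(theta):
    """Mass-law transmission coefficient of a single limp panel."""
    return 1.0 / (1.0 + (eta * np.cos(theta)) ** 2)

def stl(weight):
    """STL with a directional weighting W(theta) of the incident energy,
    averaged over the hemisphere with the usual sin*cos solid-angle factor."""
    th = np.linspace(0.0, np.pi / 2.0, 2001)
    mid, dth = 0.5 * (th[1:] + th[:-1]), np.diff(th)   # midpoint rule
    w = weight(mid) * np.sin(mid) * np.cos(mid)
    t_eff = np.sum(tau(mid) * w * dth) / np.sum(w * dth)
    return -10.0 * np.log10(t_eff)

sigma = np.deg2rad(30.0)                               # illustrative angular spread
stl_random = stl(np.ones_like)                         # uniform (diffuse) weighting
stl_gauss = stl(lambda th: np.exp(-th**2 / (2.0 * sigma**2)))  # Gaussian weighting
stl_normal = -10.0 * np.log10(tau(0.0))                # normal incidence
```

Because the Gaussian weight suppresses near-grazing angles (where transmission is highest), the predicted STL lands between the normal-incidence and fully diffuse values, which is the qualitative effect the paper exploits.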

  15. Gaussian variational ansatz in the problem of anomalous sea waves: Comparison with direct numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruban, V. P., E-mail: ruban@itp.ac.ru

    2015-05-15

The nonlinear dynamics of an obliquely oriented wave packet on a sea surface is analyzed analytically and numerically for various initial parameters of the packet in relation to the problem of the so-called rogue waves. Within the Gaussian variational ansatz applied to the corresponding (1+2)-dimensional hyperbolic nonlinear Schrödinger equation (NLSE), a simplified Lagrangian system of differential equations is derived that describes the evolution of the coefficients of the real and imaginary quadratic forms appearing in the Gaussian. This model provides a semi-quantitative description of the process of nonlinear spatiotemporal focusing, which is one of the most probable mechanisms of rogue wave formation in random wave fields. The system of equations is integrated in quadratures, which allows one to better understand the qualitative differences between linear and nonlinear focusing regimes of a wave packet. Predictions of the Gaussian model are compared with the results of direct numerical simulation of fully nonlinear long-crested waves.

  16. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. 
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
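The core step, quadrature nodes and weights from a handful of matrix operations on the Hankel matrix of moments, can be sketched as a Cholesky factorization followed by an eigendecomposition of the resulting Jacobi matrix (the classical Golub-Welsch construction; whether SAMBA's implementation matches this exactly is an assumption). Feeding in standard-normal moments should reproduce the 3-point probabilists' Gauss-Hermite rule.

```python
import numpy as np

def quadrature_from_moments(moments):
    """Gaussian quadrature nodes/weights from raw moments m_0..m_{2n}:
    Cholesky-factor the Hankel moment matrix, build the tridiagonal Jacobi
    matrix from the factor, and diagonalize (Golub-Welsch)."""
    m = np.asarray(moments, dtype=float)
    n = (len(m) - 1) // 2
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(M).T                 # M = R^T R, R upper triangular
    alpha = [R[j, j + 1] / R[j, j]
             - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0) for j in range(n)]
    beta = [R[j, j] / R[j - 1, j - 1] for j in range(1, n)]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = m[0] * V[0, :] ** 2               # squared first eigenvector components
    return nodes, weights

# standard-normal moments 1, 0, 1, 0, 3, 0, 15 -> 3-point Gauss-Hermite rule
nodes, weights = quadrature_from_moments([1, 0, 1, 0, 3, 0, 15])
```

The same routine accepts empirical moments of a histogram or data set, which is exactly the regime the abstract targets: no distributional assumption is needed beyond a positive-definite moment matrix.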

  17. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. 
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  18. Structured Spatial Modeling and Mapping of Domestic Violence Against Women of Reproductive Age in Rwanda.

    PubMed

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2018-03-01

The main objective of this study was to assess the risk factors and spatial correlates of domestic violence against women of reproductive age in Rwanda. A structured spatial approach was used to account for the nonlinear nature of some covariates and the spatial variability of domestic violence. The nonlinear effect was modeled through a second-order random walk, and the structured spatial effect was modeled through Gaussian Markov Random Fields specified as an intrinsic conditional autoregressive model. The data from the Rwanda Demographic and Health Survey 2014/2015 were used as an application. The findings of this study revealed that the risk factors of domestic violence against women are the wealth quintile of the household, the size of the household, the husband or partner's age, the husband or partner's level of education, ownership of the house, polygamy, the alcohol consumption status of the husband or partner, the woman's attitude toward wife-beating, and the use of contraceptive methods. The study also highlighted the significant spatial variation of domestic violence against women at the district level.

  19. Gate sequence for continuous variable one-way quantum computation

    PubMed Central

    Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi

    2013-01-01

Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to the absence of cluster states with more than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of the two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.

  20. Weak constrained localized ensemble transform Kalman filter for radar data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana; Lange, Heiner

    2015-04-01

Applications on convective scales require data assimilation with a numerical model with single-digit horizontal resolution in kilometres and time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates these two requirements. However, some challenges for convective-scale applications remain unresolved when using the EnKF approach. These include the need on convective scales to estimate fields that are nonnegative (such as rain, graupel, and snow) and the use of data sets such as radar reflectivity or cloud products that have the same property. What underlies these examples are errors that are non-Gaussian in nature, causing a problem for the EnKF, which uses Gaussian error assumptions to produce the estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. In order to deal with non-Gaussian errors, different approaches can be taken in the EnKF framework. For example, variables can be transformed by assuming the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal and truncated Gaussian distributions, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. 2014, it was shown on a simple example how conservation of mass could be beneficial for the assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason for this is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF) but not both. 
Only when both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation of Janjic et al. 2014 is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the non-hydrostatic COSMO model with a 2 km horizontal resolution, and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al. 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests done within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated by the bias values during assimilation and prediction.
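The bias mechanism described above, zeroing the negative analysis values of a nonnegative field, is easy to demonstrate in isolation; the numbers below are purely illustrative, not COSMO/KENDA output.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = 0.1                                        # small nonnegative hydrometeor amount
analysis = truth + rng.normal(0.0, 0.5, 100_000)   # Gaussian analysis ensemble
                                                   # (error sd chosen large for clarity)
clipped = np.maximum(analysis, 0.0)                # "set negative values to zero"
bias_raw = analysis.mean() - truth                 # ~0: the Gaussian analysis is unbiased
bias_clipped = clipped.mean() - truth              # positive bias introduced by clipping
mass_change = clipped.sum() - analysis.sum()       # clipping also adds (does not conserve) mass
```

Clipping restores positivity but systematically inflates both the mean and the total mass, which is why the abstract argues for enforcing the constraints inside the analysis step rather than as post-processing.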

  1. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by the vertical dispersion parameter (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
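A textbook sector-averaged form of the straight-line Gaussian plume (the form typically paired with a 16-sector joint frequency distribution) makes the wind-speed sensitivity explicit, since χ/Q scales as 1/u. The dispersion and stack parameters below are illustrative, not Hamby's configuration.

```python
import numpy as np

def sector_avg_chi_q(x, u, H, sigma_z, width=2.0 * np.pi / 16.0):
    """Sector-averaged relative concentration chi/Q [s/m^3] of the straight-line
    Gaussian plume for a ground-level receptor (textbook form; 16 sectors):
        chi/Q = sqrt(2/pi) * exp(-H^2 / (2*sigma_z^2)) / (u * sigma_z * x * width)
    x: downwind distance [m], u: wind speed [m/s], H: effective stack height [m]."""
    return (np.sqrt(2.0 / np.pi) * np.exp(-H**2 / (2.0 * sigma_z**2))
            / (u * sigma_z * x * width))

x, sigma_z, H = 1000.0, 32.0, 30.0   # 1 km downwind; illustrative sigma_z and stack height
c_slow = sector_avg_chi_q(x, u=2.0, H=H, sigma_z=sigma_z)
c_fast = sector_avg_chi_q(x, u=4.0, H=H, sigma_z=sigma_z)
```

Halving the wind speed exactly doubles χ/Q, while σz enters both the prefactor and the exponential, which is consistent with the sensitivity ordering reported in the abstract.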

  2. Entanglement sensitivity to signal attenuation and amplification

    NASA Astrophysics Data System (ADS)

    Filippov, Sergey N.; Ziman, Mário

    2014-07-01

We analyze general laws of continuous-variable entanglement dynamics during the deterministic attenuation and amplification of the physical signal carrying the entanglement. These processes are inevitably accompanied by noises, so we find fundamental limitations on the noise intensities that destroy entanglement of Gaussian and non-Gaussian input states. The phase-insensitive amplification Φ1⊗Φ2⊗⋯⊗ΦN with power gain κi ≥ 2 (≈3 dB, i = 1,...,N) is shown to destroy entanglement of any N-mode Gaussian state even in the case of quantum-limited performance. In contrast, we demonstrate non-Gaussian states with the energy of a few photons such that their entanglement survives within a wide range of noises beyond quantum-limited performance for any degree of attenuation or gain. We examine the entanglement preservation properties of the channel Φ1⊗Φ2, where each mode is deterministically attenuated or amplified. Gaussian states of high energy are shown to be robust to very asymmetric attenuations, whereas non-Gaussian states are at an advantage in the case of symmetric attenuation and general amplification. If Φ1 = Φ2, the total noise should not exceed (1/2)√(κ² + 1) to guarantee entanglement preservation.

  3. Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monras, Alex; Illuminati, Fabrizio

    2011-01-15

    We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that in high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only can the minimum variance be achieved at fixed probe states, but the optimal state is also common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.

  4. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    Summary: The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143

  5. Moment Lyapunov Exponent and Stochastic Stability of Binary Airfoil under Combined Harmonic and Non-Gaussian Colored Noise Excitations

    NASA Astrophysics Data System (ADS)

    Hu, D. L.; Liu, X. B.

    Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, and especially the dynamic stability, of systems under parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, the second-order expansions of the moment Lyapunov exponent are obtained, which agree well with the results obtained by Monte Carlo simulation. Finally, the effects of the noise and parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.

  6. Diffraction study of duty-cycle error in ferroelectric quasi-phase-matching gratings with Gaussian beam illumination

    NASA Astrophysics Data System (ADS)

    Dwivedi, Prashant Povel; Kumar, Challa Sesha Sai Pavan; Choi, Hee Joo; Cha, Myoungsik

    2016-02-01

    Random duty-cycle error (RDE) is inherent in the fabrication of ferroelectric quasi-phase-matching (QPM) gratings. Although a small RDE may not affect the nonlinearity of QPM devices, it enhances non-phase-matched parasitic harmonic generations, limiting the device performance in some applications. Recently, we demonstrated a simple method for measuring the RDE in QPM gratings by analyzing the far-field diffraction pattern obtained by uniform illumination (Dwivedi et al. in Opt Express 21:30221-30226, 2013). In the present study, we used a Gaussian beam illumination for the diffraction experiment to measure noise spectra that are less affected by the pedestals of the strong diffraction orders. Our results were compared with our calculations based on a random grating model, demonstrating improved resolution in the RDE estimation.

  7. LETTER TO THE EDITOR: Phase transition in a random fragmentation problem with applications to computer science

    NASA Astrophysics Data System (ADS)

    Dean, David S.; Majumdar, Satya N.

    2002-08-01

    We study a fragmentation problem where an initial object of size x is broken into m random pieces provided x > x0 where x0 is an atomic cut-off. Subsequently, the fragmentation process continues for each of those daughter pieces whose sizes are bigger than x0. The process stops when all the fragments have sizes smaller than x0. We show that the fluctuation of the total number of splitting events, characterized by the variance, generically undergoes a nontrivial phase transition as one tunes the branching number m through a critical value m = mc. For m < mc, the fluctuations are Gaussian whereas for m > mc they are anomalously large and non-Gaussian. We apply this general result to analyse two different search algorithms in computer science.
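    The splitting statistics described above can be explored with a direct Monte Carlo sketch. The uniform stick-breaking rule for the m daughter pieces below is an illustrative assumption, not necessarily the fragmentation kernel of the paper.

```python
import random

def count_splits(x, m, x0, rng):
    """Recursively fragment an object of size x into m pieces (uniform
    stick-breaking, an illustrative choice) until every piece is below
    the atomic cut-off x0.  Returns the total number of splitting events."""
    if x <= x0:
        return 0
    cuts = sorted(rng.random() for _ in range(m - 1))
    edges = [0.0] + cuts + [1.0]
    pieces = [x * (b - a) for a, b in zip(edges, edges[1:])]
    return 1 + sum(count_splits(p, m, x0, rng) for p in pieces)

rng = random.Random(0)
# Estimate the mean and variance of the number of splits for m = 2.
samples = [count_splits(1.0, 2, 0.01, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

    Sweeping m and watching how var grows with the system size would expose the Gaussian versus anomalous regimes the abstract describes.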

  8. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  9. Light-curve Instabilities of β Lyrae Observed by the BRITE Satellites

    NASA Astrophysics Data System (ADS)

    Rucinski, Slavek M.; Pigulski, Andrzej; Popowicz, Adam; Kuschnig, Rainer; Kozłowski, Szymon; Moffat, Anthony F. J.; Pavlovski, Krešimir; Handler, Gerald; Pablo, H.; Wade, G. A.; Weiss, Werner W.; Zwintz, Konstanze

    2018-07-01

    Photometric instabilities of β Lyrae (β Lyr) were observed in 2016 by two red-filter BRITE satellites over more than 10 revolutions of the binary, with ∼100 minute sampling. Analysis of the time series shows that flares or fading events take place typically three to five times per binary orbit. The amplitudes of the disturbances (relative to the mean light curve, in units of the maximum out-of-eclipse light flux, f.u.) are characterized by a Gaussian distribution with σ = 0.0130 ± 0.0004 f.u. Most of the disturbances appear to be random, with a tendency to remain for one or a few orbital revolutions, sometimes changing from brightening to fading or the reverse. Phases just preceding the center of the deeper eclipse showed the most scatter while phases around the secondary eclipse were the quietest. This implies that the invisible companion is the most likely source of the instabilities. Wavelet transform analysis showed the domination of the variability scales at phase intervals 0.05–0.3 (0.65–4 days), with the shorter (longer) scales dominating in numbers (variability power) in this range. The series can be well described as a stochastic Gaussian process with the signal at short timescales showing a slightly stronger correlation than red noise. The signal decorrelation timescale, τ = (0.068 ± 0.018) in phase or (0.88 ± 0.23) days, appears to follow the same dependence on the accretor mass as that observed for active galactic nucleus and quasi-stellar object masses five to nine orders of magnitude larger than the β Lyr torus-hidden component.

  10. Dynamic laser speckle analyzed considering inhomogeneities in the biological sample

    NASA Astrophysics Data System (ADS)

    Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico

    2017-04-01

    Dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes that are quantified by second-order statistics applied to the images in time using a secondary matrix known as the time history of the speckle pattern (THSP). To keep computation time manageable, the traditional way to build the THSP restricts the data to a single line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatial random approach to collect the points that form a THSP. Cells in a culture medium and in drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the outcomes using the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
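    A minimal sketch of building a THSP from randomly selected pixels rather than a single column; `thsp` and `random_points` are our own illustrative helpers, not the authors' code.

```python
import random

def thsp(frames, points):
    """Time History of the Speckle Pattern: one row per tracked pixel,
    one column per frame.  'frames' is a list of 2-D intensity images
    (lists of lists); 'points' is a list of (row, col) pixel positions."""
    return [[frame[r][c] for frame in frames] for (r, c) in points]

def random_points(height, width, n, rng):
    """The random spatial sampling discussed above: pick n pixel
    positions uniformly over the whole image instead of one column."""
    return [(rng.randrange(height), rng.randrange(width)) for _ in range(n)]

# Synthetic stand-in for a speckle image stack: 8 frames of 16x16 noise.
rng = random.Random(1)
frames = [[[rng.random() for _ in range(16)] for _ in range(16)] for _ in range(8)]
pts = random_points(16, 16, 10, rng)
matrix = thsp(frames, pts)
```

    Second-order statistics (e.g. the co-occurrence matrix) would then be computed on `matrix` exactly as with a column-based THSP.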

  11. Hopping Conduction in Polymers

    NASA Astrophysics Data System (ADS)

    Bässler, Heinz

    The concept of hopping within a Gaussian density of localized states introduced earlier to rationalize charge transport in random organic photoconductors is developed further to account for temporal features of time of flight (TOF) signals. At moderate degree of energetic disorder (σ/kT~3.5…4.5) there is a transport regime intermediate between dispersive and quasi-Gaussian type whose signatures are (i) universal TOF signals that can appear weakly dispersive despite yielding a well defined carrier mobility and (ii) an asymmetric propagator of the carrier packet yielding a time dependent diffusivity.

  12. Efficiency-enhanced photon sieve using Gaussian/overlapping distribution of pinholes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabatyan, A.; Mirzaie, S.

    2011-04-10

    A class of photon sieve is introduced whose structure is based on the overlapping pinholes in the innermost zones. This kind of distribution is produced by, for example, a particular form of Gaussian function. The focusing property of the proposed model was examined theoretically and experimentally. It is shown that under He-Ne laser and white light illumination, the focal spot size of this novel structure has considerably smaller FWHM than a photon sieve with randomly distributed pinholes and a Fresnel zone plate. In addition, secondary maxima have been suppressed effectively.

  13. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
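    One concrete symptom of the non-Markov structure noted above: successive increments of the observed series are negatively correlated, whereas increments of the underlying Brownian motion are independent. A quick simulation sketch (parameter values are illustrative):

```python
import random

def observed_path(n, dt, sigma_err, rng):
    """Brownian motion observed at n equally spaced times, each
    observation corrupted by independent N(0, sigma_err^2) measurement
    error -- a minimal sketch of the model discussed above."""
    b, ys = 0.0, []
    for _ in range(n):
        b += rng.gauss(0.0, dt ** 0.5)
        ys.append(b + rng.gauss(0.0, sigma_err))
    return ys

rng = random.Random(42)
ys = observed_path(200_000, 1.0, 1.0, rng)
d = [y1 - y0 for y0, y1 in zip(ys, ys[1:])]
mu = sum(d) / len(d)
num = sum((d[i] - mu) * (d[i + 1] - mu) for i in range(len(d) - 1))
den = sum((x - mu) ** 2 for x in d)
# Lag-1 autocorrelation of observed increments; in theory it equals
# -sigma_err^2 / (dt + 2 sigma_err^2) = -1/3 for these parameters.
lag1_corr = num / den
```

    The induced MA(1) structure in the increments is why the expected occupation time depends on the whole observed path, not just the two bracketing observations.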

  14. Metastates in Mean-Field Models with Random External Fields Generated by Markov Chains

    NASA Astrophysics Data System (ADS)

    Formentin, M.; Külske, C.; Reichenbachs, A.

    2012-01-01

    We extend the construction by Külske and Iacobelli of metastates in finite-state mean-field models in independent disorder to situations where the local disorder terms are a sample of an external ergodic Markov chain in equilibrium. We show that for non-degenerate Markov chains, the structure of the theorems is analogous to the case of i.i.d. variables when the limiting weights in the metastate are expressed with the aid of a CLT for the occupation time measure of the chain. As a new phenomenon we also show in a Potts example that for a degenerate non-reversible chain this CLT approximation is not enough, and that the metastate can have less symmetry than the symmetry of the interaction and a Gaussian approximation of disorder fluctuations would suggest.

  15. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  16. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods at normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are at normalizing it. Specifically, transformation with parameter λ = −1 leads to the best results.
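    The effect of the λ = −1 transformation can be illustrated on synthetic Ex-Gaussian reaction times; the parameter values below are ours, not those of the simulation study.

```python
import random

def skewness(xs):
    """Sample skewness (population formula); 0 for a symmetric sample."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

rng = random.Random(7)
# Ex-Gaussian reaction times (seconds): Normal(0.4, 0.05) + Exponential(tau = 0.1)
rts = [rng.gauss(0.4, 0.05) + rng.expovariate(1.0 / 0.1) for _ in range(50_000)]
# Power transform with lambda = -1; the sign flip preserves score ordering.
recip = [-1.0 / x for x in rts]
```

    The raw sample is strongly right-skewed (the exponential tail), while the transformed sample is far closer to symmetric, which is what makes it more suitable for ANOVA-style tests.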

  17. Increased Intra-Individual Reaction Time Variability in Attention-Deficit/Hyperactivity Disorder across Response Inhibition Tasks with Different Cognitive Demands

    ERIC Educational Resources Information Center

    Vaurio, Rebecca G.; Simmonds, Daniel J.; Mostofsky, Stewart H.

    2009-01-01

    One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components and Fast Fourier transform (FFT) that allow for detailed examination of the…

  18. Fock expansion of multimode pure Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it

    2015-12-15

    The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.

  19. A Method for Determining the Nominal Ocular Hazard Zone for Gaussian Beam Laser Rangers with a Firmware Controlled Variable Focal Length

    NASA Technical Reports Server (NTRS)

    Picco, C. E.; Shavers, M. R.; Victor, J. M.; Duron, J. L.; Bowers, W. H.; Gillis, D. B.; VanBaalen, M.

    2009-01-01

    LIDAR systems that maintain a constant beam spot size on a retroreflector in order to increase the accuracy of bearing and ranging data must use a software-controlled variable-position lens. These systems periodically update the estimated range and set the position of the focusing lens accordingly. In order to precisely calculate the NOHD for such a system, the software method for setting the variable-position lens and Gaussian laser propagation can be used to calculate the irradiance at any point given the range estimation. NASA's Space Shuttle LIDAR, called the Trajectory Control Sensor (TCS), uses this configuration. Analytical tools were developed using Excel and VBA to determine the radiant energy delivered to the International Space Station (ISS) crewmembers' eyes while viewing the shuttle on approach and departure. Various viewing scenarios are considered, including the use of through-the-lens imaging optics and the window transmissivity at the TCS wavelength. The methodology incorporates the TCS system control logic, Gaussian laser propagation, potential failure-mode end states, and guidance from the American National Standard for Safe Use of Lasers (ANSI Z136.1-2007). This approach can be adapted for laser safety analyses of similar LIDAR systems.
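    A minimal sketch of the Gaussian-beam propagation underlying such a hazard analysis. The beam parameters and the MPE-like limit used below are illustrative placeholders, not TCS values or ANSI limits, and `hazard_distance` is our own bisection helper for the NOHD-style range.

```python
import math

def beam_radius(z, w0, wavelength):
    """1/e^2 radius of a Gaussian beam at distance z from its waist w0."""
    z_r = math.pi * w0 ** 2 / wavelength   # Rayleigh range
    return w0 * math.sqrt(1 + (z / z_r) ** 2)

def peak_irradiance(power, z, w0, wavelength):
    """On-axis irradiance (W/m^2): I0 = 2 P / (pi w(z)^2)."""
    w = beam_radius(z, w0, wavelength)
    return 2 * power / (math.pi * w ** 2)

def hazard_distance(power, w0, wavelength, mpe):
    """Range (found by bisection) beyond which the on-axis irradiance
    drops below 'mpe' -- an NOHD-style quantity.  'mpe' here is an
    illustrative limit, not the ANSI Z136.1 value for any real system."""
    lo, hi = 0.0, 1e7
    if peak_irradiance(power, lo, w0, wavelength) <= mpe:
        return 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if peak_irradiance(power, mid, w0, wavelength) > mpe:
            lo = mid
        else:
            hi = mid
    return hi
```

    A variable-focus system like the one described would re-evaluate this with the lens-commanded waist at each range update.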

  20. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey the information and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sophisticated sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels; this information is then used in the adaptive quadrature decoding process. We define the technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme allows one to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  1. On fatigue crack growth under random loading

    NASA Astrophysics Data System (ADS)

    Zhu, W. Q.; Lin, Y. K.; Lei, Y.

    1992-09-01

    A probabilistic analysis of the fatigue crack growth, fatigue life and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and theory of random processes. The material resistance to fatigue crack growth and the time-history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process, and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
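    The deterministic core of such an analysis is integration of a crack-growth law. The sketch below integrates the Paris-Erdogan law with an illustrative geometry factor of 1 and illustrative material constants; randomizing the stress history or the material resistance, as in the paper, would wrap this in a Monte Carlo loop.

```python
import math

def cycles_to_grow(a0, af, d_sigma, C=1e-11, m=3.0, steps=100_000):
    """Number of load cycles for a crack to grow from a0 to af (meters)
    under the Paris-Erdogan law da/dN = C * (dK)^m, with stress-intensity
    range dK = d_sigma * sqrt(pi * a) (MPa sqrt(m); geometry factor taken
    as 1, an illustrative simplification).  Trapezoidal integration of
    dN = da / (C * dK^m)."""
    def inv_rate(a):
        dk = d_sigma * math.sqrt(math.pi * a)
        return 1.0 / (C * dk ** m)
    h = (af - a0) / steps
    total = 0.5 * (inv_rate(a0) + inv_rate(af))
    for i in range(1, steps):
        total += inv_rate(a0 + i * h)
    return total * h
```

    With m = 3, doubling the stress range cuts the fatigue life by a factor of 2³ = 8, which is why the random stress history dominates the life distribution.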

  2. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature only deals with a simplified channel model that considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors are capable of accurately demodulating the SPAD-based PAM signals.
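    The GAT step can be sketched as follows. The variance of the transformed samples should sit near 1 regardless of the Poisson rate, which is what makes the AWGN approximation usable; the gain and σ values are illustrative, and the Poisson sampler is a simple Knuth-style routine, not anything from the paper.

```python
import math
import random

def gat(z, sigma, gain=1.0, mu=0.0):
    """Generalized Anscombe transform: maps a Poisson-Gaussian variate
    z = gain * Poisson + N(mu, sigma^2) to an approximately unit-variance
    Gaussian variable.  The square-root argument is clipped at zero."""
    arg = gain * z + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * math.sqrt(max(arg, 0.0))

def poisson(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for moderate rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(3)
sigma = 2.0
stab = {}
for lam in (20.0, 60.0):
    zs = [poisson(lam, rng) + rng.gauss(0.0, sigma) for _ in range(20_000)]
    ts = [gat(z, sigma) for z in zs]
    m = sum(ts) / len(ts)
    # Variance of the transformed samples: close to 1 for both rates,
    # i.e. the signal-dependent noise has been (approximately) stabilized.
    stab[lam] = sum((t - m) ** 2 for t in ts) / len(ts)
```

    After this stabilization, a standard hard-decision AWGN detector can be applied to the transformed samples, which is the design choice the abstract describes.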

  3. A phenomenological approach to modeling chemical dynamics in nonlinear and two-dimensional spectroscopy.

    PubMed

    Ramasesha, Krupa; De Marco, Luigi; Horning, Andrew D; Mandal, Aritra; Tokmakoff, Andrei

    2012-04-07

    We present an approach for calculating nonlinear spectroscopic observables, which overcomes the approximations inherent to current phenomenological models without requiring the computational cost of performing molecular dynamics simulations. The trajectory mapping method uses the semi-classical approximation to linear and nonlinear response functions, and calculates spectra from trajectories of the system's transition frequencies and transition dipole moments. It rests on identifying dynamical variables important to the problem, treating the dynamics of these variables stochastically, and then generating correlated trajectories of spectroscopic quantities by mapping from the dynamical variables. This approach allows one to describe non-Gaussian dynamics, correlated dynamics between variables of the system, and nonlinear relationships between spectroscopic variables of the system and the bath such as non-Condon effects. We illustrate the approach by applying it to three examples that are often not adequately treated by existing analytical models--the non-Condon effect in the nonlinear infrared spectra of water, non-Gaussian dynamics inherent to strongly hydrogen bonded systems, and chemical exchange processes in barrier crossing reactions. The methods described are generally applicable to nonlinear spectroscopy throughout the optical, infrared and terahertz regions.

  4. Genuine multipartite entanglement of symmetric Gaussian states: Strong monogamy, unitary localization, scaling behavior, and molecular sharing structure

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Illuminati, Fabrizio

    2008-10-01

    We investigate the structural aspects of genuine multipartite entanglement in Gaussian states of continuous-variable systems. Generalizing the results of Adesso and Illuminati [Phys. Rev. Lett. 99, 150501 (2007)], we analyze whether the entanglement shared by blocks of modes distributes according to a strong monogamy law. This property, once established, allows us to quantify the genuine N-partite entanglement not encoded into 2,…,K,…,(N−1)-partite quantum correlations. Strong monogamy is numerically verified, and the explicit expression of the measure of residual genuine multipartite entanglement is analytically derived, by a recursive formula, for a subclass of Gaussian states. These are fully symmetric (permutation-invariant) states that are multipartitioned into blocks, each consisting of an arbitrarily assigned number of modes. We compute the genuine multipartite entanglement shared by the blocks of modes and investigate its scaling properties with the number and size of the blocks, the total number of modes, the global mixedness of the state, and the squeezed resources needed for state engineering. To achieve the exact computation of the block entanglement, we introduce and prove a general result of symplectic analysis: correlations among K blocks in N-mode multisymmetric and multipartite Gaussian states, which are locally invariant under permutation of modes within each block, can be transformed by a local (with respect to the partition) unitary operation into correlations shared by K single modes, one per block, in effective nonsymmetric states where N−K modes are completely uncorrelated. Due to this theorem, the above results, such as the derivation of the explicit expression for the residual multipartite entanglement, its nonnegativity, and its scaling properties, extend to the subclass of non-symmetric Gaussian states that are obtained by the unitary localization of the multipartite entanglement of symmetric states. These findings provide strong numerical evidence that the distributed Gaussian entanglement is strongly monogamous under and possibly beyond specific symmetry constraints, and that the residual continuous-variable tangle is a proper measure of genuine multipartite entanglement for permutation-invariant Gaussian states under any multipartition of the modes.

  5. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
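    A quick way to see that such a product process is non-Gaussian is its kurtosis, which exceeds the Gaussian value of 3 whenever the amplitude is genuinely random. The two-valued amplitude below is an illustrative choice, not the amplitude process of the report.

```python
import random

def kurtosis(xs):
    """Sample kurtosis (population formula); equals 3 for a Gaussian."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s4 = sum((x - m) ** 4 for x in xs) / n
    return s4 / s2 ** 2

rng = random.Random(11)
# Product-model sketch: random amplitude A (two-valued here, purely for
# illustration) times an independent standard Gaussian G; the mean-value
# process is taken to be zero.
xs = [rng.choice((0.5, 2.0)) * rng.gauss(0.0, 1.0) for _ in range(100_000)]
kurt = kurtosis(xs)  # exceeds 3: the mixture of scales fattens the tails
```

    This scale-mixture heaviness of the tails is exactly the feature that makes such processes attractive as turbulence models.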

  6. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    PubMed

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more of the accepted features of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.

  7. Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams

    NASA Technical Reports Server (NTRS)

    Mackowski, Daniel W.; Mishchenko, Michael I.

    2011-01-01

    The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.

  8. Nuclear DNA contents of Echinochloa crus-galli and its Gaussian relationships with environments

    NASA Astrophysics Data System (ADS)

    Li, Dan-Dan; Lu, Yong-Liang; Guo, Shui-Liang; Yin, Li-Ping; Zhou, Ping; Lou, Yu-Xia

    2017-02-01

    Previous studies on plant nuclear DNA content variation and its relationships with environmental gradients produced conflicting results. We speculated that the relationships between the nuclear DNA content of a widely distributed species and environmental gradients might be non-linear if the species is sampled across a large geographical gradient. Echinochloa crus-galli (L.) P. Beauv. is a worldwide species, but there are no reports on intraspecific variation in its nuclear DNA content. Our objectives were: 1) to determine the scope of intraspecific variation in the nuclear DNA content of E. crus-galli, and 2) to test whether the nuclear DNA content of the species changes with environmental gradients following Gaussian models when its populations are sampled across a large geographical gradient. We collected seeds of 36 Chinese populations of E. crus-galli across a wide geographical gradient and sowed them in a homogeneous field to obtain offspring for determination of nuclear DNA content. We analyzed the relationships of the nuclear DNA content of these populations with latitude, longitude, and nineteen bioclimatic variables using Gaussian and linear models. (1) Nuclear DNA content varied from 2.113 to 2.410 pg among the 36 Chinese populations of E. crus-galli, with a mean value of 2.256 pg. (2) Gaussian correlations of nuclear DNA content (y) with geographical gradients were detected, with latitude (x) following y = 2.2923 * e^(-(x - 24.9360)^2 / (2 * 63.7945^2)) (r = 0.546, P < 0.001), and with longitude (x) following y = 2.2933 * e^(-(x - 116.1801)^2 / (2 * 44.7450^2)) (r = 0.672, P < 0.001). (3) Among the nineteen bioclimatic variables, all except temperature isothermality and the precipitation of the wettest month, the wettest quarter, and the warmest quarter were better fit to nuclear DNA content by Gaussian models than by linear models. There is intraspecific variation among the 36 Chinese populations of E. crus-galli, and Gaussian models can be applied to fit the correlations of its nuclear DNA content with geographical and most bioclimatic gradients.
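    A Gaussian response curve of the form y = a * e^(-(x - b)^2 / (2 c^2)) can be fit by noting that its logarithm is quadratic in x. The following numpy sketch uses synthetic data standing in for the 36 populations (the parameter values echo the latitude fit above but are otherwise illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "nuclear DNA content vs latitude" data from a Gaussian response
# curve y = a * exp(-(x - b)^2 / (2 c^2)); values chosen for illustration
a_true, b_true, c_true = 2.29, 24.94, 63.79
x = np.linspace(18.0, 46.0, 36)                 # latitudes of 36 populations
y = a_true * np.exp(-(x - b_true) ** 2 / (2 * c_true ** 2))
y *= np.exp(rng.normal(0.0, 0.002, x.size))     # small multiplicative noise

# Taking logs turns the Gaussian model into a quadratic:
#   ln y = alpha x^2 + beta x + gamma,
# with alpha = -1/(2 c^2), beta = b/c^2, gamma = ln a - b^2/(2 c^2)
alpha, beta, gamma = np.polyfit(x, np.log(y), 2)

c_hat = np.sqrt(-1.0 / (2.0 * alpha))
b_hat = beta * c_hat ** 2
a_hat = np.exp(gamma + b_hat ** 2 / (2 * c_hat ** 2))
```

    For noisier data, a nonlinear least-squares fit directly on the original scale would weight the observations more appropriately.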

  9. Gravitational lensing by eigenvalue distributions of random matrix models

    NASA Astrophysics Data System (ADS)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
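    For intuition, eigenvalue densities of the Gaussian unitary ensemble (GUE) mentioned above are easy to sample numerically; a minimal sketch of the density (not of the authors' lensing computation), assuming the standard normalization under which the rescaled spectrum follows the Wigner semicircle law:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Sample one matrix from the Gaussian Unitary Ensemble (GUE):
# Hermitian, with independent complex Gaussian entries above the diagonal
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2.0

# Eigenvalues of a Hermitian matrix are real; rescaled by sqrt(n), their
# density converges to the Wigner semicircle on [-2, 2] as n grows
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)
```

    Such an empirical eigenvalue density is the kind of mass distribution the abstract proposes to insert into the lens equation.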

  10. Symmetry Breaking in a random passive scalar

    NASA Astrophysics Data System (ADS)

    Kilic, Zeliha; McLaughlin, Richard; Camassa, Roberto

    2017-11-01

    We consider the evolution of a decaying passive scalar in the presence of a Gaussian white noise fluctuating shear flow. We focus on deterministic initial data and establish the short-, intermediate-, and long-time symmetry properties of the evolving pointwise probability measure for the random passive scalar. Analytical results are compared directly to Monte Carlo simulations. Time permitting, we will compare the predictions to experimental observations.

  11. Bayesian spatial transformation models with applications in neuroimaging data.

    PubMed

    Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G

    2013-12-01

    The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.
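    The Box-Cox step in the model above can be sketched in isolation: choose the transformation parameter by maximizing the Gaussian profile log-likelihood of the transformed data. This is a generic one-dimensional sketch on synthetic data, not the authors' spatially varying model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Skewed positive "imaging measures" (log-normal, so lambda near 0 is best)
x = np.exp(rng.normal(0.0, 0.5, 2000))

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of lambda under a Gaussian model for the
    transformed data (sketch of the criterion, not the Bayesian posterior)."""
    z = boxcox(x, lam)
    n = x.size
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(x).sum()

# Grid search for the best transformation parameter
grid = np.linspace(-2.0, 2.0, 81)
lam_hat = max(grid, key=lambda l: boxcox_loglik(x, l))
```

    In the STM, a transformation parameter of this kind varies spatially and is smoothed by the Gaussian Markov random field prior.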

  12. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  13. Monogamy inequalities for certifiers of continuous-variable Einstein-Podolsky-Rosen entanglement without the assumption of Gaussianity

    NASA Astrophysics Data System (ADS)

    Rosales-Zárate, L.; Teh, R. Y.; Opanchuk, B.; Reid, M. D.

    2017-08-01

    We consider three modes A, B, and C and derive monogamy inequalities that constrain the distribution of bipartite continuous variable Einstein-Podolsky-Rosen entanglement amongst the three modes. The inequalities hold without the assumption of Gaussian states, and are based on measurements of the quadrature phase amplitudes X_i and P_i at each mode i = A, B, C. The first monogamy inequality involves the well-known quantity D_IJ defined by Duan-Giedke-Cirac-Zoller as the sum of the variances of (X_I - X_J)/√2 and (P_I + P_J)/√2, where [X_I, P_J] = iδ_IJ. Entanglement between I and J is certified if D_IJ < 1. A second monogamy inequality involves the more general entanglement certifier Ent_IJ, defined as the normalized product of the variances of X_I - g X_J and P_I + g P_J, where g is a real constant. The monogamy inequalities give a lower bound on the values of D_BC and Ent_BC for one pair, given the values D_BA and Ent_BA for the first pair. This lower bound changes in the absence of two-mode Gaussian steering of B. We illustrate for a range of tripartite entangled states, identifying regimes of saturation of the inequalities. The monogamy relations explain, without the assumption of Gaussianity, the experimentally observed saturation at D_AB = 0.5 where there is symmetry between modes A and C.

  14. Long-distance continuous-variable quantum key distribution using non-Gaussian state-discrimination detection

    NASA Astrophysics Data System (ADS)

    Liao, Qin; Guo, Ying; Huang, Duan; Huang, Peng; Zeng, Guihua

    2018-02-01

    We propose a long-distance continuous-variable quantum key distribution (CVQKD) with a four-state protocol using non-Gaussian state-discrimination detection. A photon subtraction operation, deployed at the transmitter, is used for splitting the signal required for generating the non-Gaussian operation to lengthen the maximum transmission distance of the CVQKD. An improved state-discrimination detector, which can be deemed an optimized quantum measurement that allows the discrimination of nonorthogonal coherent states beating the standard quantum limit, is then applied at the receiver to codetermine the measurement result with the conventional coherent detector. By tactfully exploiting the multiplexing technique, the resulting signals can be simultaneously transmitted through an untrusted quantum channel and subsequently sent to the state-discrimination detector and coherent detector, respectively. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance up to hundreds of kilometers. Furthermore, by taking the finite-size effect and composable security into account, we obtain the tightest bound on the secure distance, which is more practical than that obtained in the asymptotic limit.

  15. Screening and clustering of sparse regressions with finite non-Gaussian mixtures.

    PubMed

    Zhang, Jian

    2017-06-01

    This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of the existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.

  16. Continuous-variable entanglement distillation of non-Gaussian mixed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong Ruifang; Lassen, Mikael; Department of Physics, Technical University of Denmark, Building 309, DK-2800 Lyngby

    2010-07-15

    Many different quantum-information communication protocols such as teleportation, dense coding, and entanglement-based quantum key distribution are based on the faithful transmission of entanglement between distant locations in an optical network. The distribution of entanglement in such a network is, however, hampered by loss and noise that are inherent in all practical quantum channels. Thus, to enable faithful transmission one must resort to the protocol of entanglement distillation. In this paper we present a detailed theoretical analysis and an experimental realization of continuous variable entanglement distillation in a channel that is inflicted by different kinds of non-Gaussian noise. The continuous variable entangled states are generated by exploiting the third order nonlinearity in optical fibers, and the states are sent through a free-space laboratory channel in which the losses are altered to simulate a free-space atmospheric channel with varying losses. We use linear optical components, homodyne measurements, and classical communication to distill the entanglement, and we find that by using this method the entanglement can be probabilistically increased for some specific non-Gaussian noise channels.

  17. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model was employed for image modelling, and iterations of the Expectation Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
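    The Gaussian Mixture Model / Expectation Maximization step can be sketched for a one-dimensional intensity histogram. This is a hand-rolled two-component EM on synthetic "foreground/background" intensities, with illustrative data and initialization, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D "pixel intensities": background and foreground clusters
x = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.7, 0.08, 500)])

# EM for a two-component Gaussian mixture
w = np.array([0.5, 0.5])            # mixing weights
mu = np.array([0.1, 0.9])           # initial means
var = np.array([0.05, 0.05])        # initial variances

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each pixel
    dens = w * normal_pdf(x[:, None], mu, var)       # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

    In the segmentation pipeline, responsibilities of this kind feed the region term of the Gibbs energy that the min-cut step then minimizes.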

  18. Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective

    NASA Astrophysics Data System (ADS)

    Jamali, Tayeb; Jafari, G. R.

    2015-07-01

    We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by the direct analysis of the autocorrelation function. In order to provide a precise conclusion based on the information extracted from the autocorrelation matrix, the results must first be evaluated. In other words, they need to be compared with some sort of criterion to provide a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is selected to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity in the returns of the stock markets, a remarkable agreement with the fGn is achieved.

  19. Almost sure convergence in quantum spin glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu

    2015-12-15

    Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].

  20. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
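    A discrete multiplicative log-normal cascade of the kind studied above is straightforward to simulate: each interval is split dyadically and both halves are multiplied by independent mean-one log-normal weights. The parameterization below is a common illustrative convention, not necessarily the papers' exact one:

```python
import numpy as np

rng = np.random.default_rng(5)

def lognormal_cascade(n_steps, lam2=0.2, rng=rng):
    """Simulate a multiplicative log-normal cascade with n_steps dyadic
    refinement levels; lam2 plays the role of the intermittency parameter."""
    field = np.ones(1)
    for _ in range(n_steps):
        field = np.repeat(field, 2)          # split every cell in two
        # mean of the log chosen so that E[weight] = 1 at every level
        w = rng.lognormal(mean=-0.5 * lam2, sigma=np.sqrt(lam2),
                          size=field.size)
        field = field * w
    return field

field = lognormal_cascade(12)   # 2**12 = 4096 cells
```

    The resulting field shows the intermittent bursts that motivate the moment-based (GMM) estimation of the intermittency parameter.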

  1. Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models

    USGS Publications Warehouse

    Geist, Eric L.

    2016-01-01

    Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined, involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad are subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined, involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines in which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.

  2. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks algorithm is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach in ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy results due to the probability density model constituted by the region term and shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.

  3. Anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field: two-loop approximation.

    PubMed

    Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L

    2005-01-01

    The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to δ(t - t') k^(4 - d - 2ε). It is shown that the scalar field is intermittent already for small ε, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in ε. The practical calculation is accomplished to order ε² (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.

  4. Fixing convergence of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Bickson, Danny; Dolev, Danny

    Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm, in cases where it would originally fail. As a consequence, we are able to increase significantly the number of users that can transmit concurrently.
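    Plain GaBP itself can be sketched compactly. The following solves a small diagonally dominant system A x = b, a regime in which standard GaBP is known to converge; the paper's double-loop convergence fix is not included in this sketch:

```python
import numpy as np

def gabp_solve(A, b, n_iter=100):
    """Gaussian belief propagation for A x = b on the graph induced by the
    nonzero off-diagonal entries of A. Messages are stored as precisions
    P[i, j] and potentials beta[i, j] = P * mu for the message i -> j."""
    n = A.shape[0]
    P = np.zeros((n, n))
    beta = np.zeros((n, n))
    for _ in range(n_iter):
        P_new = np.zeros_like(P)
        beta_new = np.zeros_like(beta)
        for i in range(n):
            for j in range(n):
                if i != j and A[i, j] != 0.0:
                    # aggregate all incoming messages except the one from j
                    P_ij = A[i, i] + P[:, i].sum() - P[j, i]
                    b_ij = b[i] + beta[:, i].sum() - beta[j, i]
                    P_new[i, j] = -A[i, j] ** 2 / P_ij
                    beta_new[i, j] = -A[i, j] * b_ij / P_ij
        P, beta = P_new, beta_new
    # marginal means equal the solution of A x = b when GaBP converges
    return (b + beta.sum(axis=0)) / (np.diag(A) + P.sum(axis=0))

# Diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gabp_solve(A, b)
```

    On matrices that violate the known sufficient conditions, this plain iteration can oscillate, which is exactly the failure mode the double-loop construction in the paper is designed to repair.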

  5. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
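    The KPCA dimension-reduction step can be sketched with a hand-rolled RBF-kernel PCA; the kernel choice, bandwidth, and toy data below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Nonlinearly correlated 2-D "parameter" samples: points on a noisy circle
theta = rng.uniform(0.0, 2 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)]) \
    + rng.normal(0.0, 0.05, (300, 2))

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the kernel matrix, center it
    in feature space, eigendecompose, and project."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # feature-space centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # coordinates in the low-dimensional feature space
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))

Z = kpca(X)
```

    In the proposed framework, the Langevin MCMC sampler would then explore a low-dimensional space like `Z` instead of the original parameter field.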

  6. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
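    The equivalence of inverse-variance-weighted least squares and the MLE for independent Gaussian noise can be checked numerically on synthetic slope-method data; the constants below (extinction coefficient, noise model, ranges) are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic range-corrected log lidar signal: y(r) = ln S0 - 2 sigma r,
# with independent Gaussian noise whose standard deviation grows with range
sigma_true, lnS0_true = 5e-4, 10.0      # extinction (1/m) and intercept
r = np.linspace(100.0, 1000.0, 50)      # ranges in meters
noise_std = 0.01 * (r / r[0])
y = lnS0_true - 2.0 * sigma_true * r + rng.normal(0.0, noise_std)

# Least squares weighted by the inverse noise variance: np.polyfit applies
# the weights w to the residuals, so pass the inverse standard deviation
slope, intercept = np.polyfit(r, y, 1, w=1.0 / noise_std)
sigma_hat = -slope / 2.0
```

    Using uniform weights instead (one of the traditional schemes the abstract compares) lets the noisier far-range points degrade the retrieval.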

  7. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.

  8. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  9. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  10. Statistics and topology of the COBE differential microwave radiometer first-year sky maps

    NASA Technical Reports Server (NTRS)

    Smoot, G. F.; Tenorio, L.; Banday, A. J.; Kogut, A.; Wright, E. L.; Hinshaw, G.; Bennett, C. L.

    1994-01-01

    We use statistical and topological quantities to test the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) first-year sky maps against the hypothesis that the observed temperature fluctuations reflect Gaussian initial density perturbations with random phases. Recent papers discuss specific quantities as discriminators between Gaussian and non-Gaussian behavior, but the treatment of instrumental noise on the data is largely ignored. The presence of noise in the data biases many statistical quantities in a manner dependent on both the noise properties and the unknown cosmic microwave background temperature field. Appropriate weighting schemes can minimize this effect, but it cannot be completely eliminated. Analytic expressions are presented for these biases, and Monte Carlo simulations are used to assess the best strategy for determining cosmologically interesting information from noisy data. The genus is a robust discriminator that can be used to estimate the power-law quadrupole-normalized amplitude, Q_rms-PS, independently of the two-point correlation function. The genus of the DMR data is consistent with Gaussian initial fluctuations with Q_rms-PS = (15.7 +/- 2.2) - (6.6 +/- 0.3)(n - 1) μK, where n is the power-law index. Fitting the rms temperature variations at various smoothing angles gives Q_rms-PS = 13.2 +/- 2.5 μK and n = 1.7 (+0.3, -0.6). While consistent with Gaussian fluctuations, the first-year data are only sufficient to rule out strongly non-Gaussian distributions of fluctuations.

  11. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. Since even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
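
    The propagate/weight/resample cycle of a bootstrap particle filter can be sketched on a toy scalar system. The dynamics, noise levels, and particle count below are invented for illustration; they are not the paper's orbit model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a bootstrap particle filter on a toy 1-D nonlinear system
# (assumed dynamics, not an orbit model).
def f(x):
    return x + 0.1 * np.sin(x)            # assumed nonlinear dynamics

n_steps, n_particles = 50, 500
true_x = 1.0
particles = rng.normal(1.0, 0.5, n_particles)
errors = []

for _ in range(n_steps):
    true_x = f(true_x) + rng.normal(0.0, 0.05)          # true state evolves
    z = true_x + rng.normal(0.0, 0.2)                   # noisy measurement
    particles = f(particles) + rng.normal(0.0, 0.05, n_particles)  # propagate
    w = np.exp(-0.5 * ((z - particles) / 0.2)**2)       # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    errors.append(abs(float(particles.mean()) - true_x))

rms_error = float(np.sqrt(np.mean(np.square(errors))))
```

    The resampled particle cloud is a discrete approximation of the full posterior PDF, so non-Gaussian shapes survive propagation, unlike in a Kalman filter.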

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong Yuli; Zou Xubo; Guo Guangcan

    We investigate the economical Gaussian cloning of coherent states with known phase, which produces M copies from N input replicas and can be implemented with degenerate parametric amplifiers and beam splitters. The achievable single-copy fidelity is 2M√N/[√N(M-1) + √((1+N)(M² + N))], which exceeds the optimal fidelity of universal Gaussian cloning. The cloning machine presented here works without ancillary optical modes and can be regarded as the continuous-variable generalization of the economical cloning machine for qudits.
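
    A quick numeric check of the fidelity expression as printed in the abstract (taken at face value here): it reduces to unit fidelity when M = N, since (1+N)(N² + N) = N(1+N)².

```python
import math

# Evaluate F(N, M) = 2*M*sqrt(N) / (sqrt(N)*(M - 1) + sqrt((1 + N)*(M**2 + N)))
# exactly as printed in the abstract.
def fidelity(N, M):
    return 2 * M * math.sqrt(N) / (
        math.sqrt(N) * (M - 1) + math.sqrt((1 + N) * (M**2 + N)))

f_1_to_2 = fidelity(1, 2)   # single-copy fidelity of 1 -> 2 cloning
f_trivial = fidelity(3, 3)  # M = N: no extra copies, so fidelity is 1
```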

  13. Demonstration of coherent-state discrimination using a displacement-controlled photon-number-resolving detector.

    PubMed

    Wittmann, Christoffer; Andersen, Ulrik L; Takeoka, Masahiro; Sych, Denis; Leuchs, Gerd

    2010-03-12

    We experimentally demonstrate a new measurement scheme for the discrimination of two coherent states. The measurement scheme is based on a displacement operation followed by a photon-number-resolving detector, and we show that it outperforms the standard homodyne detector which we, in addition, prove to be optimal within all Gaussian operations including conditional dynamics. We also show that the non-Gaussian detector is superior to the homodyne detector in a continuous variable quantum key distribution scheme.

  14. Simultaneous classical communication and quantum key distribution using continuous variables*

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities, and dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in a future coherent optical communication network, QKD could be operated in the background of classical communication at minimal cost.
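
    The encoding idea can be sketched in one quadrature: each pulse carries a classical bit as a large displacement plus small Gaussian-distributed random numbers for QKD riding on top. The displacement size, noise level, and sign-based decoding rule below are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: bit = large quadrature displacement, key = small Gaussian modulation.
# Amplitudes and noise are assumed values for illustration only.
n = 10_000
bits = rng.integers(0, 2, n)
key_vals = rng.normal(0.0, 1.0, n)            # Gaussian modulation for QKD
tx = (2 * bits - 1) * 20.0 + key_vals         # bit displacement +/- 20 (assumed)
rx = tx + rng.normal(0.0, 0.5, n)             # channel/detection noise (assumed)

bits_hat = (rx > 0).astype(int)               # classical decoding by sign
residual = rx - (2 * bits_hat - 1) * 20.0     # recover the Gaussian key values

bit_error_rate = float(np.mean(bits_hat != bits))
key_correlation = float(np.corrcoef(residual, key_vals)[0, 1])
```

    One coherent receiver measurement thus serves both purposes: the sign decides the bit, and the residual around the decided displacement carries the key material.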

  15. Performance analysis of OOK-based FSO systems in Gamma-Gamma turbulence with imprecise channel models

    NASA Astrophysics Data System (ADS)

    Feng, Jianfeng; Zhao, Xiaohui

    2017-11-01

    For an FSO communication system with an imprecise channel model, we investigate the system performance in terms of outage probability, average bit error probability (BEP), and ergodic capacity. The exact FSO links are modeled as a Gamma-Gamma fading channel accounting for both atmospheric turbulence and pointing errors, and the imprecise channel model is treated as the superposition of the exact channel gain and a Gaussian random variable. We derive the PDF, CDF, and nth moment of the imprecise channel gain, and from these statistics we obtain expressions for the outage probability, average BEP, and ergodic capacity in terms of Meijer G-functions. Both numerical and analytical results are presented. The simulation results show that the communication performance deteriorates under the imprecise channel model and approaches the exact performance curves as the channel model becomes accurate.
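
    A Monte Carlo sketch of the channel construction: a Gamma-Gamma fading gain is the product of two independent unit-mean Gamma variates, and the "imprecise" gain adds a zero-mean Gaussian error. The turbulence parameters alpha, beta, the error level, and the outage threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gamma-Gamma gain = product of two unit-mean Gamma variates; the imprecise
# gain superposes a Gaussian error, as in the abstract. Parameter values are
# illustrative assumptions.
alpha, beta, n = 4.0, 2.0, 200_000
h = rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)  # E[h]=1
h_imprecise = h + rng.normal(0.0, 0.1, n)     # exact gain + Gaussian error

threshold = 0.3
outage_exact = float(np.mean(h < threshold))
outage_imprecise = float(np.mean(h_imprecise < threshold))
```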

  16. Heterodyne efficiency for a coherent laser radar with diffuse or aerosol targets

    NASA Technical Reports Server (NTRS)

    Frehlich, R. G.

    1993-01-01

    The performance of a coherent laser radar is determined by the statistics of the coherent Doppler signal. The heterodyne efficiency is an excellent indicator of performance because it is an absolute measure of beam alignment and is independent of the transmitter power, the target backscatter coefficient, the atmospheric attenuation, and the detector quantum efficiency and gain. The theoretical calculation of heterodyne efficiency for an optimal monostatic lidar with a circular aperture and a Gaussian transmit laser is presented, including beam misalignment in the far-field and near-field regimes. The statistical behavior of estimates of the heterodyne efficiency using a calibration hard target is considered. For space-based applications, a biased estimate of heterodyne efficiency is proposed that removes the variability due to the random surface return but retains the sensitivity to misalignment. Physical insight is provided by simulation of the fields on the detector surface. The required detector calibration is also discussed.

  17. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
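
    The linear noise model can be sketched directly: Poisson photon counts make the variance grow linearly with the mean signal on top of a constant read-noise floor, and a straight-line fit recovers both terms. The signal levels and read-noise value are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the linear signal-dependent noise model: simulate Poisson counts
# plus Gaussian read noise at several signal levels (assumed values), then
# fit variance = intercept + slope * signal.
levels = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
n = 50_000

variances = []
for s in levels:
    counts = rng.poisson(s, n) + rng.normal(0.0, 5.0, n)  # Poisson + read noise
    variances.append(counts.var())
variances = np.array(variances)

slope, intercept = np.polyfit(levels, variances, 1)
# Expect slope ~ 1 (Poisson) and intercept ~ 25 (the 5**2 read-noise variance).
```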

  18. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.

  19. Damage/fault diagnosis in an operating wind turbine under uncertainty via a vibration response Gaussian mixture random coefficient model based framework

    NASA Astrophysics Data System (ADS)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-07-01

    The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the continually evolving, due to blade rotation, inertial properties, as well as the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.

  20. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations.
An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any pair of nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs.
2000 light years, respectively. That is, the average distance between neighboring ET civilizations is about 20 times the average distance between adjacent habitable planets.
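
The CLT argument behind the SEH can be checked by Monte Carlo: the log of a product of many independent positive random variables is a sum of independent terms, so the product tends to a lognormal. The ten uniform factors below use arbitrary illustrative means, not Dole's actual values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Product of ten independent positive (uniform) factors: its log is a sum of
# independent terms, hence approximately Gaussian by the CLT, so the product
# is approximately lognormal. Factor means are illustrative assumptions.
n = 100_000
means = [2.0, 5.0, 1.0, 3.0, 8.0, 0.4, 7.0, 2.0, 6.0, 1.5]
factors = [rng.uniform(0.5 * m, 1.5 * m, n) for m in means]
product = np.prod(factors, axis=0)

log_p = np.log(product)
log_skew = float(np.mean((log_p - log_p.mean())**3) / log_p.std()**3)
# log_skew should be close to zero if the lognormal approximation holds.
```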

  1. Random pure states: Quantifying bipartite entanglement beyond the linear statistics.

    PubMed

    Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb

    2016-05-01

    We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N ≤ M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of random matrix theory. This allows us to derive explicit expressions for two-level densities, and also an exact expression for the variance of the von Neumann entropy at finite N, M. Then, we focus on the moments E{K^a} of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1, N], which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E{K^a} for N = 2 and 3 and arbitrary M, and also for square N = M systems by spotting for the latter a connection with the probability P(x_min^GUE ≥ √(2N)ξ) that the smallest eigenvalue x_min^GUE of an N×N matrix belonging to the Gaussian unitary ensemble is larger than √(2N)ξ. As a by-product, we present an exact asymptotic expansion for P(x_min^GUE ≥ √(2N)ξ) for finite N as ξ → ∞. Our results are corroborated by numerical simulations whenever possible, with excellent agreement.

  2. Unitarily localizable entanglement of Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serafini, Alessio; Adesso, Gerardo; Illuminati, Fabrizio

    2005-03-01

    We consider generic (m×n)-mode bipartitions of continuous-variable systems, and study the associated bisymmetric multimode Gaussian states. They are defined as (m+n)-mode Gaussian states invariant under local mode permutations on the m-mode and n-mode subsystems. We prove that such states are equivalent, under local unitary transformations, to the tensor product of a two-mode state and of m+n-2 uncorrelated single-mode states. The entanglement between the m-mode and the n-mode blocks can then be completely concentrated on a single pair of modes by means of local unitary operations alone. This result allows us to prove that the PPT (positivity of the partial transpose) condition is necessary and sufficient for the separability of (m+n)-mode bisymmetric Gaussian states. We determine exactly their negativity and identify a subset of bisymmetric states whose multimode entanglement of formation can be computed analytically. We consider explicit examples of pure and mixed bisymmetric states and study their entanglement scaling with the number of modes.

  3. Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.

    PubMed

    Minin, Serge; Kamalabadi, Farzad

    2009-12-20

    We derive analytical equations for the uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
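
    The Fisher-matrix computation the abstract describes can be sketched numerically for a Gaussian line g(x) = A·exp(-(x - x0)²/(2w²)) + B with additive white Gaussian noise of standard deviation sigma: the information matrix is I = (1/σ²) Σ_x ∇g ∇gᵀ, and its inverse bounds the parameter covariances. All parameter values below are illustrative.

```python
import numpy as np

# Numerical Cramer-Rao bound for a 4-parameter Gaussian line plus constant
# background. Parameter values and the sample grid are illustrative.
A, x0, w, B, sigma = 100.0, 0.0, 1.0, 10.0, 2.0
x = np.linspace(-5.0, 5.0, 101)

e = np.exp(-(x - x0)**2 / (2 * w**2))
grads = np.stack([
    e,                             # dg/dA
    A * e * (x - x0) / w**2,       # dg/dx0
    A * e * (x - x0)**2 / w**3,    # dg/dw
    np.ones_like(x),               # dg/dB
])

fisher = grads @ grads.T / sigma**2
crb = np.linalg.inv(fisher)            # Cramer-Rao covariance lower bound
min_stds = np.sqrt(np.diag(crb))       # minimum achievable parameter stds
```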

  4. Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks.

    PubMed

    Chabanon, Morgan; Rangamani, Padmini

    2018-03-28

    Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.

  5. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing it. Specifically, the power transformation with parameter λ = -1 leads to the best results. PMID:25709588
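
    Drawing Ex-Gaussian samples is as simple as adding a normal and an exponential variate, and the λ = -1 (reciprocal) power transformation the paper favours can be checked against the skewness it removes. The millisecond-scale parameters below are illustrative, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ex-Gaussian sample = normal + exponential (RT-like, illustrative parameters),
# then the reciprocal transform (power transformation with lambda = -1).
mu, sigma, tau, n = 300.0, 20.0, 100.0, 200_000
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

def skewness(a):
    a = np.asarray(a, dtype=float)
    return float(np.mean((a - a.mean())**3) / a.std()**3)

skew_raw = skewness(rt)           # strongly positive for an Ex-Gaussian
skew_recip = skewness(1.0 / rt)   # reduced after the lambda = -1 transform
```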

  6. Unsupervised classification of multivariate geostatistical data: Two algorithms

    NASA Astrophysics Data System (ADS)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, describing a growing number of variables and covering wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first proceeds by agglomerative hierarchical clustering, with spatial coherence ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.

  7. Determination of continuous variable entanglement by purity measurements.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-02-27

    We classify the entanglement of two-mode Gaussian states according to their degree of total and partial mixedness. We derive exact bounds that determine maximally and minimally entangled states for fixed global and marginal purities. This characterization allows for an experimentally reliable estimate of continuous variable entanglement based on measurements of purity.

  8. Proactive Control Processes in Event-Based Prospective Memory: Evidence from Intraindividual Variability and Ex-Gaussian Analyses

    ERIC Educational Resources Information Center

    Ball, B. Hunter; Brewer, Gene A.

    2018-01-01

    The present study implemented an individual differences approach in conjunction with response time (RT) variability and distribution modeling techniques to better characterize the cognitive control dynamics underlying ongoing task cost (i.e., slowing) and cue detection in event-based prospective memory (PM). Three experiments assessed the relation…

  9. The modelling of carbon-based supercapacitors: Distributions of time constants and Pascal Equivalent Circuits

    NASA Astrophysics Data System (ADS)

    Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon

    2017-03-01

    Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
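
    The simplified model can be sketched directly: with log R a Gaussian random variable and C constant, the time constants τ = RC are lognormal, and the relaxation becomes an average over many exponentials, the "dispersed kinetics" the paper describes. The spread of log R and the time grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# log R ~ Gaussian, C constant => lognormal time constants tau = R*C.
# The relaxation averaged over branches is strongly non-exponential.
C = 1.0
log_r = rng.normal(0.0, 1.0, 10_000)      # Gaussian log-resistance (assumed)
tau = np.exp(log_r) * C                   # lognormal time constants

t = np.array([0.1, 1.0, 10.0])
dispersed = np.array([float(np.mean(np.exp(-ti / tau))) for ti in t])
single_rc = np.exp(-t)                    # one RC branch with tau = 1
```

    At long times the dispersed decay sits far above the single exponential: slow branches dominate, reproducing the slow residual relaxation seen in real supercapacitors.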

  10. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
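
    The Gaussian-field building block can be sketched in 1-D: an unconditional simulation with an exponential covariance via Cholesky factorisation. Sequential Gaussian simulation proper visits nodes one at a time and conditions by kriging; for small grids the two yield fields with the same covariance. The grid size and range parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unconditional Gaussian field simulation on a 1-D grid via Cholesky of an
# exponential covariance C(h) = exp(-|h|/a). A stand-in for sequential
# Gaussian simulation on small grids; parameters are illustrative.
n, corr_range = 200, 10.0
x = np.arange(n, dtype=float)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)

L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
fields = L @ rng.standard_normal((n, 50))   # 50 realisations, mean 0, var ~1

# Empirical lag-1 correlation should approach exp(-1/corr_range) ~ 0.905.
lag1 = float(np.mean(fields[:-1] * fields[1:]) / np.mean(fields**2))
```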

  11. Spectral statistics of random geometric graphs

    NASA Astrophysics Data System (ADS)

    Dettmann, C. P.; Georgiou, O.; Knight, G.

    2017-04-01

    We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and at long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure, and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert, and Watts-Strogatz random graphs.
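
    The raw construction can be sketched for one graph: uniform points in the unit square, edges between pairs closer than a cutoff radius, then nearest-neighbour spacings of the adjacency spectrum. Real spectral-statistics studies unfold the spectrum and average over an ensemble; this shows only the first step, with illustrative n and r.

```python
import numpy as np

rng = np.random.default_rng(8)

# One random geometric graph: points in the unit square, edges below radius r
# (illustrative n and r), and the level spacings of its adjacency spectrum.
n, r = 300, 0.15
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
adj = ((dist < r) & (dist > 0)).astype(float)   # symmetric 0/1 adjacency

eigs = np.sort(np.linalg.eigvalsh(adj))
spacings = np.diff(eigs)
spacings = spacings / spacings.mean()           # normalise to unit mean spacing
```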

  12. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.

    2014-12-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation, where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.

  13. Theory of Genuine Tripartite Nonlocality of Gaussian States

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Piano, Samanta

    2014-01-01

    We investigate the genuine multipartite nonlocality of three-mode Gaussian states of continuous variable systems. For pure states, we present a simplified procedure to obtain the maximum violation of the Svetlichny inequality based on displaced parity measurements, and we analyze its interplay with genuine tripartite entanglement measured via Rényi-2 entropy. The maximum Svetlichny violation admits tight upper and lower bounds at fixed tripartite entanglement. For mixed states, no violation is possible when the purity falls below 0.86. We also explore a set of recently derived weaker inequalities for three-way nonlocality, finding violations for all tested pure states. Our results provide a strong signature for the nonclassical and nonlocal nature of Gaussian states despite their positive Wigner function, and lead to precise recipes for its experimental verification.

  14. Relativistic diffusive motion in random electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Haba, Z.

    2011-08-01

    We show that the relativistic dynamics in a Gaussian random electromagnetic field can be approximated by the relativistic diffusion of Schay and Dudley. Lorentz invariant dynamics in the proper time leads to the diffusion in the proper time. The dynamics in the laboratory time gives the diffusive transport equation corresponding to the Jüttner equilibrium at the inverse temperature β^(-1) = mc^2. The diffusion constant is expressed by the field-strength correlation function (Kubo's formula).

  15. Analytical and Experimental Random Vibration of Nonlinear Aeroelastic Structures.

    DTIC Science & Technology

    1987-01-28

    first-order differential equations. In view of the system complexity, an attempt is made to close the infinite hierarchy by using a Gaussian scheme. This sc...year of this project. When the first normal mode is externally excited by a band-limited random excitation, the system mean square response is found...governed mainly by the internal detuning parameter and the system damping ratios. The results are completely different when the second normal mode is

  16. Coherent pulse position modulation quantum cipher

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohma, Masaki; Hirota, Osamu

    2014-12-04

    On the basis of fundamental idea of Yuen, we present a new type of quantum random cipher, where pulse position modulated signals are encrypted in the picture of quantum Gaussian wave form. We discuss the security of our proposed system with a phase mask encryption.

  17. Planckian Information (Ip): A New Measure of Order in Atoms, Enzymes, Cells, Brains, Human Societies, and the Cosmos

    NASA Astrophysics Data System (ADS)

    Ji, Sungchul

    A new mathematical formula referred to as the Planckian distribution equation (PDE) has been found to fit long-tailed histograms generated in various fields of studies, ranging from atomic physics to single-molecule enzymology, cell biology, brain neurobiology, glottometrics, econophysics, and to cosmology. PDE can be derived from a Gaussian-like equation (GLE) by non-linearly transforming its variable, x, while keeping the y coordinate constant. Assuming that GLE represents a random distribution (due to its symmetry), it is possible to define a binary logarithm of the ratio between the areas under the curves of PDE and GLE as a measure of the non-randomness (or order) underlying the biophysicochemical processes generating long-tailed histograms that fit PDE. This new function has been named the Planckian information, IP, which (i) may be a new measure of order that can be applied widely to both natural and human sciences and (ii) can serve as the opposite of the Boltzmann-Gibbs entropy, S, which is a measure of disorder. The possible rationales for the universality of PDE may include (i) the universality of the wave-particle duality embedded in PDE, (ii) the selection of subsets of random processes (thereby breaking the symmetry of GLE) as the basic mechanism of generating order, organization, and function, and (iii) the quantity-quality complementarity as the connection between PDE and Peircean semiotics.
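    The Planckian information defined in the abstract is simply the binary logarithm of the ratio of the areas under the two curves. A numerical sketch of that definition follows; the specific Planck-style functional form and parameter values stand in for the PDE and are illustrative assumptions, not Ji's fitted equation:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule for the area under a sampled curve
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def planck_like(x, a=1.0, b=0.5, c=2.0):
    # Hypothetical long-tailed Planck-style curve standing in for the PDE;
    # the functional form and a, b, c are illustrative assumptions.
    return a / ((x + b) ** 5 * np.expm1(c / (x + b)))

def gaussian_like(x, A, mu, sigma):
    # Symmetric Gaussian-like equation (GLE) taken as the random reference.
    return A * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def planckian_information(x, y_pde, y_gle):
    """I_P = log2(area under PDE / area under GLE), per the abstract."""
    return np.log2(trapz(y_pde, x) / trapz(y_gle, x))

x = np.linspace(0.01, 20.0, 2000)
y_pde = planck_like(x)
y_gle = gaussian_like(x, A=y_pde.max(), mu=x[np.argmax(y_pde)], sigma=1.0)
ip = planckian_information(x, y_pde, y_gle)
```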

  18. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
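    A truncated KL expansion of a log-Gaussian field, as used above, amounts to an eigendecomposition of the covariance operator. A minimal 1-D sketch, with an exponential covariance and parameter values chosen purely for illustration (not the tumor-specific values of the paper):

```python
import numpy as np

# Grid, covariance parameters, and truncation level (all illustrative).
n, corr_len, sigma2, n_modes = 200, 0.2, 1.0, 20
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL expansion = eigendecomposition of the covariance matrix.
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1][:n_modes]      # keep the largest modes
lam, phi = eigval[idx], eigvec[:, idx]

rng = np.random.default_rng(0)
xi = rng.standard_normal(n_modes)             # i.i.d. N(0, 1) coefficients
log_k = phi @ (np.sqrt(lam) * xi)             # truncated KL realization
k = np.exp(log_k)                             # log-Gaussian permeability sample
```

    The retained modes capture most of the field variance; the sensitivity analysis in the abstract asks how much each such mode contributes to the output variance.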

  19. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation, and object-based simulation. The latent Gaussians are often used in data assimilation and history-matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range, and of GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
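    The key GMRF property exploited above is that the full conditional of one site depends only on the precision matrix Q: x_i | x_{-i} ~ N(−(1/Q_ii) Σ_{j≠i} Q_ij x_j, 1/Q_ii). A single-site Gibbs sweep on a small 1-D first-order GMRF (illustrative parameters, not the paper's convolution-based coding-set scheme):

```python
import numpy as np

# Tridiagonal precision matrix of a 1-D first-order GMRF (illustrative).
n, kappa, tau = 50, 1.0, 1.0
Q = tau * (np.diag((kappa + 2.0) * np.ones(n))
           - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

rng = np.random.default_rng(1)
x = np.zeros(n)
for sweep in range(500):                  # plain single-site Gibbs sweeps
    for i in range(n):
        cond_var = 1.0 / Q[i, i]
        # Conditional mean uses only the neighbors of site i (sparse row of Q).
        cond_mean = -cond_var * (Q[i] @ x - Q[i, i] * x[i])
        x[i] = cond_mean + np.sqrt(cond_var) * rng.standard_normal()
```

    For this first-order chain, the even and odd indices form two coding sets: no two sites within a set are neighbors, so each set could be updated simultaneously, which is the basis of the paper's block strategy.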

  20. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem considering the complex dependence and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend the conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models basically assume that a geo-referenced variable primarily depends on its neighborhood (Markov property), and the spatial dependence structure is described via a precision matrix. Recent studies have shown that GMRFs are an efficient approximation to the commonly used Gaussian fields (e.g., Kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accommodation of heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accommodated through appropriate link functions. A fast alternative to the MCMC framework, the so-called Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the drought impacts in Texas counties in the past years, where the spatiotemporal dynamics are represented in areal data.

  1. PyGlobal: A toolkit for automated compilation of DFT-based descriptors.

    PubMed

    Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A

    2016-06-15

    Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian output file and extracting the relevant energies of the HOMO and LUMO. The corresponding global reactivity descriptors are then calculated and the data are saved into a spreadsheet compatible with applications such as Microsoft Excel and LibreOffice. The efficiency of the module has been assessed by measuring the processing time for randomly selected Gaussian output files for 1000 molecules. © 2016 Wiley Periodicals, Inc.
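    Once the HOMO and LUMO energies have been extracted, the global reactivity descriptors follow from simple frontier-orbital formulas. The sketch below uses one common conceptual-DFT convention (Koopmans-style finite differences); PyGlobal's exact definitions may differ, so treat this as an assumption:

```python
def global_reactivity_descriptors(e_homo, e_lumo):
    """Conceptual-DFT global descriptors from frontier-orbital energies (eV),
    using a common convention (an assumption, not PyGlobal's documented one)."""
    mu = 0.5 * (e_homo + e_lumo)       # chemical potential
    chi = -mu                          # electronegativity
    eta = 0.5 * (e_lumo - e_homo)      # chemical hardness
    softness = 1.0 / (2.0 * eta)       # global softness
    omega = mu**2 / (2.0 * eta)        # electrophilicity index
    return {"mu": mu, "chi": chi, "eta": eta, "S": softness, "omega": omega}

# Hypothetical orbital energies for illustration.
d = global_reactivity_descriptors(e_homo=-9.0, e_lumo=-1.0)
```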

  2. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.

  3. Log-correlated random-energy models with extensive free-energy fluctuations: Pathologies caused by rare events as signatures of phase transitions

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyu; Fyodorov, Yan V.; Le Doussal, Pierre

    2018-02-01

    We address systematically an apparent nonphysical behavior of the free-energy moment generating function for several instances of the logarithmically correlated models: the fractional Brownian motion with Hurst index H =0 (fBm0) (and its bridge version), a one-dimensional model appearing in decaying Burgers turbulence with log-correlated initial conditions and, finally, the two-dimensional log-correlated random-energy model (logREM) introduced in Cao et al. [Phys. Rev. Lett. 118, 090601 (2017), 10.1103/PhysRevLett.118.090601] based on the two-dimensional Gaussian free field with background charges and directly related to the Liouville field theory. All these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. We argue that a seemingly nonphysical vanishing of the moment generating function for some values of parameters is related to the termination point transition (i.e., prefreezing). We study the associated universal log corrections in the frozen phase, both for logREMs and for the standard REM, filling a gap in the literature. For the above mentioned integrable instances of logREMs, we predict the nontrivial free-energy cumulants describing non-Gaussian fluctuations on the top of the Gaussian with extensive variance. Some of the predictions are tested numerically.

  4. Identifying stochastic oscillations in single-cell live imaging time series using Gaussian processes

    PubMed Central

    Manning, Cerys; Rattray, Magnus

    2017-01-01

    Multiple biological processes are driven by oscillatory gene expression at different time scales. Pulsatile dynamics are thought to be widespread, and single-cell live imaging of gene expression has led to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in expression dynamics, which blurs the boundaries between aperiodic fluctuations and noisy oscillators. This presents a new challenge to the experimentalist because neither intuition nor pre-existing methods work well for identifying oscillatory activity in noisy biological time series. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here, we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of non-parametric regression with Gaussian processes. Our method can distinguish oscillatory gene expression from random fluctuations of non-oscillatory expression in single-cell time series, despite peak-to-peak variability in period and amplitude of single-cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in successfully classifying cells as oscillatory or non-oscillatory in data simulated from a simple genetic oscillator model and in experimental data. Analysis of bioluminescent live-cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a Hes1 promoter (10/19), which has previously been reported to oscillate, than the constitutive MoMuLV 5’ LTR (MMLV) promoter (0/25). The method can be applied to data from any gene network to both quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. 
It is publicly available as a MATLAB package. PMID:28493880
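    The core idea of such a classification can be illustrated by scoring a noisy trace under two Gaussian-process covariances, an aperiodic one and a damped-cosine (stochastic oscillator) one, via the GP log marginal likelihood. The kernel forms and parameters below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def log_marginal_likelihood(y, K):
    """Gaussian-process log marginal likelihood of data y under covariance K."""
    n = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2.0 * np.pi))

# Synthetic "oscillatory cell" trace: period-5 oscillation plus noise.
t = np.linspace(0.0, 20.0, 100)
rng = np.random.default_rng(2)
y = np.cos(2.0 * np.pi * t / 5.0) + 0.1 * rng.standard_normal(t.size)

dt = np.abs(t[:, None] - t[None, :])
noise = 0.1**2 * np.eye(t.size)
K_aperiodic = np.exp(-dt / 10.0) + noise                              # OU-like
K_oscillatory = np.exp(-dt / 10.0) * np.cos(2.0 * np.pi * dt / 5.0) + noise

lml_aperiodic = log_marginal_likelihood(y, K_aperiodic)
lml_oscillatory = log_marginal_likelihood(y, K_oscillatory)
# The trace is classified as oscillatory when the oscillatory kernel attains
# the (much) higher marginal likelihood.
```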

  5. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
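    The underlying SPDE sampling idea can be sketched in one dimension: solving (κ² − Δ)u = W with white-noise right-hand side W yields a realization of a Matérn-like Gaussian random field with correlation length of order 1/κ. Finite differences stand in for the paper's mixed finite elements, and all parameters are illustrative:

```python
import numpy as np

# 1-D finite-difference discretization of (kappa^2 - Laplacian) u = W
# on [0, 1] with Dirichlet boundaries (illustrative parameters).
n, kappa = 400, 10.0
h = 1.0 / n
main = kappa**2 + 2.0 / h**2
off = -1.0 / h**2
A = (np.diag(main * np.ones(n))
     + np.diag(off * np.ones(n - 1), 1)
     + np.diag(off * np.ones(n - 1), -1))

rng = np.random.default_rng(3)
b = rng.standard_normal(n) / np.sqrt(h)   # discretized white noise scaling
u = np.linalg.solve(A, b)                 # one Gaussian random field realization
```

    At scale, the point is that A is sparse and structured, so each sample costs one sparse solve instead of factorizing a dense covariance matrix.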

  7. Decoherence and tripartite entanglement dynamics in the presence of Gaussian and non-Gaussian classical noise

    NASA Astrophysics Data System (ADS)

    Kenfack, Lionel Tenemeza; Tchoffo, Martin; Fai, Lukong Cornelius; Fouokeng, Georges Collince

    2017-04-01

    We address the entanglement dynamics of a three-qubit system interacting with a classical fluctuating environment described either by a Gaussian or non-Gaussian noise in three different configurations namely: common, independent and mixed environments. Specifically, we focus on the Ornstein-Uhlenbeck (OU) noise and the random telegraph noise (RTN). The qubits are prepared in a state composed of a Greenberger-Horne-Zeilinger (GHZ) and a W state. With the help of the tripartite negativity, we show that the entanglement evolution is not only affected by the type of system-environment coupling but also by the kind and the memory properties of the considered noise. We also compared the dynamics induced by the two kinds of noise and we find that even if both noises have a Lorentzian spectrum, the effects of the OU noise cannot be in a simple way deduced from those of the RTN and vice-versa. In addition, we show that the entanglement can be indefinitely preserved when the qubits are coupled to the environmental noise in a common environment (CE). Finally, the presence or absence of peculiar phenomena such as entanglement revivals (ER) and entanglement sudden death (ESD) is observed.

  8. Strong monogamy of bipartite and genuine multipartite entanglement: the Gaussian case.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2007-10-12

    We demonstrate the existence of general constraints on distributed quantum correlations, which impose a trade-off on bipartite and multipartite entanglement at once. For all N-mode Gaussian states under permutation invariance, we establish exactly a monogamy inequality, stronger than the traditional one, that by recursion defines a proper measure of genuine N-partite entanglement. Strong monogamy holds as well for subsystems of arbitrary size, and the emerging multipartite entanglement measure is found to be scale invariant. We unveil its operational connection with the optimal fidelity of continuous variable teleportation networks.

  9. Security of coherent-state quantum cryptography in the presence of Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2007-08-15

    We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.

  10. Mean first-passage times of non-Markovian random walkers in confinement.

    PubMed

    Guérin, T; Levernier, N; Bénichou, O; Voituriez, R

    2016-06-16

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
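    As a purely numerical companion to the analytical theory above (not the paper's method), one can simulate a Gaussian non-Markovian walker, here fractional Brownian motion generated by a Cholesky factorization of its covariance, and record first-passage times to a target level. All parameters are illustrative:

```python
import numpy as np

def fbm_paths(n_steps, hurst, n_paths, dt=0.01, seed=4):
    """Sample fractional Brownian motion paths via Cholesky factorization of
    Cov(B_t, B_s) = 0.5 * (t^2H + s^2H - |t - s|^2H)."""
    t = dt * np.arange(1, n_steps + 1)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal((n_steps, n_paths))

def first_passage_times(t, paths, target):
    """First time each path reaches the target level; NaN if never reached."""
    hits = paths >= target
    fpt = np.full(paths.shape[1], np.nan)
    for j in range(paths.shape[1]):
        idx = np.argmax(hits[:, j])
        if hits[idx, j]:
            fpt[j] = t[idx]
    return fpt

t, paths = fbm_paths(n_steps=300, hurst=0.75, n_paths=200)
fpt = first_passage_times(t, paths, target=0.5)
```

    Varying the Hurst index away from 0.5 makes the increments correlated at long times, which is exactly the memory effect whose impact on first-passage statistics the paper quantifies.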

  12. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non-symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated, and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095

  14. Effects of vibration on occupant driving performance under simulated driving conditions.

    PubMed

    Azizan, Amzar; Fard, M; Azari, Michael F; Jazar, Reza

    2017-04-01

    Although much research has been devoted to characterizing the effects of whole-body vibration on seated occupants' comfort, drowsiness induced by vibration has received less attention to date. There are also few validated measurement methods available to quantify whole-body vibration-induced drowsiness. Here, the effects of vibration on drowsiness were investigated. Twenty male volunteers were recruited for this experiment. Drowsiness was measured in a driving simulator, before and after a 30-min exposure to vibration. Gaussian random vibration with a 1-15 Hz frequency bandwidth was used for excitation. During the driving session, volunteers were required to obey the speed limit of 100 kph and maintain a steady position in the left-hand lane. Deviation in lane position, steering-angle variability, and speed deviation were recorded and analysed. In parallel, volunteers rated their subjective drowsiness on the Karolinska Sleepiness Scale (KSS) every 5 min. Following 30 min of exposure to vibration, significant increases in lane deviation, steering-angle variability, and KSS scores were observed in all volunteers, suggesting an adverse effect of vibration on human alertness. Copyright © 2016 Elsevier Ltd. All rights reserved.
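    A band-limited Gaussian random excitation like the one described can be generated by filtering white Gaussian noise in the frequency domain. The sketch below zeroes FFT components outside 1-15 Hz; the sample rate, duration, and flat in-band spectrum are illustrative assumptions, not the study's actual excitation profile:

```python
import numpy as np

fs, duration = 200.0, 30.0                 # sample rate (Hz) and length (s)
n = int(fs * duration)
rng = np.random.default_rng(5)
white = rng.standard_normal(n)             # Gaussian white noise

# Zero all spectral content outside the 1-15 Hz band.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum = np.fft.rfft(white)
spectrum[(freqs < 1.0) | (freqs > 15.0)] = 0.0
signal = np.fft.irfft(spectrum, n)
signal *= 1.0 / signal.std()               # normalize to unit RMS
```

    A linear combination of Gaussian samples remains Gaussian, so the filtered signal keeps the Gaussian amplitude distribution while confining its power to the 1-15 Hz band.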

  15. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
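    A discrete multiplicative log-normal cascade of the kind discussed can be synthesized by refining a unit measure dyadically and multiplying each sub-interval by an independent log-normal weight. The intermittency parameter and number of cascade steps below are illustrative, not the paper's estimates:

```python
import numpy as np

def lognormal_cascade(n_steps, lambda2=0.1, seed=6):
    """Dyadic multiplicative cascade with log-normal weights.
    lambda2 is the intermittency (log-variance) parameter."""
    rng = np.random.default_rng(seed)
    measure = np.ones(1)
    for _ in range(n_steps):
        # Mean -lambda2/2 makes E[W] = 1, so the cascade conserves mass
        # on average while producing intermittent bursts.
        w = np.exp(rng.normal(-lambda2 / 2.0, np.sqrt(lambda2),
                              size=2 * measure.size))
        measure = np.repeat(measure, 2) * w
    return measure

series = lognormal_cascade(n_steps=12)   # 2^12 = 4096 intermittent values
```

    Moment-based estimators such as the GMM procedure in the abstract recover lambda2 from the scaling of the sample moments of series like this one.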

  16. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography

  17. Multifractal Properties of Process Control Variables

    NASA Astrophysics Data System (ADS)

    Domański, Paweł D.

    2017-06-01

    A control system is an indispensable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain measures, minimum-variance benchmarks, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that signals originating from industrial installations have multifractal properties, and that such an analysis may extend the standard approach with further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, helping to discover internal dependencies and human factors that are otherwise hard to detect.

  18. Implication of observed cloud variability for parameterizations of microphysical and radiative transfer processes in climate models

    NASA Astrophysics Data System (ADS)

    Huang, D.; Liu, Y.

    2014-12-01

    The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined using long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as the observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate, whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with a mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest, and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window size, suggesting the increasing importance of subgrid variability with increasing grid size.
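The subgrid-averaging bias discussed above can be illustrated with a short Monte Carlo sketch. The power-law nonlinearity and the mean/std values below are illustrative assumptions (an autoconversion-like rate ∝ q^2.47), not the retrieval-constrained PDFs used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
mean, std = 0.3, 0.2          # assumed grid-mean LWC and subgrid std (g m^-3)
rate = lambda q: q ** 2.47    # autoconversion-like nonlinearity (illustrative)

# Lognormal subgrid PDF constrained to the same mean and standard deviation
sigma2 = np.log(1.0 + (std / mean) ** 2)
mu = np.log(mean) - 0.5 * sigma2
q = rng.lognormal(mu, np.sqrt(sigma2), 1_000_000)

naive = rate(mean)            # rate evaluated at the grid mean only
upscaled = rate(q).mean()     # grid-average rate accounting for variability
```

Because the rate is convex, evaluating it at the grid mean (`naive`) underestimates the true grid-average rate by nearly a factor of two in this setup, and the size of the gap depends on which distribution is assumed, which is exactly the distribution-dependent bias the study quantifies.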

  19. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI images demonstrates that the spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation on remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
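An empirical semivariogram of the kind used in the variogram analyses above can be estimated with a few lines of NumPy. This is a generic isotropic estimator applied to synthetic data, not the authors' processing chain:

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Isotropic empirical semivariogram: half the mean squared difference
    of values over point pairs, grouped into distance bins."""
    i, j = np.triu_indices(len(values), k=1)
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.array([sqdiff[(dist >= lo) & (dist < hi)].mean()
                      for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])
    return 0.5 * (bin_edges[:-1] + bin_edges[1:]), gamma

# Synthetic spatially correlated "NDVI-like" field on random sample points
rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(400, 2))
values = (np.sin(coords[:, 0]) + np.sin(coords[:, 1])
          + 0.1 * rng.standard_normal(400))
lags, gamma = empirical_variogram(coords, values, np.array([0.0, 1.0, 2.0, 3.0]))
```

For a spatially correlated field the semivariance rises with lag distance before levelling off at a sill; the fitted variogram model then drives kriging and sequential Gaussian simulation.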

  20. Pseudo-steady-state non-Gaussian Einstein-Podolsky-Rosen steering of massive particles in pumped and damped Bose-Hubbard dimers

    NASA Astrophysics Data System (ADS)

    Olsen, M. K.

    2017-02-01

    We propose and analyze a pumped and damped Bose-Hubbard dimer as a source of continuous-variable Einstein-Podolsky-Rosen (EPR) steering with non-Gaussian statistics. We use the approximate truncated Wigner and the exact positive-P representations to calculate and compare predictions for intensities, second-order quantum correlations, and third- and fourth-order cumulants. We find agreement for the intensities and the products of inferred quadrature variances, which indicate that states demonstrating the EPR paradox are present. We find clear signals of non-Gaussianity in the quantum states of the modes from both the approximate and exact techniques, with quantitative differences in their predictions. Our proposed experimental configuration is extrapolated from current experimental techniques and adds another apparatus to the current toolbox of quantum atom optics.

  1. Non-Gaussian bias: insights from discrete density peaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch

    2013-09-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.

  2. Information geometry of Gaussian channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monras, Alex; CNR-INFM Coherentia, Napoli; CNISM Unita di Salerno

    2010-06-15

    We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).

  3. Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework

    NASA Astrophysics Data System (ADS)

    Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.

    2018-01-01

    Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can the spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of the multisite properties of daily precipitation and highlight the need for a flexible non-Gaussian tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model that merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both the at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e., a Gaussian copula).

  4. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  5. Geographically weighted regression model on poverty indicator

    NASA Astrophysics Data System (ADS)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, with a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix resulting from the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of the province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R² is 68.64%. There are two categories of districts, each influenced by a different set of significant factors.
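The Gaussian-kernel weighting described above amounts to a separate weighted least-squares fit at each location. A minimal sketch on synthetic data with a spatially varying slope (the bandwidth and data are illustrative assumptions, not the Central Java dataset):

```python
import numpy as np

def gwr_fit_at(u, coords, X, y, bandwidth):
    """Local GWR estimate at location u: Gaussian kernel weights
    w_i = exp(-0.5 * (d_i / bandwidth)**2) in weighted least squares."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(y)), X])   # intercept + covariate
    XtW = Xd.T * w                               # same as Xd.T @ diag(w)
    return np.linalg.solve(XtW @ Xd, XtW @ y)

# Synthetic example: the true slope grows from 1 to 3 across the region
rng = np.random.default_rng(6)
coords = rng.uniform(0, 10, size=(500, 2))
X = rng.standard_normal(500)
y = (1.0 + 0.2 * coords[:, 0]) * X + 0.1 * rng.standard_normal(500)
beta_west = gwr_fit_at(np.array([0.0, 5.0]), coords, X, y, bandwidth=1.0)
beta_east = gwr_fit_at(np.array([10.0, 5.0]), coords, X, y, bandwidth=1.0)
```

Each location gets its own coefficient vector, so spatially varying effects (such as a covariate mattering only in some regencies) appear as differences between the local estimates.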

  6. Heralded processes on continuous-variable spaces as quantum maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi

    2014-12-04

    Heralded processes, which succeed only when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Moreover, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  7. Nonparametric triple collocation

    USDA-ARS?s Scientific Manuscript database

    Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...

  8. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  9. Resampling methods in Microsoft Excel® for estimating reference intervals

    PubMed Central

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
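The resampling idea can be sketched outside Excel as well. Below is a generic percentile-bootstrap estimate of a 95% reference interval in Python, on illustrative simulated data (not from the paper), using the roughly 40 reference samples and 1000 resamples the abstract mentions:

```python
import numpy as np

def bootstrap_reference_interval(values, n_boot=1000, seed=0):
    """Bootstrap estimate of the 2.5th and 97.5th percentiles:
    resample with replacement and average the percentile estimates."""
    rng = np.random.default_rng(seed)
    lo, hi = [], []
    for _ in range(n_boot):
        resample = rng.choice(values, size=len(values), replace=True)
        lo.append(np.percentile(resample, 2.5))
        hi.append(np.percentile(resample, 97.5))
    return np.mean(lo), np.mean(hi)

# Illustrative analyte with true 95% interval of roughly (80.4, 119.6)
rng = np.random.default_rng(7)
analyte = rng.normal(100.0, 10.0, 40)
ri_low, ri_high = bootstrap_reference_interval(analyte)
```

With only ~40 observations the extreme percentiles are noisy, which is exactly why resampling (rather than reading off a single sample percentile) is recommended for small non-Gaussian reference samples.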

  10. Resampling methods in Microsoft Excel® for estimating reference intervals.

    PubMed

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.

  11. Langevin dynamics for ramified structures

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Iomin, Alexander; Horsthemke, Werner; Campos, Daniel

    2017-06-01

    We propose a generalized Langevin formalism to describe transport in combs and similar ramified structures. Our approach consists of a Langevin equation without drift for the motion along the backbone. The motion along the secondary branches may be described either by a Langevin equation or by other types of random processes. The mean square displacement (MSD) along the backbone characterizes the transport through the ramified structure. We derive a general analytical expression for this observable in terms of the probability distribution function of the motion along the secondary branches. We apply our result to various types of motion along the secondary branches of finite or infinite length, such as subdiffusion, superdiffusion, and Langevin dynamics with colored Gaussian noise and with non-Gaussian white noise. Monte Carlo simulations show excellent agreement with the analytical results. The MSD for the case of Gaussian noise is shown to be independent of the noise color. We conclude by generalizing our analytical expression for the MSD to the case where each secondary branch is n dimensional.
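The backbone MSD for a comb can also be checked with a direct random-walk simulation. The sketch below uses a discrete comb with infinitely long teeth and illustrative parameters; it reproduces the classic subdiffusive scaling MSD ∝ t^{1/2} that the paper's formalism contains as a special case:

```python
import numpy as np

rng = np.random.default_rng(2)
n_walkers, n_steps = 2000, 1000
x = np.zeros(n_walkers, dtype=np.int64)   # position along the backbone
y = np.zeros(n_walkers, dtype=np.int64)   # position along a tooth
msd = np.zeros(n_steps)
for t in range(n_steps):
    step = 2 * rng.integers(0, 2, n_walkers) - 1        # +/-1 steps
    # on the backbone (y == 0) move in x or y with equal probability;
    # inside a tooth move only along y
    move_x = (y == 0) & (rng.random(n_walkers) < 0.5)
    x = np.where(move_x, x + step, x)
    y = np.where(move_x, y, y + step)
    msd[t] = np.mean(x.astype(float) ** 2)

# effective scaling exponent between t = n_steps/4 and t = n_steps
alpha = np.log(msd[-1] / msd[n_steps // 4 - 1]) / np.log(4.0)
```

Trapping in the teeth interrupts the backbone motion for heavy-tailed waiting times, so the effective exponent comes out close to 1/2 rather than the value 1 of ordinary diffusion, matching the analytical MSD expressions derived in the paper for this limit.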

  12. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  13. Recovering Galaxy Properties Using Gaussian Process SED Fitting

    NASA Astrophysics Data System (ADS)

    Iyer, Kartheik; Awan, Humna

    2018-01-01

    Information about physical quantities such as stellar masses, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian process regression trained on a sample of SEDs corresponding to galaxies from a semi-analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of other methods, including simple linear regression, random forests, and k-nearest neighbours. We find that the Gaussian process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to that of the target data, which can be extremely useful when it is generalized to more subtle galaxy properties.
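Gaussian process regression with predictive uncertainties, as used above, can be sketched in plain NumPy. This is a generic RBF-kernel implementation on toy 1-D data, not the authors' SED-fitting pipeline; the kernel hyperparameters and the toy function are assumptions:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=0.1):
    """GP regression posterior mean and standard deviation (RBF kernel)."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    K = rbf(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)            # posterior mean
    cov = rbf(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy "noisy photometry": recover a smooth function with error bars
rng = np.random.default_rng(8)
X_train = np.linspace(0.0, 6.0, 25)
y_train = np.sin(X_train) + 0.05 * rng.standard_normal(25)
X_test = np.linspace(0.0, 6.0, 50)
mean, std = gp_predict(X_train, y_train, X_test)
```

The posterior standard deviation is what supplies per-object uncertainties; it grows automatically in regions with little training data, which is the behavior that makes GPs attractive when the training and target distributions differ.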

  14. Evidence for higher reaction time variability for children with ADHD on a range of cognitive tasks including reward and event rate manipulations

    PubMed Central

    Epstein, Jeffery N.; Langberg, Joshua M.; Rosen, Paul J.; Graham, Amanda; Narad, Megan E.; Antonini, Tanya N.; Brinkman, William B.; Froehlich, Tanya; Simon, John O.; Altaye, Mekibib

    2012-01-01

    Objective: The purpose of the research study was to examine the manifestation of variability in reaction times (RT) in children with Attention Deficit Hyperactivity Disorder (ADHD) and to examine whether RT variability presented differently across a variety of neuropsychological tasks, was present across the two most common ADHD subtypes, and whether it was affected by reward and event rate (ER) manipulations. Method: Children with ADHD-Combined Type (n=51), ADHD-Predominantly Inattentive Type (n=53) and 47 controls completed five neuropsychological tasks (Choice Discrimination Task, Child Attentional Network Task, Go/No-Go task, Stop Signal Task, and N-back task), each allowing trial-by-trial assessment of reaction times. Multiple indicators of RT variability, including RT standard deviation, coefficient of variation and ex-Gaussian tau, were used. Results: Children with ADHD demonstrated greater RT variability than controls across all five tasks as measured by the ex-Gaussian indicator tau. There were minimal differences in RT variability across the ADHD subtypes. Children with ADHD also had poorer task accuracy than controls across all tasks except the Choice Discrimination task. Although ER and reward manipulations did affect children’s RT variability and task accuracy, these manipulations largely did not differentially affect children with ADHD compared to controls. RT variability and task accuracy were highly correlated across tasks. Removing variance attributable to RT variability from task accuracy did not appreciably affect between-group differences in task accuracy. Conclusions: High RT variability is a ubiquitous and robust phenomenon in children with ADHD. PMID:21463041
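The ex-Gaussian indicator tau referred to above can be estimated by the method of moments, since for an ex-Gaussian (a Gaussian plus an independent exponential) the third central moment equals 2τ³. A minimal sketch on simulated RTs; the parameter values are illustrative, not the study's data:

```python
import numpy as np

def exgaussian_moments(rt):
    """Moment estimators for the ex-Gaussian: tau from the third central
    moment (m3 = 2 * tau**3), then mu and sigma from the mean and variance."""
    m = rt.mean()
    v = rt.var()
    m3 = ((rt - m) ** 3).mean()
    tau = (m3 / 2.0) ** (1.0 / 3.0)          # exponential (slow-tail) component
    mu = m - tau                             # Gaussian mean
    sigma = np.sqrt(max(v - tau ** 2, 0.0))  # Gaussian spread
    return mu, sigma, tau

# Simulated reaction times in ms: Normal(400, 50) + Exponential(tau = 150)
rng = np.random.default_rng(3)
rt = rng.normal(400.0, 50.0, 100_000) + rng.exponential(150.0, 100_000)
mu, sigma, tau = exgaussian_moments(rt)
```

Tau isolates the heavy right tail of occasional very slow responses, which is why it is a more specific marker of attentional lapses than the overall RT standard deviation. (Maximum-likelihood fitting is preferred for small per-subject trial counts; the moment estimator is shown for transparency.)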

  15. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

  16. Identification of degenerate neuronal systems based on intersubject variability.

    PubMed

    Noppeney, Uta; Penny, Will D; Price, Cathy J; Flandin, Guillaume; Friston, Karl J

    2006-04-15

    Group studies implicitly assume that all subjects activate one common system to sustain a particular cognitive task. Intersubject variability is generally treated as well-behaved and uninteresting noise. However, intersubject variability might result from subjects engaging different degenerate neuronal systems that are each sufficient for task performance. This would produce a multimodal distribution of intersubject variability. We have explored this idea with the help of Gaussian mixture modeling and Bayesian model comparison procedures. We illustrate our approach using a crossmodal priming paradigm, in which subjects perform a semantic decision on environmental sounds or their spoken names that were preceded by a semantically congruent or incongruent picture or written name. All subjects consistently activated the superior temporal gyri bilaterally, the left fusiform gyrus and the inferior frontal sulcus. Comparing one- and two-Gaussian mixture models of the unexplained residuals provided very strong evidence for two groups with distinct activation patterns: six subjects exhibited additional activations in the superior temporal sulci bilaterally and the right superior frontal and central sulcus, while eleven subjects showed increased activation in the striate and the right inferior parietal cortex. These results suggest that semantic decisions on auditory-visual compound stimuli might be accomplished by two overlapping degenerate neuronal systems.
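A one- versus two-component Gaussian mixture comparison of the kind described can be sketched generically with an EM fit and BIC on toy 1-D data; the study's residual modeling and Bayesian model comparison are more involved than this illustration:

```python
import numpy as np
from scipy.stats import norm

def em_gmm2(x, n_iter=200):
    """Two-component 1-D Gaussian mixture fitted by EM."""
    w = np.array([0.5, 0.5])
    mu = np.percentile(x, [25.0, 75.0])             # deterministic initialization
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        dens = w * norm.pdf(x[:, None], mu, sd)     # E-step: responsibilities
        r = dens / dens.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)                          # M-step: update parameters
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    loglik = np.log((w * norm.pdf(x[:, None], mu, sd)).sum(axis=1)).sum()
    return w, mu, sd, loglik

# Bimodal "residuals": two subgroups centered at -3 and +3
rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(-3, 1, 1000), rng.normal(3, 1, 1000)])
_, mu, _, ll2 = em_gmm2(x)
ll1 = norm.logpdf(x, x.mean(), x.std()).sum()       # single-Gaussian fit
bic1 = 2 * np.log(len(x)) - 2 * ll1                 # 2 parameters
bic2 = 5 * np.log(len(x)) - 2 * ll2                 # 5 parameters
```

A clearly lower BIC for the two-component model is the kind of evidence that, in the study, supported splitting subjects into two groups with distinct activation patterns.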

  17. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
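A simulation of this kind can be reproduced in a few lines with SciPy. This sketch uses an illustrative lognormal shape (σ = 0.4) rather than the populations simulated in the study, so the exact rates will differ:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(4)
n, reps, alpha = 30, 500, 0.05

# Sensitivity: fraction of truly Gaussian samples accepted as Gaussian
sens = np.mean([shapiro(rng.normal(0.0, 1.0, n)).pvalue >= alpha
                for _ in range(reps)])
# Specificity: fraction of lognormal samples correctly rejected
spec = np.mean([shapiro(rng.lognormal(0.0, 0.4, n)).pvalue < alpha
                for _ in range(reps)])
```

At n = 30 the rejection rate for skewed samples is typically far below 1, illustrating why normality tests alone are a weak gatekeeper for parametric RI methods at small sample sizes.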

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

    The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or coupled linearly to an external environment.

  19. Irradiation direction from texture

    NASA Astrophysics Data System (ADS)

    Koenderink, Jan J.; Pont, Sylvia C.

    2003-10-01

    We present a theory of image texture resulting from the shading of corrugated (three-dimensional textured) surfaces, Lambertian on the micro scale, in the domain of geometrical optics. The derivation applies to isotropic Gaussian random surfaces, under collimated illumination, in normal view. The theory predicts the structure tensors from either the gradient or the Hessian of the image intensity and allows inferences of the direction of irradiation of the surface. Although the assumptions appear prima facie rather restrictive, even for surfaces that are not at all Gaussian, with the bidirectional reflectance distribution function far from Lambertian and vignetting and multiple scattering present, we empirically recover the direction of irradiation with an accuracy of a few degrees.

  20. Pinning time statistics for vortex lines in disordered environments.

    PubMed

    Dobramysl, Ulrich; Pleimling, Michel; Täuber, Uwe C

    2014-12-01

    We study the pinning dynamics of magnetic flux (vortex) lines in a disordered type-II superconductor. Using numerical simulations of a directed elastic line model, we extract the pinning time distributions of vortex line segments. We compare different model implementations for the disorder in the surrounding medium: discrete, localized pinning potential wells that are either attractive and repulsive or purely attractive, and whose strengths are drawn from a Gaussian distribution; as well as continuous Gaussian random potential landscapes. We find that both schemes yield power-law distributions in the pinned phase as predicted by extreme-event statistics, yet they differ significantly in their effective scaling exponents and their short-time behavior.

  1. A general method for the definition of margin recipes depending on the treatment technique applied in helical tomotherapy prostate plans.

    PubMed

    Sevillano, David; Mínguez, Cristina; Sánchez, Alicia; Sánchez-Reyes, Alberto

    2016-01-01

    To obtain specific margin recipes that take into account the dosimetric characteristics of the treatment plans used in a single institution. We obtained dose-population histograms (DPHs) of 20 helical tomotherapy treatment plans for prostate cancer by simulating the effects of different systematic errors (Σ) and random errors (σ) on these plans. We obtained dosimetric margins and margin reductions due to random errors (random margins) by fitting the theoretical results of coverages for Gaussian distributions with coverages of the planned D99% obtained from the DPHs. The dosimetric margins obtained for helical tomotherapy prostate treatments were 3.3 mm, 3 mm, and 1 mm in the lateral (Lat), anterior-posterior (AP), and superior-inferior (SI) directions. Random margins showed parabolic dependencies, yielding expressions of 0.16σ², 0.13σ², and 0.15σ² for the Lat, AP, and SI directions, respectively. When focusing on values up to σ = 5 mm, random margins could be fitted considering Gaussian penumbras with standard deviations (σp) equal to 4.5 mm Lat, 6 mm AP, and 5.5 mm SI. Despite complex dose distributions in helical tomotherapy treatment plans, we were able to simplify the behaviour of our plans against treatment errors to single values of dosimetric and random margins for each direction. These margins allowed us to develop specific margin recipes for the respective treatment technique. The method is general and could be used for any treatment technique provided that DPHs can be obtained. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
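As a minimal illustration, the fitted parabolic random-margin expressions quoted in the abstract can be wrapped in a small helper; the coefficients and dosimetric margins are the abstract's, while the function itself is purely illustrative.

```python
# Coefficients of the parabolic random-margin fits (per the abstract, in mm
# when sigma is in mm); the helper function is illustrative, not the paper's.
COEFF = {"Lat": 0.16, "AP": 0.13, "SI": 0.15}
DOSIMETRIC = {"Lat": 3.3, "AP": 3.0, "SI": 1.0}  # dosimetric margins, mm

def random_margin(direction, sigma_mm):
    """Margin reduction (mm) attributable to random errors of SD sigma_mm."""
    return COEFF[direction] * sigma_mm ** 2
```

For example, a random error of σ = 5 mm in the lateral direction gives a random margin of 0.16 × 25 = 4.0 mm, on top of the 3.3 mm lateral dosimetric margin.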

  2. Preservation of Gaussian state entanglement in a quantum beat laser by reservoir engineering

    NASA Astrophysics Data System (ADS)

    Qurban, Misbah; Islam, Rameez ul; Ge, Guo-Qin; Ikram, Manzoor

    2018-04-01

    Quantum beat lasers have been considered as sources of entangled radiation in continuous variables such as Gaussian states. In order to preserve entanglement and to minimize entanglement degradation due to the system’s interaction with the surrounding environment, we propose to engineer the environment modes through insertion of another system between the laser resonator and the environment. This makes the environment surrounding the two-mode laser a structured reservoir. It not only enhances the entanglement between the two modes of the laser but also preserves the entanglement for sufficiently long times, a stringent requirement for quantum information processing tasks.

  3. Strong subadditivity for log-determinant of covariance matrices and its applications

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Simon, R.

    2016-08-01

    We prove that the log-determinant of the covariance matrix obeys the strong subadditivity inequality for arbitrary tripartite states of multimode continuous variable quantum systems. This establishes general limitations on the distribution of information encoded in the second moments of canonically conjugate operators. The inequality is shown to be stronger than the conventional strong subadditivity inequality for von Neumann entropy in a class of pure tripartite Gaussian states. We finally show that such an inequality implies a strict monogamy-type constraint for joint Einstein-Podolsky-Rosen steerability of single modes by Gaussian measurements performed on multiple groups of modes.

  4. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    USGS Publications Warehouse

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.

  5. Continuous-variable measurement-device-independent quantum key distribution: Composable security against coherent attacks

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-05-01

    We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.

  6. Entropy generation in Gaussian quantum transformations: applying the replica method to continuous-variable quantum information theory

    NASA Astrophysics Data System (ADS)

    Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.

    2016-02-01

    In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.

  7. Inferring time derivatives including cell growth rates using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
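A minimal sketch of the underlying idea, differentiating a Gaussian-process posterior mean fitted to noisy time-series data. The RBF kernel, hyperparameters, and test signal are assumptions for illustration; the authors' method is more general and also returns error estimates.

```python
import numpy as np

# Fit a GP (RBF kernel) to noisy samples of sin(t), then differentiate the
# posterior mean analytically: d/dt* k(t*, t_i) = -((t* - t_i)/ell^2) k(t*, t_i).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 30)
y = np.sin(t) + rng.normal(0.0, 0.01, t.size)   # noisy "growth curve"

ell, noise = 1.0, 1e-2                          # illustrative hyperparameters
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)
alpha = np.linalg.solve(K + noise**2 * np.eye(t.size), y)

def posterior_derivative(ts):
    """Derivative of the GP posterior mean at query points ts."""
    d = ts[:, None] - t[None, :]
    dk = -(d / ell**2) * np.exp(-0.5 * (d / ell) ** 2)
    return dk @ alpha
```

Away from the boundaries of the data the inferred derivative closely tracks the true derivative cos(t), with no finite differencing of the noisy samples required.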

  8. Assessing medication effects in the MTA study using neuropsychological outcomes.

    PubMed

    Epstein, Jeffery N; Conners, C Keith; Hervey, Aaron S; Tonev, Simon T; Arnold, L Eugene; Abikoff, Howard B; Elliott, Glen; Greenhill, Laurence L; Hechtman, Lily; Hoagwood, Kimberly; Hinshaw, Stephen P; Hoza, Betsy; Jensen, Peter S; March, John S; Newcorn, Jeffrey H; Pelham, William E; Severe, Joanne B; Swanson, James M; Wells, Karen; Vitiello, Benedetto; Wigal, Timothy

    2006-05-01

    While studies have increasingly investigated deficits in reaction time (RT) and RT variability in children with attention deficit/hyperactivity disorder (ADHD), few studies have examined the effects of stimulant medication on these important neuropsychological outcome measures. 316 children who participated in the Multimodal Treatment Study of Children with ADHD (MTA) completed the Conners' Continuous Performance Test (CPT) at the 24-month assessment point. Outcome measures included standard CPT outcomes (e.g., errors of commission, mean hit reaction time (RT)) and RT indicators derived from an Ex-Gaussian distributional model (i.e., mu, sigma, and tau). Analyses revealed significant effects of medication across all neuropsychological outcome measures. Results on the Ex-Gaussian outcome measures revealed that stimulant medication slows RT and reduces RT variability. This demonstrates the importance of including analytic strategies that can accurately model the actual distributional pattern, including the positive skew. Further, the results of the study relate to several theoretical models of ADHD.
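The ex-Gaussian model referred to above is a Gaussian (mu, sigma) convolved with an exponential (tau); a quick simulation confirms the moment identities E[RT] = mu + tau and Var[RT] = sigma² + tau². The parameter values here are illustrative, not the MTA study's estimates.

```python
import random
import statistics

# Ex-Gaussian reaction times: RT = Gaussian(mu, sigma) + Exponential(tau).
# tau captures the positive skew (long "lapses"); values are illustrative.
random.seed(7)
mu, sigma, tau = 500.0, 50.0, 150.0   # milliseconds

rts = [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
       for _ in range(100_000)]

m = statistics.mean(rts)       # should be near mu + tau = 650
v = statistics.variance(rts)   # should be near sigma^2 + tau^2 = 25000
```

Because tau contributes to both the mean and the variance, a medication effect that shrinks tau simultaneously speeds mean RT and reduces RT variability, which is why the distributional analysis matters.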

  9. Monitoring of continuous-variable quantum key distribution system in real environment.

    PubMed

    Liu, Weiqi; Peng, Jinye; Huang, Peng; Huang, Duan; Zeng, Guihua

    2017-08-07

    How to guarantee the practical security of a continuous-variable quantum key distribution (CVQKD) system has been an important issue in quantum cryptography applications. In contrast to previous practical security strategies, which focus on the intercept-resend attack or the Gaussian attack, in this paper we investigate a practical security strategy based on a general attack, i.e., an arbitrary individual or collective attack on the system by Eve. The lower bound on the intensity disturbance of the local oscillator signal at which the eavesdropper can successfully conceal herself is obtained, considering that all noises in the practical environment can be exploited by Eve. Furthermore, we obtain an optimal monitoring condition for the practical CVQKD system so that legitimate communicators can monitor the general attack in real time. As examples, the practical security of two specific systems, i.e., the Gaussian-modulated coherent state CVQKD system and the middle-based CVQKD system, is investigated under intercept-resend attacks.

  10. Phase transition and entropy inequality of noncommutative black holes in a new extended phase space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yan-Gang; Xu, Zhen-Ming, E-mail: miaoyg@nankai.edu.cn, E-mail: xuzhenm@mail.nankai.edu.cn

    We analyze the thermodynamics of the noncommutative high-dimensional Schwarzschild-Tangherlini AdS black hole with a non-Gaussian smeared matter distribution by regarding a noncommutative parameter as an independent thermodynamic variable, named the noncommutative pressure. In the new extended phase space that includes this noncommutative pressure and its conjugate variable, we reveal that the noncommutative pressure and the original thermodynamic pressure related to the negative cosmological constant have opposite effects in the phase transition of the noncommutative black hole, i.e., the former dominates the UV regime while the latter dominates the IR regime. In addition, by means of the reverse isoperimetric inequality, we indicate that among the noncommutative black holes with various matter distributions, only the black hole with the Gaussian smeared matter distribution holds the maximum entropy for a given thermodynamic volume.

  11. Divergence-free approach for obtaining decompositions of quantum-optical processes

    NASA Astrophysics Data System (ADS)

    Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.

    2018-02-01

    Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrary large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at its respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to simulation of continuous-variable channels.

  12. Geometric characterization of separability and entanglement in pure Gaussian states by single-mode unitary operations

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio

    2007-10-01

    We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.

  13. Purity of Gaussian states: Measurement schemes and time evolution in noisy channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio

    2003-07-01

    We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.

  14. Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing; Evans, Philip G.; Grice, Warren P.

    In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
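The statistical core of the passive scheme, correlated Gaussian data produced by splitting a thermal mode on a balanced beam splitter, can be sketched classically for a single quadrature. Shot-noise units are used; the thermal variance V and sample count are illustrative assumptions.

```python
import numpy as np

# One quadrature of a thermal mode (variance V in shot-noise units) mixed
# with vacuum on a 50:50 beam splitter; Alice keeps one output, Bob the other.
rng = np.random.default_rng(42)
V, N = 9.0, 200_000
x_th = rng.normal(0.0, np.sqrt(V), N)   # thermal quadrature
x_vac = rng.normal(0.0, 1.0, N)         # vacuum entering the open port

x_alice = (x_th + x_vac) / np.sqrt(2)
x_bob = (x_th - x_vac) / np.sqrt(2)

# Theory: Cov = (V-1)/2, Var = (V+1)/2, so the correlation is (V-1)/(V+1).
rho = np.corrcoef(x_alice, x_bob)[0, 1]
```

Here the empirical correlation approaches (V − 1)/(V + 1) = 0.8: the brighter the thermal source, the stronger the Alice-Bob correlations, consistent with the abstract's remark that a strong initial thermal state tolerates high detector noise on Alice's side.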

  15. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
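The bounded update complexity in this method rests on a finite-dimensional random feature map whose inner products approximate a kernel. A minimal sketch of such a sparse-spectrum (random Fourier feature) approximation of an RBF kernel follows; the feature dimension and lengthscale are illustrative assumptions.

```python
import numpy as np

# Random Fourier features: for omega ~ N(0, I/ell^2),
# E[cos(w·x)cos(w·y) + sin(w·x)sin(w·y)] = exp(-||x-y||^2 / (2 ell^2)).
rng = np.random.default_rng(0)
d, D, ell = 3, 2000, 1.0
omega = rng.normal(0.0, 1.0 / ell, (D, d))   # sampled spectral frequencies

def features(x):
    """Map x (shape (d,)) to a 2D-dim feature vector; inner products of
    these vectors approximate the RBF kernel k(x, y)."""
    z = omega @ x
    return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(D)

x, y = rng.normal(size=d), rng.normal(size=d)
k_exact = np.exp(-np.sum((x - y) ** 2) / (2 * ell**2))
k_approx = features(x) @ features(y)
```

Because the model becomes linear in the fixed feature space, each incremental update costs O(D²) regardless of how many samples have been seen, which is what makes the real-time constant-cost updates possible.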

  16. Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution

    DOE PAGES

    Qi, Bing; Evans, Philip G.; Grice, Warren P.

    2018-01-01

    In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.

  17. Multi-field inflation with a random potential

    NASA Astrophysics Data System (ADS)

    Tye, S.-H. Henry; Xu, Jiajun; Zhang, Yang

    2009-04-01

    Motivated by the possibility of inflation in the cosmic landscape, which may be approximated by a complicated potential, we study the density perturbations in multi-field inflation with a random potential. The random potential causes the inflaton to undergo a Brownian-like motion with a drift in the D-dimensional field space, allowing entropic perturbation modes to continuously and randomly feed into the adiabatic mode. To quantify such an effect, we employ a stochastic approach to evaluate the two-point and three-point functions of primordial perturbations. We find that in the weakly random scenario where the stochastic scatterings are frequent but mild, the resulting power spectrum resembles that of the single field slow-roll case, with up to 2% more red tilt. The strongly random scenario, in which the coarse-grained motion of the inflaton is significantly slowed down by the scatterings, leads to rich phenomenologies. The power spectrum exhibits primordial fluctuations on all angular scales. Such features may already be hiding in the error bars of the observed CMB TT (as well as TE and EE) power spectra and have been smoothed out by binning of data points. With more data coming in the future, we expect these features can be detected or falsified. On the other hand the tensor power spectrum itself is free of fluctuations and the tensor to scalar ratio is enhanced by the large ratio of the Brownian-like motion speed over the drift speed. In addition a large negative running of the power spectral index is possible. Non-Gaussianity is generically suppressed by the growth of adiabatic perturbations on super-horizon scales, and is negligible in the weakly random scenario. However, non-Gaussianity can possibly be enhanced by resonant effects in the strongly random scenario or arise from the entropic perturbations during the onset of (p)reheating if the background inflaton trajectory exhibits particular properties.
The formalism developed in this paper can be applied to a wide class of multi-field inflation models including, e.g. the N-flation scenario.

  18. Rapid automatized naming (RAN) in children with ADHD: An ex-Gaussian analysis.

    PubMed

    Ryan, Matthew; Jacobson, Lisa A; Hague, Cole; Bellows, Alison; Denckla, Martha B; Mahone, E Mark

    2017-07-01

    Children with ADHD demonstrate frequent "lapses" in performance on tasks in which the stimulus presentation rate is externally controlled, leading to increased variability in response times. It is less clear whether these lapses are also evident during performance on self-paced tasks, e.g., rapid automatized naming (RAN), or whether RAN inter-item pause time variability uniquely predicts reading performance. A total of 80 children aged 9 to 14 years (45 children with attention-deficit/hyperactivity disorder [ADHD] and 35 typically developing [TD] children) completed RAN and reading fluency measures. RAN responses were digitally recorded for analyses. Inter-stimulus pause time distributions (excluding between-row pauses) were analyzed using traditional (mean, standard deviation [SD], coefficient of variation [CV]) and ex-Gaussian (mu, sigma, tau) methods. Children with ADHD were found to be significantly slower than TD children (p < .05) on RAN letter naming mean response time as well as on oral and silent reading fluency. RAN response time distributions were also significantly more variable (SD, tau) in children with ADHD. Hierarchical regression revealed that the exponential component (tau) of the letter-naming response time distribution uniquely predicted reading fluency in children with ADHD (p < .001, ΔR² = .16), even after controlling for IQ, basic reading, ADHD symptom severity and age. The findings suggest that children with ADHD (without word-level reading difficulties) manifest slowed performance on tasks of reading fluency; however, this "slowing" may be due in part to lapses from ongoing performance that can be assessed directly using ex-Gaussian methods that capture excessively long response times.

  19. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundaries capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
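The parametrization-and-augmentation idea can be sketched on a scalar surrogate system: the unknown bias is appended to the state vector and estimated jointly by a perturbed-observation EnKF. All dynamics, noise levels, and ensemble sizes below are illustrative stand-ins for the Navier-Stokes setting of the abstract.

```python
import numpy as np

# Truth: x_{k+1} = a*x_k + bias + w, observed as y = x + v.
# The filter does not know bias and estimates it via state augmentation.
rng = np.random.default_rng(3)
a, bias_true = 0.9, 1.0
q, r = 0.05, 0.1            # process / observation noise SDs
Nens, T = 100, 300

x_true = 0.0
ens = np.zeros((2, Nens))                # rows: state x, augmented bias b
ens[1] = rng.normal(0.0, 1.0, Nens)      # broad prior on the bias

for _ in range(T):
    # truth and noisy observation
    x_true = a * x_true + bias_true + rng.normal(0.0, q)
    y = x_true + rng.normal(0.0, r)
    # forecast: bias modeled as (nearly) constant, small walk avoids collapse
    ens[0] = a * ens[0] + ens[1] + rng.normal(0.0, q, Nens)
    ens[1] += rng.normal(0.0, 1e-3, Nens)
    # analysis: perturbed-observation EnKF with H = [1, 0]
    P = np.cov(ens)
    K = P[:, 0] / (P[0, 0] + r**2)       # Kalman gain, shape (2,)
    innov = y + rng.normal(0.0, r, Nens) - ens[0]
    ens += np.outer(K, innov)

bias_est = ens[1].mean()                 # should approach bias_true
```

The cross-covariance between the observed state and the unobserved bias, carried by the ensemble, is what lets the analysis step pull the bias estimate toward its true value without ever measuring it directly.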

  20. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
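The first conversion described above, from younger/older age bounds plus a coverage probability to a Gaussian, amounts to a one-line inversion of the normal CDF. A sketch, with an illustrative function name and interface:

```python
from statistics import NormalDist

def bounds_to_gaussian(younger, older, coverage=0.95):
    """Return (mean, sd) of the Gaussian whose central `coverage` interval
    matches the stated age bounds. Interface is illustrative."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2.0)   # e.g. ~1.96 for 95%
    mean = 0.5 * (younger + older)
    sd = (older - younger) / (2.0 * z)
    return mean, sd
```

For instance, bounds of 80 and 120 ka with 95% coverage map to a Gaussian with mean 100 ka and SD of roughly 10.2 ka.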

Top