Connections between Graphical Gaussian Models and Factor Analysis
ERIC Educational Resources Information Center
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods, such as the circular and elliptical (bivariate) Gaussian prediction methods, are often used in field layout design and aiming strategies. For experimental or small central receiver systems, however, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. A novel flux prediction method was therefore developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database allows more accurate predictions to be made in a shorter time frame.
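As an illustration of the kind of fit described above (a sketch, not the authors' implementation), one could resample pixel coordinates of a measured flux map in proportion to intensity and fit a mixture with scikit-learn; the flux map, component count and sampling scheme below are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_flux_gmm(flux, n_components=3, n_samples=20000, rng=None):
    """Fit a Gaussian mixture to a 2-D flux map by intensity-weighted resampling."""
    rng = np.random.default_rng(rng)
    ny, nx = flux.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    w = flux.ravel().clip(min=0)
    idx = rng.choice(len(coords), size=n_samples, p=w / w.sum())
    samples = coords[idx] + rng.uniform(-0.5, 0.5, size=(n_samples, 2))  # de-bin
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(samples)
    return gmm  # weights_, means_, covariances_ summarize the flux profile

# Example with a synthetic two-lobe flux map (purely illustrative)
y, x = np.mgrid[0:64, 0:64]
flux = np.exp(-((x - 24)**2 + (y - 32)**2) / 50.0) + 0.5 * np.exp(-((x - 40)**2 + (y - 30)**2) / 30.0)
model = fit_flux_gmm(flux, n_components=2)
print(model.weights_, model.means_)
```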
INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE
INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
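For orientation, the textbook Gaussian puff concentration equation (with ground reflection) that this class of models evaluates for each puff and receptor can be written in a few lines; this is a generic sketch, not INPUFF's exact formulation, and all parameter values below are invented.

```python
import numpy as np

def puff_concentration(x, y, z, xc, yc, H, Q, sx, sy, sz):
    """Textbook Gaussian puff concentration with ground reflection.

    x, y, z    : receptor coordinates (m)
    xc, yc     : puff centre (m); H = effective release height (m)
    Q          : pollutant mass in the puff (g)
    sx, sy, sz : dispersion parameters sigma_x, sigma_y, sigma_z (m)
    Returns concentration in g/m^3.
    """
    norm = Q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    gx = np.exp(-((x - xc) ** 2) / (2.0 * sx ** 2))
    gy = np.exp(-((y - yc) ** 2) / (2.0 * sy ** 2))
    gz = np.exp(-((z - H) ** 2) / (2.0 * sz ** 2)) + np.exp(-((z + H) ** 2) / (2.0 * sz ** 2))
    return norm * gx * gy * gz

# Contribution of one puff at a ground-level receptor (illustrative values)
print(puff_concentration(x=100.0, y=10.0, z=0.0, xc=90.0, yc=0.0,
                         H=20.0, Q=1000.0, sx=25.0, sy=25.0, sz=10.0))
```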
Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M
2018-06-01
This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei
2018-02-01
For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequence at each frequency point of the pure-seawater images in the Fourier domain, being more stable than the gray-value sequence of each background pixel in the spatial domain, is modelled as a Gaussian. Next, a probability weighted matrix is built based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed using the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
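A stripped-down sketch of the per-frequency Gaussian background idea described above (without the probability weighted matrix or the shape-based false-alarm removal); the frame data, threshold k and array shapes are assumptions.

```python
import numpy as np

def fit_background_model(frames):
    """Per-frequency Gaussian model of sea-background amplitude spectra.

    frames : array of shape (T, H, W) containing pure-seawater images.
    Returns the mean and std of the amplitude spectrum at every frequency point.
    """
    amps = np.abs(np.fft.fft2(frames, axes=(-2, -1)))   # (T, H, W)
    return amps.mean(axis=0), amps.std(axis=0) + 1e-12

def detect_foreground(frame, mu, sigma, k=3.0):
    """Flag frequency points deviating from the background Gaussian model."""
    amp = np.abs(np.fft.fft2(frame))
    return np.abs(amp - mu) > k * sigma   # boolean foreground-frequency mask

# Illustrative use with random data standing in for infrared frames
rng = np.random.default_rng(0)
sea = rng.normal(size=(50, 64, 64))
mu, sigma = fit_background_model(sea)
mask = detect_foreground(sea[0] + rng.normal(scale=2.0, size=(64, 64)), mu, sigma)
print(mask.sum(), "frequency points flagged")
```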
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Hsi, W; Zhao, J
2016-06-15
Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%–80% lateral penumbras predicted with error functions by the double Gaussian model for protons and the single Gaussian model for carbon agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose in air was at most 0.74% for protons with the double Gaussian model and 0.57% for carbon with the single Gaussian model. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but can be used to provide reference values for clinical use and quality assurance.
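A minimal sketch of the single- versus double-Gaussian comparison of a lateral profile in air using least-squares fitting; the profile data, parameterisation and initial guesses are hypothetical, not the authors' empirical model.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_gauss(x, a, sigma):
    return a * np.exp(-x**2 / (2.0 * sigma**2))

def double_gauss(x, a, sigma1, w, sigma2):
    """Weighted sum of a narrow core and a broad halo Gaussian."""
    return a * ((1.0 - w) * np.exp(-x**2 / (2.0 * sigma1**2))
                + w * np.exp(-x**2 / (2.0 * sigma2**2)))

# Hypothetical lateral profile measurement (mm, relative dose)
x = np.linspace(-40, 40, 161)
y = double_gauss(x, 1.0, 4.0, 0.08, 15.0) + np.random.normal(0, 0.002, x.size)

p1, _ = curve_fit(single_gauss, x, y, p0=[1.0, 5.0])
p2, _ = curve_fit(double_gauss, x, y, p0=[1.0, 4.0, 0.1, 15.0])

rms1 = np.sqrt(np.mean((y - single_gauss(x, *p1))**2))
rms2 = np.sqrt(np.mean((y - double_gauss(x, *p2))**2))
print(f"single-Gaussian RMS residual: {rms1:.4f}, double-Gaussian: {rms2:.4f}")
```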
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-07-01
The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
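A minimal PyTorch sketch of the residual-learning idea (the network predicts the noise and the clean image is obtained by subtraction); the depth, width and absence of a training loop are simplifications, not the authors' released configuration.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Minimal DnCNN-style residual denoiser: the network predicts the noise."""
    def __init__(self, depth=17, channels=64, image_channels=1):
        super().__init__()
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, image_channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: subtract predicted noise

model = DnCNN(depth=7)                     # shallow version for a quick shape check
noisy = torch.randn(1, 1, 40, 40)
print(model(noisy).shape)                  # torch.Size([1, 1, 40, 40])
```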
Effects of scale-dependent non-Gaussianity on cosmological structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
LoVerde, Marilena; Miller, Amber; Shandera, Sarah
2008-04-15
The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.
Non-Gaussianities in multifield DBI inflation with a waterfall phase transition
NASA Astrophysics Data System (ADS)
Kidani, Taichi; Koyama, Kazuya; Mizuno, Shuntaro
2012-10-01
We study multifield Dirac-Born-Infeld (DBI) inflation models with a waterfall phase transition. This transition happens for a D3 brane moving in the warped conifold if there is an instability along angular directions. The transition converts the angular perturbations into the curvature perturbation. Thanks to this conversion, multifield models can evade the stringent constraints that strongly disfavor single field ultraviolet (UV) DBI inflation models in string theory. We explicitly demonstrate that our model satisfies current observational constraints on the spectral index and equilateral non-Gaussianity as well as the bound on the tensor to scalar ratio imposed in string theory models. In addition, we show that large local type non-Gaussianity is generated together with equilateral non-Gaussianity in this model.
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for processing mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian whose parameters describe the location, height and width of the corresponding peak. The authors' own implementation of the Expectation-Maximisation algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer. This tool was originally designed to generate sets of spectra with defined parameters.
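A toy sketch of an intensity-weighted EM fit of a Gaussian mixture to a spectrum, in the spirit of the parameterisation described above; the synthetic spectrum, initialisation and iteration count are assumptions, and this is not the authors' implementation.

```python
import numpy as np

def weighted_em_gmm(mz, intensity, n_peaks, n_iter=200):
    """EM for a 1-D Gaussian mixture where spectrum intensities act as point weights.

    Each fitted component gives the location (mean), width (sigma) and relative
    weight of one peak, mirroring the peak parameterisation described above.
    """
    w = intensity / intensity.sum()
    mu = np.linspace(mz.min(), mz.max(), n_peaks)          # crude initialisation
    sigma = np.full(n_peaks, (mz.max() - mz.min()) / (4.0 * n_peaks))
    pi = np.full(n_peaks, 1.0 / n_peaks)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each m/z point
        dens = pi * np.exp(-0.5 * ((mz[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: intensity-weighted updates
        nk = (w[:, None] * resp).sum(axis=0)
        mu = (w[:, None] * resp * mz[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((w[:, None] * resp * (mz[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
        pi = nk
    return pi, mu, sigma

# Synthetic two-peak spectrum (illustrative only)
mz = np.linspace(1000, 1010, 2000)
spec = 3 * np.exp(-0.5 * ((mz - 1003) / 0.1) ** 2) + 1 * np.exp(-0.5 * ((mz - 1006) / 0.15) ** 2)
print(weighted_em_gmm(mz, spec, n_peaks=2))
```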
Biasing and the search for primordial non-Gaussianity beyond the local type
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleyzes, Jérôme; De Putter, Roland; Doré, Olivier
Primordial non-Gaussianity encodes valuable information about the physics of inflation, including the spectrum of particles and interactions. Significant improvements in our understanding of non-Gaussianity beyond Planck require information from large-scale structure. The most promising approach to utilize this information comes from the scale-dependent bias of halos. For local non-Gaussianity, the improvements available are well studied, but the potential for non-Gaussianity beyond the local type, including equilateral and quasi-single field inflation, is much less well understood. In this paper, we forecast the capabilities of large-scale structure surveys to detect general non-Gaussianity through galaxy/halo power spectra. We study how non-Gaussianity can be distinguished from a general biasing model and where the information is encoded. For quasi-single field inflation, significant improvements over Planck are possible in some regions of parameter space. We also show that the multi-tracer technique can significantly improve the sensitivity for all non-Gaussianity types, providing up to an order of magnitude improvement for equilateral non-Gaussianity over the single-tracer measurement.
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% higher accuracy than the corresponding single-environment analyses for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
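A rough numpy sketch of the Gaussian-kernel idea used above: build K_ij = exp(-h·d_ij²) from a marker matrix and run kernel ridge regression as a stand-in for the RKHS/GBLUP machinery; the marker data, bandwidth and penalty are invented.

```python
import numpy as np

def gaussian_kernel(X, h=1.0):
    """K_ij = exp(-h * d_ij^2 / median(d^2)) from a marker matrix X (lines x markers)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-h * d2 / np.median(d2[d2 > 0]))

def kernel_ridge_predict(K, y, train, test, lam=1.0):
    """Predict phenotypes for 'test' lines from 'train' lines via kernel ridge regression."""
    Ktt = K[np.ix_(train, train)]
    alpha = np.linalg.solve(Ktt + lam * np.eye(len(train)), y[train])
    return K[np.ix_(test, train)] @ alpha

# Toy example: 100 lines, 500 markers, additive-ish phenotype
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(100, 500)).astype(float)
y = X[:, :20].sum(axis=1) + rng.normal(scale=2.0, size=100)
idx = rng.permutation(100); train, test = idx[:80], idx[80:]
K = gaussian_kernel(X, h=1.0)
pred = kernel_ridge_predict(K, y, train, test)
print(np.corrcoef(pred, y[test])[0, 1])   # rough prediction accuracy
```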
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential …, giving more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a …
Constraints on single-field inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico
2016-06-28
Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single, effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply, both theoretical and experimental, are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)² and δNδE, and the slow-roll parameter ε. This makes it possible to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leave some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased down to r < 10⁻². Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10⁻² even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix assisted laser desorption/ionization time-of-flight (MALDI-TOF) coupled with stable isotope standards (SIS) has been used to quantify native peptides. This peptide quantification by MALDI-TOF approach has difficulties quantifying samples containing peptides with ion currents in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to the two comparison methods for estimating the quantity of isolated isotopic clusters for single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation) with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior compared to the existing methods of peptide quantification and is capable of quantifying overlapping (convolved) peptides within the acceptable margin of error.
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be handled by a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
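The partitioning idea is easy to mimic in a few lines: split the input space at an assumed boundary and fit an independent GP to each region. This is a crude stand-in for the Bayesian treed GP, with an invented 1-D bifurcating response and a hand-chosen split.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Bifurcating-like 1-D response: two branches joined at x = 0 (illustrative only)
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 120)[:, None]
y = np.where(x[:, 0] < 0.0, np.sin(4 * x[:, 0]), 2.0 + np.sin(4 * x[:, 0]))
y += rng.normal(scale=0.05, size=y.shape)

split = 0.0                     # assumed partition boundary
kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=0.01)
gps = {}
for name, mask in [("left", x[:, 0] < split), ("right", x[:, 0] >= split)]:
    gps[name] = GaussianProcessRegressor(kernel=kernel).fit(x[mask], y[mask])

xq = np.array([[-0.5], [0.5]])
for name, gp in gps.items():
    m, s = gp.predict(xq, return_std=True)
    print(name, m.round(2), s.round(3))
```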
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a⁻¹∇, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir, conditioned on post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
Loop corrections to primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Boran, Sibel; Kahya, E. O.
2018-02-01
We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil
2018-02-13
Typically, the ensemble average polar component of solvation energy (ΔG_solv^polar) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, after which a single/rigid-conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling, previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136), can reproduce the ensemble average ΔG_solv^polar of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ⟨ΔG_solv^polar⟩ from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit waters, or crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In the other minimization environments (implicit or explicit waters or crystal structure), the traditional two-dielectric model can instead be selected to produce correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.
Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1992-01-01
In the present growth-rate calculations for a protoplanet that is embedded in a disk of planetesimals with triaxial Gaussian velocity dispersion and uniform surface density, the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, M. L.; Liu, B.; Hu, R. H.
In the case of a thin plasma slab accelerated by the radiation pressure of an ultra-intense laser pulse, the development of Rayleigh-Taylor instability (RTI) will destroy the acceleration structure and terminate the acceleration process much sooner than the theoretical limit. In this paper, a new scheme using multiple Gaussian pulses for ion acceleration in a radiation pressure acceleration regime is investigated with particle-in-cell simulation. We found that with multiple Gaussian pulses, the instability can be efficiently suppressed and the divergence of the ion bunch is greatly reduced, resulting in a longer acceleration time and a much more collimated ion bunch with higher energy than with a single Gaussian pulse. An analytical model is developed to describe the suppression of RTI at the laser-plasma interface. The model shows that the suppression of RTI is due to the introduction of the long-wavelength mode RTI by the multiple Gaussian pulses.
Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos
2017-01-01
The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential models, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fit. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model over an average of 53% and 33% of the tumor area, respectively. Non-Gaussian behavior was exhibited over an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
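For reference, the AIC-weight comparison mentioned above can be sketched from the residual sums of squares of the candidate fits; the least-squares form of AIC is used here (the study may have used a corrected variant), and the RSS values and parameter counts below are hypothetical.

```python
import numpy as np

def akaike_weights(rss, k, n):
    """AIC from least-squares fits (AIC = n*ln(RSS/n) + 2k) and the corresponding Akaike weights."""
    rss, k = np.asarray(rss, float), np.asarray(k, float)
    aic = n * np.log(rss / n) + 2.0 * k
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, w / w.sum()

# Hypothetical per-voxel residuals for MG, BG, MNG, BNG fits to 7 b-values
rss = [0.012, 0.011, 0.0115, 0.0108]     # residual sums of squares
k   = [2, 4, 3, 5]                       # assumed number of fitted parameters per model
aic, w = akaike_weights(rss, k, n=7)
best = ["MG", "BG", "MNG", "BNG"][int(np.argmax(w))]
print(aic.round(2), w.round(3), "->", best)
```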
Revisiting non-Gaussianity from non-attractor inflation models
NASA Astrophysics Data System (ADS)
Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei
2018-05-01
Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, as in the case of a smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.
Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions
NASA Astrophysics Data System (ADS)
Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.
2015-12-01
Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with a simplified channel model, which considers the effects of the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
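A sketch of the generalized Anscombe transform step described above, using the standard formula; the unit gain, noise level and simulated counts are assumptions rather than the paper's receiver design.

```python
import numpy as np

def generalized_anscombe(x, sigma, gain=1.0):
    """Generalized Anscombe transform: approximately variance-stabilises
    Poisson photon counts corrupted by additive zero-mean Gaussian noise (std sigma)."""
    y = gain * x + (3.0 / 8.0) * gain**2 + sigma**2
    return (2.0 / gain) * np.sqrt(np.maximum(y, 0.0))

# Check variance stabilisation on simulated SPAD-like counts plus thermal noise
rng = np.random.default_rng(3)
sigma = 2.0
for lam in (5, 20, 80):
    x = rng.poisson(lam, 100000) + rng.normal(0.0, sigma, 100000)
    print(lam, np.var(generalized_anscombe(x, sigma)).round(2))   # close to 1 at each level
```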
Mulkern, Robert V; Balasubramanian, Mukund; Mitsouras, Dimitrios
2014-07-30
To determine whether Lorentzian or Gaussian intra-voxel frequency distributions are better suited for modeling data acquired with gradient-echo sampling of single spin-echoes for the simultaneous characterization of irreversible and reversible relaxation rates. Clinical studies (e.g., of brain iron deposition) using such acquisition schemes have typically assumed Lorentzian distributions. Theoretical expressions of the time-domain spin-echo signal for intra-voxel Lorentzian and Gaussian distributions were used to fit data from a human brain scanned at both 1.5 Tesla (T) and 3T, resulting in maps of irreversible and reversible relaxation rates for each model. The relative merits of the Lorentzian versus Gaussian model were compared by means of quality of fit considerations. Lorentzian fits were equivalent to Gaussian fits primarily in regions of the brain where irreversible relaxation dominated. In the multiple brain regions where reversible relaxation effects become prominent, however, Gaussian fits were clearly superior. The widespread assumption that a Lorentzian distribution is suitable for quantitative transverse relaxation studies of the brain should be reconsidered, particularly at 3T and higher field strengths as reversible relaxation effects become more prominent. Gaussian distributions offer alternate fits of experimental data that should prove quite useful in general. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc. © 2014 Wiley Periodicals, Inc.
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
NASA Technical Reports Server (NTRS)
Selle, L. C.; Bellan, Josette
2006-01-01
Transitional databases from Direct Numerical Simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar dissipation Probability Distribution Function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of the high scalar-dissipation values, which are significant for turbulent reaction rate modeling in non-premixed flows using flamelet models. With the same DNS calculated moments of the scalar-dissipation PDF and making a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate much better the DNS-extracted PDF, particularly in the regime of the high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF even improves when the comparison is performed for higher initial Re layers, whereas the comparison with the Gaussian is independent of the initial Re values. For two-phase flows, the comparison between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading. The higher fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction rate modeling.
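The moment-matching step behind a β-PDF model is short: given the mean and variance of the (suitably normalized) scalar dissipation, the two shape parameters follow directly. The moment values below are invented, and the authors' specific change of variables is not reproduced.

```python
import numpy as np
from scipy.stats import beta

def beta_from_moments(mean, var):
    """Match a Beta(a, b) distribution on [0, 1] to a given mean and variance."""
    if var >= mean * (1.0 - mean):
        raise ValueError("variance too large for a Beta distribution")
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

# Hypothetical moments of scalar dissipation mapped onto [0, 1]
a, b = beta_from_moments(mean=0.15, var=0.01)
x = np.linspace(1e-4, 1 - 1e-4, 5)
print(a, b)
print(beta.pdf(x, a, b).round(3))   # model PDF to compare with a DNS-extracted one
```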
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
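A 1-D toy of the idea described above: treat intensities on the base grid as GP training data and read off the posterior standard deviation at resampled, off-grid locations. Scikit-learn is used here as a stand-in for the authors' formulation; the kernel and data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Intensities known on an integer pixel grid (1-D slice for illustration)
grid = np.arange(0, 20, dtype=float)[:, None]
intensity = np.sin(0.5 * grid[:, 0]) + np.random.default_rng(4).normal(0, 0.02, grid.shape[0])

gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(1e-3)).fit(grid, intensity)

# Resampled (off-grid) locations of the kind used during registration
query = np.array([[3.0], [3.5], [7.25]])
mean, std = gp.predict(query, return_std=True)
print(np.c_[query, mean, std])   # std grows with distance from the base grid
```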
Theory and generation of conditional, scalable sub-Gaussian random fields
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2016-03-01
Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one and two-dimensional synthetic test cases.
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals, in lieu of a Gaussian distribution.
Distillation and purification of symmetric entangled Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiurasek, Jaromir
2010-10-15
We propose an entanglement distillation and purification scheme for symmetric two-mode entangled Gaussian states that allows one to asymptotically extract a pure entangled Gaussian state from any input entangled symmetric Gaussian state. The proposed scheme is a modified and extended version of the entanglement distillation protocol originally developed by Browne et al. [Phys. Rev. A 67, 062320 (2003)]. A key feature of the present protocol is that it utilizes a two-copy degaussification procedure that involves a Mach-Zehnder interferometer with single-mode non-Gaussian filters inserted in its two arms. The required non-Gaussian filtering operations can be implemented by coherently combining two sequences of single-photon addition and subtraction operations.
Gaussian entanglement generation from coherence using beam-splitters
Wang, Zhong-Xiao; Wang, Shuhao; Ma, Teng; Wang, Tie-Jun; Wang, Chuan
2016-01-01
The generation and quantification of quantum entanglement is crucial for quantum information processing. Here we study the transition of Gaussian correlation under the effect of linear optical beam-splitters. We find that the single-mode Gaussian coherence acts as the resource in generating Gaussian entanglement for two squeezed states as the input states. With the help of consecutive beam-splitters, single-mode coherence and quantum entanglement can be converted into each other. Our results reveal that by using a finite number of beam-splitters, it is possible to extract all the entanglement from the single-mode coherence even if the entanglement is wiped out before each beam-splitter. PMID:27892537
CMB constraints on running non-Gaussianity
NASA Astrophysics Data System (ADS)
Oppizzi, F.; Liguori, M.; Renzi, A.; Arroja, F.; Bartolo, N.
2018-05-01
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the f_NL running spectral index, n_NG, using WMAP 9-year data. Our final bounds (68% C.L.) read -0.6 < n_NG < 1.4, -0.3 < n_NG < 1.2, -1.1
Single-frequency Ince-Gaussian mode operations of laser-diode-pumped microchip solid-state lasers.
Ohtomo, Takayuki; Kamikariya, Koji; Otsuka, Kenju; Chu, Shu-Chun
2007-08-20
Various single-frequency Ince-Gaussian mode oscillations have been achieved in laser-diode-pumped microchip solid-state lasers, including LiNdP₄O₁₂ (LNP) and Nd:GdVO₄, by adjusting the azimuthal symmetry of the short laser resonator. Ince-Gaussian modes formed by astigmatic pumping have been reproduced by numerical simulation.
Hollow sinh-Gaussian beams and their paraxial properties.
Sun, Qiongge; Zhou, Keya; Fang, Guangyu; Zhang, Guoqiang; Liu, Zhengjun; Liu, Shutian
2012-04-23
A new mathematical model of dark-hollow beams, described as hollow sinh-Gaussian (HsG) beams, has been introduced. The intensity distributions of HsG beams are characterized by a single bright ring along the propagation whose size is determined by the order of the beams; the shape of the ring can be controlled by the beam width, which leads to elliptical HsG beams. The propagation characteristics of HsG beams through an ABCD optical system have been investigated; the beams can be regarded as a superposition of a series of Hypergeometric-Gaussian (HyGG) beams. As a numerical example, the propagation characteristics of HsG beams in free space have been demonstrated graphically. © 2012 Optical Society of America
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensibly by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
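One crude way to see the flavour of such a mixture Y(x) = U(x) G(x) in one dimension is to multiply a correlated Gaussian process pointwise by a positive random factor; the lognormal choice of U and the smoothing construction of G below are illustrative assumptions, not the GSG model's exact specification.

```python
import numpy as np

def gaussian_field_1d(n, corr_len, rng):
    """Approximate stationary Gaussian process: white noise smoothed by a Gaussian kernel."""
    w = rng.standard_normal(n + 8 * int(corr_len))
    t = np.arange(-4 * int(corr_len), 4 * int(corr_len) + 1)
    k = np.exp(-0.5 * (t / corr_len) ** 2)
    g = np.convolve(w, k, mode="same")[:n]
    return (g - g.mean()) / g.std()

rng = np.random.default_rng(5)
n = 512
G = gaussian_field_1d(n, corr_len=20.0, rng=rng)
U = np.exp(0.8 * rng.standard_normal(n))     # assumed lognormal subordinator, U > 0
Y = U * G                                    # sub-Gaussian-type mixture Y(x) = U(x) G(x)

dY = Y[1:] - Y[:-1]
kurt = lambda z: np.mean((z - z.mean()) ** 4) / np.var(z) ** 2 - 3.0
print(kurt(Y), kurt(dY))   # both Y and its increments show strongly positive excess kurtosis
```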
Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C
2016-02-01
A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of the Li line without performing a separate background subtraction through modulation of the Li beam.
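The Gaussian-prior, Gaussian-likelihood linear inversion mentioned above has a closed-form posterior; this generic sketch shows that formula with a made-up two-column forward model (line shape plus constant background) standing in for the full line/filter/offset parameterisation.

```python
import numpy as np

def bayesian_linear_inversion(A, d, noise_var, prior_mean, prior_var):
    """Closed-form Gaussian posterior for a linear model d = A @ m + noise."""
    Sd_inv = np.diag(1.0 / noise_var)
    Sm_inv = np.diag(1.0 / prior_var)
    post_cov = np.linalg.inv(A.T @ Sd_inv @ A + Sm_inv)
    post_mean = post_cov @ (A.T @ Sd_inv @ d + Sm_inv @ prior_mean)
    return post_mean, post_cov

# Toy forward model: a Gaussian line shape plus a constant background on 50 wavelength bins
x = np.linspace(-5, 5, 50)
A = np.column_stack([np.exp(-0.5 * x**2), np.ones_like(x)])
truth = np.array([100.0, 20.0])                       # line intensity, background level
rng = np.random.default_rng(6)
d = A @ truth + rng.normal(0.0, 2.0, x.size)

m, C = bayesian_linear_inversion(A, d, noise_var=np.full(x.size, 4.0),
                                 prior_mean=np.zeros(2), prior_var=np.full(2, 1e4))
print(m.round(1), np.sqrt(np.diag(C)).round(2))       # intensities and 1-sigma uncertainties
```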
A model of non-Gaussian diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2018-04-01
Recent progress in single-particle tracking has shown evidence of the non-Gaussian distribution of displacements in living cells, both near the cellular membrane and inside the cytoskeleton. Similar behavior has also been observed in granular materials, turbulent flows, gels and colloidal suspensions, suggesting that this is a general feature of diffusion in complex media. A possible interpretation of this phenomenon is that a tracer explores a medium with spatio-temporal fluctuations which result in local changes of diffusivity. We propose and investigate an ergodic, easily interpretable model, which implements the concept of diffusing diffusivity. Depending on the parameters, the distribution of displacements can be either flat or peaked at small displacements with an exponential tail at large displacements. We show that the distribution converges slowly to a Gaussian one. We calculate statistical properties, derive the asymptotic behavior and discuss some implications and extensions.
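A minimal simulation in the diffusing-diffusivity spirit (not the authors' specific model): the diffusivity is taken to be the square of an Ornstein-Uhlenbeck process and positions follow Brownian motion with that instantaneous diffusivity, which yields displacement distributions with heavier-than-Gaussian tails at times comparable to the diffusivity correlation time.

```python
import numpy as np

rng = np.random.default_rng(7)
n_traj, n_steps, dt = 5000, 100, 0.01
tau, D0 = 1.0, 1.0                      # correlation time and scale of the diffusivity

# Diffusivity = square of an Ornstein-Uhlenbeck process (always positive, fluctuating)
y = rng.standard_normal(n_traj)
x = np.zeros(n_traj)
for _ in range(n_steps):
    y += -y * dt / tau + np.sqrt(2.0 * dt / tau) * rng.standard_normal(n_traj)
    D = D0 * y**2
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)

kurt = np.mean((x - x.mean())**4) / np.var(x)**2 - 3.0
print(f"excess kurtosis of displacements: {kurt:.2f} (0 for a Gaussian)")
```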
THE DISTRIBUTION OF COOK’S D STATISTIC
Muller, Keith E.; Mok, Mario Chen
2013-01-01
Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
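For concreteness, Cook's D and the F-median cut-point it is traditionally compared against can be computed directly from an OLS fit; the simulated Gaussian predictors and response below merely mimic the setting studied in the paper.

```python
import numpy as np
from scipy.stats import f

def cooks_distance(X, y):
    """Cook's D for each observation of an OLS fit y ~ X (X includes the intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
    h = np.diag(H)
    e = y - H @ y                                  # residuals
    s2 = e @ e / (n - p)                           # mean squared error
    return (e**2 / (p * s2)) * h / (1.0 - h)**2

# Gaussian predictors and response, as in the setting studied above
rng = np.random.default_rng(8)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.standard_normal(n)
D = cooks_distance(X, y)
cutoff = f.ppf(0.5, X.shape[1], n - X.shape[1])    # Cook's suggested F-median comparison point
print(D.max().round(3), (D > cutoff).sum(), "observations exceed the cut-point")
```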
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yu-Bin; Cai, Yi-Fu; Quintin, Jerome
We extend the matter bounce scenario to a more general theory in which the background dynamics and cosmological perturbations are generated by a k-essence scalar field with an arbitrary sound speed. When the sound speed is small, the curvature perturbation is enhanced, and the tensor-to-scalar ratio, which is excessively large in the original model, can be sufficiently suppressed to be consistent with observational bounds. Then, we study the primordial three-point correlation function generated during the matter-dominated contraction stage and find that it only depends on the sound speed parameter. Similar to the canonical case, the shape of the bispectrum is mainly dominated by a local form, though for some specific sound speed values a new shape emerges and the scaling behaviour changes. Meanwhile, a small sound speed also results in a large amplitude of non-Gaussianities, which is disfavored by current observations. As a result, it does not seem possible to suppress the tensor-to-scalar ratio without amplifying the production of non-Gaussianities beyond current observational constraints (and vice versa). This suggests an extension of the previously conjectured no-go theorem in single field nonsingular matter bounce cosmologies, which rules out a large class of models. However, the non-Gaussianity results remain as a distinguishable signature of matter bounce cosmology and have the potential to be detected by observations in the near future.
Ergodicity of financial indices
NASA Astrophysics Data System (ADS)
Kolesnikov, A. V.; Rühl, T.
2010-05-01
We introduce the concept of ensemble averaging for financial markets. We address the question of the equality of ensemble and time averages and investigate whether these averagings are equivalent for a large number of equity indices and branches. We start with a model of Gaussian-distributed returns, equal-weighted stocks in each index, and no correlations within a single day, and show that even this oversimplified model already captures the run of the corresponding index reasonably well due to its self-averaging properties. We introduce the concept of the instantaneous cross-sectional volatility and discuss its relation to the ordinary time-resolved counterpart. The role of the cross-sectional volatility in the description of the corresponding index, as well as the role of correlations between single stocks and of the non-Gaussianity of stock distributions, is briefly discussed. Our model quickly and efficiently reveals anomalies or bubbles in a particular financial market and gives an estimate of how large these effects can be and how quickly they disappear.
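A minimal sketch of the oversimplified model described above (independent Gaussian returns, equal-weighted stocks, no intraday correlations) and of the instantaneous cross-sectional volatility; all sizes and volatilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Equal-weighted index built from independent Gaussian daily returns
n_stocks, n_days, sigma = 500, 750, 0.02
returns = sigma * rng.standard_normal((n_days, n_stocks))

index_return = returns.mean(axis=1)                 # ensemble (cross-sectional) average per day
cross_sectional_vol = returns.std(axis=1, ddof=1)   # instantaneous cross-sectional volatility
time_vol = index_return.std(ddof=1)                 # ordinary time-resolved index volatility

# Self-averaging: the index return fluctuates on the order of sigma / sqrt(n_stocks)
print("mean cross-sectional volatility:", cross_sectional_vol.mean())
print("time-resolved index volatility :", time_vol,
      " (compare:", sigma / np.sqrt(n_stocks), ")")
```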
Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid
2017-02-01
The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), the longitudinal double differential (LDD) and the normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on the non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals when the maximum voluntary contraction (MVC) varied from 10% to 100% in steps of 10%. The effects of recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. Firstly, the stationarity of the sEMG signals was tested by the runs, the reverse arrangements (RA) and the modified reverse arrangements (MRA) tests. Then the non-Gaussianity was characterised with bicoherence and kurtosis, and the non-linearity level was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter were the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, the sEMG signals detected using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals were less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with increasing IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2006-01-01
The abilities of cloud-resolving models (CRMs) with the double-Gaussian based and the single-Gaussian based third-order closures (TOCs) to simulate the shallow cumuli and their transition to deep convective clouds are compared in this study. The single-Gaussian based TOC is fully prognostic (FP), while the double-Gaussian based TOC is partially prognostic (PP). The latter only predicts three important third-order moments while the former predicts all the third-order moments. A shallow cumulus case is simulated by single-column versions of the FP and PP TOC models. The PP TOC improves the simulation of shallow cumulus greatly over the FP TOC by producing more realistic cloud structures. Large differences between the FP and PP TOC simulations appear in the cloud layer for the second- and third-order moments, which are related mainly to the underestimate of the cloud height in the FP TOC simulation. Sensitivity experiments and analysis of probability density functions (PDFs) used in the TOCs show that both the turbulence-scale condensation and higher-order moments are important to realistic simulations of the boundary-layer shallow cumuli. A shallow to deep convective cloud transition case is also simulated by the 2-D versions of the FP and PP TOC models. Both CRMs can capture the transition from the shallow cumuli to deep convective clouds. The PP simulations produce more and deeper shallow cumuli than the FP simulations, but the FP simulations produce larger and wider convective clouds than the PP simulations. The temporal evolutions of cloud and precipitation are closely related to the turbulent transport, the cold pool and the cloud-scale circulation. The large amount of turbulent mixing associated with the shallow cumuli slows down the increase of the convective available potential energy and inhibits the early transition to deep convective clouds in the PP simulation. When the deep convective clouds fully develop and the precipitation is produced, the cold pools produced by the evaporation of the precipitation are not favorable to the formation of shallow cumuli.
How Gaussian can our Universe be?
NASA Astrophysics Data System (ADS)
Cabass, G.; Pajer, E.; Schmidt, F.
2017-01-01
Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ²/k_s², similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).
Generation of helical Ince-Gaussian beams: beam-shaping with a liquid crystal display
NASA Astrophysics Data System (ADS)
Davis, Jeffrey A.; Bentley, Joel B.; Bandres, Miguel A.; Gutiérrez-Vega, Julio C.
2006-08-01
We review the three types of laser beams - Hermite-Gaussian (HG), Laguerre-Gaussian (LG) and the newly discovered Ince-Gaussian (IG) beams. We discuss the helical forms of the LG and IG beams that consist of linear combinations of the even and odd solutions and form a number of vortices that are useful for optical trapping applications. We discuss how to generate these beams by encoding the desired amplitude and phase onto a single parallel-aligned liquid crystal display (LCD). We introduce a novel interference technique where we generate both the object and reference beams using a single LCD and show the vortex interference patterns.
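As a rough illustration of the helical beams and the phase patterns one would send to an LCD/SLM, the sketch below evaluates a helical Laguerre-Gaussian mode on a grid (the Ince-Gaussian case would additionally require Ince polynomials, which are not implemented here); the waist, grid size and mode indices are arbitrary choices.

```python
import numpy as np
from scipy.special import genlaguerre

# Helical Laguerre-Gaussian mode LG_p^l at the beam waist (unnormalized)
def lg_mode(x, y, p=0, l=2, w0=1.0):
    r2 = x ** 2 + y ** 2
    phi = np.arctan2(y, x)
    radial = (np.sqrt(2 * r2) / w0) ** abs(l) * genlaguerre(p, abs(l))(2 * r2 / w0 ** 2)
    return radial * np.exp(-r2 / w0 ** 2) * np.exp(1j * l * phi)   # e^{i l phi} gives the vortex

n = 512
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x)
field = lg_mode(X, Y, p=1, l=2)

# Wrapped phase and normalized amplitude of the kind encoded onto a
# parallel-aligned liquid crystal display
phase = np.mod(np.angle(field), 2 * np.pi)
amplitude = np.abs(field) / np.abs(field).max()
print("phase range:", phase.min(), phase.max())
print("brightest pixel:", np.unravel_index(np.argmax(amplitude), amplitude.shape))
```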
NASA Astrophysics Data System (ADS)
Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne
2017-10-01
Dispersion of road transport emissions in urban metropolitan areas is typically simulated using Gaussian models that ignore the turbulence and drag induced by buildings, which are especially relevant for areas with dense downtown cores. To consider the effect of buildings, street canyon models are used, but often at the level of single urban corridors and small road networks. In this paper, we compare and validate two dispersion models with widely varying algorithms across a modelling domain consisting of the City of Montreal, Canada, accounting for emissions from more than 40,000 roads. The first dispersion model is based on flow decomposition into the urban canopy sub-flow as well as the overlying airflow; it takes into account the specific height and geometry of buildings along each road. The second model is a Gaussian puff dispersion model, which handles complex terrain and incorporates three-dimensional meteorology, but accounts for buildings only through variations in the initial vertical mixing coefficient. Validation against surface observations indicated that both models under-predicted measured concentrations. Average weekly exposure surfaces derived from both models were found to be reasonably correlated (r = 0.8), although the Gaussian dispersion model tended to underestimate concentrations around the roadways compared to the street canyon model. In addition, both models were used to estimate exposures of a representative sample of the Montreal population composed of 1319 individuals. Large differences were noted, whereby exposures derived from the Gaussian puff model were significantly lower than exposures derived from the street canyon model, an expected result considering the concentration of population around roadways. These differences have large implications for analyses of health effects associated with NO2 exposure.
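For orientation, the sketch below evaluates a textbook Gaussian puff with ground reflection; it is not the specific puff model used in the paper, and the source strength, wind speed and dispersion parameters are invented for illustration.

```python
import numpy as np

def gaussian_puff(x, y, z, t, Q, u, H, sx, sy, sz):
    """Textbook Gaussian puff: instantaneous release of mass Q at height H,
    advected downwind at speed u, with ground reflection. Returns concentration."""
    norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    along = np.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
    cross = np.exp(-(y ** 2) / (2 * sy ** 2))
    vert = (np.exp(-((z - H) ** 2) / (2 * sz ** 2))
            + np.exp(-((z + H) ** 2) / (2 * sz ** 2)))   # image source for the ground
    return norm * along * cross * vert

# Ground-level concentration 200 m downwind, 60 s after a 100 g release
print(gaussian_puff(x=200.0, y=0.0, z=1.5, t=60.0,
                    Q=100.0, u=3.0, H=2.0, sx=25.0, sy=20.0, sz=10.0))
```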
2013-01-01
Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
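A minimal, non-Bayesian analogue of the kernel-based prediction discussed above is Gaussian kernel ridge regression on SNP codes; the toy marker matrix, phenotype and bandwidth below are invented, and swapping in a diffusion kernel would only change the kernel function.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy SNP matrix (0/1/2 minor-allele counts) and phenotype (illustrative only)
n, m = 200, 300
X = rng.integers(0, 3, size=(n, m)).astype(float)
y = X[:, :10] @ rng.normal(size=10) + rng.normal(size=n)

def gaussian_kernel(A, B, theta):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-theta * np.maximum(d2, 0.0))

# Kernel ridge regression with a Gaussian (RBF) kernel on the marker space
train, test = np.arange(150), np.arange(150, n)
theta, lam = 1.0 / m, 1.0
K = gaussian_kernel(X[train], X[train], theta)
alpha = np.linalg.solve(K + lam * np.eye(train.size), y[train])
y_hat = gaussian_kernel(X[test], X[train], theta) @ alpha
print("predictive correlation on held-out individuals:",
      np.corrcoef(y_hat, y[test])[0, 1])
```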
NASA Astrophysics Data System (ADS)
Mu, Hongqian; Wang, Muguang; Tang, Yu; Zhang, Jing; Jian, Shuisheng
2018-03-01
A novel scheme for the generation of an FCC-compliant UWB pulse is proposed based on a modified Gaussian quadruplet and incoherent wavelength-to-time conversion. The modified Gaussian quadruplet is synthesized as a linear sum of a broad Gaussian pulse and two narrow Gaussian pulses with the same pulse-width and amplitude peak. Within a specific parameter range, FCC-compliant UWB with a spectral power efficiency higher than 39.9% can be achieved. In order to realize the designed waveform, a UWB generator based on spectral shaping and incoherent wavelength-to-time mapping is proposed. The spectral shaper is composed of a Gaussian filter and a programmable filter. Single-mode fiber functions as both the dispersion device and the transmission medium. Balanced photodetection is employed to combine linearly the broad Gaussian pulse and the two narrow Gaussian pulses, and at the same time to suppress pulse pedestals that result in low-frequency components. The proposed UWB generator can be reconfigured for a UWB doublet by operating the programmable filter as a single-band Gaussian filter. The feasibility of the proposed UWB generator is demonstrated experimentally. Measured UWB pulses match well with simulation results. An FCC-compliant quadruplet with a 10-dB bandwidth of 6.88 GHz, a fractional bandwidth of 106.8% and a power efficiency of 51% is achieved.
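The waveform idea (a broad Gaussian combined with two time-shifted narrow Gaussians of equal width and peak, with the narrow pulses entering with opposite sign as balanced detection would provide) can be prototyped numerically; the widths, delay and weights below are illustrative assumptions rather than the paper's optimized values.

```python
import numpy as np

def gaussian(t, t0, width):
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

# Broad Gaussian minus two time-shifted narrow Gaussians (illustrative parameters)
fs = 100e9                                    # 100 GS/s sampling
t = np.arange(-2e-9, 2e-9, 1 / fs)
broad, narrow, offset = 120e-12, 45e-12, 110e-12
pulse = gaussian(t, 0.0, broad) - 0.5 * (gaussian(t, -offset, narrow)
                                         + gaussian(t, +offset, narrow))

# Inspect the RF spectrum against the FCC UWB band (3.1-10.6 GHz)
spectrum = np.abs(np.fft.rfft(pulse)) ** 2
freq = np.fft.rfftfreq(t.size, 1 / fs)
band = (freq >= 3.1e9) & (freq <= 10.6e9)
print("fraction of pulse power inside 3.1-10.6 GHz:",
      spectrum[band].sum() / spectrum.sum())
```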
Modeling Array Stations in SIG-VISA
NASA Astrophysics Data System (ADS)
Ding, N.; Moore, D.; Russell, S.
2013-12-01
We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which does not hold when stations are located close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having the highest posterior probability under the model, conditioned on the signals at the array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.
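A minimal sketch of the kriging step for a single event attribute across array elements, using scikit-learn's Gaussian process regression; the element layout, attribute values and kernel are invented stand-ins, not the SIG-VISA model itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Toy stand-in for one event attribute (e.g., a travel-time residual) observed at
# array elements laid out on a 2-D patch; the true field varies on a ~5 km scale.
xy = rng.uniform(0, 20, size=(25, 2))                        # element coordinates, km
true = np.sin(xy[:, 0] / 5.0) + 0.3 * np.cos(xy[:, 1] / 5.0)
obs = true + 0.05 * rng.standard_normal(len(xy))

# GP regression (kriging): learn the correlation lengthscale across the array
kernel = 1.0 * RBF(length_scale=3.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy, obs)

# Predict the attribute (with uncertainty) at an element location without data
mu, sd = gp.predict(np.array([[10.0, 10.0]]), return_std=True)
print("predicted attribute:", mu[0], "+/-", sd[0])
print("learned kernel:", gp.kernel_)
```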
Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.
Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H
2016-11-01
Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Process is its cubic learning complexity when dealing with a large database due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
Non-Gaussian quantum states generation and robust quantum non-Gaussianity via squeezing field
NASA Astrophysics Data System (ADS)
Tang, Xu-Bing; Gao, Fang; Wang, Yao-Xiong; Kuang, Sen; Shuang, Feng
2015-03-01
Recent studies show that using quantum non-Gaussian states or non-Gaussian operations can improve entanglement distillation, quantum swapping, teleportation, and cloning. In this work, employing a strategy of non-Gaussian operations (namely subtracting and adding a single photon), we propose a scheme to generate non-Gaussian quantum states, named single-photon-added and -subtracted coherent (SPASC) superposition states, by implementing Bell measurements, and then investigate the corresponding nonclassical features. By squeezing the input field, we demonstrate that the robustness of non-Gaussianity can be improved. The controllable phase space distribution offers the possibility to approximately generate displaced coherent superposition states (DCSS). The fidelity can reach up to F ≥ 0.98 and F ≥ 0.90 for amplitude sizes z = 1.53 and 2.36, respectively. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203061 and 61074052), the Outstanding Young Talent Foundation of Anhui Province, China (Grant No. 2012SQRL040), and the Natural Science Foundation of Anhui Province, China (Grant No. KJ2012Z035).
Time-optimal thermalization of single-mode Gaussian states
NASA Astrophysics Data System (ADS)
Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio
2014-11-01
We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
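The core machinery behind such a solution is a sequential Monte Carlo (particle) filter; the sketch below is a bare-bones bootstrap filter for a single bearings-only target, without clutter or the data-association and track-initiation logic of the paper, and all scenario parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Bootstrap particle filter for one constant-velocity target observed in bearing
# only, from a sensor at the origin (illustrative scenario).
dt, n_steps, n_part = 1.0, 30, 2000
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
q, r = 0.05, np.deg2rad(1.0)                  # process noise / bearing noise std

truth = np.array([50.0, 40.0, -1.0, -0.5])    # [x, y, vx, vy]
particles = truth + rng.normal(scale=[10.0, 10.0, 1.0, 1.0], size=(n_part, 4))

for _ in range(n_steps):
    truth = F @ truth
    z = np.arctan2(truth[1], truth[0]) + r * rng.standard_normal()

    # Propagate, weight by the (nonlinear) bearing likelihood, resample
    particles = particles @ F.T + q * rng.standard_normal((n_part, 4))
    innov = np.angle(np.exp(1j * (z - np.arctan2(particles[:, 1], particles[:, 0]))))
    logw = -0.5 * (innov / r) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]

print("true position:", truth[:2], " PF estimate:", particles[:, :2].mean(axis=0))
```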
Infrared and Raman Spectroscopy: A Discovery-Based Activity for the General Chemistry Curriculum
ERIC Educational Resources Information Center
Borgsmiller, Karen L.; O'Connell, Dylan J.; Klauenberg, Kathryn M.; Wilson, Peter M.; Stromberg, Christopher J.
2012-01-01
A discovery-based method is described for incorporating the concepts of IR and Raman spectroscopy into the general chemistry curriculum. Students use three sets of springs to model the properties of single, double, and triple covalent bonds. Then, Gaussian 03W molecular modeling software is used to illustrate the relationship between bond…
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models make the latent assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.
NASA Astrophysics Data System (ADS)
Colomb, Warren; Sarkar, Susanta K.
2015-06-01
We would like to thank all the commentators for their constructive comments on our paper. Commentators agree that a proper analysis of noisy single-molecule data is important for extracting meaningful and accurate information about the system. We concur with their views; indeed, motivating an accurate analysis of experimental data is precisely the point of our paper. After a model about the system of interest is constructed based on the experimental single-molecule data, it is very helpful to simulate the model to generate theoretical single-molecule data and analyze it in exactly the same way. In our experience, such a self-consistent approach involving experiments, simulations, and analyses often forces us to revise our model and make experimentally testable predictions. In light of comments from the commentators with different expertise, we would also like to point out that a single model should be able to connect different experimental techniques because the underlying science does not depend on the experimental techniques used. Wohland [1] has made a strong case for fluorescence correlation spectroscopy (FCS) as an important experimental technique to bridge single-molecule and ensemble experiments. FCS is a very powerful technique that can measure ensemble parameters with single-molecule sensitivity. Therefore, it is logical to simulate any proposed model and predict both single-molecule data and FCS data, and confirm with experimental data. Fitting the diffraction-limited point spread function (PSF) of an isolated fluorescent marker to localize a labeled biomolecule is a critical step in many single-molecule tracking experiments. Flyvbjerg et al. [2] have rigorously pointed out some important drawbacks of the prevalent practice of fitting the diffraction-limited PSF with a 2D Gaussian. As we try to achieve more accurate and precise localization of biomolecules, we need to consider subtle points as mentioned by Flyvbjerg et al. Shepherd [3] has mentioned specific examples of PSF that have been used for localization and has rightly mentioned the importance of detector noise in single-molecule localization. Meroz [4] has pointed out more clearly that the signal itself could be noisy and it is necessary to distinguish the noise of interest from the background noise. Krapf [5] has pointed out different origins of fluctuations in biomolecular systems and commented on their possible Gaussian and non-Gaussian nature. The importance of noise, along with the possibility that the noise itself can be the signal of interest, has been discussed in our paper [6]. However, Meroz [4] and Krapf [5] have provided specific examples to guide the readers in a better way. Sachs et al. [7] have discussed kinetic analysis in the presence of indistinguishable states and have pointed to the free software for the general kinetic analysis that originated from their research.
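The 2D-Gaussian PSF fit mentioned above is easy to prototype; the sketch below fits a symmetric 2D Gaussian plus a constant background to a Poisson-noised spot with scipy's curve_fit. It illustrates the prevalent practice being critiqued (not a recommended PSF model), and all image parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

# Localize a single emitter by fitting a 2-D Gaussian approximation to the PSF
def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

yy, xx = np.mgrid[0:15, 0:15].astype(float)
true = (400.0, 7.3, 6.8, 1.4, 20.0)                 # amp, x0, y0, sigma, background
image = gauss2d((xx, yy), *true).reshape(xx.shape)
image = rng.poisson(image).astype(float)            # shot / detector noise

p0 = (image.max(), 7.0, 7.0, 2.0, np.median(image))
popt, pcov = curve_fit(gauss2d, (xx, yy), image.ravel(), p0=p0)
print("estimated centre: (%.3f, %.3f)" % (popt[1], popt[2]),
      " true centre: (7.300, 6.800)")
```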
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image; this cannot take full advantage of the information in adjacent slices and is not well suited to local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slices in the MR brain image data to improve segmentation. The segmentation is finally achieved through maximum likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
Dasgupta, Purnendu K
2008-12-05
Resolution of overlapped chromatographic peaks is generally accomplished by modeling the peaks as Gaussian or modified Gaussian functions. It is possible, even preferable, to use actual single analyte input responses for this purpose and a nonlinear least squares minimization routine such as that provided by Microsoft Excel Solver can then provide the resolution. In practice, the quality of the results obtained varies greatly due to small shifts in retention time. I show here that such deconvolution can be considerably improved if one or more of the response arrays are iteratively shifted in time.
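A rough Python analogue of the Solver-based procedure: express the overlapped chromatogram as a weighted sum of single-analyte reference responses, allow each reference a small retention-time shift, and minimize the squared residual. The synthetic Gaussian "references" below merely stand in for measured single-analyte responses.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 1001)                 # retention time, min

def peak(t, t0, w):                          # stand-in for a measured single-analyte response
    return np.exp(-((t - t0) ** 2) / (2 * w ** 2))

ref1, ref2 = peak(t, 4.0, 0.35), peak(t, 4.8, 0.40)
# Overlapped chromatogram: true responses appear slightly shifted in retention time
mix = (1.0 * peak(t, 4.05, 0.35) + 0.6 * peak(t, 4.78, 0.40)
       + 0.01 * rng.standard_normal(t.size))

def shifted(ref, dt):
    return np.interp(t, t + dt, ref)         # shift a reference response by dt

def residual(p):
    a1, a2, d1, d2 = p
    return a1 * shifted(ref1, d1) + a2 * shifted(ref2, d2) - mix

# Solve for amplitudes and small retention-time shifts simultaneously
sol = least_squares(residual, x0=[1.0, 1.0, 0.0, 0.0])
print("amplitudes:", sol.x[:2], "retention-time shifts (min):", sol.x[2:])
```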
Revealing nonclassicality beyond Gaussian states via a single marginal distribution
Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul
2017-01-01
A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential—a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis–independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement. PMID:28077456
Resource theory of non-Gaussian operations
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.
2018-05-01
Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction.
On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., fewer engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the costs of these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
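A minimal sketch of the "CFD surrogate as model basis function" idea on a hypothetical 1-D Cp slice: a spline fitted to dense CFD output serves as the mean, and a batch Gaussian process is fitted to the sparse wind-tunnel residuals, so the merged prediction draws on both sources. All geometry and Cp values are invented and do not come from the Trap Wing data.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)

# Hypothetical 1-D slice: Cp vs. normalized arc length s at one span station
s_cfd = np.linspace(0, 1, 60)
cp_cfd = -1.2 * np.exp(-((s_cfd - 0.25) / 0.12) ** 2) + 0.3 * s_cfd   # dense CFD result
cfd_surrogate = CubicSpline(s_cfd, cp_cfd)                            # B-spline-like CFD surrogate

# Sparse wind-tunnel taps, offset from the CFD by a smooth bias plus noise
s_wt = np.array([0.05, 0.15, 0.3, 0.5, 0.7, 0.9])
cp_wt = cfd_surrogate(s_wt) - 0.1 * np.sin(3 * s_wt) + 0.02 * rng.standard_normal(s_wt.size)

# Batch GP on the WT-minus-CFD discrepancy: the CFD surrogate acts as the mean
# (basis) function, so the merged prediction uses information from both sources.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel).fit(s_wt[:, None], cp_wt - cfd_surrogate(s_wt))

s_new = np.linspace(0, 1, 200)
cp_merged = cfd_surrogate(s_new) + gp.predict(s_new[:, None])
print("merged Cp at s = 0.25:", np.interp(0.25, s_new, cp_merged))
```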
Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians
NASA Astrophysics Data System (ADS)
del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo
1995-06-01
A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated by asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations that are based on huge lidar data sets, for which storage and careful examination of single lidar profiles cannot be carried out.
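A minimal sketch of the moments-based idea, assuming the asymmetrical Gaussian is parameterized by a peak height and two half-widths (a split normal); instead of closed-form moment relations, the parameters are obtained here by numerically matching the first three moments, and the synthetic profile is purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Asymmetrical (split) Gaussian: different half-widths below and above the peak z0
def split_gauss(z, z0, s_low, s_up):
    s = np.where(z < z0, s_low, s_up)
    return np.exp(-((z - z0) ** 2) / (2 * s ** 2))

def moments(z, profile):
    dz = z[1] - z[0]
    m0 = profile.sum() * dz
    m1 = (z * profile).sum() * dz / m0
    m2 = ((z - m1) ** 2 * profile).sum() * dz / m0
    m3 = ((z - m1) ** 3 * profile).sum() * dz / m0
    return m0, m1, m2, m3

rng = np.random.default_rng(9)
z = np.linspace(0.0, 12.0, 600)                        # height, km
profile = 0.8 * split_gauss(z, 7.0, 0.4, 1.1)
profile = np.clip(profile + 0.005 * rng.standard_normal(z.size), 0.0, None)

# Match mean, variance and third central moment; the amplitude follows from the area m0
m0, m1, m2, m3 = moments(z, profile)

def mismatch(p):
    _, g1, g2, g3 = moments(z, split_gauss(z, *p))
    return [g1 - m1, g2 - m2, g3 - m3]

z0, s_low, s_up = least_squares(mismatch, x0=[m1, np.sqrt(m2), np.sqrt(m2)]).x
amp = m0 / moments(z, split_gauss(z, z0, s_low, s_up))[0]
print("peak %.2f km, lower/upper half-widths %.2f / %.2f km, amplitude %.2f"
      % (z0, s_low, s_up, amp))
```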
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
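The Gaussianizing step itself is straightforward to demonstrate; the sketch below applies scipy's Box-Cox power-law transform to skewed synthetic data from a single band and estimates Gaussian statistics in the transformed domain (it does not reproduce the paper's joint multivariate estimation).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Skewed (non-Gaussian) data, e.g. one spectral channel of background clutter
x = rng.gamma(shape=2.0, scale=1.5, size=5000)

# Box-Cox power-law transform: scipy selects the lambda that maximizes the
# Gaussian log-likelihood of the transformed data.
x_gauss, lam = stats.boxcox(x)

print("estimated lambda:", lam)
print("skewness before/after:", stats.skew(x), stats.skew(x_gauss))

# Gaussian statistics estimated in the transformed domain can then feed a
# detector that formally assumes Gaussian data.
mu, sigma = x_gauss.mean(), x_gauss.std(ddof=1)
print("transformed-domain mean/std:", mu, sigma)
```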
Efficiency of single-particle engines
NASA Astrophysics Data System (ADS)
Proesmans, Karel; Driesen, Cedric; Cleuren, Bart; Van den Broeck, Christian
2015-09-01
We study the efficiency of a single-particle Szilard and Carnot engine. Within a first order correction to the quasistatic limit, the work distribution is found to be Gaussian and the correction factor to average work and efficiency only depends on the piston speed. The stochastic efficiency is studied for both models and the recent findings on efficiency fluctuations are confirmed numerically. Special features are revealed in the zero-temperature limit.
Distillation of squeezing from non-Gaussian quantum states.
Heersink, J; Marquardt, Ch; Dong, R; Filip, R; Lorenz, S; Leuchs, G; Andersen, U L
2006-06-30
We show that single copy distillation of squeezing from continuous variable non-Gaussian states is possible using linear optics and conditional homodyne detection. A specific non-Gaussian noise source, corresponding to a random linear displacement, is investigated experimentally. Conditioning the signal on a tap measurement, we observe probabilistic recovery of squeezing.
Gate sequence for continuous variable one-way quantum computation
Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi
2013-01-01
Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to the absence of cluster states with more than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of the two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.
Weiss, M; Stedtler, C; Roberts, M S
1997-09-01
The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for the transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
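Both transit-time models are easy to compare on a synthetic outflow curve. The sketch below writes the inverse Gaussian density in terms of the mean transit time and relative dispersion (CV²) and fits a single inverse Gaussian against a sum of two; the curve and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Inverse Gaussian transit-time density with mean transit time mtt and
# relative dispersion cv2 (= CV^2)
def inv_gauss(t, mtt, cv2):
    return np.sqrt(mtt / (2 * np.pi * cv2 * t ** 3)) * np.exp(
        -((t - mtt) ** 2) / (2 * cv2 * mtt * t))

def two_inv_gauss(t, p, mtt1, cv1, mtt2, cv2):
    return p * inv_gauss(t, mtt1, cv1) + (1 - p) * inv_gauss(t, mtt2, cv2)

rng = np.random.default_rng(11)
t = np.linspace(0.5, 120, 400)                       # seconds
# Synthetic vascular-indicator outflow curve with a slow "tail" component
curve = two_inv_gauss(t, 0.85, 10.0, 0.35, 40.0, 0.8) + 5e-4 * rng.standard_normal(t.size)

single, _ = curve_fit(inv_gauss, t, curve, p0=[12.0, 0.5])
double, _ = curve_fit(two_inv_gauss, t, curve, p0=[0.8, 12.0, 0.5, 50.0, 0.5],
                      bounds=([0, 1, 0.01, 1, 0.01], [1, 100, 5, 200, 5]))

for name, f, p in [("single IG", inv_gauss, single),
                   ("sum of two IG", two_inv_gauss, double)]:
    rss = np.sum((curve - f(t, *p)) ** 2)
    print(f"{name}: residual sum of squares = {rss:.3e}")
```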
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
The aim of this study was to technically investigate the non-Gaussian diffusion of head and neck diffusion-weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC in both magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization.
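The fitting comparison can be reproduced on a synthetic signal. The sketch below fits the mono-exponential, stretched-exponential and kurtosis models over an extended b-value range with curve_fit (IVIM and the statistical model are omitted); the tissue parameters are illustrative and are not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard DWI signal models over an extended b-value range (b in s/mm^2)
def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def stretched(b, s0, ddc, alpha):
    return s0 * np.exp(-((b * ddc) ** alpha))

def kurtosis(b, s0, d, k):
    return s0 * np.exp(-b * d + (b ** 2) * (d ** 2) * k / 6.0)

rng = np.random.default_rng(12)
b = np.array([0, 100, 200, 400, 600, 800, 1000, 1200, 1500], float)
# Synthetic non-Gaussian tissue signal (illustrative parameters)
signal = kurtosis(b, 1.0, 1.1e-3, 0.9) * (1 + 0.01 * rng.standard_normal(b.size))

p_mono, _ = curve_fit(mono, b, signal, p0=[1.0, 1e-3])
p_str, _ = curve_fit(stretched, b, signal, p0=[1.0, 1e-3, 0.8])
p_kurt, _ = curve_fit(kurtosis, b, signal, p0=[1.0, 1e-3, 0.5])

for name, model, p in [("mono-exponential", mono, p_mono),
                       ("stretched-exponential", stretched, p_str),
                       ("kurtosis", kurtosis, p_kurt)]:
    rss = np.sum((signal - model(b, *p)) ** 2)
    print(f"{name:22s} params={np.round(p, 5)} RSS={rss:.2e}")
```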
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To this end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations among the observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost. Neither multi-dimensional numerical integration nor extensive sampling is necessary. Instead of stacking the data, we suggest building the posterior distribution progressively. Incorporating only a single piece of evidence at a time accounts for the shortcomings of the linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. The regions left and right of the interface are assumed independent, with the squared-exponential kernel serving as covariance.
Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting
NASA Astrophysics Data System (ADS)
Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing
2018-02-01
Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Noise and other factors cause significant errors in the reconstruction result when simple peak detection is used. To obtain a more precise 3-D reconstruction, a peak detection method based on trust-region Gaussian fitting is proposed in this work. Gaussian modeling is performed on the returned waveform of each single time channel of each frame; the modeling result, which effectively reduces noise interference and possesses a unique peak, is taken as the new returned waveform, and its feature data are then extracted through peak detection. Experimental data from an aerial target are used to verify this method. This work shows that the peak detection method based on Gaussian fitting reduces the extraction error of the feature data to less than 10%; using this method to extract the feature data and reconstruct the target makes it possible to achieve a minimum spatial resolution of 30 cm in the depth direction and concurrently improves the 3-D imaging accuracy of the STIL.
NASA Astrophysics Data System (ADS)
Baglione, Enrico; Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano
2017-04-01
The fact that ruptures on the generating faults of large earthquakes are strongly heterogeneous has been demonstrated over the last few decades by a large number of studies. The effort to retrieve reliable finite-fault models (FFMs) for large earthquakes that occurred worldwide, mainly by means of the inversion of different kinds of geophysical data, has been accompanied in recent years by the systematic collection and format homogenisation of the published/proposed FFMs for different earthquakes into specifically conceived databases, such as SRCMOD. The main aim of this study is to explore characteristic patterns of the slip distribution of large earthquakes, using a subset of the FFMs contained in SRCMOD covering events with moment magnitude equal to or larger than 6 that occurred worldwide over the last 25 years. We focus on those FFMs that exhibit a single and clear region of high slip (i.e. a single asperity), which is found to represent the majority of the events. For these FFMs, it is reasonable to best-fit the slip model by means of a 2D Gaussian distribution. Two different methods are used (least-squares and highest-similarity) and correspondingly two "best-fit" indexes are introduced. As a result, two distinct 2D Gaussian distributions for each FFM are obtained. To quantify how well these distributions are able to mimic the original slip heterogeneity, we calculate and compare the vertical displacements at the Earth surface in the near field induced by the original FFM slip, by an equivalent uniform-slip model, by a depth-dependent slip model, and by the two "best" Gaussian slip models. The coseismic vertical surface displacement is used as the metric for comparison. Results show that, on average, the best results are the ones obtained with 2D Gaussian distributions based on similarity index fitting. Finally, we restrict our attention to those single-asperity FFMs associated with earthquakes that generated tsunamis. We choose a few events for which tsunami data (water level time series and/or run-up measurements) are available. Using the results mentioned above, for each chosen event the coseismic vertical displacement fields computed for different slip distributions are used as initial conditions for numerical tsunami simulations, performed by means of the shallow-water code UBO-TSUFD. The comparison of the numerical results for different initial conditions to the experimental data is presented and discussed. This study was funded in the frame of the EU Project called ASTARTE - "Assessment, STrategy And Risk Reduction for Tsunamis in Europe", Grant 603839, 7th FP (ENV.2013.6.4-3).
Vanishing of local non-Gaussianity in canonical single field inflation
NASA Astrophysics Data System (ADS)
Bravo, Rafael; Mooij, Sander; Palma, Gonzalo A.; Pradenas, Bastián
2018-05-01
We study the production of observable primordial local non-Gaussianity in two opposite regimes of canonical single field inflation: attractor (standard single field slow-roll inflation) and non-attractor (ultra slow-roll inflation). In the attractor regime, the standard derivation of the bispectrum's squeezed limit using co-moving coordinates gives the well-known Maldacena consistency relation f_NL = 5(1 − n_s)/12. On the other hand, in the non-attractor regime, the squeezed limit offers a substantial violation of this relation given by f_NL = 5/2. In this work we argue that, independently of whether inflation is attractor or non-attractor, the size of the observable primordial local non-Gaussianity is predicted to be f_NL^obs = 0 (a result that was already understood to hold in the case of attractor models). To show this, we follow the use of the so-called Conformal Fermi Coordinates (CFC), recently introduced in the literature. These coordinates parametrize the local environment of inertial observers in a perturbed FRW spacetime, allowing one to identify and compute gauge invariant quantities, such as n-point correlation functions. Concretely, we find that during inflation, after all the modes have exited the horizon, the squeezed limit of the 3-point correlation function of curvature perturbations vanishes in the CFC frame, regardless of the inflationary regime. We argue that such a cancellation should persist after inflation ends.
NASA Astrophysics Data System (ADS)
Hmood, Jassim K.; Harun, Sulaiman W.
2018-05-01
A new approach for realizing a wideband optical frequency comb (OFC) generator, based on driving cascaded modulators with a Gaussian-shaped waveform, is proposed and numerically demonstrated. The setup includes N cascaded MZMs, a single Gaussian-shaped waveform generator, and N − 1 electrical time delayers. The first MZM is driven directly by the Gaussian-shaped waveform, while delayed replicas of the waveform drive the other MZMs. An analytical model that describes the proposed OFC generator is provided to study the effects of the number and chirp factor of the cascaded MZMs, as well as the pulse width, on the output spectrum. Optical frequency combs with a frequency spacing of 1 GHz are generated by applying the Gaussian-shaped waveform at pulse widths ranging from 200 to 400 ps. Our results reveal that the number of comb lines is inversely proportional to the pulse width and directly proportional to both the number and the chirp factor of the cascaded MZMs. At a pulse width of 200 ps and a chirp factor of 4, 67 frequency lines are measured in the output spectrum of the two-cascaded-MZM setup, whereas increasing the number of cascaded stages to 3, 4, and 5 yields 89, 109, and 123 frequency lines, respectively. When the delay time is optimized, 61 comb lines can be achieved with power fluctuations of less than 1 dB for the five-cascaded-MZM setup.
The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments
NASA Astrophysics Data System (ADS)
Chen, Fajing; Jiao, Meiyan; Chen, Jing
2013-04-01
Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
NASA Astrophysics Data System (ADS)
Wen, Xian-Huan; Gómez-Hernández, J. Jaime
1998-03-01
The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate markedly from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize switching between two kinds of logic gates under optimal, moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus additional information processing capacity, we obtain two complementary logic gates in a two-dimensional toggle switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results help improve the computational power and functionality of the networks.
Time-resolved measurements of statistics for a Nd:YAG laser.
Hubschmid, W; Bombach, R; Gerber, T
1994-08-20
Time-resolved measurements of the fluctuating intensity of a multimode frequency-doubled Nd:YAG laser have been performed. For various operating conditions the enhancement factors in nonlinear optical processes that use a fluctuating instead of a single-mode laser have been determined up to the sixth order. In the case of reduced flash-lamp excitation and a switched-off laser amplifier, the intensity fluctuations agree with the normalized Gaussian model for the fluctuations of the fundamental frequency, whereas strong deviations are found under usual operating conditions. In the latter case, the frequency-doubled light has enhancement factors not far from the values expected for Gaussian statistics.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM), and the statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization. PMID:24466318
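The non-Gaussian fits named above (DKI and the stretched-exponential model, for instance) can be reproduced in outline with a standard least-squares routine. The sketch below uses illustrative b-values, parameter values, and synthetic signal, not the study's acquisition.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 100, 200, 400, 600, 800, 1000, 1200, 1500], float)  # s/mm^2

def mono(b, S0, adc):                 # conventional ADC model
    return S0 * np.exp(-b * adc)

def kurtosis_model(b, S0, D, K):      # diffusion kurtosis (DKI) signal model
    return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

def stretched(b, S0, DDC, alpha):     # stretched-exponential model (SEM)
    return S0 * np.exp(-(b * DDC) ** alpha)

# Synthetic voxel signal with mild non-Gaussian behaviour (D in mm^2/s)
S = kurtosis_model(b, 1.0, 1.1e-3, 0.9) + np.random.default_rng(1).normal(0, 0.005, b.size)

p_mono, _ = curve_fit(mono, b, S, p0=[1.0, 1e-3])
p_dki, _ = curve_fit(kurtosis_model, b, S, p0=[1.0, 1e-3, 1.0])
p_sem, _ = curve_fit(stretched, b, S, p0=[1.0, 1e-3, 0.9])
print("ADC:", p_mono[1], " DKI (D, K):", p_dki[1:], " SEM (DDC, alpha):", p_sem[1:])
```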
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
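A minimal sketch of Gaussian process regression with per-descriptor length scales, which is the role played by automatic relevance detection, is given below. The descriptor set, synthetic data, and scikit-learn kernel choice are assumptions for illustration, not the authors' GPRARD implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical descriptor matrix: columns might be [log P, melting point, H-bond donors]
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
log_kp = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 60)   # toy permeability target

# Anisotropic RBF: one length scale per descriptor plays the ARD role
kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, log_kp)

# Large fitted length scales flag descriptors with little influence on the target
print("learned length scales:", gpr.kernel_.k1.length_scale)
mean, std = gpr.predict(X[:5], return_std=True)    # predictions with uncertainty
```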
Clustering fossils in solid inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir
In solid inflation the single field non-Gaussianity consistency condition is violated. As a result, the long tensor perturbation induces observable clustering fossils in the form of a quadrupole anisotropy in the large scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectrum for the general parameter space of solid inflation. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced from the long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large scale structures can be detected by future galaxy surveys.
Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.
de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos
2011-01-01
In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Matteo A. C., E-mail: matteo.rossi@unimi.it; Paris, Matteo G. A., E-mail: matteo.paris@fisica.unimi.it; CNISM, Unità Milano Statale, I-20133 Milano
2016-01-14
We address the interaction of single- and two-qubit systems with an external transverse fluctuating field and analyze in detail the dynamical decoherence induced by Gaussian noise and random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e., for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state dependent) regions of the parameter space where the two noises lead to very similar dynamics. We show that the effects of RTN noise and of Gaussian noise are different, i.e., the spectrum alone is not enough to summarize the noise effects, but the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other one.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
High Precision Edge Detection Algorithm for Mechanical Parts
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is therefore proposed. For this purpose, the step edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is then determined by searching for the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the result from the gear measurement center. This indicates that the method is reliable enough to meet the requirements of high-precision measurement.
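The Gaussian integral model amounts to fitting an error-function profile to the gray values along the edge normal; the fitted mean point gives the subpixel edge position. The sketch below is a generic 1-D illustration with synthetic data, not the paper's full surface-interpolation pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, background, amplitude, x0, sigma):
    """Gaussian-integral (erf) profile of a step edge blurred by a Gaussian PSF."""
    return background + 0.5 * amplitude * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

# Gray values sampled along the edge normal (synthetic, noisy)
x = np.arange(0.0, 20.0, 1.0)                  # pixel positions on the section line
true_edge = 9.37                               # "true" subpixel edge location
g = edge_model(x, 20, 200, true_edge, 1.2) + np.random.default_rng(2).normal(0, 2, x.size)

popt, _ = curve_fit(edge_model, x, g, p0=[g.min(), np.ptp(g), x.mean(), 1.0])
print(f"estimated subpixel edge position: {popt[2]:.3f} (true value {true_edge})")
```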
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
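The analytic-convolution idea behind this approach can be illustrated directly: convolving two Gaussians multiplies their fluxes and adds their centers and covariances, so a galaxy mixture convolved with a PSF mixture is just the set of pairwise combinations. The rendering sketch below is a from-scratch illustration and does not use the NGMIX API.

```python
import numpy as np

def gauss2d(grid_x, grid_y, flux, cen, cov):
    """Evaluate a single elliptical 2D Gaussian on a pixel grid."""
    inv = np.linalg.inv(cov)
    dx, dy = grid_x - cen[0], grid_y - cen[1]
    quad = inv[0, 0] * dx**2 + 2 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2
    return flux * np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

def render_convolved(galaxy, psf, grid_x, grid_y):
    """Render a galaxy mixture convolved with a PSF mixture.

    The convolution of two Gaussians is a Gaussian whose covariances add,
    so the convolved model is just the set of pairwise component products."""
    image = np.zeros_like(grid_x, dtype=float)
    for gf, gc, gcov in galaxy:
        for pf, pc, pcov in psf:
            image += gauss2d(grid_x, grid_y,
                             gf * pf,                          # fluxes multiply
                             (gc[0] + pc[0], gc[1] + pc[1]),   # centers add
                             np.asarray(gcov) + np.asarray(pcov))  # covariances add
    return image

y, x = np.mgrid[0:48, 0:48].astype(float)
galaxy = [(1.0, (24.0, 24.0), [[6.0, 1.5], [1.5, 4.0]])]       # one-component "galaxy"
psf = [(0.8, (0.0, 0.0), [[2.0, 0.0], [0.0, 2.0]]),            # two-Gaussian PSF
       (0.2, (0.0, 0.0), [[6.0, 0.0], [0.0, 6.0]])]
model_image = render_convolved(galaxy, psf, x, y)
```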
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed, but among these cases the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
Propagation of a Gaussian-beam wave in general anisotropic turbulence
NASA Astrophysics Data System (ADS)
Andrews, L. C.; Phillips, R. L.; Crabbs, R.
2014-10-01
Mathematical models for a Gaussian-beam wave propagating through anisotropic non-Kolmogorov turbulence have been developed in the past by several researchers. In previous publications, the anisotropic spatial power spectrum model was based on the assumption that propagation was in the z direction with circular symmetry maintained in the orthogonal xy-plane throughout the path. In the present analysis, however, the anisotropic spectrum model is no longer based on a single anisotropy parameter; instead, two such parameters are introduced in the orthogonal xy-plane so that circular symmetry in this plane is no longer required. In addition, deviations from the 11/3 power-law behavior in the spectrum model are allowed by assuming power-law index variations 3 < α < 4. In the current study we develop theoretical models for beam spot size, spatial coherence, and scintillation index that are valid in weak irradiance fluctuation regimes as well as in deep turbulence, or strong irradiance fluctuation, regimes. These new results are compared with those derived from the more specialized anisotropic spectrum used in previous analyses.
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for impulse noise alone and for Gaussian noise alone. PMID:28692667
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for impulse noise alone and for Gaussian noise alone.
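As a rough illustration of the ROAD statistic that both records build on, the sketch below computes ROAD over the 8-neighbourhood and uses it to switch between median and Gaussian smoothing. This toy filter is not the ROADGI statistic or the OWMF weight optimization described above; the threshold and filter choices are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def road(image, m=4):
    """Rank-Ordered Absolute Differences: sum of the m smallest absolute
    differences between each pixel and its 8 neighbours (high values flag
    likely impulse-corrupted pixels)."""
    padded = np.pad(image.astype(float), 1, mode="reflect")
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    diffs = np.stack([np.abs(image - padded[1 + di:1 + di + image.shape[0],
                                            1 + dj:1 + dj + image.shape[1]])
                      for di, dj in shifts])
    return np.sort(diffs, axis=0)[:m].sum(axis=0)

def simple_mixed_filter(image, road_thresh=120.0, smooth_sigma=0.5):
    """Toy mixed-noise filter: pixels flagged by ROAD are replaced by the local
    median (impulse removal); the rest get a mild Gaussian-noise smoothing."""
    flagged = road(image) > road_thresh
    out = gaussian_filter(image.astype(float), sigma=smooth_sigma)
    out[flagged] = median_filter(image, size=3)[flagged]
    return out
```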
A non-gaussian model of continuous atmospheric turbulence for use in aircraft design
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Joppa, R. G.; Ganzer, V. M.
1976-01-01
A non-Gaussian model of atmospheric turbulence is presented and analyzed. The model is restricted to the regions of the atmosphere where the turbulence is steady or continuous, and the assumptions of homogeneity and stationarity are justified. Also spatial distribution of turbulence is neglected, so the model consists of three independent, stationary stochastic processes which represent the vertical, lateral, and longitudinal gust components. The non-Gaussian and Gaussian models are compared with experimental data, and it is shown that the Gaussian model underestimates the number of high velocity gusts which occur in the atmosphere, while the non-Gaussian model can be adjusted to match the observed high velocity gusts more satisfactorily. Application of the proposed model to aircraft response is investigated, with particular attention to the response power spectral density, the probability distribution, and the level crossing frequency. A numerical example is presented which illustrates the application of the non-Gaussian model to the study of an aircraft autopilot system. Listings and sample results of a number of computer programs used in working with the model are included.
Truncated Gaussians as tolerance sets
NASA Technical Reports Server (NTRS)
Cozman, Fabio; Krotkov, Eric
1994-01-01
This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in Statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements as fundamentally in terms of truncated Gaussians.
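The quantities discussed here, moments and Bayes-rule updates for truncated Gaussians, can be checked numerically with scipy's truncated normal; the bounds and parameters below are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

# Gaussian with mean 2.0 and sd 1.5, truncated to measurement bounds [0, 5]
mu, sd, lo, hi = 2.0, 1.5, 0.0, 5.0
a, b = (lo - mu) / sd, (hi - mu) / sd          # bounds in standard-normal units
dist = truncnorm(a, b, loc=mu, scale=sd)

print("truncated mean    :", dist.mean())      # shifted relative to mu
print("truncated variance:", dist.var())       # smaller than sd**2
print("P(X <= 4.0)       :", dist.cdf(4.0))

# A Bayes-rule update with a Gaussian likelihood, renormalized over [lo, hi],
# done numerically on a grid (the paper derives closed-form machinery instead)
x = np.linspace(lo, hi, 2001)
posterior = dist.pdf(x) * np.exp(-0.5 * ((x - 3.2) / 0.8) ** 2)
posterior /= np.trapz(posterior, x)
```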
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
A path integral approach to the Hodgkin-Huxley model
NASA Astrophysics Data System (ADS)
Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando
2017-11-01
To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of sodium and potassium of the voltage-dependent channels that open or close the gates. Hodgkin and Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials, through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. Path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated to the Gaussian approach of the stochastic HH equations, but instead considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
Polynomial approximation of non-Gaussian unitaries by counting one photon at a time
NASA Astrophysics Data System (ADS)
Arzani, Francesco; Treps, Nicolas; Ferrini, Giulia
2017-05-01
In quantum computation with continuous-variable systems, quantum advantage can only be achieved if some non-Gaussian resource is available. Yet, non-Gaussian unitary evolutions and measurements suited for computation are challenging to realize in the laboratory. We propose and analyze two methods to apply a polynomial approximation of any unitary operator diagonal in the amplitude quadrature representation, including non-Gaussian operators, to an unknown input state. Our protocols use as a primary non-Gaussian resource a single-photon counter. We use the fidelity of the transformation with the target one on Fock and coherent states to assess the quality of the approximate gate.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
Ultrasound beam transmission using a discretely orthogonal Gaussian aperture basis
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2018-04-01
Work is reported on development of a computational model for ultrasound beam transmission at an arbitrary geometry transmission interface for generally anisotropic materials. The work addresses problems encountered when the fundamental assumptions of ray theory do not hold, thereby introducing errors into ray-theory-based transmission models. Specifically, problems occur when the asymptotic integral analysis underlying ray theory encounters multiple stationary phase points in close proximity, due to focusing caused by concavity on either the entry surface or a material slowness surface. The approach presented here projects integrands over both the transducer aperture and the entry surface beam footprint onto a Gaussian-derived basis set, thereby distributing the integral over a summation of second-order phase integrals which are amenable to single stationary phase point analysis. Significantly, convergence is assured provided a sufficiently fine distribution of basis functions is used.
The area of isodensity contours in cosmological models and galaxy surveys
NASA Technical Reports Server (NTRS)
Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.
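The contour crossing statistic itself is simple to compute on a gridded field: count threshold crossings along straight lines and divide by the line length. The sketch below applies it to a smoothed Gaussian random field, for which the crossing frequency as a function of threshold is expected to retain a Gaussian (exp(-ν²/2)) shape, the property the abstract exploits. Grid size and smoothing scale are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contour_crossings_per_length(field, threshold, pixel_size=1.0):
    """Mean number of times a straight (horizontal) line through the field
    crosses the iso-density contour at `threshold`, per unit line length."""
    above = field > threshold
    crossings = np.abs(np.diff(above.astype(int), axis=1)).sum()
    total_length = field.shape[0] * (field.shape[1] - 1) * pixel_size
    return crossings / total_length

# Gaussian random field smoothed on a ~5 pixel scale (a stand-in for 5/h Mpc smoothing)
rng = np.random.default_rng(3)
delta = gaussian_filter(rng.normal(size=(512, 512)), sigma=5.0)
for nu in (0.0, 0.5, 1.0, 2.0):           # thresholds in units of the field's sigma
    print(nu, contour_crossings_per_length(delta, nu * delta.std()))
```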
NASA Astrophysics Data System (ADS)
Arutyunyan, R. V.; Baranov, V. Yu; Bol'shov, Leonid A.; Dolgov, V. A.; Malyuta, D. D.; Mezhevov, V. S.; Semak, V. V.
1988-03-01
An experimental investigation was made of the dynamics of the loss of the melt as a result of interaction with single-mode CO2 laser radiation pulses of 5-35 μs duration. The dynamics of splashing of the melt during irradiation with short pulses characterized by a Gaussian intensity distribution differed from that predicted by models in which the distribution of the vapor pressure was assumed to be radially homogeneous.
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis
Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-01-01
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687
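The abstracts above do not spell out which bounded-support distributions are used, so the snippet below simply contrasts a Gaussian fit with a beta fit (one natural bounded-support choice, assumed here purely for illustration) on synthetic methylation-like values in (0, 1).

```python
import numpy as np
from scipy import stats

# Methylation beta-values are bounded in (0, 1); simulate a skewed, bounded sample
rng = np.random.default_rng(4)
beta_values = rng.beta(2, 18, 1000)        # mostly unmethylated CpG sites

# A Gaussian fit ignores the bounds, a beta fit respects them
mu, sd = stats.norm.fit(beta_values)
a, b, _, _ = stats.beta.fit(beta_values, floc=0, fscale=1)

print("Gaussian mass placed below 0:", stats.norm.cdf(0.0, mu, sd))
print("log-likelihood  Gaussian:", stats.norm.logpdf(beta_values, mu, sd).sum())
print("log-likelihood  beta    :", stats.beta.logpdf(beta_values, a, b).sum())
```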
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e. non-Gaussian diffusion) with stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and the three non-Gaussian DWI models all showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For any treatment or control group, over 65.7% of tumor voxels indicate the bi-exponential model is strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers can detect the chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting compared with the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
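A minimal version of the model comparison described here, bi-exponential versus mono-exponential with BIC-based selection, can be sketched as follows; b-values, noise level, and parameter values are illustrative rather than those of the xenograft study.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.linspace(0, 2500, 12)                      # s/mm^2, illustrative b-values

def mono(b, S0, adc):
    return S0 * np.exp(-b * adc)

def biexp(b, S0, f, D_fast, D_slow):              # two-compartment (bi-exponential)
    return S0 * (f * np.exp(-b * D_fast) + (1 - f) * np.exp(-b * D_slow))

def bic(y, yhat, k):                              # BIC for Gaussian residuals
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

S = biexp(b, 1.0, 0.35, 2.0e-3, 0.3e-3) + np.random.default_rng(5).normal(0, 0.004, b.size)

p1, _ = curve_fit(mono, b, S, p0=[1.0, 1e-3])
p2, _ = curve_fit(biexp, b, S, p0=[1.0, 0.5, 2e-3, 0.3e-3],
                  bounds=([0, 0, 0, 0], [2, 1, 1e-2, 1e-2]))
print("BIC mono:", bic(S, mono(b, *p1), 2), "  BIC bi-exp:", bic(S, biexp(b, *p2), 4))
```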
Theoretical analysis of non-Gaussian heterogeneity effects on subsurface flow and transport
NASA Astrophysics Data System (ADS)
Riva, Monica; Guadagnini, Alberto; Neuman, Shlomo P.
2017-04-01
Much of the stochastic groundwater literature is devoted to the analysis of flow and transport in Gaussian or multi-Gaussian log hydraulic conductivity (or transmissivity) fields, Y(x) = ln K(x) (x being a position vector), characterized by one or (less frequently) a multiplicity of spatial correlation scales. Yet Y and many other variables and their (spatial or temporal) increments, ΔY, are known to be generally non-Gaussian. One common manifestation of non-Gaussianity is that whereas frequency distributions of Y often exhibit mild peaks and light tails, those of increments ΔY are generally symmetric with peaks that grow sharper, and tails that become heavier, as separation scale or lag between pairs of Y values decreases. A statistical model that captures these disparate, scale-dependent distributions of Y and ΔY in a unified and consistent manner has been recently proposed by us. This new "generalized sub-Gaussian (GSG)" model has the form Y(x) = U(x)G(x) where G(x) is (generally, but not necessarily) a multiscale Gaussian random field and U(x) is a nonnegative subordinator independent of G. The purpose of this paper is to explore analytically, in an elementary manner, lead-order effects that non-Gaussian heterogeneity described by the GSG model has on the stochastic description of flow and transport. Recognizing that perturbation expansion of hydraulic conductivity K = e^Y diverges when Y is sub-Gaussian, we render the expansion convergent by truncating Y's domain of definition. We then demonstrate theoretically and illustrate by way of numerical examples that, as the domain of truncation expands, (a) the variance of truncated Y (denoted by Yt) approaches that of Y and (b) the pdf (and thereby moments) of Yt increments approach those of Y increments and, as a consequence, the variogram of Yt approaches that of Y. This in turn guarantees that perturbing Kt = e^Yt to second order in σ_Yt (the standard deviation of Yt) yields results which approach those we obtain upon perturbing K = e^Y to second order in σ_Y even as the corresponding series diverges. Our analysis is rendered mathematically tractable by considering mean-uniform steady state flow in an unbounded, two-dimensional domain of mildly heterogeneous Y with a single-scale function G having an isotropic exponential covariance. Results consist of expressions for (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity, and advective particle displacement and (b) analogues of preasymptotic as well as asymptotic Fickian dispersion coefficients. We compare these theoretically and graphically with corresponding expressions developed in the literature for Gaussian Y. We find the former to differ from the latter by a factor k =
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide
2005-08-15
We study the measurement-induced non-Gaussian operation on the single- and two-mode Gaussian squeezed vacuum states with beam splitters and on-off type photon detectors, with which mixed non-Gaussian states are generally obtained in the conditional process. It is known that the entanglement can be enhanced via this non-Gaussian operation on the two-mode squeezed vacuum state. We show that, in the range of practical squeezing parameters, the conditional outputs are still close to Gaussian states, but their second order variances of quantum fluctuations and correlations are effectively suppressed and enhanced, respectively. To investigate an operational meaning of these states, especially entangled states, we also evaluate the quantum dense coding scheme from the viewpoint of the mutual information, and we show that non-Gaussian entangled state can be advantageous compared with the original two-mode squeezed state.
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (ie, a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
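The core of the method, separating noise from signal with a two-component Gaussian mixture and reading off percentiles of the signal component, can be sketched as below; the synthetic velocity samples and the higher-mean rule for picking the signal component are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def envelope_percentiles(velocities, percentiles=(90, 95, 99)):
    """Fit a two-component (double) Gaussian mixture to the velocity samples of
    one time bin, treat the higher-mean component as the signal, and return the
    requested percentiles of that signal component."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(np.asarray(velocities).reshape(-1, 1))
    signal = int(np.argmax(gmm.means_.ravel()))
    mu = gmm.means_.ravel()[signal]
    sd = float(np.sqrt(gmm.covariances_.ravel()[signal]))
    return {p: norm.ppf(p / 100.0, loc=mu, scale=sd) for p in percentiles}

# Synthetic spectral samples for one time bin: low-velocity noise plus flow signal
rng = np.random.default_rng(6)
samples = np.concatenate([rng.normal(15, 8, 2000),     # noise floor (cm/s)
                          rng.normal(85, 12, 800)])    # blood flow signal (cm/s)
print(envelope_percentiles(samples))
```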
NASA Astrophysics Data System (ADS)
Mashhadi, L.
2017-12-01
Optical vortices are currently one of the most intensively studied topics in light-matter interaction. In this work, a three-step axial Doppler- and recoil-free Gaussian-Gaussian-Laguerre-Gaussian (GGLG) excitation of a localized atom to the highly excited Rydberg state is presented. By assuming a large detuning for intermediate states, an effective quadrupole excitation related to the Laguerre-Gaussian (LG) excitation to the highly excited Rydberg state is obtained. This special excitation system radially confines the single highly excited Rydberg atom, independently of the trapping system, into a sharp potential landscape, the so-called 'far-off-resonance optical dipole-quadrupole trap' (FORDQT). The key parameters of the Rydberg excitation to the highly excited state, namely the effective Rabi frequency and the effective detuning including a position-dependent AC Stark shift, are calculated in terms of the basic parameters of the LG beam and of the polarization of the excitation lasers. It is shown that the obtained parameters can be tuned to have a precise excitation of a single atom to the desired Rydberg state as well. The features of transferring the optical orbital and spin angular momentum of the polarized LG beam to the atom via quadrupole Rydberg excitation offer a long-lived and controllable qudit quantum memory. In addition, in contrast to the Gaussian laser beam, the doughnut-shaped LG beam makes it possible to use a high-intensity laser beam to increase the signal-to-noise ratio in quadrupole excitation with minimized perturbations coming from stray light broadening in the last Rydberg excitation process.
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian (GSG) model which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.
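An unconditional GSG realization of the form Y(x) = U(x)G(x) can be generated along the following lines; the lognormal, pointwise-drawn subordinator is an assumption made for illustration, and the conditioning on noisy data discussed above is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis

def gsg_realization(shape=(256, 256), corr_sigma=8.0, sub_sigma=0.4, seed=0):
    """Unconditional generalized sub-Gaussian (GSG) field Y(x) = U(x) G(x):
    G is a correlated zero-mean, unit-variance Gaussian field and U is a
    non-negative subordinator independent of G (drawn pointwise from a
    lognormal here, purely as an illustrative choice)."""
    rng = np.random.default_rng(seed)
    g = gaussian_filter(rng.normal(size=shape), corr_sigma)
    g /= g.std()
    u = rng.lognormal(mean=0.0, sigma=sub_sigma, size=shape)
    return u * g

Y = gsg_realization()
dY = Y[:, 1:] - Y[:, :-1]                 # increments at unit lag
print("excess kurtosis of Y :", kurtosis(Y.ravel()))
print("excess kurtosis of dY:", kurtosis(dY.ravel()))   # heavier tails at small lag
```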
Probing the cosmological initial conditions using the CMB
NASA Astrophysics Data System (ADS)
Yadav, Amit P. S.
In the last few decades, advances in observational cosmology have given us a standard model of cosmology. The basic cosmological parameters have been laid out to high precision. Cosmologists have started asking questions about the nature of the cosmological initial conditions. Many ambitious experiments such as Planck satellite, EBEX, ACT, CAPMAP, QUaD, BICEP, SPIDER, QUIET, and GEM are underway. Experiments like these will provide us with a wealth of information about CMB polarization, CMB lensing, and polarization foregrounds. These experiments will be complemented with great observational campaigns to map the 3D structure in the Universe and new particle physics constraints from the Large Hadron Collider. In my graduate work I have made explicit how observations of the CMB temperature and E-polarization anisotropies can be combined to provide optimal constraints on models of the early universe at the highest energies. I have developed new ways of constraining models of the early universe using CMB temperature and polarization data. Inflation is one of the most promising theories of the early universe. Different inflationary models predict different amounts of non-Gaussian perturbations. Although any non-Gaussianity predicted by the canonical inflation model is very small, there exist models which can generate significant amounts of non-Gaussianities. Hence any characterization of non-Gaussianity of the primordial perturbations constrains the models of inflation. The information in the bispectrum (or higher order moments) is completely independent of the power spectrum constraints on the amplitude of the primordial power spectrum (A), the scalar spectral index of the primordial power spectrum n_s, and the running of the primordial power spectrum. My work has made it possible to extract the bispectrum information from large, high resolution CMB temperature and polarization data. We have demonstrated that the primordial adiabatic perturbations can be reconstructed using CMB temperature and E-polarization information (Yadav and Wandelt 2005). One of the main motivations of reconstructing the primordial perturbations is to study the primordial non-Gaussianities. Since the amplitude of primordial non-Gaussianity is very small, any enhancement in sensitivity to the primordial features is useful because it improves the characterization of the primordial non-Gaussianity. Our reconstruction allows us to be more sensitive to the primordial features, whereas most of the current probes of non-Gaussianity do not specifically select for them. We have also developed a fast cubic (bispectrum) estimator of non-Gaussianity f_NL of local type, using combined temperature and E-polarization data (Yadav et al. 2007). The estimator is computationally efficient, scaling as O(N^{3/2}) compared to the O(N^{5/2}) scaling of the brute force bispectrum calculation for sky maps with N pixels. For the Planck satellite, this translates into a speed-up by factors of millions, reducing the required computing time from thousands of years to just hours and thus making f_NL estimation feasible. The speed of our estimator allows us to study its statistical properties using Monte Carlo simulations. Our estimator in its original form was optimal for homogeneous noise. In order to apply our estimator to realistic data, the estimator needed to be able to deal with inhomogeneous noise. We have generalized the fast polarized estimator to deal with inhomogeneous noise.
The generalized estimator is also computationally efficient, scaling as O(N^{3/2}). Furthermore, we have studied and characterized our estimators in the presence of realistic noise, finite resolution, incomplete sky-coverage, and using non-Gaussian CMB maps (Yadav et al. 2008a). We have also developed a numerical code to generate CMB temperature and polarization non-Gaussian maps starting from a given primordial non-Gaussianity (f_NL) (Liguori et al. 2007). In the process of non-Gaussian CMB map making, the code also generates corresponding non-Gaussian primordial curvature perturbations. We use these curvature perturbations to quantify the quality of the tomographic reconstruction method described in (Yadav and Wandelt 2005). We check whether the tomographic reconstruction method preserves the non-Gaussian features, especially the phase information, in the reconstructed curvature perturbations (Yadav et al. in preparation). Finally, using our estimator we found (Yadav and Wandelt 2008) evidence for primordial non-Gaussianity of the local type (f_NL) in the temperature anisotropy of the Cosmic Microwave Background. Analyzing the bispectrum of the WMAP 3-year data up to l_max = 750 we find 27 < f_NL < 147 (95% CL). This amounts to a rejection of f_NL = 0 at 2.8σ, disfavoring canonical single field slow-roll inflation. The signal is robust to variations in l_max, frequency, and masks. No known foreground, instrument systematic, or secondary anisotropy explains it. We explore the impact of several analysis choices on the quoted significance and find 2.5σ to be conservative.
NASA Astrophysics Data System (ADS)
Otsuka, Kenju; Nemoto, Kana; Kamikariya, Koji; Miyasaka, Yoshihiko; Chu, Shu-Chun
2007-09-01
Detailed oscillation spectra and polarization properties have been examined in laser-diode-pumped (LD-pumped) microchip ceramic (i.e., polycrystalline) Nd:YAG lasers and the inherent segregation of lasing patterns into local modes possessing different polarization states was observed. Single-frequency linearly-polarized stable oscillations were realized by forcing the laser to Ince-Gaussian mode operations by adjusting azimuthal cavity symmetry.
Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Steinwolf, Alexander
2005-01-01
The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation can be additionally regarded for convenience in usage after the model is constructed.
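The Hermite polynomial transform model mentioned above belongs to the family of translation models that map a Gaussian record through a low-order Hermite polynomial to obtain prescribed skewness and kurtosis. The sketch below uses arbitrary illustrative coefficients, not values fitted to the flight data.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def hermite_transform(u, h3, h4):
    """Cubic Hermite translation of a standard Gaussian record into a
    non-Gaussian one with adjustable skewness (via h3) and kurtosis (via h4)."""
    z = u + h3 * (u**2 - 1) + h4 * (u**3 - 3 * u)
    return (z - z.mean()) / z.std()           # re-standardize for comparison

rng = np.random.default_rng(7)
u = rng.standard_normal(200_000)              # Gaussian baseline pressure record
z = hermite_transform(u, h3=-0.05, h4=0.04)   # illustrative coefficients only

print("Gaussian record  skew, excess kurtosis:", skew(u), kurtosis(u))
print("Hermite model    skew, excess kurtosis:", skew(z), kurtosis(z))
```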
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo
2017-01-05
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E consistently have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
NASA Astrophysics Data System (ADS)
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
The performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP, and X3LYP), together with the Hartree-Fock (HF) method, has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained via a Fourier transform into momentum space of single-point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties, and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated momentum distributions of this orbital occur in the small-momentum region (i.e. the large-r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP. The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.
[S IV] in the NGC 5253 Supernebula: Ionized Gas Kinematics at High Resolution
NASA Astrophysics Data System (ADS)
Beck, Sara C.; Lacy, John H.; Turner, Jean L.; Kruger, Andrew; Richter, Matt; Crosthwaite, Lucian P.
2012-08-01
The nearby dwarf starburst galaxy NGC 5253 hosts a deeply embedded radio-infrared supernebula excited by thousands of O stars. We have observed this source in the 10.5 μm line of S3+ at 3.8 km s-1 spectral and 1.4 arcsec spatial resolution, using the high-resolution spectrometer TEXES on the IRTF. The line profile cannot be fit well by a single Gaussian. The best simple fit describes the gas with two Gaussians, one near the galactic velocity with FWHM 33.6 km s-1 and another of similar strength and FWHM 94 km s-1 centered ~20 km s-1 to the blue. This suggests a model for the supernebula in which gas flows toward us out of the molecular cloud, as in a "blister" or "champagne flow" or in the H II regions modelled by Zhu.
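A two-component Gaussian decomposition of a line profile of the kind described above can be reproduced with a standard least-squares fit; the velocities, amplitudes, and noise level below are synthetic placeholders, not the TEXES data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussian velocity components."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

fwhm_to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
v = np.linspace(300.0, 500.0, 200)                      # synthetic velocity grid, km/s
profile = two_gaussians(v, 1.0, 400.0, 33.6 * fwhm_to_sigma,
                        1.0, 380.0, 94.0 * fwhm_to_sigma)
profile += 0.02 * np.random.default_rng(1).normal(size=v.size)

p0 = [1.0, 400.0, 15.0, 1.0, 380.0, 40.0]               # initial guesses
popt, pcov = curve_fit(two_gaussians, v, profile, p0=p0)
```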
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
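The Karhunen-Loeve encoder referred to above amounts to projecting each spectral vector onto the eigenvectors of the band covariance matrix before quantization; the sketch below shows that transform step for a hypothetical 12-band source (the block quantizer itself is omitted).

```python
import numpy as np

def klt_encode(pixels, n_keep):
    """Karhunen-Loeve transform of spectral vectors; keep n_keep principal components."""
    mean = pixels.mean(axis=0)
    centred = pixels - mean
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :n_keep]            # retain the largest components
    coeffs = centred @ basis                        # coefficients that would be quantized
    return coeffs, basis, mean

def klt_decode(coeffs, basis, mean):
    return coeffs @ basis.T + mean

# Hypothetical 12-band scanner data, 10000 pixels with correlated bands
rng = np.random.default_rng(0)
pixels = rng.normal(size=(10000, 12)) @ rng.normal(size=(12, 12))
coeffs, basis, mean = klt_encode(pixels, n_keep=4)
reconstructed = klt_decode(coeffs, basis, mean)
```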
Effective theory of squeezed correlation functions
NASA Astrophysics Data System (ADS)
Mirbabayi, Mehrdad; Simonović, Marko
2016-03-01
Various inflationary scenarios can often be distinguished from one another by looking at the squeezed limit behavior of correlation functions. Therefore, it is useful to have a framework designed to study this limit in a more systematic and efficient way. We propose using an expansion in terms of weakly coupled super-horizon degrees of freedom, which is argued to generically exist in a near de Sitter space-time. The modes have a simple factorized form which leads to factorization of the squeezed-limit correlation functions with power-law behavior in k_long/k_short. This approach reproduces the known results in single-, quasi-single-, and multi-field inflationary models. However, it is applicable even if, unlike the above examples, the additional degrees of freedom are not weakly coupled at sub-horizon scales. Stronger results are derived in two-field (or sufficiently symmetric multi-field) inflationary models. We discuss the observability of the non-Gaussian 3-point function in the large-scale structure surveys, and argue that the squeezed limit behavior has a higher detectability chance than equilateral behavior when it scales as (k_long/k_short)^Δ with Δ < 1, where local non-Gaussianity corresponds to Δ = 0.
Langevin equation with fluctuating diffusivity: A two-state model
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji
2016-07-01
Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
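A minimal simulation of the two-state model described above is sketched below: the diffusivity switches dichotomously with power-law sojourn times, and the time-averaged MSD is estimated from a single trajectory. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_sojourn(alpha, tau0=1.0):
    """Sojourn time with tail P(tau > t) ~ (t / tau0)^(-alpha)."""
    return tau0 * rng.random() ** (-1.0 / alpha)

def trajectory(T, dt, D_fast, D_slow, alpha):
    """Overdamped Langevin dynamics with dichotomously switching diffusivity."""
    n = int(T / dt)
    x = np.zeros(n)
    D, t_switch, t = D_fast, powerlaw_sojourn(alpha), 0.0
    for i in range(1, n):
        t += dt
        if t >= t_switch:                           # switch between fast and slow states
            D = D_slow if D == D_fast else D_fast
            t_switch = t + powerlaw_sojourn(alpha)
        x[i] = x[i - 1] + np.sqrt(2.0 * D * dt) * rng.normal()
    return x

def time_averaged_msd(x, dt, lag):
    """Time-averaged MSD of one trajectory at a single lag time."""
    k = int(lag / dt)
    return np.mean((x[k:] - x[:-k]) ** 2)

x = trajectory(T=1000.0, dt=0.01, D_fast=1.0, D_slow=0.01, alpha=0.7)
msd = [time_averaged_msd(x, 0.01, lag) for lag in (0.1, 1.0, 10.0)]
```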
Light Scattering by Gaussian Particles: A Solution with Finite-Difference Time Domain Technique
NASA Technical Reports Server (NTRS)
Sun, W.; Nousiainen, T.; Fu, Q.; Loeb, N. G.; Videen, G.; Muinonen, K.
2003-01-01
The understanding of single-scattering properties of complex ice crystals has significance in atmospheric radiative transfer and remote-sensing applications. In this work, light scattering by irregularly shaped Gaussian ice crystals is studied with the finite-difference time-domain (FDTD) technique. For given sample particle shapes and size parameters in the resonance region, the scattering phase matrices and asymmetry factors are calculated. It is found that the deformation of the particle surface can significantly smooth the scattering phase functions and slightly reduce the asymmetry factors. The polarization properties of irregular ice crystals are also significantly different from those of spherical cloud particles. These FDTD results could provide a reference for approximate light-scattering models developed for irregular particle shapes, and have potential applications in the development of much simpler practical light-scattering models for ice-cloud angular-distribution models and for the remote sensing of ice clouds and aerosols using polarized light. © 2003 Elsevier Science Ltd. All rights reserved.
Temporal self-splitting of optical pulses
NASA Astrophysics Data System (ADS)
Ding, Chaoliang; Koivurova, Matias; Turunen, Jari; Pan, Liuzhan
2018-05-01
We present mathematical models for temporally and spectrally partially coherent pulse trains with Laguerre-Gaussian and Hermite-Gaussian Schell-model statistics as extensions of the standard Gaussian Schell model for pulse trains. We derive propagation formulas of both classes of pulsed fields in linearly dispersive media and in temporal optical systems. It is found that, in general, both types of fields exhibit time-domain self-splitting upon propagation. The Laguerre-Gaussian model leads to multiply peaked pulses, while the Hermite-Gaussian model leads to doubly peaked pulses, in the temporal far field (in dispersive media) or at the Fourier plane of a temporal system. In both model fields the character of the self-splitting phenomenon depends both on the degree of temporal and spectral coherence and on the power spectrum of the field.
Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas
2017-11-01
Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
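The distinction drawn above between unit-coefficient and multicoefficient combinations can be written as a one-line weighted sum; the component energies and fitted coefficients below are placeholders, not values from Gaussian-2/3 or MCG3.

```python
import numpy as np

def multilevel_energy(component_energies, coefficients):
    """Weighted combination of component energies from different levels of theory."""
    return float(np.dot(coefficients, component_energies))

# Placeholder component energies (hartree); real multicoefficient methods fit the
# non-integer coefficients to reference data rather than using these values.
components = [-76.241, -76.312, -76.305]
e_additive = multilevel_energy(components, [1.0, 1.0, -1.0])       # G2/G3-style unit coefficients
e_multicoeff = multilevel_energy(components, [0.84, 1.17, -1.01])  # MCG3-style fitted coefficients
```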
NASA Astrophysics Data System (ADS)
Guo, Yongfeng; Shen, Yajun; Tan, Jianguo
2016-09-01
The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. Numerical computations show that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on the SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effects of the correlation time of the non-Gaussian noise, the cross-correlation strength, and the amplitude and frequency of the periodic signal on SR.
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
Donner, K; Hemilä, S
1996-01-01
Difference-of-Gaussians (DOG) models for the receptive fields of retinal ganglion cells accurately predict linear responses to both periodic stimuli (typically moving sinusoidal gratings) and aperiodic stimuli (typically circular fields presented as square-wave pulses). While the relation of spatial organization to retinal anatomy has received considerable attention, temporal characteristics have been only loosely connected to retinal physiology. Here we integrate realistic photoreceptor response waveforms into the DOG model to clarify how far a single set of physiological parameters predicts temporal aspects of linear responses to both periodic and aperiodic stimuli. Traditional filter-cascade models provide a useful first-order approximation of the single-photon response in photoreceptors. The absolute time scale of these, plus a time for retinal transmission (here construed as a fixed delay), are obtained from flash/step data. Using these values, we find that the DOG model predicts the main features of both the amplitude and phase response of linear cat ganglion cells to sinusoidal flicker. Where the simplest model formulation fails, it serves to reveal additional mechanisms. Unforeseen findings are the attenuation of low temporal frequencies even in pure center-type responses, and the phase advance of the response relative to the stimulus at low frequencies. Neither can be explained by any experimentally documented cone response waveform, but both would be explained by signal differentiation, e.g. in the retinal transmission pathway, as demonstrated at least in turtle retina.
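The ingredients described above, a DOG spatial profile plus a photoreceptor filter cascade and a fixed transmission delay, can be sketched in the frequency domain as follows; the parameter values are illustrative, not fitted cat ganglion cell data.

```python
import numpy as np

def dog_spatial_response(nu, k_c, r_c, k_s, r_s):
    """DOG sensitivity to a sinusoidal grating of spatial frequency nu (cycles/deg)."""
    centre   = k_c * np.pi * r_c ** 2 * np.exp(-(np.pi * r_c * nu) ** 2)
    surround = k_s * np.pi * r_s ** 2 * np.exp(-(np.pi * r_s * nu) ** 2)
    return centre - surround

def cascade_temporal_response(f, n_stages, tau, delay):
    """Filter-cascade photoreceptor model plus a fixed transmission delay (frequency domain)."""
    w = 2.0 * np.pi * f
    return (1.0 / (1.0 + 1j * w * tau)) ** n_stages * np.exp(-1j * w * delay)

nu = np.logspace(-1, 1, 50)                              # spatial frequency, cycles/deg
spatial = dog_spatial_response(nu, k_c=1.0, r_c=0.2, k_s=0.01, r_s=1.0)

f = np.logspace(-1, 2, 100)                              # temporal frequency, Hz
H = cascade_temporal_response(f, n_stages=4, tau=0.01, delay=0.01)
amplitude, phase = np.abs(H), np.unwrap(np.angle(H))
```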
Unitarily localizable entanglement of Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serafini, Alessio; Adesso, Gerardo; Illuminati, Fabrizio
2005-03-01
We consider generic (m×n)-mode bipartitions of continuous-variable systems, and study the associated bisymmetric multimode Gaussian states. They are defined as (m+n)-mode Gaussian states invariant under local mode permutations on the m-mode and n-mode subsystems. We prove that such states are equivalent, under local unitary transformations, to the tensor product of a two-mode state and of m+n-2 uncorrelated single-mode states. The entanglement between the m-mode and the n-mode blocks can then be completely concentrated on a single pair of modes by means of local unitary operations alone. This result allows us to prove that the PPT (positivity of the partial transpose) condition is necessary and sufficient for the separability of (m+n)-mode bisymmetric Gaussian states. We determine exactly their negativity and identify a subset of bisymmetric states whose multimode entanglement of formation can be computed analytically. We consider explicit examples of pure and mixed bisymmetric states and study their entanglement scaling with the number of modes.
Actin filaments growing against an elastic membrane: Effect of membrane tension
NASA Astrophysics Data System (ADS)
Sadhu, Raj Kumar; Chatterjee, Sakuntala
2018-03-01
We study the force generation by a set of parallel actin filaments growing against an elastic membrane. The elastic membrane tries to stay flat and any deformation from this flat state, either caused by thermal fluctuations or due to protrusive polymerization force exerted by the filaments, costs energy. We study two lattice models to describe the membrane dynamics. In one case, the energy cost is assumed to be proportional to the absolute magnitude of the height gradient (gradient model) and in the other case it is proportional to the square of the height gradient (Gaussian model). For the gradient model we find that the membrane velocity is a nonmonotonic function of the elastic constant μ and reaches a peak at μ =μ* . For μ <μ* the system fails to reach a steady state and the membrane energy keeps increasing with time. For the Gaussian model, the system always reaches a steady state and the membrane velocity decreases monotonically with the elastic constant ν for all nonzero values of ν . Multiple filaments give rise to protrusions at different regions of the membrane and the elasticity of the membrane induces an effective attraction between the two protrusions in the Gaussian model which causes the protrusions to merge and a single wide protrusion is present in the system. In both the models, the relative time scale between the membrane and filament dynamics plays an important role in deciding whether the shape of elasticity-velocity curve is concave or convex. Our numerical simulations agree reasonably well with our analytical calculations.
Arbitrage with fractional Gaussian processes
NASA Astrophysics Data System (ADS)
Zhang, Xili; Xiao, Weilin
2017-04-01
While the arbitrage opportunity in the Black-Scholes model driven by fractional Brownian motion has a long history, the arbitrage strategy in the Black-Scholes model driven by general fractional Gaussian processes is in its infancy. The development of stochastic calculus with respect to fractional Gaussian processes allowed us to study such models. In this paper, following the idea of Shiryaev (1998), an arbitrage strategy is constructed for the Black-Scholes model driven by fractional Gaussian processes, when the stochastic integral is interpreted in the Riemann-Stieltjes sense. Arbitrage opportunities in some fractional Gaussian processes, including fractional Brownian motion, sub-fractional Brownian motion, bi-fractional Brownian motion, weighted-fractional Brownian motion and tempered fractional Brownian motion, are also investigated.
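A fractional Gaussian process of the kind used above can be sampled directly from its covariance function; the sketch below draws a fractional Brownian motion path by Cholesky factorization and uses it in a toy price path (the Hurst index, drift, and volatility are placeholders).

```python
import numpy as np

def fbm_path(n, H, T=1.0, rng=None):
    """Sample a fractional Brownian motion path on (0, T] via Cholesky of its covariance."""
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))     # small jitter for numerical stability
    return t, L @ rng.normal(size=n)

t, B_H = fbm_path(n=500, H=0.7)                         # H > 1/2: persistent increments
S = 100.0 * np.exp(0.05 * t + 0.2 * B_H)                # toy fractional Black-Scholes price path
```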
Quantum state engineering of light with continuous-wave optical parametric oscillators.
Morin, Olivier; Liu, Jianli; Huang, Kun; Barbosa, Felippe; Fabre, Claude; Laurat, Julien
2014-05-30
Engineering non-classical states of the electromagnetic field is a central quest for quantum optics(1,2). Beyond their fundamental significance, such states are indeed the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems(3). We focus here on the use of a continuous-wave optical parametric oscillator(3,4). This system is based on a non-linear χ(2) crystal inserted inside an optical cavity and it is now well-known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum depending on the crystal phase matching. Squeezed vacuum is a Gaussian state as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states(5). Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables a high fidelity with the targeted state to be achieved and generates the state in a well-controlled spatiotemporal mode.
Planck 2015 results. XVII. Constraints on primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Arroja, F.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Heavens, A.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Münchmeyer, M.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peiris, H. V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Shiraishi, M.; Smith, K.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutter, P.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Troja, A.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
The Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators - separable template-fitting (KSW), binned, and modal - we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone ƒlocalNL = 2.5 ± 5.7, ƒequilNL = -16 ± 70, and ƒorthoNL = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain ƒlocalNL = 0.8 ± 5.0, ƒequilNL = -4 ± 43, and ƒorthoNL = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the "look elsewhere" effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be 𝓰localNL = (-0.9 ± 7.7) × 10⁴ (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Planck 2015 results: XVII. Constraints on primordial non-Gaussianity
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-09-20
We report that the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators – separable template-fitting (KSW), binned, and modal – we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone ƒlocalNL = 2.5 ± 5.7, ƒequilNL = -16 ± 70, and ƒorthoNL = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain ƒlocalNL = 0.8 ± 5.0, ƒequilNL = -4 ± 43, and ƒorthoNL = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the “look elsewhere” effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be glocalNL = (-0.9 ± 7.7) × 10⁴ (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
NASA Astrophysics Data System (ADS)
Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.
2017-12-01
The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows the uncertainty of the calibration to be addressed to some extent, for a subsequent analysis of a flow-process-dependent prediction. By assimilating the available data in one single step, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction computed from approximations of covariance matrices is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution. Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to determine how best to leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian or multipoint simulations that is as conceptually consistent as possible. The performance of the algorithm, including additional steps to help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis or resampling of facies from the training image using updated local probability constraints, will also be assessed.
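For reference, a single generic ensemble-smoother update step is sketched below; it is not the specific iterative algorithm used in the study, and the forward model, ensemble sizes, and observation errors are hypothetical.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_err_std, rng=None):
    """One ensemble-smoother update of a parameter ensemble M given simulated data D.

    M : (n_param, n_ens) ensemble of parameter realizations
    D : (n_obs, n_ens) simulated observations, one column per ensemble member
    """
    rng = rng or np.random.default_rng()
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)                      # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)                      # data covariance
    R = np.diag(np.full(len(d_obs), obs_err_std ** 2))  # observation-error covariance
    D_obs = d_obs[:, None] + obs_err_std * rng.normal(size=D.shape)  # perturbed observations
    increment = C_md @ np.linalg.solve(C_dd + R, D_obs - D)
    return M + increment

# Hypothetical usage: M (n_param x n_ens) from prior sampling, D from forward-model runs
# M_new = ensemble_smoother_update(M, D, d_obs, obs_err_std=0.05)
```

The Kalman-type correction above is exactly the step that presumes approximately multi-Gaussian ensembles, which is why transforms such as Gaussian anamorphosis are considered when the prior is built from multipoint simulations.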
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
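A rough sketch of the separation idea, not the authors' mixed ICA/PCA algorithm or its cross-validated model selection, is given below: non-Gaussian directions are identified by their kurtosis after ICA, and the remaining (inseparable) Gaussian subspace is summarized with PCA. The source distributions and mixing matrix are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
n = 5000
# Two non-Gaussian sources (uniform, Laplacian) plus two Gaussian sources
S = np.column_stack([rng.uniform(-1, 1, n), rng.laplace(size=n),
                     rng.normal(size=n), rng.normal(size=n)])
X = S @ rng.normal(size=(4, 4)).T                      # mixed observations

ica = FastICA(n_components=4, random_state=0)
est = ica.fit_transform(X)
kurt = ((est - est.mean(0)) ** 4).mean(0) / est.var(0) ** 2 - 3.0
non_gauss = np.argsort(-np.abs(kurt))[:2]              # components with clearly non-zero kurtosis
gauss_subspace = PCA(n_components=2).fit(np.delete(est, non_gauss, axis=1))
```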
Liu, Chengyu; Zheng, Dingchang; Zhao, Lina; Liu, Changchun
2014-01-01
It has been reported that Gaussian functions could accurately and reliably model both carotid and radial artery pressure waveforms (CAPW and RAPW). However, the physiological relevance of the characteristic features from the modeled Gaussian functions has been little investigated. This study thus aimed to determine characteristic features from the Gaussian functions and to make comparisons of them between normal subjects and heart failure patients. Fifty-six normal subjects and 51 patients with heart failure were studied with the CAPW and RAPW signals recorded simultaneously. The two signals were normalized first and then modeled by three positive Gaussian functions, with their peak amplitude, peak time, and half-width determined. Comparisons of these features were finally made between the two groups. Results indicated that the peak amplitude of the first Gaussian curve was significantly decreased in heart failure patients compared with normal subjects (P<0.001). Significantly increased peak amplitude of the second Gaussian curve (P<0.001) and significantly shortened peak times of the second and third Gaussian curves (both P<0.001) were also observed in heart failure patients. These results held for both CAPW and RAPW signals, indicating the clinical significance of the Gaussian modeling, which should provide essential tools for further understanding the underlying physiological mechanisms of the artery pressure waveform.
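The three-Gaussian decomposition described above can be reproduced with a standard nonlinear least-squares fit; the waveform, timings, and half-widths below are synthetic placeholders rather than measured CAPW or RAPW data.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, *p):
    """Sum of three positive Gaussians; p = (amplitude, peak_time, half_width) x 3."""
    y = np.zeros_like(t)
    for a, mu, hw in zip(p[0::3], p[1::3], p[2::3]):
        sigma = hw / np.sqrt(2.0 * np.log(2.0))        # half-width at half-maximum -> sigma
        y += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

# Synthetic normalized waveform sampled over one cardiac cycle
t = np.linspace(0.0, 1.0, 500)
wave = three_gaussians(t, 1.0, 0.20, 0.08, 0.55, 0.45, 0.10, 0.30, 0.70, 0.12)
wave += 0.01 * np.random.default_rng(2).normal(size=t.size)

p0 = [1.0, 0.2, 0.1, 0.5, 0.45, 0.1, 0.3, 0.7, 0.1]    # initial guesses
popt, _ = curve_fit(three_gaussians, t, wave, p0=p0)
amps, peak_times, half_widths = popt[0::3], popt[1::3], popt[2::3]
```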
Wang, Shu-Fan; Lai, Shang-Hong
2011-10-01
Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Röben, B., E-mail: roeben@pdi-berlin.de; Wienold, M.; Schrottke, L.
2016-06-15
The far-field distribution of the emission intensity of terahertz (THz) quantum-cascade lasers (QCLs) frequently exhibits multiple lobes instead of a single-lobed Gaussian distribution. We show that such multiple lobes can result from self-interference related to the typically large beam divergence of THz QCLs and the presence of an inevitable cryogenic operation environment including optical windows. We develop a quantitative model to reproduce the multiple lobes. We also demonstrate how a single-lobed far-field distribution can be achieved.
Vyas, Manan; Kota, V K B; Chavda, N D
2010-03-01
Finite interacting Fermi systems with a mean-field and a chaos generating two-body interaction are modeled by one plus two-body embedded Gaussian orthogonal ensemble of random matrices with spin degree of freedom [called EGOE(1+2)-s]. Numerical calculations are used to demonstrate that, as lambda, the strength of the interaction (measured in the units of the average spacing of the single-particle levels defining the mean-field), increases, generically there is Poisson to GOE transition in level fluctuations, Breit-Wigner to Gaussian transition in strength functions (also called local density of states) and also a duality region where information entropy will be the same in both the mean-field and interaction defined basis. Spin dependence of the transition points lambda_c, lambda_F, and lambda_d, respectively, is described using the propagator for the spectral variances and the formula for the propagator is derived. We further establish that the duality region corresponds to a region of thermalization. For this purpose we compared the single-particle entropy defined by the occupancies of the single-particle orbitals with thermodynamic entropy and information entropy for various lambda values and they are very close to each other at lambda = lambda_d.
Acoustical tweezers using single spherically focused piston, X-cut, and Gaussian beams.
Mitri, Farid G
2015-10-01
Partial-wave series expansions (PWSEs) satisfying the Helmholtz equation in spherical coordinates are derived for circular spherically focused piston (i.e., apodized by a uniform velocity amplitude normal to its surface), X-cut (i.e., apodized by a velocity amplitude parallel to the axis of wave propagation), and Gaussian (i.e., apodized by a Gaussian distribution of the velocity amplitude) beams. The Rayleigh-Sommerfeld diffraction integral and the addition theorems for the Legendre and spherical wave functions are used to obtain the PWSEs assuming weakly focused beams (with focusing angle α ⩽ 20°) in the Fresnel-Kirchhoff (parabolic) approximation. In contrast with previous analytical models, the derived expressions allow computing the scattering and acoustic radiation force from a sphere of radius a without restriction to either the Rayleigh (a ≪ λ, where λ is the wavelength of the incident radiation) or the ray acoustics (a ≫ λ) regimes. The analytical formulations are valid for wavelengths largely exceeding the radius of the focused acoustic radiator, when the viscosity of the surrounding fluid can be neglected, and when the sphere is translated along the axis of wave propagation. Computational results illustrate the analysis with particular emphasis on the sphere's elastic properties and the axial distance to the center of the concave surface, with close connection to the emergence of negative trapping forces. Potential applications are in single-beam acoustical tweezers, acoustic levitation, and particle manipulation.
MSEE: Stochastic Cognitive Linguistic Behavior Models for Semantic Sensing
2013-09-01
The report covers a Gaussian Process Dynamic Model combined with Social Network Analysis (GPDM-SNA) for small human group action recognition, an extended GPDM-SNA, and small human group activity modeling based on the Gaussian Process Dynamical Model and Social Network Analysis (SN-GPDM). Approved for public release; distribution unlimited.
Axial acoustic radiation force on a sphere in Gaussian field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Rongrong; Liu, Xiaozhou, E-mail: xzliu@nju.edu.cn; Gong, Xiufen
2015-10-28
Based on the finite series method, the acoustical radiation force resulting from a Gaussian beam incident on a spherical object is investigated analytically. When the position of the particle deviates from the center of the beam, the Gaussian beam is expanded in spherical functions about the center of the particle and the expansion coefficients of the Gaussian beam are calculated. The analytical expression of the acoustic radiation force on spherical particles deviating from the Gaussian beam center is deduced. The dependence of the acoustic radiation force on the acoustic frequency and the offset distance from the Gaussian beam center is investigated. Results have been presented for Gaussian beams with different wavelengths and it has been shown that the interaction of a Gaussian beam with a sphere can result in an attractive axial force under specific operational conditions. Results indicate the capability of manipulating and separating spherical particles based on their mechanical and acoustical properties; the results provided here may provide a theoretical basis for the development of single-beam acoustical tweezers.
Kota, V K B; Chavda, N D; Sahu, R
2006-04-01
Interacting many-particle systems with a mean-field one-body part plus a chaos generating random two-body interaction having strength lambda exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions with transition points marked by lambda = lambda_c and lambda = lambda_F, respectively; lambda_F > lambda_c. For these systems a theory for the matrix elements of one-body transition operators is available, as valid in the Gaussian domain, with lambda > lambda_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends below from the Gaussian regime to the BW regime up to lambda = lambda_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
Generation of singular optical beams from fundamental Gaussian beam using Sagnac interferometer
NASA Astrophysics Data System (ADS)
Naik, Dinesh N.; Viswanathan, Nirmal K.
2016-09-01
We propose a simple free-space optics recipe for the controlled generation of optical vortex beams with a vortex dipole or a single charge vortex, using an inherently stable Sagnac interferometer. We investigate the role played by the amplitude and phase differences in generating higher-order Gaussian beams from the fundamental Gaussian mode. Our simulation results reveal how important the control of both the amplitude and the phase difference between superposing beams is to achieving optical vortex beams. The creation of a vortex dipole from null interference is unveiled through the introduction of a lateral shear and a radial phase difference between two out-of-phase Gaussian beams. A stable and high quality optical vortex beam, equivalent to the first-order Laguerre-Gaussian beam, is synthesized by coupling lateral shear with linear phase difference, introduced orthogonal to the shear between two out-of-phase Gaussian beams.
Radiation pressure acceleration of corrugated thin foils by Gaussian and super-Gaussian beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adusumilli, K.; Goyal, D.; Tripathi, V. K.
Rayleigh-Taylor instability of radiation pressure accelerated ultrathin foils by laser having Gaussian and super-Gaussian intensity distribution is investigated using a single fluid code. The foil is allowed to have ring shaped surface ripples. The radiation pressure force on such a foil is non-uniform with finite transverse component F_r; F_r varies periodically with r. Subsequently, the ripple grows as the foil moves ahead along z. With a Gaussian beam, the foil acquires an overall curvature due to non-uniformity in radiation pressure and gets thinner. In the process, the ripple perturbation is considerably washed off. With a super-Gaussian beam, the ripple is found to be more strongly washed out. In order to avoid transmission of the laser through the thinning foil, a criterion on the foil thickness is obtained.
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong
2016-12-09
Water molecular diffusion in tissue in vivo is much more complicated than the Gaussian model assumes. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating the water diffusion properties and characterization of rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values at a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) on primary tumor and adjacent normal rectal tissue. Parameters of standard apparent diffusion coefficient (ADC), slow- and fast-ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fitting results of the actual DWI signal in rectal cancer and the normal rectal wall (R² = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC-derived Gaussian diffusion model. SEM may be used as a potential optimal model for characterization of rectal cancer.
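For illustration, the stretched-exponential and bi-exponential IVIM signal models named above can be fitted to a multi-b-value signal as sketched below; the b-values, parameter bounds, and noise level are assumptions, not the acquisition protocol of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sem(b, ddc, alpha):
    """Stretched-exponential model: S(b)/S0 = exp(-(b * DDC)^alpha)."""
    return np.exp(-(b * ddc) ** alpha)

def ivim(b, f, d_fast, d_slow):
    """Bi-exponential IVIM model with perfusion fraction f."""
    return f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow)

b = np.array([0, 25, 50, 100, 200, 400, 600, 800, 1000, 1500], dtype=float)  # s/mm^2
signal = sem(b, ddc=1.1e-3, alpha=0.8) + 0.005 * np.random.default_rng(3).normal(size=b.size)

sem_fit, _ = curve_fit(sem, b, signal, p0=[1e-3, 0.9],
                       bounds=([1e-5, 0.1], [1e-2, 1.0]))
ivim_fit, _ = curve_fit(ivim, b, signal, p0=[0.1, 1e-2, 1e-3],
                        bounds=([0.0, 1e-3, 1e-5], [0.5, 1e-1, 1e-2]))
```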
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiurasek, Jaromir; Cerf, Nicolas J.
We investigate the asymmetric Gaussian cloning of coherent states which produces M copies from N input replicas in such a way that the fidelity of each copy may be different. We show that the optimal asymmetric Gaussian cloning can be performed with a single phase-insensitive amplifier and an array of beam splitters. We obtain a simple analytical expression characterizing the set of optimal asymmetric Gaussian cloning machines and prove the optimality of these cloners using the formalism of Gaussian completely positive maps and semidefinite programming techniques. We also present an alternative implementation of the asymmetric cloning machine where the phase-insensitive amplifier is replaced with a beam splitter, heterodyne detector, and feedforward.
Evaluation of non-Gaussian diffusion in cardiac MRI.
McClymont, Darryl; Teh, Irvin; Carruth, Eric; Omens, Jeffrey; McCulloch, Andrew; Whittington, Hannah J; Kohl, Peter; Grau, Vicente; Schneider, Jürgen E
2017-09-01
The diffusion tensor model assumes Gaussian diffusion and is widely applied in cardiac diffusion MRI. However, diffusion in biological tissue deviates from a Gaussian profile as a result of hindrance and restriction from cell and tissue microstructure, and may be quantified better by non-Gaussian modeling. The aim of this study was to investigate non-Gaussian diffusion in healthy and hypertrophic hearts. Thirteen rat hearts (five healthy, four sham, four hypertrophic) were imaged ex vivo. Diffusion-weighted images were acquired at b-values up to 10,000 s/mm². Models of diffusion were fit to the data and ranked based on the Akaike information criterion. The diffusion tensor was ranked best at b-values up to 2000 s/mm² but reflected the signal poorly in the high b-value regime, in which the best model was a non-Gaussian "beta distribution" model. Although there was considerable overlap in apparent diffusivities between the healthy, sham, and hypertrophic hearts, diffusion kurtosis and skewness in the hypertrophic hearts were more than 20% higher in the sheetlet and sheetlet-normal directions. Non-Gaussian diffusion models have a higher sensitivity for the detection of hypertrophy compared with the Gaussian model. In particular, diffusion kurtosis may serve as a useful biomarker for characterization of disease and remodeling in the heart. Magn Reson Med 78:1174-1186, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
BINGO: a code for the efficient computation of the scalar bi-spectrum
NASA Astrophysics Data System (ADS)
Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme
2013-05-01
We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.
Hamby, D M
2002-01-01
Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
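The centreline and sector-averaged forms of the straight-line Gaussian plume model discussed above can be sketched as follows; the power-law dispersion coefficients, stack height, and wind speed are placeholders rather than the study's joint-frequency-distribution inputs.

```python
import numpy as np

def sigma_powerlaw(x, a, b):
    """Hypothetical power-law dispersion coefficient sigma = a * x**b (x in metres)."""
    return a * x ** b

def plume_centerline(Q, u, x, H, sy_ab, sz_ab):
    """Ground-level centreline concentration of a straight-line Gaussian plume (full reflection)."""
    sy = sigma_powerlaw(x, *sy_ab)
    sz = sigma_powerlaw(x, *sz_ab)
    return Q / (np.pi * u * sy * sz) * np.exp(-H ** 2 / (2.0 * sz ** 2))

def plume_sector_averaged(Q, u, x, H, sz_ab, n_sectors=16):
    """Sector-averaged relative concentration over a 22.5 degree wind-direction sector."""
    sz = sigma_powerlaw(x, *sz_ab)
    arc = 2.0 * np.pi * x / n_sectors              # crosswind sector width at distance x
    return np.sqrt(2.0 / np.pi) * Q / (u * sz * arc) * np.exp(-H ** 2 / (2.0 * sz ** 2))

x = np.array([100.0, 500.0, 1000.0, 5000.0])       # downwind distances, m
chi = plume_centerline(Q=1.0, u=3.0, x=x, H=30.0, sy_ab=(0.22, 0.9), sz_ab=(0.20, 0.85))
chi_sector = plume_sector_averaged(Q=1.0, u=3.0, x=x, H=30.0, sz_ab=(0.20, 0.85))
```

Note that only σz enters the sector-averaged form, which is consistent with the sensitivity ordering reported above.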
NASA Astrophysics Data System (ADS)
Karagiannis, Dionysios; Lazanu, Andrei; Liguori, Michele; Raccanelli, Alvise; Bartolo, Nicola; Verde, Licia
2018-07-01
We forecast constraints on primordial non-Gaussianity (PNG) and bias parameters from measurements of galaxy power spectrum and bispectrum in future radio continuum and optical surveys. In the galaxy bispectrum, we consider a comprehensive list of effects, including the bias expansion for non-Gaussian initial conditions up to second order, redshift space distortions, redshift uncertainties and theoretical errors. These effects are all combined in a single PNG forecast for the first time. Moreover, we improve the bispectrum modelling over previous forecasts, by accounting for trispectrum contributions. All effects have an impact on final predicted bounds, which varies with the type of survey. We find that the bispectrum can lead to improvements up to a factor ˜5 over bounds based on the power spectrum alone, leading to significantly better constraints for local-type PNG, with respect to current limits from Planck. Future radio and photometric surveys could obtain a measurement error of σ (f_{NL}^{loc}) ≈ 0.2. In the case of equilateral PNG, galaxy bispectrum can improve upon present bounds only if significant improvements in the redshift determinations of future, large volume, photometric or radio surveys could be achieved. For orthogonal non-Gaussianity, expected constraints are generally comparable to current ones.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASCs), 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
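A minimal sketch of the optimisation step described above, assuming power-law dispersion parameters (sigma = a * x**b), a hypothetical CFD-derived centreline concentration profile, and SciPy's differential evolution as a stand-in for the genetic algorithm used by the authors.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical reference data: downwind distances (m) and centreline CO2
# concentrations (kg/m^3) from a CFD run for one stability class.
x_ref = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
c_ref = np.array([2.1e-2, 7.5e-3, 2.4e-3, 7.8e-4, 2.5e-4])

Q, u, H = 5.0, 4.0, 1.0   # source strength (kg/s), wind speed (m/s), release height (m)

def centreline_concentration(x, a_y, b_y, a_z, b_z):
    """Ground-level centreline Gaussian plume with power-law dispersion parameters."""
    sigma_y, sigma_z = a_y * x ** b_y, a_z * x ** b_z
    return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-0.5 * (H / sigma_z) ** 2)

def cost(params):
    model = centreline_concentration(x_ref, *params)
    return np.sum((np.log(model) - np.log(c_ref)) ** 2)   # misfit in log space

bounds = [(0.01, 1.0), (0.5, 1.2), (0.01, 1.0), (0.5, 1.2)]   # (a_y, b_y, a_z, b_z)
result = differential_evolution(cost, bounds, seed=0)         # GA-like global optimiser
print(result.x)
```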
Zhang, Lifu; Li, Chuxin; Zhong, Haizhe; Xu, Changwen; Lei, Dajun; Li, Ying; Fan, Dianyuan
2016-06-27
We have investigated the propagation dynamics of super-Gaussian optical beams in the fractional Schrödinger equation and identified the differences between the propagation dynamics of super-Gaussian beams and that of Gaussian beams. We show that the linear propagation dynamics of super-Gaussian beams with order m > 1 undergo an initial compression phase before the beams split into two sub-beams. The saddle-shaped sub-beams separate from each other, and their interval increases linearly with propagation distance. In the nonlinear regime, the super-Gaussian beams evolve into a single soliton, a breathing soliton or a soliton pair, depending on the order of the super-Gaussian beams, the nonlinearity, and the Lévy index. In two dimensions, the linear evolution of super-Gaussian beams is similar to that in the one-dimensional case, but the initial compression of the input super-Gaussian beams and the diffraction of the splitting beams are much stronger than in one dimension, while the nonlinear propagation of the super-Gaussian beams becomes much more unstable than in the one-dimensional case. Our results show that the nonlinear effects can be tuned by varying the Lévy index in the fractional Schrödinger equation for a fixed input power.
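A minimal sketch of the linear part of this propagation problem: a one-dimensional super-Gaussian beam advanced under the fractional Schrödinger equation with a split-step spectral method. The grid, the Lévy index and the beam order are illustrative assumptions, and the nonlinear term studied in the paper is omitted.

```python
import numpy as np

# Linear propagation of a 1-D super-Gaussian beam under the fractional
# Schrodinger equation i dpsi/dz = 0.5 * (-d^2/dx^2)^(alpha/2) psi,
# solved with a spectral (split-step) method.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

m, alpha = 3, 1.5                      # super-Gaussian order (m > 1) and Levy index
psi = np.exp(-x ** (2 * m))            # super-Gaussian input of unit width

dz, steps = 0.01, 500
phase = np.exp(-0.5j * np.abs(k) ** alpha * dz)   # fractional diffraction in k-space
for _ in range(steps):
    psi = np.fft.ifft(phase * np.fft.fft(psi))

intensity = np.abs(psi) ** 2           # shows the splitting into two sub-beams
```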
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (the HEL and USP data sets), which have different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved for GY. These gains in prediction accuracy also decreased when a more difficult prediction problem was studied. PMID:28455415
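A single-environment sketch of the two kernels being compared, assuming a marker matrix X and phenotype vector y are available; the kernel ridge predictor below is a simplified stand-in for the full Bayesian GBLUP/GK machinery of the study.

```python
import numpy as np

def gblup_kernel(X):
    """Linear (GBLUP-style) genomic relationship kernel from a centred marker matrix X."""
    Xc = X - X.mean(axis=0)
    return Xc @ Xc.T / Xc.shape[1]

def gaussian_kernel(X, bandwidth=None):
    """Gaussian kernel K_ij = exp(-d_ij^2 / q), with q the median squared distance by default."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if bandwidth is None:
        bandwidth = np.median(d2[d2 > 0])
    return np.exp(-d2 / bandwidth)

def kernel_blup(K, y, train_idx, test_idx, lam=1.0):
    """Predict test phenotypes with kernel ridge regression (a BLUP-style predictor)."""
    K_tt = K[np.ix_(train_idx, train_idx)]
    alpha = np.linalg.solve(K_tt + lam * np.eye(len(train_idx)), y[train_idx])
    return K[np.ix_(test_idx, train_idx)] @ alpha
```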
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple-camera tracking paradigm needs not only to spot unknown targets and track them, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features, which can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques, including the histogram, the spatiogram and the single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints, that is, how long a fingerprint remains valid when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of <20%. Additionally, the performance of fingerprints stays well above the null hypothesis for as many as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
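A minimal sketch of the single Gaussian fingerprint idea, assuming the target's segmented pixels have already been stacked into a feature array. The Bhattacharyya distance used to compare fingerprints for reacquisition is one reasonable choice, not necessarily the matching score used by the authors.

```python
import numpy as np

def gaussian_fingerprint(features):
    """Compact single-Gaussian fingerprint (mean vector and covariance) of a
    target's pixel features, e.g. colour channels stacked as an (n_pixels, d) array."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def bhattacharyya_distance(fp_a, fp_b):
    """Distance between two single-Gaussian fingerprints; small values suggest the
    same target, so this can drive reacquisition or camera-to-camera handoff."""
    mu_a, cov_a = fp_a
    mu_b, cov_b = fp_b
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2
```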
Swelling of two-dimensional polymer rings by trapped particles.
Haleva, E; Diamant, H
2006-09-01
The mean area of a two-dimensional Gaussian ring of N monomers is known to diverge when the ring is subject to a critical pressure differential, p_c ~ N^(-1). In a recent publication (Eur. Phys. J. E 19, 461 (2006)) we have shown that for an inextensible freely jointed ring this divergence turns into a second-order transition from a crumpled state, where the mean area scales as [A] ~ N, to a smooth state with [A] ~ N^2. In the current work we extend these two models to the case where the swelling of the ring is caused by trapped ideal-gas particles. The Gaussian model is solved exactly, and the freely jointed one is treated using a Flory argument, mean-field theory, and Monte Carlo simulations. For a fixed number Q of trapped particles the criticality disappears in both models through an unusual mechanism, arising from the absence of an area constraint. In the Gaussian case the ring swells to such a mean area, [A] ~ NQ, that the pressure exerted by the particles is at p_c for any Q. In the freely jointed model the mean area is such that the particle pressure is always higher than p_c, and [A] consequently follows a single scaling law, [A] ~ N^2 f(Q/N), for any Q. By contrast, when the particles are in contact with a reservoir of fixed chemical potential, the criticality is retained. Thus, the two ensembles are manifestly inequivalent in these systems.
'A device for being able to book P&L': the organizational embedding of the Gaussian copula.
MacKenzie, Donald; Spears, Taylor
2014-06-01
This article, the second of two articles on the Gaussian copula family of models, discusses the attitude of 'quants' (modellers) to these models, showing that contrary to some accounts, those quants were not 'model dopes' who uncritically accepted the outputs of the models. Although sometimes highly critical of Gaussian copulas - even 'othering' them as not really being models - they nevertheless nearly all kept using them, an outcome we explain with reference to the embedding of these models in inter- and intra-organizational processes: communication, risk control and especially the setting of bonuses. The article also examines the role of Gaussian copula models in the 2007-2008 global crisis and in a 2005 episode known as 'the correlation crisis'. We end with the speculation that all widely used derivatives models (and indeed the evaluation culture in which they are embedded) help generate inter-organizational co-ordination, and all that is special in this respect about the Gaussian copula is that its status as 'other' makes this role evident.
Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2013-01-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
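A sketch of the kind of single-spot radial model discussed above: two Gaussians plus a long-range Lorentz-like tail. The exact 'modified Cauchy-Lorentz' form and the parameter values used by the authors are not reproduced here; the tail term and weights below are illustrative.

```python
import numpy as np

def spot_profile(r, w1, w2, f2, a_cl, f_cl):
    """Radial single-spot dose profile: a primary Gaussian of width w1, a second
    Gaussian of width w2 with weight f2, plus a long-range Cauchy-Lorentz-like tail
    of scale a_cl and weight f_cl.  All components integrate to 1 over the plane,
    so the weights control how much dose sits in the halo."""
    g1 = np.exp(-0.5 * (r / w1) ** 2) / (2 * np.pi * w1 ** 2)
    g2 = np.exp(-0.5 * (r / w2) ** 2) / (2 * np.pi * w2 ** 2)
    cl = a_cl / (2 * np.pi * (r ** 2 + a_cl ** 2) ** 1.5)   # normalised 2-D Lorentz-like tail
    return (1 - f2 - f_cl) * g1 + f2 * g2 + f_cl * cl

# Example: 5 mm core, 15 mm second Gaussian, 50 mm tail scale (illustrative values, mm)
r = np.linspace(0.0, 100.0, 500)
dose = spot_profile(r, 5.0, 15.0, 0.1, 50.0, 0.02)
```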
Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G
2017-07-01
This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
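The four signal models compared above, written out with a least-squares fit that includes an additive baseline offset standing in for residual fat signal; b-values, parameter values and the noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(b, S0, D):
    return S0 * np.exp(-b * D)

def ivim(b, S0, f, D, Dstar):
    return S0 * ((1 - f) * np.exp(-b * D) + f * np.exp(-b * Dstar))

def kurtosis_model(b, S0, D, K):
    return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

def stretched_exp(b, S0, DDC, alpha):
    return S0 * np.exp(-(b * DDC) ** alpha)

# Hypothetical b-values (s/mm^2) and noisy signal with an additive offset that
# mimics residual fat; fitting the offset corresponds to the 'baseline fitting' above.
b = np.array([0, 50, 100, 200, 400, 600, 800], float)
signal = (monoexp(b, 1.0, 1.5e-3) + 0.05
          + 0.01 * np.random.default_rng(0).standard_normal(b.size))

def kurtosis_with_baseline(b, S0, D, K, c):
    return kurtosis_model(b, S0, D, K) + c

popt, _ = curve_fit(kurtosis_with_baseline, b, signal, p0=[1.0, 1.5e-3, 1.0, 0.0])
print(popt)   # estimated S0, D, K and baseline offset
```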
Power spectrum and non-Gaussianities in anisotropic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dey, Anindya; Kovetz, Ely D.; Paban, Sonia, E-mail: anindya@physics.utexas.edu, E-mail: elykovetz@gmail.com, E-mail: paban@physics.utexas.edu
2014-06-01
We study the planar regime of curvature perturbations for single field inflationary models in an axially symmetric Bianchi I background. In a theory with standard scalar field action, the power spectrum for such modes has a pole as the planarity parameter goes to zero. We show that constraints from back reaction lead to a strong lower bound on the planarity parameter for high-momentum planar modes and use this bound to calculate the signal-to-noise ratio of the anisotropic power spectrum in the CMB, which in turn places an upper bound on the Hubble scale during inflation allowed in our model. We find that non-Gaussianities for these planar modes are enhanced for the flattened triangle and the squeezed triangle configurations, but show that the estimated values of the f_NL parameters remain well below the experimental bounds from the CMB for generic planar modes (other, more promising signatures are also discussed). For a standard action, f_NL from the squeezed configuration turns out to be larger compared to that from the flattened triangle configuration in the planar regime. However, in a theory with higher derivative operators, non-Gaussianities from the flattened triangle can become larger than the squeezed configuration in a certain limit of the planarity parameter.
Edgeworth streaming model for redshift space distortions
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael; Haugg, Thomas
2015-09-01
We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust, while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian Mixture Model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the algorithm adaptively chooses the number of Gaussian distributions used to learn and update the background model. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance, but also has good adaptability; even in special cases such as large grayscale changes, the proposed method still performs well.
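A short sketch using OpenCV's MOG2 background subtractor, which adapts the number of Gaussian components per pixel and flags shadows, followed by a morphological opening. This is a stand-in for the improved model proposed above, and the input video name is hypothetical.

```python
import cv2

# MOG2 adapts the number of Gaussian components per pixel and labels shadow pixels,
# loosely mirroring the adaptive mixture plus shadow elimination described above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("traffic.avi")        # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0                    # drop pixels labelled as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckle
    # connected components of `mask` are the candidate moving targets
cap.release()
```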
The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise
NASA Astrophysics Data System (ADS)
Guo, Qin; Sun, Zhongkui; Xu, Wei
2016-05-01
The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian-colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumor and an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of tumor cell population. The mean first passage time is calculated to quantify the effects of noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from the Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in cross-correlated degree will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (continuous occupancy maps) using marching cubes. By doing so, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximate method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
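A rough sketch of the local-GP idea, assuming labelled occupancy samples (free/occupied) and using k-means plus scikit-learn's GP classifier; each cluster is assumed to contain both classes, and the coarse-to-fine partitioning of the paper is reduced to a single clustering level.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def local_gp_occupancy(X_train, y_train, X_query, n_clusters=8):
    """Approximate GP occupancy mapping: partition training points into local
    clusters, fit one GP classifier per cluster, and query each test point against
    its nearest cluster.  Assumes every cluster contains both free (0) and
    occupied (1) samples."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    models = []
    for c in range(n_clusters):
        idx = km.labels_ == c
        gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.5))
        models.append(gp.fit(X_train[idx], y_train[idx]))
    nearest = km.predict(X_query)
    probs = np.empty(len(X_query))
    for c in range(n_clusters):
        sel = nearest == c
        if sel.any():
            probs[sel] = models[c].predict_proba(X_query[sel])[:, 1]
    return probs   # continuous occupancy probabilities in [0, 1]
```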
A new algorithm for ECG interference removal from single channel EMG recording.
Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein
2017-09-01
This paper presents a new method to remove electrocardiogram (ECG) interference from the electromyogram (EMG). This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1-D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio and Pearson correlation.
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
Gaussian process based independent analysis for temporal source separation in fMRI.
Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole
2017-05-15
Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain and opens up the possibility of analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clearly classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals, by failing to incorporate their temporal nature. fMRI source signals, biological stimuli and non-stimuli-related artifacts are all smooth over a time-scale compatible with the sampling time (TR). We therefore propose Gaussian process ICA (GPICA), which facilitates temporal dependency through the use of Gaussian process source priors. On two fMRI data sets with different sampling frequencies, we show that the GPICA-inferred temporal components and associated spatial maps allow for a more definite interpretation than standard temporal ICA methods. The temporal structures of the sources are controlled by the covariance of the Gaussian process, specified by a kernel function with an interpretable and controllable temporal length-scale parameter. We propose a hierarchical model specification, considering both instantaneous and convolutive mixing, and we infer source spatial maps, temporal patterns and temporal length-scale parameters by Markov Chain Monte Carlo. A companion implementation made as a plug-in for SPM can be downloaded from https://github.com/dittehald/GPICA. Copyright © 2017 Elsevier Inc. All rights reserved.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
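A minimal sketch of the modelling step, assuming an RR-interval series is already available. Here the series is synthetic, and a three-component Gaussian mixture is fitted with scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical RR-interval series (seconds); in practice this comes from an ECG R-peak detector.
rng = np.random.default_rng(1)
rr = np.concatenate([rng.normal(0.80, 0.03, 4000),
                     rng.normal(0.95, 0.05, 1500),
                     rng.normal(0.70, 0.02, 1000)])

# Model the stationary RR-interval statistics as a linear combination of three Gaussians.
gmm = GaussianMixture(n_components=3, random_state=0).fit(rr.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel())
```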
Generalized expression for optical source fields
NASA Astrophysics Data System (ADS)
Kamacıoğlu, Canan; Baykal, Yahya
2012-09-01
A generalized optical beam expression is developed that encompasses the majority of existing optical source fields, such as Bessel, Laguerre-Gaussian, dark hollow, bottle, super-Gaussian, Lorentz, super-Lorentz, flat-topped, Hermite-sinusoidal-Gaussian, sinusoidal-Gaussian, annular, Gauss-Legendre and vortex beams, as well as their higher-order modes and their truncated, elegant and elliptical versions. Source intensity profiles derived from the generalized optical source beam fields are checked to match the intensity profiles of many individual known beam types. Source intensities for several interesting beam combinations are presented. Our generalized optical source beam field expression can be used to examine both the source characteristics and the propagation properties of many different optical beams in a single formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Ho, M; Chen, C
Purpose: The use of log files to perform patient-specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions, prior to direct measurements. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantom from these log files and compares the measurements with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The doses calculated from the log files were compared to those generated by the treatment planning system (Raystation). While the double Gaussian model often gave better agreement, overall, the ASTROID model gave the most consistent results. Using a 5%-3 mm gamma criterion with a 90% passing threshold and excluding doses below 20% of prescription, all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file model, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA in an IBA Proteus Plus system. The choice of the optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter, and will be investigated further.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexanian, Moorad
The fidelity for cloning coherent states is improved over that provided by optimal Gaussian and non-Gaussian cloners for the subset of coherent states that are prepared with known phases. Gaussian quantum cloning duplicates all coherent states with an optimal fidelity of 2/3. Non-Gaussian cloners give optimal single-clone fidelity for a symmetric 1-to-2 cloner of 0.6826. Coherent states that have known phases can be cloned with a fidelity of 4/5. The latter is realized by a combination of two beam splitters and a four-wave mixer operated in the nonlinear regime, all of which are realized by interaction Hamiltonians that are quadratic in the photon operators. Therefore, the known Gaussian devices for cloning coherent states are extended when cloning coherent states with known phases by considering a nonbalanced beam splitter at the input side of the amplifier.
Random medium model for cusping of plane waves.
Li, Jia; Korotkova, Olga
2017-09-01
We introduce a model for a three-dimensional (3D) Schell-type stationary medium whose degree of the potential's correlation satisfies the Fractional Multi-Gaussian (FMG) function. Compared with the scattered profile produced by the Gaussian Schell-model (GSM) medium, the Fractional Multi-Gaussian Schell-model (FMGSM) medium gives rise to a sharp concave intensity apex in the scattered field. This implies that the FMGSM medium also accounts for a larger power in the bucket (PIB) than the Gaussian case in the forward scattering direction, hence being a better candidate than the GSM medium for generating highly focused (cusp-like) scattered profiles in the far zone. Compared to other mathematical models for the medium's correlation function that can produce similar cusped scattered profiles, the FMG function offers unprecedented tractability, being a weighted superposition of Gaussian functions. Our results provide useful applications to energy counter problems and particle manipulation by weakly scattered fields.
Non-Gaussian operations on bosonic modes of light: Photon-added Gaussian channels
NASA Astrophysics Data System (ADS)
Sabapathy, Krishna Kumar; Winter, Andreas
2017-06-01
We present a framework for studying bosonic non-Gaussian channels of continuous-variable systems. Our emphasis is on a class of channels that we call photon-added Gaussian channels, which are experimentally viable with current quantum-optical technologies. A strong motivation for considering these channels is the fact that it is compulsory to go beyond the Gaussian domain for numerous tasks in continuous-variable quantum information processing such as entanglement distillation from Gaussian states and universal quantum computation. The single-mode photon-added channels we consider are obtained by using two-mode beam splitters and squeezing operators with photon addition applied to the ancilla ports giving rise to families of non-Gaussian channels. For each such channel, we derive its operator-sum representation, indispensable in the present context. We observe that these channels are Fock preserving (coherence nongenerating). We then report two examples of activation using our scheme of photon addition, that of quantum-optical nonclassicality at outputs of channels that would otherwise output only classical states and of both the quantum and private communication capacities, hinting at far-reaching applications for quantum-optical communication. Further, we see that noisy Gaussian channels can be expressed as a convex mixture of these non-Gaussian channels. We also present other physical and information-theoretic properties of these channels.
Orthogonal Gaussian process models
Plumlee, Matthew; Joseph, V. Roshan
2017-01-01
Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Although the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part in order to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
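A compact sketch of a Gaussian maximum-likelihood classifier of the kind referred to above, assuming per-class training spectra are supplied; equal prior probabilities are assumed, and the Image-100-specific signature acquisition options are not modelled.

```python
import numpy as np

def train_gaussian_ml(samples):
    """samples: dict mapping class name -> (n_pixels, n_bands) array of training spectra.
    Returns the per-class mean vector and covariance matrix."""
    return {name: (X.mean(axis=0), np.cov(X, rowvar=False)) for name, X in samples.items()}

def classify(pixels, stats):
    """Assign each pixel (row of an (n_pixels, n_bands) array) to the class with the
    highest Gaussian log-likelihood, assuming equal prior probabilities."""
    names = list(stats)
    scores = []
    for name in names:
        mu, cov = stats[name]
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)     # squared Mahalanobis distance
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
    return np.array(names)[np.argmax(scores, axis=0)]
```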
Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J
2014-07-01
High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by the model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, obtained using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by the poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation
NASA Technical Reports Server (NTRS)
Steinwolf, Alexander; Rizzi, Stephen A.
2003-01-01
The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
NASA Astrophysics Data System (ADS)
Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing
2018-05-01
We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. The method makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified using experimental data replayed in simulation.
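A minimal sketch of this kind of identification, assuming the disturbance is dominated by a single vibration peak: a damped-oscillator-plus-floor PSD model is fitted to Welch-estimated open-loop tip-tilt data with SciPy's Levenberg-Marquardt solver. The signal, the model form and the parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import welch

# Hypothetical open-loop tip-tilt slope measurements sampled at 1 kHz.
fs = 1000.0
rng = np.random.default_rng(2)
t = np.arange(20000) / fs
tilt = (0.2 * np.sin(2 * np.pi * 47.0 * t + rng.uniform(0, 2 * np.pi))
        + 0.05 * rng.standard_normal(t.size))

f, psd = welch(tilt, fs=fs, nperseg=2048)
band = (f > 1) & (f < 200)

def vibration_psd(params, f):
    """Single damped-oscillator PSD plus a white-noise floor (illustrative model)."""
    a, f0, damping, floor = np.abs(params)   # keep parameters positive during the fit
    return a / ((f0 ** 2 - f ** 2) ** 2 + (2 * damping * f0 * f) ** 2) + floor

def residuals(params):
    return np.log(vibration_psd(params, f[band])) - np.log(psd[band])

fit = least_squares(residuals, x0=[1.0, 40.0, 0.05, 1e-6], method='lm')
print(np.abs(fit.x))   # identified vibration strength, frequency, damping and floor
```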
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
Density-based clustering analyses to identify heterogeneous cellular sub-populations
NASA Astrophysics Data System (ADS)
Heaster, Tiffany M.; Walsh, Alex J.; Landman, Bennett A.; Skala, Melissa C.
2017-02-01
Autofluorescence microscopy of NAD(P)H and FAD provides functional metabolic measurements at the single-cell level. Here, density-based clustering algorithms were applied to metabolic autofluorescence measurements to identify cell-level heterogeneity in tumor cell cultures. The performance of the density-based clustering algorithm, DENCLUE, was tested in samples with known heterogeneity (co-cultures of breast carcinoma lines). DENCLUE was found to better represent the distribution of cell clusters compared to Gaussian mixture modeling. Overall, DENCLUE is a promising approach to quantify cell-level heterogeneity, and could be used to understand single cell population dynamics in cancer progression and treatment.
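DENCLUE itself is not available in scikit-learn, so the sketch below uses mean shift (which also hill-climbs a kernel density estimate) as a density-based stand-in and contrasts it with a Gaussian mixture fit; the per-cell metabolic features are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.mixture import GaussianMixture

# Hypothetical per-cell metabolic features, e.g. NAD(P)H and FAD intensities
# stacked as an (n_cells, n_features) array; two sub-populations are simulated.
rng = np.random.default_rng(3)
cells = np.vstack([rng.normal([1.0, 0.4], 0.05, (300, 2)),
                   rng.normal([1.4, 0.7], 0.08, (150, 2))])

# Density-based clustering: mean shift hill-climbs a kernel density estimate,
# which is conceptually close to DENCLUE.
bandwidth = estimate_bandwidth(cells, quantile=0.2)
density_labels = MeanShift(bandwidth=bandwidth).fit_predict(cells)

# Gaussian mixture modelling for comparison, as in the study above.
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(cells)
print(np.bincount(density_labels), np.bincount(gmm_labels))
```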
Nonlinear single-spin spectrum analyzer.
Kotler, Shlomi; Akerman, Nitzan; Glickman, Yinnon; Ozeri, Roee
2013-03-15
Qubits have been used as linear spectrum analyzers of their environments. Here we solve the problem of nonlinear spectral analysis, required for discrete noise induced by a strongly coupled environment. Our nonperturbative analytical model shows a nonlinear signal dependence on noise power, resulting in a spectral resolution beyond the Fourier limit as well as frequency mixing. We develop a noise characterization scheme adapted to this nonlinearity. We then apply it using a single trapped ion as a sensitive probe of strong, non-Gaussian, discrete magnetic field noise. Finally, we experimentally compared the performance of equidistant vs Uhrig modulation schemes for spectral analysis.
On the Validity of Certain Approximations Used in the Modeling of Nuclear EMP
Farmer, William A.; Cohen, Bruce I.; Eng, Chester D.
2016-04-01
In the legacy codes developed for the modeling of EMP, multiple scattering of Compton electrons has typically been modeled by the obliquity factor. A recent publication examined this approximation in the context of the generated Compton current [W. A. Farmer and A. Friedman, IEEE Trans. Nucl. Sci. 62, 1695 (2015)]. Here, this previous analysis is extended to include the generation of the electromagnetic fields. Obliquity factor predictions are compared with Monte-Carlo models. In using a Monte-Carlo description of scattering, two distributions of scattering angles are considered: a Gaussian, and a Gaussian with a single-scattering tail. Additionally, legacy codes also neglect the radial derivative of the backward-traveling wave for computational efficiency. The neglect of this derivative improperly treats the backward-traveling wave. These approximations are examined in the context of a high-altitude burst, and it is shown that, in comparison to more complete models, the discrepancy between field amplitudes is roughly two to three percent and between rise times, 10%. Finally, it is concluded that the biggest factor in determining the rise time of the signal is not the dynamics of the Compton current, but the conductivity.
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a new signal modeling-based methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into their respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and the autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free intervals), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both data sets. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence allows the classification accuracy of seizure and seizure-free epochs to be boosted.
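A rough sketch of the feature pipeline described above, with Butterworth band-pass filters standing in for the DCT-based filterbank and a crude rescaled-range estimate of the Hurst exponent; the ARMA features are omitted, and the sampling rate shown is that of the widely used Bonn EEG dataset.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.svm import SVC

BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 45)}

def band_decompose(x, fs):
    """Split an EEG epoch into brain rhythms with Butterworth band-pass filters
    (a stand-in for the DCT-based multirate filterbank proposed above)."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        out[name] = sosfiltfilt(sos, x)
    return out

def hurst_rs(x):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent of one rhythm."""
    sizes = [len(x) // k for k in (8, 4, 2, 1)]
    rs = []
    for m in sizes:
        vals = []
        for i in range(0, len(x) - m + 1, m):
            s = x[i:i + m]
            y = np.cumsum(s - s.mean())
            vals.append((y.max() - y.min()) / (s.std() + 1e-12))
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

def features(epoch, fs=173.61):   # e.g. the Bonn EEG dataset sampling rate
    return [hurst_rs(band) for band in band_decompose(epoch, fs).values()]

# Usage sketch with hypothetical labelled epochs:
# clf = SVC(kernel='rbf').fit([features(e) for e in train_epochs], train_labels)
```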
On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States
NASA Technical Reports Server (NTRS)
Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)
1996-01-01
Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm^-1), even for a small GTO set, by augmenting the n-particle space with configurations where orthonormalized interpolation polynomials are singly occupied.
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments makes road sign detection a challenging problem in computer vision. We propose a road sign detection method designed to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and to obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model and the normalized red channel, which can largely enhance the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is utilized to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method reaches a higher detection rate when detecting signs from different viewing directions.
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasingly non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has so far been made to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both the spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolution characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the use of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with the small-scale ionospheric electric field variability than Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the systematic biases of current numerical models that result from the underestimation of high-latitude energy and momentum sources.
Flat-top beam for laser-stimulated pain
NASA Astrophysics Data System (ADS)
McCaughey, Ryan; Nadeau, Valerie; Dickinson, Mark
2005-04-01
One of the main problems during laser stimulation in human pain research is the risk of tissue damage caused by excessive heating of the skin. This risk has been reduced by using a laser beam with a flattop (or superGaussian) intensity profile, instead of the conventional Gaussian beam. A finite difference approximation to the heat conduction equation has been applied to model the temperature distribution in skin as a result of irradiation by flattop and Gaussian profile CO2 laser beams. The model predicts that a 15 mm diameter, 15 W, 100 ms CO2 laser pulse with an order 6 superGaussian profile produces a maximum temperature 6 °C less than a Gaussian beam with the same energy density. A superGaussian profile was created by passing a Gaussian beam through a pair of zinc selenide aspheric lenses which refract the more intense central region of the beam towards the less intense periphery. The profiles of the lenses were determined by geometrical optics. In human pain trials the superGaussian beam required more power than the Gaussian beam to reach sensory and pain thresholds.
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters and accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and thus it can lead to a GMM with a smaller spread than the input. Second, the algorithm has a singularity problem, as it sometimes stops the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of the GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not stop suddenly due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
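For orientation, the sketch below fits a standard point-input GMM to a toy cloud of 3D "atom or voxel-center" coordinates with scikit-learn's EM implementation. It illustrates the baseline algorithm that the Gaussian-input variant improves on, not the Gaussian-input algorithm itself; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# `points` stands in for an (N, 3) array of atomic or voxel-center coordinates.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.0, 1.0, (500, 3)),
                    rng.normal(5.0, 2.0, (500, 3))])

# Standard EM fit: each component has its own full covariance matrix.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(points)

print(gmm.weights_)        # mixture weights
print(gmm.means_)          # (2, 3) component centers
print(gmm.covariances_)    # (2, 3, 3) covariance matrices
```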
MacKenzie, Donald; Spears, Taylor
2014-06-01
Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.
NASA Astrophysics Data System (ADS)
Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.
2016-02-01
A typical positron emission tomography detector comprises a scintillator crystal array coupled to a photodetector array or other position-sensitive detector. Such detectors, which use light sharing to read out crystal elements, require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time-consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409-crystal dual-layer offset LYSO array read out by a 32-pixel SiPM array. For these detector flood images, depending on user-defined input parameters, the algorithm runtime ranged from 17.5 to 82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error-prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial template model used.
Statistics of the geomagnetic secular variation for the past 5Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
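As a loose illustration of Stages 3 and 4 only (not the full ERRIS scheme, and with synthetic flows in place of real streamflow data), the sketch below applies an AR(1) correction to one-step-ahead residuals and then fits a two-component Gaussian mixture to the remaining residuals.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ar1_update(obs, sim):
    """Stage-3-style AR(1) error update: correct today's simulation with
    yesterday's known error. `obs` and `sim` are aligned 1-D arrays."""
    resid = obs - sim
    # Lag-1 autoregression coefficient estimated by least squares.
    rho = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
    updated = sim[1:] + rho * resid[:-1]
    return updated, obs[1:] - updated   # corrected forecast and remaining residuals

def residual_mixture(residuals, n_components=2):
    """Stage-4-style description of forecast uncertainty as a mixture of
    two Gaussians fitted to the post-update residuals."""
    gm = GaussianMixture(n_components=n_components, random_state=0)
    gm.fit(residuals.reshape(-1, 1))
    return gm

# Hypothetical usage with synthetic flows:
rng = np.random.default_rng(1)
obs = 10 + np.cumsum(rng.normal(0, 0.5, 400))
sim = obs + rng.normal(0.3, 0.8, 400)          # biased, noisy "model" output
corrected, resid = ar1_update(obs, sim)
mix = residual_mixture(resid)
print(mix.weights_, mix.means_.ravel(), np.sqrt(mix.covariances_).ravel())
```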
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular conditions depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective for compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
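A minimal sketch of the core fitting step under stated assumptions: a single pulse period is approximated as a sum of Gaussian waves whose amplitudes, centers and widths are estimated by weighted least squares. The weight vector and initial guesses are illustrative placeholders, not the MCDM-selected weights of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def multi_gaussian(t, params):
    """Sum of K Gaussians; params is a flat array [a1, mu1, s1, a2, mu2, s2, ...]."""
    y = np.zeros_like(t)
    for a, mu, s in params.reshape(-1, 3):
        y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

def fit_pulse(t, pulse, init_params, weights):
    """Weighted least-squares fit: residuals are scaled by sqrt(weights)."""
    res = least_squares(
        lambda p: np.sqrt(weights) * (multi_gaussian(t, p) - pulse),
        x0=init_params,
    )
    return res.x

# Hypothetical usage: synthesize a pulse from 4 waves, then recover them.
t = np.linspace(0, 1, 200)
true = np.array([1.0, 0.15, 0.05, 0.6, 0.35, 0.07, 0.3, 0.55, 0.08, 0.15, 0.75, 0.1])
pulse = multi_gaussian(t, true) + np.random.default_rng(2).normal(0, 0.005, t.size)
weights = np.ones_like(t)                  # placeholder; the paper tunes these via MCDM
est = fit_pulse(t, pulse, true * 1.2, weights)
```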
Quantum teleportation of nonclassical wave packets: An effective multimode theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki
2011-07-15
We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.
Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media
NASA Astrophysics Data System (ADS)
Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen
2018-06-01
A new class of pulse source whose correlation is modeled by the convolution of two legitimate temporal correlation functions is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train to a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed laser material processing.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Gaussian processes: a method for automatic QSAR modeling of ADME properties.
Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D
2007-01-01
In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. On the basis of a Bayesian probabilistic approach, the method is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with and often exceeds that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation-one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.
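As a generic illustration of the technique (not the authors' ADME modeling software), the sketch below fits a Gaussian Process regressor with an RBF-plus-white-noise kernel to a toy descriptor matrix; kernel hyperparameters are tuned automatically by maximizing the marginal likelihood, which is the property the abstract highlights.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 8))                  # 120 compounds, 8 descriptors (toy data)
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 120)   # surrogate "ADME" property

# Kernel hyperparameters are optimized by maximizing the marginal likelihood,
# so no manual tuning of the model parameters is needed.
kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X[:100], y[:100])

mean, std = gp.predict(X[100:], return_std=True)   # predictions with uncertainty
```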
On the constrained classical capacity of infinite-dimensional covariant quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holevo, A. S.
The additivity of the minimal output entropy and that of the χ-capacity are known to be equivalent for finite-dimensional irreducibly covariant quantum channels. In this paper, we formulate a list of conditions allowing us to establish a similar equivalence for infinite-dimensional covariant channels with constrained input. This is then applied to bosonic Gaussian channels with a quadratic input constraint to extend the classical capacity results of the recent paper [Giovannetti et al., Commun. Math. Phys. 334(3), 1553-1571 (2015)] to the case where the complex structures associated with the channel and with the constraint operator need not commute. In particular, this implies a multimode generalization of the “threshold condition,” obtained for a single mode in Schäfer et al. [Phys. Rev. Lett. 111, 030503 (2013)], and a proof of the fact that under this condition the classical “Gaussian capacity” resulting from optimization over only Gaussian inputs is equal to the full classical capacity. Complex structures correspond to different squeezings, each with its own normal modes, vacuum and coherent states, and gauge. Thus our results apply, e.g., to multimode channels with squeezed Gaussian noise under the standard input energy constraint, provided the squeezing is not so large as to violate the generalized threshold condition. We also investigate the restrictiveness of the gauge-covariance condition for single- and multimode bosonic Gaussian channels.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
The Herschel-ATLAS: magnifications and physical sizes of 500-μm-selected strongly lensed galaxies
NASA Astrophysics Data System (ADS)
Enia, A.; Negrello, M.; Gurwell, M.; Dye, S.; Rodighiero, G.; Massardi, M.; De Zotti, G.; Franceschini, A.; Cooray, A.; van der Werf, P.; Birkinshaw, M.; Michałowski, M. J.; Oteo, I.
2018-04-01
We perform lens modelling and source reconstruction of Sub-millimetre Array (SMA) data for a sample of 12 strongly lensed galaxies selected at 500μm in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). A previous analysis of the same data set used a single Sérsic profile to model the light distribution of each background galaxy. Here we model the source brightness distribution with an adaptive pixel scale scheme, extended to work in the Fourier visibility space of interferometry. We also present new SMA observations for seven other candidate lensed galaxies from the H-ATLAS sample. Our derived lens model parameters are in general consistent with previous findings. However, our estimated magnification factors, ranging from 3 to 10, are lower. The discrepancies are observed in particular where the reconstructed source hints at the presence of multiple knots of emission. We define an effective radius of the reconstructed sources based on the area in the source plane where emission is detected above 5σ. We also fit the reconstructed source surface brightness with an elliptical Gaussian model. We derive a median value reff ˜ 1.77 kpc and a median Gaussian full width at half-maximum ˜1.47 kpc. After correction for magnification, our sources have intrinsic star formation rates (SFR) ˜ 900-3500 M⊙ yr-1, resulting in a median SFR surface density ΣSFR ˜ 132 M⊙ yr-1 kpc-2 (or ˜218 M⊙ yr-1 kpc-2 for the Gaussian fit). This is consistent with that observed for other star-forming galaxies at similar redshifts, and is significantly below the Eddington limit for a radiation pressure regulated starburst.
Topology of large-scale structure in seeded hot dark matter models
NASA Technical Reports Server (NTRS)
Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.
1992-01-01
The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
Gaussian content as a laser beam quality parameter.
Ruschin, Shlomo; Yaakobi, Elad; Shekel, Eyal
2011-08-01
We propose the Gaussian content (GC) as an optional quality parameter for the characterization of laser beams. It is defined as the overlap integral of a given field with an optimally defined Gaussian. The definition is especially suited for applications where coherence properties are targeted. Mathematical definitions and basic calculation procedures are given along with results for basic beam profiles. The coherent combination of an array of laser beams and the optimal coupling between a diode laser and a single-mode fiber are elaborated as application examples. The measurement of the GC and its conservation upon propagation are experimentally confirmed.
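A minimal numerical sketch of the definition, restricted to one dimension and a synthetic field for simplicity: the GC is computed here as the normalized overlap integral between the sampled field and a centered Gaussian whose width is optimized.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_content(x, field):
    """Max over width w of |<field, g_w>|^2 / (||field||^2 * ||g_w||^2),
    approximating the integrals by Riemann sums on the grid x."""
    dx = x[1] - x[0]
    norm_f = np.sum(np.abs(field) ** 2) * dx

    def neg_overlap(w):
        g = np.exp(-x ** 2 / w ** 2)        # centered trial Gaussian of 1/e half-width w
        overlap = np.abs(np.sum(field * g) * dx) ** 2
        return -overlap / (norm_f * np.sum(g ** 2) * dx)

    res = minimize_scalar(neg_overlap, bounds=(1e-3, 10.0), method="bounded")
    return -res.fun

# Example: a hard-apertured Gaussian field has GC slightly below 1.
x = np.linspace(-5, 5, 2001)
field = np.exp(-x ** 2)
field[np.abs(x) > 2.5] = 0.0
print(gaussian_content(x, field))
```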
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.
Spinczyk, Dominik; Krasoń, Agata
2018-05-29
Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria. One important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of advancement were used. Single-atlas-based segmentation was used as the simplest shape-based method; it deforms a single atlas using free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM), using mean shape models, were applied. As the most advanced method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, the Dice coefficient was above 55% for seven of the 10 analyzed test images, and for three of these it was over 70%, which placed the method in second place. The best results were obtained for the generalized statistical model of the deformation field, with a Dice coefficient of 88.5%. CONCLUSIONS: This Dice coefficient of 88.5% can be explained by the use of general-purpose shape modeling methods with a large variance of the shape of the modeled object (the liver) and by the limited size of our training data set, which was restricted to 10 cases. The results obtained with the presented fully automatic method are comparable with dedicated methods for liver segmentation. In addition, the deformation features of the model can be described mathematically by using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller learning set.
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to be able to reduce blocky effects by being aware of high-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is solved using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single- and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
Non-gaussianity versus nonlinearity of cosmological perturbations.
Verde, L
2001-06-01
Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics lead to different predictions for the statistical properties of the primordial fluctuations, which can be divided into two classes: gaussian and non-gaussian. Convincing evidence against or for gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the gaussian distribution in several different ways. Even if perturbations start off gaussian, nonlinear gravitational evolution can introduce non-gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxy distribution (bias) is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as the third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-gaussianity, even in an initially gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle the above different effects, and the observational datasets we require to do so in practice.
NASA Astrophysics Data System (ADS)
Liu, L.; Neretnieks, I.
Canisters with spent nuclear fuel will be deposited in fractured crystalline rock in the Swedish concept for a final repository. The fractures intersect the canister holes at different angles and have variable apertures and therefore locally varying flowrates. Our previous model, which assumed fractures with a constant aperture and a 90° intersection angle, is now extended to arbitrary intersection angles and stochastically variable apertures. It is shown that the previous basic model can be simply amended to account for these effects. More importantly, it has been found that the distributions of the volumetric and equivalent flow rates are both close to Normal for both fractal and Gaussian fractures, with the mean of the distribution of the volumetric flow rate determined solely by the hydraulic aperture, and that of the equivalent flow rate determined by the mechanical aperture. Moreover, the standard deviation of the volumetric flow rates over many realizations increases with increasing roughness and spatial correlation length of the aperture field, and so does that of the equivalent flow rates. Thus, two simple statistical relations can be developed to describe the stochastic properties of fluid flow and solute transport through a single fracture with spatially variable apertures. This obviates, then, the need to simulate each fracture that intersects a canister in great detail, and allows the use of complex fractures also in very large fracture network models used in performance assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, P; Chang Gung University, Taoyuan, Taiwan; Huang, H
Purpose: In this study, we present an effective method to derive the low-dose envelope of the proton in-air spot fluence at beam positions other than the isocenter, to reduce the number of measurements required for commissioning. We also demonstrate commissioning and validation results of this method for the Eclipse treatment planning system (version 13.0.29) for a Sumitomo dedicated proton line scanning beam nozzle. Methods: The in-air spot profiles at five beam-axis positions (±200, ±100 and 0 mm) were obtained in trigger mode using a MP3 water tank (PTW-Freiburg) and a PinPoint ionization chamber (model 31014, PTW-Freiburg). The low-dose envelope (below 1% of the center dose) of the spot profile at the isocenter was obtained by repeated point measurements to minimize dosimetric uncertainty. The double Gaussian (DG) model was used to fit and obtain the optimal σ1, σ2 and their corresponding weightings through our in-house MATLAB (MathWorks) program. σ1 and σ2 were assumed to expand linearly along the beam axis from a virtual source position calculated by back-projecting the sigmas fitted with the single Gaussian (SG) model. Absolute doses in water were validated using an Advanced Markus chamber at a depth of 2 cm, with the pristine peak (BP) R90d ranging from 5 to 32 cm, for 10×10 cm2 scanned fields. The field size factors were verified with square fields from 2 to 20 cm at 2 cm depth and just before the BP. Results: The absolute dose outputs were found to be within ±3%. For the field size factor, the agreement between calculation and measurement was within ±2% at 2 cm and ±3% before the BP, except for field sizes below 2×2 cm2. Conclusion: The double Gaussian model was found to be sufficient for characterizing the Sumitomo dedicated proton line scanning nozzle. With our effective double Gaussian fitting method, we are able to save significant proton beam time with acceptable output accuracy.
Radius of curvature variations for annular, dark hollow and flat topped beams in turbulence
NASA Astrophysics Data System (ADS)
Eyyuboğlu, H. T.; Baykal, Y. K.; Ji, X. L.
2010-06-01
For propagation in turbulent atmosphere, the radius of curvature variations for annular, dark hollow and flat topped beams are examined under a single formulation. Our results show that for collimated beams, when examined against propagation length, the dark hollow, flat topped and annular Gaussian beams behave nearly the same as the Gaussian beam, but have larger radius of curvature values. Increased partial coherence and turbulence levels tend to lower the radius of curvature. Bigger source sizes on the other hand give rise to larger radius of curvature. Dark hollow and flat topped beams have reduced radius of curvature at longer wavelengths, whereas the annular Gaussian beam seems to be unaffected by wavelength changes; the radius of curvature of the Gaussian beam meanwhile rises with increasing wavelength.
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
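For orientation only, the sketch below computes a classical Gaussian reference change value and a log-scale asymmetric variant; the formulas are quoted from the general RCV literature rather than extracted from this paper, and the CV values are illustrative.

```python
import math

def rcv_classical(cv_a, cv_i, z=1.96):
    """Classical (Gaussian) reference change value, as a fraction:
    RCV = sqrt(2) * z * sqrt(CV_A^2 + CV_I^2), with CVs given as fractions."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV assuming log-Gaussian variation: convert each CV to the
    standard deviation of the corresponding log-normal distribution, combine,
    and transform back. Returns (upward, downward) significant changes."""
    sigma_ln = math.sqrt(math.log(1.0 + cv_a ** 2) + math.log(1.0 + cv_i ** 2))
    up = math.exp(math.sqrt(2.0) * z * sigma_ln) - 1.0
    down = math.exp(-math.sqrt(2.0) * z * sigma_ln) - 1.0
    return up, down

# Illustrative values (not taken from the paper): CV_A = 5%, CV_I = 20%.
print(rcv_classical(0.05, 0.20))      # symmetric +/- change
print(rcv_lognormal(0.05, 0.20))      # asymmetric up/down changes
```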
GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.
Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi
2017-05-24
As crucial markers in identifying biological elements and processes in mammalian genomes, CpG islands (CGI) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria for a CGI are: (a) the %G+C content is ≥ 50%, (b) the ratio of the observed to the expected CpG content is ≥ 0.6, and (c) the length of the CGI is greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are built on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria. Experiments indicate that in many cases %G+C is < 50%, CpGobs/CpGexp varies, and the length of a CGI ranges from eight nucleotides to a few thousand nucleotides. This implies that CGI detection is not simply a statistical task and that some rules probably remain unrevealed. A novel Gaussian model, GaussianCpG, is developed for the detection of CpG islands in the human genome. We analyze the energy distribution over the genomic primary structure for each CpG site and adopt the parameters from statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG can achieve better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by a linear statistical method but by Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but deliberately chosen by optimizing the biological statistics. By using pseudopotential analysis on CpG islands, the novel model is validated on both real and artificial data sets.
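The classical criteria quoted above are straightforward to express in code; the sketch below scans a sequence with a sliding window and flags windows meeting the %G+C ≥ 50%, observed/expected CpG ≥ 0.6 and length ≥ 200 nt rules. This is the rule-based baseline that GaussianCpG moves away from, not the Gaussian model itself.

```python
def cpg_window_stats(seq):
    """Return (%G+C fraction, observed/expected CpG ratio) for a DNA string."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    gc_frac = (g + c) / n
    obs_cpg = seq.count("CG")
    exp_cpg = g * c / n if g and c else 0.0
    ratio = obs_cpg / exp_cpg if exp_cpg else 0.0
    return gc_frac, ratio

def classical_cgi_windows(seq, window=200, step=50):
    """Yield (start, end) of windows satisfying the classical CGI criteria."""
    for start in range(0, len(seq) - window + 1, step):
        gc_frac, ratio = cpg_window_stats(seq[start:start + window])
        if gc_frac >= 0.5 and ratio >= 0.6:
            yield start, start + window

# Toy usage on a short synthetic sequence:
toy = ("CG" * 150) + ("AT" * 300)
print(list(classical_cgi_windows(toy))[:3])
```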
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the agreement between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We adopted double and triple Gaussian models as the lateral distribution in order to take account of the large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R. Zhu 2013]. Results: From the comparison between the absolute doses calculated with the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently accounted for at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
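To make the construction concrete in the simplest possible setting (a single log-intensity rather than the paper's full imaging problem), the sketch below approximates the posterior of x ~ N(mu0, s0^2) with observation y ~ Poisson(exp(x)) by a Gaussian N(m, v), maximizing the evidence lower bound; the identity E[exp(x)] = exp(m + v/2) under the variational Gaussian makes the bound explicit.

```python
import numpy as np
from scipy.optimize import minimize

def neg_elbo(params, y, mu0, s0sq):
    """Negative evidence lower bound for y ~ Poisson(exp(x)), x ~ N(mu0, s0sq),
    with variational posterior N(m, v); v is parametrized as exp(log_v) > 0."""
    m, log_v = params
    v = np.exp(log_v)
    # E_q[log p(y | x)] up to the constant -log(y!)
    exp_loglik = y * m - np.exp(m + 0.5 * v)
    # KL( N(m, v) || N(mu0, s0sq) )
    kl = 0.5 * ((v + (m - mu0) ** 2) / s0sq - 1.0 + np.log(s0sq / v))
    return -(exp_loglik - kl)

y, mu0, s0sq = 7, 0.0, 1.0
res = minimize(neg_elbo, x0=np.array([0.0, 0.0]), args=(y, mu0, s0sq))
m_opt, v_opt = res.x[0], np.exp(res.x[1])
print(m_opt, v_opt)   # mean and variance of the variational Gaussian posterior
```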
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data, to aid in its interpretation, are presented. Some measurements are not adequately described by a Gaussian distribution; an interpretation assuming Gaussian uncertainties will therefore inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multivariate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success in modeling the 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students.
Primordial non-gaussianity from the bispectrum of 21-cm fluctuations in the dark ages
NASA Astrophysics Data System (ADS)
Muñoz, Julian B.; Ali-Haïmoud, Yacine; Kamionkowski, Marc
2015-10-01
A measurement of primordial non-Gaussianity will be of paramount importance to distinguish between different models of inflation. Cosmic microwave background (CMB) anisotropy observations have set unprecedented bounds on the non-Gaussianity parameter fNL, but the interesting regime fNL ≲ 1 is beyond their reach. Brightness-temperature fluctuations in the 21-cm line during the dark ages (z ~ 30-100) are a promising successor to CMB studies, giving access to a much larger number of modes. They are, however, intrinsically nonlinear, which results in secondary non-Gaussianities orders of magnitude larger than the sought-after primordial signal. In this paper we carefully compute the primary and secondary bispectra of 21-cm fluctuations on small scales. We use the flat-sky formalism, which greatly simplifies the analysis, while still being very accurate on small angular scales. We show that the secondary bispectrum is highly degenerate with the primordial one, and argue that even percent-level uncertainties in the amplitude of the former lead to a bias of order ΔfNL ~ 10. To tackle this problem we carry out a detailed Fisher analysis, marginalizing over the amplitudes of a few smooth redshift-dependent coefficients characterizing the secondary bispectrum. We find that the signal-to-noise ratio for a single redshift slice is reduced by a factor of ~5 in comparison to a case without secondary non-Gaussianities. Setting aside foreground contamination, we forecast that a cosmic-variance-limited experiment observing 21-cm fluctuations over 30 ≤ z ≤ 100 with a 0.1-MHz bandwidth and 0.1 arcmin angular resolution could achieve a sensitivity of order fNL^local ~ 0.03, fNL^equil ~ 0.04 and fNL^ortho ~ 0.03.
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and the accuracy of predicting the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
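For concreteness, the sketch below fits double- and triple-Gaussian lateral kernels to a synthetic lateral dose profile with a low, broad halo component; the profile, widths and initial guesses are placeholders rather than the Monte Carlo data used in the paper, but the fit illustrates why the extra component helps in the low-dose tails.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(x, a, w, s1, s2):
    """Lateral kernel as a weighted sum of two zero-mean Gaussians."""
    return a * ((1 - w) * np.exp(-x**2 / (2 * s1**2)) + w * np.exp(-x**2 / (2 * s2**2)))

def triple_gauss(x, a, w2, w3, s1, s2, s3):
    """Same idea with a third, wide component for the nuclear-interaction halo."""
    w1 = 1 - w2 - w3
    return a * (w1 * np.exp(-x**2 / (2 * s1**2))
                + w2 * np.exp(-x**2 / (2 * s2**2))
                + w3 * np.exp(-x**2 / (2 * s3**2)))

# Synthetic "measured" lateral profile: a narrow core plus a low, broad halo.
x = np.linspace(-60, 60, 241)                      # mm
profile = triple_gauss(x, 1.0, 0.08, 0.02, 4.0, 12.0, 30.0)
profile += np.random.default_rng(4).normal(0, 1e-4, x.size)

p_dbl, _ = curve_fit(double_gauss, x, profile, p0=[1.0, 0.1, 4.0, 15.0])
p_tri, _ = curve_fit(triple_gauss, x, profile, p0=[1.0, 0.1, 0.05, 4.0, 12.0, 30.0])

# The residual of the double-Gaussian fit is largest in the low-dose tails,
# mirroring the mid-depth "low-dose bump" discussed above.
print(np.max(np.abs(double_gauss(x, *p_dbl) - profile)))
print(np.max(np.abs(triple_gauss(x, *p_tri) - profile)))
```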
Fusion and Gaussian mixture based classifiers for SONAR data
NASA Astrophysics Data System (ADS)
Kotari, Vikas; Chang, KC
2011-06-01
Underwater mines are inexpensive and highly effective weapons. They are difficult to detect and classify. Hence, detection and classification of underwater mines is essential for the safety of naval vessels. This necessitates highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from one source. Data fusion is known to increase the accuracy of detection and classification. In this paper, we formulated a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for the classification of underwater mines. The emphasis has been on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California Irvine (UCI) repository. The performance of both the GMM-based classifier and the fusion-based classifier clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
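Not the authors' exact pipeline, but the core of a GMM-based classifier of this kind can be sketched as follows: fit one mixture per class and assign a test sample to the class whose mixture gives the highest log-likelihood. The 60-dimensional synthetic features, the component count and the diagonal covariances below are placeholder assumptions, not choices taken from the paper.

```python
# Minimal sketch (not the authors' exact pipeline): one Gaussian mixture per class,
# with classification by maximum class-conditional log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def fit_gmm_per_class(X, y, n_components=3, seed=0):
    """Fit an independent GMM to the feature vectors of each class."""
    models = {}
    for label in np.unique(y):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        models[label] = gmm.fit(X[y == label])
    return models

def predict(models, X):
    """Assign each sample to the class whose GMM gives the highest log-likelihood."""
    labels = sorted(models)
    scores = np.column_stack([models[c].score_samples(X) for c in labels])
    return np.array(labels)[np.argmax(scores, axis=1)]

# Toy usage with synthetic 60-dimensional "sonar-like" features (placeholder data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 60)), rng.normal(0.5, 1.2, (100, 60))])
y = np.array([0] * 100 + [1] * 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
models = fit_gmm_per_class(Xtr, ytr)
print("accuracy:", np.mean(predict(models, Xte) == yte))
```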
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowie, L.L.; Hu, E.M.
1986-06-01
The velocities of 38 centrally positioned galaxies (r much less than 100 kpc) were measured relative to the velocity of the first-ranked galaxy in 14 rich clusters. Analysis of the velocity distribution function of this sample and of previous data shows that the population cannot be fit by a single Gaussian. An adequate fit is obtained if 60 percent of the objects lie in a Gaussian with sigma = 250 km/s and the remainder in a population with sigma = 1400 km/s. All previous data sets are individually consistent with this conclusion. This suggests that there is a bound population of galaxies in the potential well of the central galaxy in addition to the normal population of the cluster core. This is taken as supporting evidence for the galactic cannibalism model of cD galaxy formation.
NASA Technical Reports Server (NTRS)
Cowie, L. L.; Hu, E. M.
1986-01-01
The velocities of 38 centrally positioned galaxies (r much less than 100 kpc) were measured relative to the velocity of the first-ranked galaxy in 14 rich clusters. Analysis of the velocity distribution function of this sample and of previous data shows that the population cannot be fit by a single Gaussian. An adequate fit is obtained if 60 percent of the objects lie in a Gaussian with sigma = 250 km/s and the remainder in a population with sigma = 1400 km/s. All previous data sets are individually consistent with this conclusion. This suggests that there is a bound population of galaxies in the potential well of the central galaxy in addition to the normal population of the cluster core. This is taken as supporting evidence for the galactic cannibalism model of cD galaxy formation.
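A minimal numerical sketch of the two-population decomposition described above: a two-component Gaussian mixture is fitted to a set of line-of-sight velocity offsets. The velocities below are synthetic draws from the quoted dispersions (sigma = 250 km/s and 1400 km/s), not the published sample, and the fit is unconstrained (the component means are not forced to zero).

```python
# Illustrative sketch: decompose a set of velocity offsets (km/s) into two
# Gaussian populations, as in the bound-population interpretation above.
# The "observed" velocities are synthetic placeholders, not the published sample.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
v = np.concatenate([rng.normal(0, 250, 23),     # bound population around the cD galaxy
                    rng.normal(0, 1400, 15)])   # cluster-core population

gmm = GaussianMixture(n_components=2, random_state=1).fit(v.reshape(-1, 1))
for w, var in zip(gmm.weights_, gmm.covariances_.ravel()):
    print(f"weight = {w:.2f}, sigma = {np.sqrt(var):.0f} km/s")
```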
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui
2015-12-01
This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter. The scale of the Gaussian filter is determined according to the color model of the bacillus objects. Then the candidate objects are extracted integrally after region merging and contamination elimination. Second, the shape features of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify the single, touching, and non-bacillus objects. We evaluated logistic regression, random forest, and intersection-kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.
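The segmentation stages described above can be sketched generically as Gaussian smoothing, threshold-derived markers and a marker-based watershed. The adaptive scale selection, the color model and the classifier stage of the paper are omitted; the fixed sigma, the Otsu threshold and the random placeholder image below are illustrative assumptions only.

```python
# Generic sketch of the segmentation steps described above: Gaussian smoothing,
# threshold-derived markers, then a marker-based watershed. The adaptive scale
# selection and color model of the paper are replaced by fixed placeholders.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed

def segment_bacilli(gray_image, sigma=2.0):
    smoothed = gaussian(gray_image, sigma=sigma)              # adaptive in the paper, fixed here
    mask = smoothed > threshold_otsu(smoothed)                # candidate foreground
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())   # crude marker extraction
    return watershed(-distance, markers, mask=mask)

labels = segment_bacilli(np.random.rand(128, 128))            # placeholder image
print("number of candidate objects:", labels.max())
```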
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
Wear, Keith A
2002-11-01
For a wide range of applications in medical ultrasound, power spectra of received signals are approximately Gaussian. It has been established previously that an ultrasound beam with a Gaussian spectrum propagating through a medium with linear attenuation remains Gaussian. In this paper, Gaussian transformations are derived to model the effects of scattering (according to a power law, as is commonly applicable in soft tissues, especially over limited frequency ranges) and gating (with a Hamming window, a commonly used gate function). These approximations are shown to be quite accurate even for relatively broad band systems with fractional bandwidths approaching 100%. The theory is validated by experiments in phantoms consisting of glass particles suspended in agar.
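The attenuation result quoted above (a Gaussian spectrum stays Gaussian under linear-with-frequency attenuation, with its center frequency shifted down by the attenuation slope times the squared spectral width) can be checked numerically; the sketch below uses arbitrary placeholder values for the center frequency, the width and the attenuation-slope-times-depth product.

```python
# Numerical check of the property stated above: multiplying a Gaussian power
# spectrum by a linear-with-frequency attenuation term yields another Gaussian
# with the same width and a downshifted centre frequency (by beta_z * sigma^2
# for an attenuation factor exp(-beta_z * f)). Parameter values are arbitrary.
import numpy as np

f = np.linspace(0, 20e6, 4001)                      # frequency grid (Hz)
f0, sigma = 7.5e6, 1.5e6                            # centre frequency and spectral width
beta_z = 2.0e-7                                     # attenuation slope x depth (1/Hz), illustrative

spectrum = np.exp(-(f - f0) ** 2 / (2 * sigma ** 2)) * np.exp(-beta_z * f)
f_peak = f[np.argmax(spectrum)]
print("predicted downshift:", beta_z * sigma ** 2 / 1e6, "MHz")
print("observed downshift :", (f0 - f_peak) / 1e6, "MHz")
```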
Non-Gaussian noise-weakened stability in a foraging colony system with time delay
NASA Astrophysics Data System (ADS)
Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong
2018-02-01
In this paper, the dynamical properties of a foraging colony system with time delay and non-Gaussian noise were investigated. Using the delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and the normalized correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce a transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves. Numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity; this maximum in the ART is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum in the ART is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it; namely, the time delay and the departure from Gaussian noise play opposite roles in the ART or NCF.
Comment on "Universal relation between skewness and kurtosis in complex dynamics"
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2015-12-01
In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems, and they identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic data, and we show that in fact it is not a universal relation and originates only from the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that the kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data points is increased, and this indicates that the kurtosis will converge to a finite single value if all moments of the distribution up to the fourth are finite. The converged kurtosis value for finite-fourth-moment distributions and the number of data points needed to reach this value depend on the deviation of the original distribution from the Gaussian case.
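The convergence argument above can be illustrated with a Student's t distribution, which belongs to the q-Gaussian family and has a finite fourth moment for more than four degrees of freedom; this is a convenience choice for the sketch, not necessarily the authors' exact parameterization.

```python
# Numerical illustration of the convergence argument above: for a heavy-tailed
# distribution with finite fourth moment (Student's t with nu > 4, a member of
# the q-Gaussian family), the sample kurtosis settles to a single value as the
# dataset grows, rather than following a skewness-kurtosis power law.
import numpy as np
from scipy.stats import t, kurtosis

nu = 6                     # degrees of freedom; theoretical excess kurtosis = 6/(nu-4) = 3
rng = np.random.default_rng(2)
for n in [10**3, 10**4, 10**5, 10**6]:
    sample = t.rvs(nu, size=n, random_state=rng)
    print(f"N = {n:>7d}  excess kurtosis = {kurtosis(sample):6.3f}  (theory: {6 / (nu - 4):.3f})")
```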
NASA Astrophysics Data System (ADS)
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-07-01
The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the continually evolving, due to blade rotation, inertial properties, as well as the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.
Effect of beam types on the scintillations: a review
NASA Astrophysics Data System (ADS)
Baykal, Yahya; Eyyuboglu, Halil T.; Cai, Yangjian
2009-02-01
When different incidences are launched in atmospheric turbulence, it is known that the intensity fluctuations exhibit different characteristics. In this paper we review our work on the evaluation of the scintillation index of general beam types when such optical beams propagate in horizontal atmospheric links in the weak-fluctuations regime. The variation of the scintillation indices versus the source and medium parameters is examined for flat-topped-Gaussian, cosh-Gaussian, cos-Gaussian, annular, elliptical Gaussian, circular (i.e., stigmatic) and elliptical (i.e., astigmatic) dark hollow, lowest-order Bessel-Gaussian and laser array beams. For the flat-topped-Gaussian beam, the scintillation is larger than that of the single Gaussian beam when the source sizes are much less than the Fresnel zone, but becomes smaller for source sizes much larger than the Fresnel zone. The cosh-Gaussian beam has lower on-axis scintillations at smaller source sizes and longer propagation distances as compared to Gaussian beams, where focusing imposes more reduction on the cosh-Gaussian beam scintillations than on those of the Gaussian beam. Intensity fluctuations of a cos-Gaussian beam show favorable behavior against a Gaussian beam at shorter propagation lengths. At longer propagation lengths, the annular beam becomes advantageous. In focused cases, the scintillation index of the annular beam is lower than the scintillation indices of Gaussian and cos-Gaussian beams starting at earlier propagation distances. Cos-Gaussian beams are advantageous at relatively large source sizes while the reverse is valid for annular beams. Scintillations of a stigmatic or astigmatic dark hollow beam can be smaller when compared to stigmatic or astigmatic Gaussian, annular and flat-topped beams under conditions that are closely related to the beam parameters. The intensity fluctuation of an elliptical Gaussian beam can also be smaller than that of a circular Gaussian beam depending on the propagation length and the ratio of the beam waist size along the long axis to that along the short axis (i.e., the astigmatism). Comparing against the fundamental Gaussian beam on an equal source size and equal power basis, it is observed that the scintillation index of the lowest-order Bessel-Gaussian beam is lower at large source sizes and large width parameters. However, for excessively large width parameters and beyond certain propagation lengths, the advantage of the lowest-order Bessel-Gaussian beam seems to be lost. Compared to the Gaussian beam, the laser array beam exhibits lower scintillations at long propagation ranges and at some midrange radial displacement parameters. When compared among themselves, laser array beams tend to have reduced scintillations for larger numbers of beamlets, longer wavelengths, midrange radial displacement parameters, intermediate Gaussian source sizes, larger inner scales and smaller outer scales of turbulence. The number of beamlets used does not seem to be so effective in this improvement of the scintillations.
The conformal limit of inflation in the era of CMB polarimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pajer, Enrico; Wijck, Jaap V.S. van; Pimentel, Guilherme L., E-mail: enrico.pajer@gmail.com, E-mail: g.leitepimentel@uva.nl, E-mail: j.v.s.vanwijck@uu.nl
2017-06-01
We argue that the non-detection of primordial tensor modes has taught us a great deal about the primordial universe. In single-field slow-roll inflation, the current upper bound on the tensor-to-scalar ratio, r < 0.07 (95% CL), implies that the Hubble slow-roll parameters obey ε ≪ |η|, and therefore establishes the existence of a new hierarchy. We dub this regime the conformal limit of (slow-roll) inflation, and show that it includes Starobinsky-like inflation as well as all viable single-field models with a sub-Planckian field excursion. In this limit, all primordial correlators are constrained by the full conformal group to leading non-trivial order in slow-roll. This fixes the power spectrum and the full bispectrum, and leads to the ''conformal'' shape of non-Gaussianity. The size of non-Gaussianity is related to the running of the spectral index by a consistency condition, and therefore it is expected to be small. In passing, we clarify the role of boundary terms in the ζ action, the order to which constraint equations need to be solved, and re-derive our results using the Wheeler-DeWitt formalism.
NASA Astrophysics Data System (ADS)
Lylova, A. N.; Sheldakova, Yu. V.; Kudryashov, A. V.; Samarkin, V. V.
2018-01-01
We consider the methods for modelling doughnut and super-Gaussian intensity distributions in the far field by means of deformable bimorph mirrors. A method for the rapid formation of a specified intensity distribution using a Shack-Hartmann sensor is proposed, and the results of the modelling of doughnut and super-Gaussian intensity distributions are presented.
Variational study of fermionic and bosonic systems with non-Gaussian states: Theory and applications
NASA Astrophysics Data System (ADS)
Shi, Tao; Demler, Eugene; Ignacio Cirac, J.
2018-03-01
We present a new variational method for investigating the ground state and out-of-equilibrium dynamics of quantum many-body bosonic and fermionic systems. Our approach is based on constructing variational wavefunctions which extend Gaussian states by including generalized canonical transformations between the fields. The key advantage of such states compared to simple Gaussian states is the presence of non-factorizable correlations and the possibility of describing states with strong entanglement between particles. In contrast to the commonly used canonical transformations, such as the polaron or Lang-Firsov transformations, we allow the parameters of the transformations to be time dependent, which extends their regions of applicability. We derive equations of motion for the parameters characterizing the states both in real and imaginary time using the differential structure of the variational manifold. The ground state can be found by following the imaginary-time evolution until it converges to a steady state. Collective excitations in the system can be obtained by linearizing the real-time equations of motion in the vicinity of the imaginary-time steady-state solution. Our formalism allows us not only to determine the energy spectrum of quasiparticles and their lifetime, but also to obtain the complete spectral functions and to explore far-out-of-equilibrium dynamics such as coherent evolution following a quantum quench. We illustrate and benchmark this framework with several examples: a single polaron in the Holstein and Su-Schrieffer-Heeger models, non-equilibrium dynamics in the spin-boson and Kondo models, and the superconducting to charge-density-wave phase transitions in the Holstein model.
Schlomann, Brandon H
2018-06-06
A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation. Copyright © 2018 Elsevier Ltd. All rights reserved.
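A minimal simulation of the model discussed above, assuming deterministic logistic growth punctuated by Poisson-arriving, density-independent crashes of a fixed fractional size; the crash-size rule and all parameter values are placeholder assumptions rather than the paper's choices.

```python
# Minimal simulation sketch of the model discussed above: deterministic logistic
# growth punctuated by density-independent catastrophes arriving as a Poisson
# process. The catastrophe severity (a fixed fractional crash) is a placeholder
# assumption, not necessarily the distribution used in the paper.
import numpy as np

def simulate(r=1.0, K=1.0, lam=0.2, crash=0.5, n0=0.5, T=200.0, dt=1e-3, seed=3):
    rng = np.random.default_rng(seed)
    n, ns = n0, []
    for _ in range(int(T / dt)):
        n += r * n * (1 - n / K) * dt                 # logistic growth step
        if rng.random() < lam * dt:                   # Poisson catastrophe arrival
            n *= (1 - crash)                          # density-independent crash
        ns.append(n)
    return np.array(ns)

traj = simulate()
print("stationary mean ~", traj[len(traj) // 2:].mean())
```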
Ziegler, G; Ridgway, G R; Dahnke, R; Gaser, C
2014-08-15
Structural imaging based on MRI is an integral component of the clinical assessment of patients with potential dementia. We here propose an individualized Gaussian process-based inference scheme for clinical decision support in healthy and pathologically aging elderly subjects using MRI. The approach aims at quantitative and transparent support for clinicians who aim to detect structural abnormalities in patients at risk of Alzheimer's disease or other types of dementia. Firstly, we introduce a generative model incorporating our knowledge about the normative decline of local and global gray matter volume across the brain in the elderly. By supposing smooth structural trajectories, the models account for the general course of age-related structural decline as well as late-life accelerated loss. Considering healthy subjects' demography and global brain parameters as informative about normal brain aging variability affords individualized predictions in single cases. Using Gaussian process models as a normative reference, we predict new subjects' brain scans and quantify the local gray matter abnormalities in terms of Normative Probability Maps (NPM) and global z-scores. By integrating the observed expectation error and the predictive uncertainty, the local maps and global scores exploit the advantages of Bayesian inference for clinical decisions and provide a valuable extension of diagnostic information about pathological aging. We validate the approach on simulated data and real MRI data. We train the GP framework using 1238 healthy subjects with ages 18-94 years, and predict in 415 independent test subjects diagnosed as healthy controls, Mild Cognitive Impairment and Alzheimer's disease. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Ziegler, G.; Ridgway, G.R.; Dahnke, R.; Gaser, C.
2014-01-01
Structural imaging based on MRI is an integral component of the clinical assessment of patients with potential dementia. We here propose an individualized Gaussian process-based inference scheme for clinical decision support in healthy and pathologically aging elderly subjects using MRI. The approach aims at quantitative and transparent support for clinicians who aim to detect structural abnormalities in patients at risk of Alzheimer's disease or other types of dementia. Firstly, we introduce a generative model incorporating our knowledge about the normative decline of local and global gray matter volume across the brain in the elderly. By supposing smooth structural trajectories, the models account for the general course of age-related structural decline as well as late-life accelerated loss. Considering healthy subjects' demography and global brain parameters as informative about normal brain aging variability affords individualized predictions in single cases. Using Gaussian process models as a normative reference, we predict new subjects' brain scans and quantify the local gray matter abnormalities in terms of Normative Probability Maps (NPM) and global z-scores. By integrating the observed expectation error and the predictive uncertainty, the local maps and global scores exploit the advantages of Bayesian inference for clinical decisions and provide a valuable extension of diagnostic information about pathological aging. We validate the approach on simulated data and real MRI data. We train the GP framework using 1238 healthy subjects with ages 18–94 years, and predict in 415 independent test subjects diagnosed as healthy controls, Mild Cognitive Impairment and Alzheimer's disease. PMID:24742919
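A much-reduced sketch of the normative-modeling idea above: a Gaussian process is fitted to age versus gray-matter volume in synthetic "healthy controls", and a new subject's observation is expressed as a z-score that combines the prediction error with the predictive uncertainty. Covariates, Normative Probability Maps and the real cohort are omitted; all data and kernel choices below are placeholders.

```python
# Reduced sketch of the normative-modeling idea above: fit a Gaussian process to
# (age, gray-matter volume) pairs from healthy controls, then express a new
# subject's observation as a z-score combining the prediction error with the
# predictive uncertainty. The data below are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
age = rng.uniform(18, 94, 300).reshape(-1, 1)
gm = 0.9 - 0.004 * (age.ravel() - 18) + rng.normal(0, 0.02, 300)   # toy age-related decline

kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(age, gm)

age_new, gm_new = np.array([[75.0]]), 0.55                          # hypothetical new subject
mean, std = gp.predict(age_new, return_std=True)
print("z-score:", (gm_new - mean[0]) / std[0])
```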
Single-photon nonlinearities in the propagation of focused beams through dense atomic clouds
NASA Astrophysics Data System (ADS)
Wang, Yidan; Gorshkov, Alexey; Gullans, Michael
2017-04-01
We theoretically study single-photon nonlinearities realized when a highly focused Gaussian beam passes through a dense atomic cloud. In this system, strong dipole-dipole interactions arise between closely spaced atoms and significantly affect light propagation. We find that the highly focused Gaussian beam can be treated as an effective one-dimensional waveguide, which simplifies the calculation of photon transmission and correlation functions. The formalism we develop is also applicable to the case where additional atom-atom interactions, such as interactions between Rydberg atoms, are involved. This work was supported by the ARL, NSF PFC at the JQI, AFOSR, NSF PIF, ARO, and AFOSR MURI.
Versatile Gaussian probes for squeezing estimation
NASA Astrophysics Data System (ADS)
Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo
2017-05-01
We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ade, P. A. R.; Aghanim, N.; Arnaud, M.
We report that the Planck full mission cosmic microwave background (CMB) temperature and E-mode polarization maps are analysed to obtain constraints on primordial non-Gaussianity (NG). Using three classes of optimal bispectrum estimators – separable template-fitting (KSW), binned, and modal – we obtain consistent values for the primordial local, equilateral, and orthogonal bispectrum amplitudes, quoting as our final result from temperature alone fNL^local = 2.5 ± 5.7, fNL^equil = -16 ± 70, and fNL^ortho = -34 ± 32 (68% CL, statistical). Combining temperature and polarization data we obtain fNL^local = 0.8 ± 5.0, fNL^equil = -4 ± 43, and fNL^ortho = -26 ± 21 (68% CL, statistical). The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. The effect of time-domain de-glitching systematics on the bispectrum is negligible. In spite of these test outcomes we conservatively label the results including polarization data as preliminary, owing to a known mismatch of the noise model in simulations and the data. Beyond estimates of individual shape amplitudes, we present model-independent, three-dimensional reconstructions of the Planck CMB bispectrum and derive constraints on early universe scenarios that generate primordial NG, including general single-field models of inflation, axion inflation, initial state modifications, models producing parity-violating tensor bispectra, and directionally dependent vector models. We present a wide survey of scale-dependent feature and resonance models, accounting for the “look elsewhere” effect in estimating the statistical significance of features. We also look for isocurvature NG, and find no signal, but we obtain constraints that improve significantly with the inclusion of polarization. The primordial trispectrum amplitude in the local model is constrained to be gNL^local = (-0.9 ± 7.7) × 10^4 (68% CL, statistical), and we perform an analysis of trispectrum shapes beyond the local case. The global picture that emerges is one of consistency with the premises of the ΛCDM cosmology, namely that the structure we observe today was sourced by adiabatic, passive, Gaussian, and primordial seed perturbations.
Achiral symmetry breaking and positive Gaussian modulus lead to scalloped colloidal membranes
Gibaud, Thomas; Kaplan, C. Nadir; Sharma, Prerna; Zakhary, Mark J.; Ward, Andrew; Oldenbourg, Rudolf; Meyer, Robert B.; Kamien, Randall D.; Powers, Thomas R.; Dogic, Zvonimir
2017-01-01
In the presence of a nonadsorbing polymer, monodisperse rod-like particles assemble into colloidal membranes, which are one-rod-length–thick liquid-like monolayers of aligned rods. Unlike 3D edgeless bilayer vesicles, colloidal monolayer membranes form open structures with an exposed edge, thus presenting an opportunity to study elasticity of fluid sheets. Membranes assembled from single-component chiral rods form flat disks with uniform edge twist. In comparison, membranes composed of a mixture of rods with opposite chiralities can have the edge twist of either handedness. In this limit, disk-shaped membranes become unstable, instead forming structures with scalloped edges, where two adjacent lobes with opposite handedness are separated by a cusp-shaped point defect. Such membranes adopt a 3D configuration, with cusp defects alternatively located above and below the membrane plane. In the achiral regime, the cusp defects have repulsive interactions, but away from this limit we measure effective long-ranged attractive binding. A phenomenological model shows that the increase in the edge energy of scalloped membranes is compensated by concomitant decrease in the deformation energy due to Gaussian curvature associated with scalloped edges, demonstrating that colloidal membranes have positive Gaussian modulus. A simple excluded volume argument predicts the sign and magnitude of the Gaussian curvature modulus that is in agreement with experimental measurements. Our results provide insight into how the interplay between membrane elasticity, geometrical frustration, and achiral symmetry breaking can be used to fold colloidal membranes into 3D shapes. PMID:28411214
Li, Derong; Lv, Xiaohua; Bowlan, Pamela; Du, Rui; Zeng, Shaoqun; Luo, Qingming
2009-09-14
The evolution of the frequency chirp of a laser pulse inside a classical pulse compressor is very different for plane waves and Gaussian beams, although after propagating through the last (4th) dispersive element, the two models give the same results. In this paper, we have analyzed the evolution of the frequency chirp of Gaussian pulses and beams using a method which directly obtains the spectral phase acquired by the compressor. We found the spatiotemporal couplings in the phase to be the fundamental reason for the difference in the frequency chirp acquired by a Gaussian beam and a plane wave. When the Gaussian beam propagates, an additional frequency chirp will be introduced if any spatiotemporal couplings (i.e. angular dispersion, spatial chirp or pulse front tilt) are present. However, if there are no couplings present, the chirp of the Gaussian beam is the same as that of a plane wave. When the Gaussian beam is well collimated, the introduced frequency chirp predicted by the plane wave and Gaussian beam models are in closer agreement. This work improves our understanding of pulse compressors and should be helpful for optimizing dispersion compensation schemes in many applications of femtosecond laser pulses.
On the effective field theory for quasi-single field inflation
NASA Astrophysics Data System (ADS)
Tong, Xi; Wang, Yi; Zhou, Siyi
2017-11-01
We study the effective field theory (EFT) description of the virtual particle effects in quasi-single field inflation, which unifies the previous results on the large mass and large mixing cases. By using a horizon-crossing approximation and matching with known limits, approximate expressions for the power spectrum and the spectral index are obtained. The error of the approximate solution is within 10% in the dominant parts of the parameter space, which corresponds to a less-than-0.1% error in the ns-r diagram. The quasi-single field corrections on the ns-r diagram are plotted for a few inflation models. In particular, the quasi-single field correction drives m²φ² inflation to the best-fit region on the ns-r diagram, with an amount of equilateral non-Gaussianity which can be tested in future experiments.
NASA Astrophysics Data System (ADS)
Belov, A. V.; Kurkov, Andrei S.; Chikolini, A. V.
1990-08-01
An offset method is modified to allow an analysis of the distribution of fields in a single-mode fiber waveguide without recourse to the Gaussian approximation. A new approximation for the field is obtained for fiber waveguides with a step refractive index profile and a special analysis employing the Hankel transformation is applied to waveguides with a distributed refractive index. The field distributions determined by this method are compared with the corresponding distributions calculated from the refractive index of a preform from which the fibers are drawn. It is shown that these new approaches can be used to determine the dimensions of a mode spot defined in different ways and to forecast the dispersion characteristics of single-mode fiber waveguides.
Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile
NASA Astrophysics Data System (ADS)
Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.
2012-09-01
Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
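A compact simulation sketch of the Gaussian-memory-profile walk, assuming the Elephant-walk-style update rule (recall one earlier step, repeat it with probability p, otherwise reverse it) with the recalled time drawn from a Gaussian centered at t/2 of width sigma*t; the feedback probability and relative width below are illustrative values, not the paper's.

```python
# Sketch of the Gaussian-memory-profile walk described above, following the
# Elephant-walk convention: at each step the walker recalls one earlier step,
# drawn from a Gaussian in time centred at t/2 with standard deviation sigma*t,
# and repeats it with probability p (reverses it otherwise). The feedback
# parameter p and the relative width sigma are illustrative choices.
import numpy as np

def walk(T=20000, p=0.8, sigma=0.1, seed=5):
    rng = np.random.default_rng(seed)
    steps = np.empty(T, dtype=int)
    steps[0] = 1
    for t in range(1, T):
        k = int(np.clip(rng.normal(t / 2, sigma * t), 0, t - 1))   # recalled time
        steps[t] = steps[k] if rng.random() < p else -steps[k]
    return np.cumsum(steps)

x = walk()
print("final displacement:", x[-1])
```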
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, could be attributed to deviations from the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model, and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
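The robust-calibration idea reduces, in a toy scalar setting, to iteratively reweighted least squares, with the EM weights of the Student's t model down-weighting outlying residuals. The sketch below estimates a single complex gain and is not the full expectation-conditional-maximization / Levenberg-Marquardt interferometric calibration of the paper; all parameter values are placeholders.

```python
# Stripped-down illustration of the robust idea above: estimate a single complex
# gain g from data d = g * m + noise under a Student's t noise model. The EM/ECM
# update then reduces to iteratively reweighted least squares, with weights
# w_i = (nu + 2) / (nu + |r_i|^2 / s2) down-weighting outliers (two real degrees
# of freedom per complex residual). This is a toy scalar problem, not the full
# interferometric calibration of the paper.
import numpy as np

rng = np.random.default_rng(6)
m = rng.normal(size=200) + 1j * rng.normal(size=200)              # model visibilities
g_true = 1.3 * np.exp(1j * 0.4)
d = g_true * m + 0.1 * (rng.normal(size=200) + 1j * rng.normal(size=200))
d[:5] += 5.0                                                      # a few strong outliers

nu, w = 3.0, np.ones(m.size)
for _ in range(20):
    g = np.sum(w * np.conj(m) * d) / np.sum(w * np.abs(m) ** 2)   # weighted LS update
    r2 = np.abs(d - g * m) ** 2
    s2 = np.sum(w * r2) / m.size                                  # scale update
    w = (nu + 2.0) / (nu + r2 / s2)                               # E-step weights

print("estimated gain:", g, "  true gain:", g_true)
```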
Parametrization and Optimization of Gaussian Non-Markovian Unravelings for Open Quantum Dynamics
NASA Astrophysics Data System (ADS)
Megier, Nina; Strunz, Walter T.; Viviescas, Carlos; Luoma, Kimmo
2018-04-01
We derive a family of Gaussian non-Markovian stochastic Schrödinger equations for the dynamics of open quantum systems. The different unravelings correspond to different choices of squeezed coherent states, reflecting different measurement schemes on the environment. Consequently, we are able to give a single shot measurement interpretation for the stochastic states and microscopic expressions for the noise correlations of the Gaussian process. By construction, the reduced dynamics of the open system does not depend on the squeezing parameters. They determine the non-Hermitian Gaussian correlation, a wide range of which are compatible with the Markov limit. We demonstrate the versatility of our results for quantum information tasks in the non-Markovian regime. In particular, by optimizing the squeezing parameters, we can tailor unravelings for improving entanglement bounds or for environment-assisted entanglement protection.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.
Leong, Siow Hoo
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index. PMID:28686634
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Easy-interactive and quick psoriasis lesion segmentation
NASA Astrophysics Data System (ADS)
Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang
2013-12-01
This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population in the world. PASI (Psoriasis Area and Severity Index) is the gold standard utilized by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment. Psoriasis lesion segmentation is the basis of the whole calculation. This segmentation is different from the common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is given. This is an initial segmentation given by thresholding a single Gaussian model, and the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
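The three stages can be sketched in skeleton form: a YCbCr skin mask, a rough lesion split from a single-Gaussian threshold, and one GMM per region for refinement. The skin-color thresholds, the component counts and the random placeholder image below are illustrative assumptions, not the paper's values, and the interactive threshold adjustment is omitted.

```python
# Skeleton sketch of the three stages described above: (1) skin extraction by
# thresholding the Cb/Cr channels, (2) a rough lesion/normal split from a single
# Gaussian fitted to the skin pixels, (3) refinement with one GMM per region.
# All thresholds, component counts and the placeholder image are illustrative.
import numpy as np
from skimage.color import rgb2ycbcr
from sklearn.mixture import GaussianMixture

def segment_psoriasis(rgb, k=1.0):
    ycbcr = rgb2ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)       # common YCbCr skin range
    redness = cr[skin]
    rough_lesion = redness > redness.mean() + k * redness.std()   # single-Gaussian threshold
    feats = ycbcr[skin]
    gmm_normal = GaussianMixture(2, random_state=0).fit(feats[~rough_lesion])
    gmm_lesion = GaussianMixture(2, random_state=0).fit(feats[rough_lesion])
    lesion = gmm_lesion.score_samples(feats) > gmm_normal.score_samples(feats)
    out = np.zeros(rgb.shape[:2], dtype=bool)
    out[skin] = lesion
    return out

mask = segment_psoriasis(np.random.rand(128, 128, 3))             # placeholder image
print("lesion pixel fraction within the image:", mask.mean())
```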
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: Firstly, it is more robust to outliers. Secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Improved Gaussian Beam-Scattering Algorithm
NASA Technical Reports Server (NTRS)
Lock, James A.
1995-01-01
The localized model of the beam-shape coefficients for Gaussian beam-scattering theory by a spherical particle provides a great simplification in the numerical implementation of the theory. We derive an alternative form for the localized coefficients that is more convenient for computer computations and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its computer run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident the fact that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.
Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing; Evans, Philip G.; Grice, Warren P.
In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given that the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution
Qi, Bing; Evans, Philip G.; Grice, Warren P.
2018-01-01
In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given that the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L
2005-01-01
The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to δ(t − t′) k^(4−d−2ε). It is shown that the scalar field is intermittent already for small ε, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in ε. The practical calculation is accomplished to order ε² (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wulff, J; Huggins, A
Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). The impact of slight deviations from an ideal Gaussian on the resulting dose distributions is typically assumed to be small due to alleviation by multiple Coulomb scattering (MCS) in tissue and the superposition of many spots. Quantitative limits are however not clear per se. Methods: A set of 1250 deliberately deformed profiles with σ = 4 mm for a Gaussian fit were constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by the root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and the full width at different percentages of the maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random), resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test. Results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on the gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD <10%, deviation in FWxM at 20%, and a root of the quadratic sum of SK <0.8. As expected, the results improve for larger depths. The trends were well resembled for measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given the fact that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and the design of future beam-line optics.
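The profile metrics named above (RMSD against the Gaussian fit, skewness/kurtosis, and the full width at a percentage of the maximum) can be computed for a single deformed profile as in the sketch below; the deformed spot is a synthetic placeholder and the normalization conventions are assumptions, not the abstract's exact definitions.

```python
# Sketch of the profile metrics used above: given a 1D spot profile and its
# best-fit Gaussian (both normalised to unit area), compute the RMSD, the
# skewness/kurtosis of the profile, and its full width at a given percentage of
# the maximum. The deformed profile below is a synthetic placeholder.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(-20, 20, 801)                      # lateral position (mm)
dx = x[1] - x[0]
gauss = lambda x, a, mu, s: a * np.exp(-(x - mu) ** 2 / (2 * s ** 2))
profile = gauss(x, 1.0, 0.0, 4.0) + 0.05 * gauss(x, 1.0, 6.0, 3.0)   # deformed spot
profile /= profile.sum() * dx                      # unit area, mimicking output calibration

popt, _ = curve_fit(gauss, x, profile, p0=[profile.max(), 0.0, 4.0])
fit = gauss(x, *popt)
fit /= fit.sum() * dx

rmsd = np.sqrt(np.mean((profile - fit) ** 2)) / profile.max()
w = profile * dx                                   # weights summing to one
mu = np.sum(w * x)
sd = np.sqrt(np.sum(w * (x - mu) ** 2))
skew = np.sum(w * ((x - mu) / sd) ** 3)
kurt = np.sum(w * ((x - mu) / sd) ** 4) - 3.0

def fwxm(x, y, frac=0.2):
    above = x[y >= frac * y.max()]
    return above.max() - above.min()

print(f"RMSD = {rmsd:.3%}, skew = {skew:.2f}, excess kurtosis = {kurt:.2f}, "
      f"FW20%M = {fwxm(x, profile):.1f} mm  (fit: {fwxm(x, fit):.1f} mm)")
```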
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those robustness tests that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
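The quantity at the heart of this analysis, the minimum singular value of the return-difference matrix I + L(jω) over frequency, is straightforward to evaluate numerically; the 2×2 loop transfer matrix below is an arbitrary placeholder, not a system from the thesis.

```python
# Numerical sketch of the robustness measure discussed above: the minimum
# singular value of the return-difference matrix I + L(jw), evaluated over a
# frequency grid for a simple 2x2 loop transfer matrix (a placeholder plant).
import numpy as np

def min_return_difference_sv(L_of_s, omegas):
    """Smallest singular value of I + L(jw) at each of the given frequencies."""
    vals = []
    for w in omegas:
        L = L_of_s(1j * w)
        vals.append(np.linalg.svd(np.eye(L.shape[0]) + L, compute_uv=False).min())
    return np.array(vals)

# Example 2x2 loop: two first-order channels with weak cross-coupling terms.
L_example = lambda s: np.array([[2.0 / (s + 1.0), 0.5 / (s + 2.0)],
                                [0.1 / (s + 1.0), 1.0 / (s + 0.5)]])

omegas = np.logspace(-2, 2, 200)
sigma_min = min_return_difference_sv(L_example, omegas)
print("worst-case minimum singular value:", sigma_min.min())
```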
Mazumdar, Anupam; Nadathur, Seshadri
2012-03-16
We provide a model in which both the inflaton and the curvaton are obtained from within the minimal supersymmetric standard model, with known gauge and Yukawa interactions. Since now both the inflaton and curvaton fields are successfully embedded within the same sector, their decay products thermalize very quickly before the electroweak scale. This results in two important features of the model: first, there will be no residual isocurvature perturbations, and second, observable non-Gaussianities can be generated with the non-Gaussianity parameter fNL ~ O(5-1000) being determined solely by the combination of weak-scale physics and the standard model Yukawa interactions.
Lin, Chuan-Kai; Wang, Sheng-De
2004-11-01
A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R
2017-05-01
The purpose was to characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit, using χ²_red, and the precision of the fitted relaxation time, by means of its confidence interval, were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation, with lower χ²_red by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with increased precision of reversible transverse relaxation time measurements, also by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
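As a rough illustration of the comparison between the two relaxation models, the following Python sketch fits a linear-exponential and a Gaussian decay to simulated spin-echo envelope data and reports a reduced chi-squared for each; all data, noise levels and parameter values are hypothetical, not taken from the study.

```python
# Minimal sketch: compare a linear-exponential and a Gaussian (quadratic-exponential)
# decay model on simulated spin-echo envelope data (all values are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def linear_exp(t, s0, t2_prime):
    # S(t) = S0 * exp(-t / T2')
    return s0 * np.exp(-t / t2_prime)

def gaussian_exp(t, s0, t2g_prime):
    # S(t) = S0 * exp(-(t / T2,G')^2)
    return s0 * np.exp(-(t / t2g_prime) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.05, 5.0, 40)                     # time around the echo, in ms
truth = gaussian_exp(t, 1.0, 1.5)                  # pretend the tissue decays Gaussian-like
noise_sd = 0.02
data = truth + noise_sd * rng.standard_normal(t.size)

for model in (linear_exp, gaussian_exp):
    popt, _ = curve_fit(model, t, data, p0=(1.0, 1.0))
    resid = data - model(t, *popt)
    chi2_red = np.sum((resid / noise_sd) ** 2) / (t.size - len(popt))
    print(model.__name__, np.round(popt, 3), round(chi2_red, 2))
```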
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension, exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information-theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear data-exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemisphere winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of dyadic and triadic nonlinear teleconnections.
Analysis of low altitude atmospheric turbulence data measured in flight
NASA Technical Reports Server (NTRS)
Ganzer, V. M.; Joppa, R. G.; Vanderwees, G.
1977-01-01
All three components of turbulence were measured simultaneously in flight at each wing tip of a Beech D-18 aircraft. The flights were conducted at low altitude, 30.5 - 61.0 meters (100-200 ft.), over water in the presence of wind-driven turbulence. Statistical properties of the flight-measured turbulence were compared with Gaussian and non-Gaussian turbulence models. Spatial characteristics of the turbulence were analyzed using the data from flights perpendicular and parallel to the wind. The probability density distributions of the vertical gusts show distinctly non-Gaussian characteristics. The distributions of the longitudinal and lateral gusts are generally Gaussian. In the inertial subrange, the power spectra agree better at some points with the Dryden spectrum, while at other points the von Karman spectrum is a better approximation. In the low-frequency range the data show peaks or dips in the power spectral density. The cross spectra between vertical gusts and gusts in the direction of the mean wind were compared with a matched non-Gaussian model. The real component of the cross spectrum is in general close to the non-Gaussian model. The imaginary component, however, indicated a larger phase shift between these two gust components than was found in previous research.
Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.; ...
2017-09-22
Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard that describes regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction-controlled from the keyhole-melting-controlled regimes. This statistical framework is shown to be robust even for cases where the experimental training data might be suboptimal in quality, provided appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally well be used for building the surrogate model, which is beneficial since simulations are becoming more efficient and are more practical for studying the response of different materials than re-tooling an AM machine for a new material powder.
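The following Python sketch illustrates the general idea of a Gaussian process surrogate mapping process parameters to melt pool depth, using scikit-learn; the input ranges and the synthetic response are illustrative assumptions, not the authors' data or implementation.

```python
# Sketch of a Gaussian process surrogate mapping (laser power, scan speed, beam size)
# to melt pool depth; the training data are synthetic placeholders, not the paper's.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform([100, 500, 50], [400, 2000, 150], size=(60, 3))       # W, mm/s, um
depth = 0.3 * X[:, 0] - 0.02 * X[:, 1] + 30.0 + rng.normal(0, 5, 60)  # fake response, um

kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 500.0, 50.0]) + WhiteKernel(1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, depth)

query = np.array([[250.0, 1200.0, 100.0]])          # one candidate parameter combination
mean, std = gp.predict(query, return_std=True)
print(f"predicted melt pool depth: {mean[0]:.1f} +/- {std[0]:.1f} um")
```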
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness, which was achieved by minimizing the relevant Sobolev norm of slowness. We show that minimizing this Sobolev norm is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase in the difference between the smoothed and original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets at the verge of applicability even in models sufficiently smoothed for ray tracing.
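A minimal one-dimensional analogue of this kind of smoothing, penalizing the second spatial derivative of slowness, can be sketched as follows; the regularization weight lam and the synthetic slowness profile are hypothetical, and the actual study works with the full Sobolev norm on a 2-D grid.

```python
# Illustrative 1-D analogue of Sobolev-norm smoothing: penalize the second spatial
# derivative of slowness (the weight lam and the slowness profile are hypothetical).
import numpy as np

def smooth_slowness(s_data, lam):
    n = s_data.size
    d2 = np.zeros((n - 2, n))                 # second-difference operator D2
    for i in range(n - 2):
        d2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # minimize ||s - s_data||^2 + lam * ||D2 s||^2  ->  (I + lam * D2^T D2) s = s_data
    return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, s_data)

rng = np.random.default_rng(2)
rough = 1.0 / (1500.0 + 200.0 * rng.standard_normal(200))     # rough slowness, s/m
smooth = smooth_slowness(rough, lam=50.0)
print(np.std(np.diff(rough, 2)), np.std(np.diff(smooth, 2)))  # curvature drops sharply
```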
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Capacity of PPM on Gaussian and Webb Channels
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.; Hamkins, J.
2000-01-01
This paper computes and compares the capacities of M-ary PPM on various idealized channels that approximate the optical communication channel: (1) the standard additive white Gaussian noise (AWGN) channel; (2) a more general AWGN channel (AWGN2) allowing different variances in signal and noise slots; (3) a Webb-distributed channel (Webb2); (4) a Webb+Gaussian channel, modeling Gaussian thermal noise added to Webb-distributed channel outputs.
NIR spectroscopic measurement of moisture content in Scots pine seeds.
Lestander, Torbjörn A; Geladi, Paul
2003-04-01
When tree seeds are used for seedling production it is important that they are of high quality in order to be viable. One of the factors influencing viability is moisture content, and an ideal quality control system should be able to measure this factor quickly for each seed. Seed moisture content within the range 3-34% was determined by near-infrared (NIR) spectroscopy on Scots pine (Pinus sylvestris L.) single seeds and on bulk seed samples consisting of 40-50 seeds. The models for predicting water content from the spectra were made by partial least squares (PLS) and ordinary least squares (OLS) regression. Different conditions were simulated, involving both using fewer wavelengths and going from bulk samples to single seeds. Reflectance and transmission measurements were used. Different spectral pretreatment methods were tested on the spectra. Including bias, the lowest prediction errors for PLS models based on reflectance within 780-2280 nm from bulk samples and single seeds were 0.8% and 1.9%, respectively. Reduction of the single-seed reflectance spectrum to 850-1048 nm gave higher biases and prediction errors in the test set. In transmission (850-1048 nm) the prediction error was 2.7% for single seeds. OLS models based on a simulated four-sensor single-seed system consisting of optical filters with Gaussian transmission indicated more than 3.4% error in prediction. A practical F-test based on test sets to differentiate models is introduced.
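The following scikit-learn sketch illustrates a PLS calibration of the kind described, relating spectra to moisture content; the spectra, moisture values and number of latent components are synthetic stand-ins rather than the seed data.

```python
# Sketch of a PLS calibration relating NIR spectra to seed moisture content; the
# spectra and moisture values are synthetic stand-ins for the measured data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 120, 300
moisture = rng.uniform(3.0, 34.0, n_samples)                # percent
basis = rng.standard_normal(n_wavelengths)                  # fake spectral signature of water
spectra = np.outer(moisture, basis) + rng.standard_normal((n_samples, n_wavelengths))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print(f"RMSEP: {np.sqrt(np.mean((pred - y_te) ** 2)):.2f} % moisture")
```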
Reverse engineering gene regulatory networks from measurement with missing values.
Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong
2016-12-01
Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN compared with using conventional Gaussian approximation (GA) filters that ignore the missing data points.
Extracting Primordial Non-Gaussianity from Large Scale Structure in the Post-Planck Era
NASA Astrophysics Data System (ADS)
Dore, Olivier
Astronomical observations have become a unique tool to probe fundamental physics. Cosmology, in particular, has emerged as a data-driven science whose phenomenological modeling has achieved great success: in the post-Planck era, key cosmological parameters are measured to percent precision. A single model reproduces a wealth of astronomical observations involving very distinct physical processes at different times. This success leads to fundamental physical questions. One of the most salient is the origin of the primordial perturbations that grew to form the large-scale structures we now observe. More and more cosmological observables point to inflationary physics as the origin of the structure observed in the universe. Inflationary physics predicts the statistical properties of the primordial perturbations, which are thought to be slightly non-Gaussian. The detection of this small deviation from Gaussianity represents the next frontier in early-Universe physics. Measuring it would provide direct, unique and quantitative insights into the physics at play when the Universe was only a fraction of a second old, thus probing energies untouchable otherwise. On par with the well-known relic gravitational-wave radiation -- the famous "B-modes" -- it is one of the few probes of inflation. This departure from Gaussianity leads to very specific signatures in the large-scale clustering of galaxies. By observing large-scale structure, we can thus establish a direct connection with fundamental theories of the early universe. In the post-Planck era, large-scale structures are our most promising pathway to measuring this primordial signal. Current estimates suggest that the next generation of space- or ground-based large-scale structure surveys (e.g. the ESA EUCLID or NASA WFIRST missions) might enable a detection of this signal. This potentially huge payoff requires us to solidify the theoretical predictions supporting these measurements. Even if the exact amplitude of the signal we are looking for is unknown, we must measure it as well as these groundbreaking data sets will permit. We propose to develop the supporting theoretical work to the point where the complete non-Gaussian signature can be extracted from these data sets. We will do so by developing three complementary directions: (i) we will develop the appropriate formalism to measure and model galaxy clustering on the largest scales; (ii) we will study the impact of non-Gaussianity on higher-order statistics, the most promising statistics for our purpose; and (iii) we will make explicit the connection between these observables and the microphysics of a large class of inflation models, while also identifying fundamental limitations to this interpretation.
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit a mixture of Gaussians to the data set. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might naturally be assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, those around 11 March 2011.
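A minimal sketch of the clustering step, selecting the number of Gaussian components by minimizing the AIC with scikit-learn, might look as follows; the two-dimensional points are synthetic, not the MDS coordinates of the stock data.

```python
# Sketch of choosing the number of Gaussian components by minimizing the AIC;
# the 2-D points are synthetic, not the MDS coordinates of the stock data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
points = np.vstack([rng.normal([0.0, 0.0], 0.3, (100, 2)),
                    rng.normal([2.0, 1.0], 0.4, (80, 2))])

aic = {k: GaussianMixture(n_components=k, random_state=0).fit(points).aic(points)
       for k in range(1, 7)}
best_k = min(aic, key=aic.get)
print({k: round(v, 1) for k, v in aic.items()}, "-> best number of components:", best_k)
```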
Theoretical investigation of gas-surface interactions
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1990-01-01
A Dirac-Hartree-Fock code was developed for polyatomic molecules. The program uses integrals over symmetry-adapted real spherical harmonic Gaussian basis functions generated by a modification of the MOLECULE integrals program. A single Gaussian function is used for the nuclear charge distribution, to ensure proper boundary conditions at the nuclei. The Gaussian primitive functions are chosen to satisfy the kinetic balance condition. However, contracted functions which do not necessarily satisfy this condition may be used. The Fock matrix is constructed in the scalar basis and transformed to a jj-coupled 2-spinor basis before diagonalization. The program was tested against numerical results for atoms with a Gaussian nucleus and diatomic molecules with point nuclei. The energies converge on the numerical values as the basis set size is increased. Full use of molecular symmetry (restricted to D2h and subgroups) is yet to be implemented.
Infrared images target detection based on background modeling in the discrete cosine domain
NASA Astrophysics Data System (ADS)
Ye, Han; Pei, Jihong
2018-02-01
Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the stability of the background and its separability from targets are analyzed in depth in the discrete cosine transform (DCT) domain, and on this basis we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.
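The following Python sketch conveys the basic idea of a per-coefficient single-Gaussian background model in the DCT domain; the frame data, the z-score test and the threshold are illustrative assumptions rather than the paper's exact detection rule.

```python
# Sketch of a per-coefficient single-Gaussian background model in the DCT domain:
# coefficients of the newest frame that deviate strongly from the running background
# statistics are kept as target energy (frame data and threshold are illustrative).
import numpy as np
from scipy.fft import dctn

def dct_background_detect(frames, k_sigma=3.0):
    coeffs = np.stack([dctn(f, norm='ortho') for f in frames])   # (T, H, W)
    mu = coeffs[:-1].mean(axis=0)                                # background mean per frequency
    sigma = coeffs[:-1].std(axis=0) + 1e-6                       # background std per frequency
    z = np.abs(coeffs[-1] - mu) / sigma                          # test the newest frame
    return np.where(z > k_sigma, coeffs[-1], 0.0)                # suppress background coefficients

rng = np.random.default_rng(5)
frames = rng.normal(0.0, 1.0, (20, 64, 64))       # fluctuating "sea surface" background
frames[-1, 30:34, 30:34] += 8.0                   # inject a small bright target
foreground = dct_background_detect(frames)
print("non-zero foreground coefficients:", np.count_nonzero(foreground))
```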
Interactive Gaussian Graphical Models for Discovering Depth Trends in ChemCam Data
NASA Astrophysics Data System (ADS)
Oyen, D. A.; Komurlu, C.; Lanza, N. L.
2018-04-01
Interactive Gaussian graphical models discover surface compositional features on rocks in ChemCam targets. Our approach visualizes shot-to-shot relationships among LIBS observations, and identifies the wavelengths involved in the trend.
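As a rough illustration of fitting a sparse Gaussian graphical model to shot-to-shot observations, the following scikit-learn sketch uses the graphical lasso on a synthetic shot-by-channel matrix; it is not the interactive workflow or the ChemCam data described above.

```python
# Sketch of estimating a sparse Gaussian graphical model over spectral channels with
# the graphical lasso; the shot-by-channel matrix is synthetic, not ChemCam data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(6)
shots = rng.standard_normal((30, 8))           # 30 laser shots x 8 wavelength channels
shots[:, 1] += 0.8 * shots[:, 0]               # induce one strong conditional dependency

model = GraphicalLasso(alpha=0.2).fit(shots)
precision = model.precision_
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if abs(precision[i, j]) > 1e-3]
print("estimated conditional-dependence edges:", edges)
```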
Weak constrained localized ensemble transform Kalman filter for radar data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana; Lange, Heiner
2015-04-01
Applications on convective scales require data assimilation with a numerical model of single-digit horizontal resolution in kilometres and time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates these two requirements. However, some challenges for convective-scale applications remain unresolved when using the EnKF approach. These include the need on convective scales to estimate fields that are non-negative (such as rain, graupel and snow) and the use of data sets, such as radar reflectivity or cloud products, that have the same property. Underlying these examples are errors that are non-Gaussian in nature, which causes a problem for the EnKF, which uses Gaussian error assumptions to produce the estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates, and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. In order to deal with non-Gaussian errors, different approaches can be taken in the EnKF framework. For example, variables can be transformed by assuming that the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal or truncated Gaussian distribution, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. (2014), it was shown on a simple example how conservation of mass can be beneficial for the assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF) but not both. Only once both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation of Janjic et al. (2014) is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the non-hydrostatic COSMO model with a 2 km horizontal resolution, and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al. 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests done within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated by the bias values during assimilation and the prediction.
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Vilnrotter, V.
1996-01-01
A closed-form expression for the capacity of an array of correlated Gaussian channels is derived. It is shown that when signal and noise are independent, the array of observables can be replaced with a single observable without diminishing the capacity of the array channel. Examples are provided to illustrate the dependence of channel capacity on noise correlation for two- and three-channel arrays.
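A simple numerical illustration of how noise correlation enters such a capacity is given below for a common textbook model, a scalar Gaussian signal observed through a two-channel array with correlated Gaussian noise, with C = 0.5 log2(1 + P h^T K^{-1} h); this formulation is an assumption chosen for illustration and may differ in detail from the paper's channel model.

```python
# Capacity of a two-element array receiving a common Gaussian signal in correlated
# Gaussian noise, C = 0.5 * log2(1 + P * h^T K^-1 h); a textbook model used purely
# for illustration, possibly differing from the paper's exact formulation.
import numpy as np

def array_capacity(P, h, K):
    snr_eff = P * h @ np.linalg.solve(K, h)     # effective SNR after optimal combining
    return 0.5 * np.log2(1.0 + snr_eff)

h = np.array([1.0, 1.0])                        # equal channel gains
for rho in (0.0, 0.5, 0.9):
    K = np.array([[1.0, rho], [rho, 1.0]])      # noise covariance with correlation rho
    print(f"rho={rho}: C = {array_capacity(1.0, h, K):.3f} bits per channel use")
```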
High-lying single-particle modes, chaos, correlational entropy, and doubling phase transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoyanov, Chavdar; Zelevinsky, Vladimir
Highly excited single-particle states in nuclei are coupled with excitations of a more complex character, first of all with collective phonon-like modes of the core. In the framework of the quasiparticle-phonon model, we consider the structure of the resulting complex configurations, using the 1k_{17/2} orbital in ^{209}Pb as an example. Although, at the level of one- and two-phonon admixtures, the fully chaotic Gaussian orthogonal ensemble regime is not reached, the eigenstates of the model carry a significant degree of complexity that can be quantified with the aid of the correlational invariant entropy. With artificially enhanced particle-core coupling, the system undergoes the doubling phase transition, with the quasiparticle strength concentrated in two repelling peaks. This phase transition is clearly detected by the correlational entropy.
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
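The core computation of such a Gaussian model-based rule is small enough to sketch directly: given the nodule mean and standard deviation in HU, the fraction of negative-attenuation pixels follows from the normal CDF. The 10% decision threshold in the sketch is an illustrative assumption, not the study's validated cutoff.

```python
# Gaussian-model estimate of the fraction of negative-attenuation pixels from the
# nodule mean and SD; the 10% decision threshold is an illustrative assumption.
from scipy.stats import norm

def negative_pixel_fraction(mean_hu, sd_hu):
    # P(X < 0 HU) for X ~ Normal(mean_hu, sd_hu)
    return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

for mean_hu, sd_hu in [(15.0, 25.0), (35.0, 20.0)]:
    frac = negative_pixel_fraction(mean_hu, sd_hu)
    call = "suggestive of adenoma" if frac > 0.10 else "indeterminate"
    print(f"mean={mean_hu} HU, sd={sd_hu} HU -> {100 * frac:.1f}% negative pixels ({call})")
```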
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.
2016-12-01
Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict the location and timing of convective storms precisely. Our Big Data Assimilation (BDA) effort has explored using dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased-array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significantly non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter and a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model of the non-Gaussianity with dense and frequent observations.
High power infrared super-Gaussian beams: generation, propagation, and application
NASA Astrophysics Data System (ADS)
du Preez, Neil C.; Forbes, Andrew; Botha, Lourens R.
2008-10-01
In this paper we present the design of a CO2 laser resonator that produces a super-Gaussian laser beam as its stable transverse mode. The resonator makes use of an intra-cavity diffractive mirror and a flat output coupler, generating the desired intensity profile at the output coupler with a flat wavefront. We consider the modal build-up in such a resonator and show that this resonator mode has the ability to extract more energy from the cavity than a standard single-mode cavity beam (e.g., a Gaussian-mode cavity). We demonstrate the design experimentally on a high-average-power TEA CO2 laser for paint-stripping applications.
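For reference, a super-Gaussian transverse intensity profile is commonly written as I(r) ∝ exp(-2 (r/w)^(2N)), reducing to the ordinary Gaussian for N = 1 and flattening as N grows; the short sketch below simply evaluates this profile for a few orders (the beam size and orders are arbitrary).

```python
# Super-Gaussian transverse intensity profile I(r) = exp(-2 * (r / w)**(2 * N));
# N = 1 is the ordinary Gaussian, larger N gives an increasingly flat top.
import numpy as np

def super_gaussian(r, w, order):
    return np.exp(-2.0 * (r / w) ** (2 * order))

r = np.linspace(0.0, 1.5, 7)            # radial coordinate in units of the beam size
for order in (1, 4, 10):
    print(f"N={order}:", np.round(super_gaussian(r, w=1.0, order=order), 3))
```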
Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.
Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R
2013-11-01
A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.
Experimental study of the focusing properties of a Gaussian Schell-model vortex beam
NASA Astrophysics Data System (ADS)
Wang, Fei; Zhu, Shijun; Cai, Yangjian
2011-08-01
We carry out an experimental and theoretical study of the focusing properties of a Gaussian Schell-model (GSM) vortex beam. It is found that we can shape the beam profile of the focused GSM vortex beam by varying its initial spatial coherence width. Focused dark hollow, flat-topped, and Gaussian beam spots can be obtained in our experiment, which will be useful for trapping particles. The experimental results agree well with the theoretical results.
Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund
2016-01-01
According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to the prediction of disrupted performance at event-rate extremes, reflected in higher Gaussian response variability, a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate), other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
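A minimal sketch of ex-Gaussian decomposition of reaction times, using scipy's exponentially modified normal distribution, is shown below; the simulated reaction times and parameter values are hypothetical and the published analysis may use a different fitting routine.

```python
# Ex-Gaussian decomposition of reaction times with scipy's exponentially modified
# normal distribution; the simulated sample and parameters are hypothetical.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(7)
mu, sigma, tau = 0.35, 0.05, 0.15                    # seconds (hypothetical values)
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K_hat, loc_hat, scale_hat = exponnorm.fit(rts)       # scipy parameterizes K = tau / sigma
print("mu ~", round(loc_hat, 3), " sigma ~", round(scale_hat, 3),
      " tau ~", round(K_hat * scale_hat, 3))
```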
The analysis of ensembles of moderately saturated interstellar lines
NASA Technical Reports Server (NTRS)
Jenkins, E. B.
1986-01-01
It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.
Structured Laguerre-Gaussian beams for mitigation of spherical aberration in tightly focused regimes
NASA Astrophysics Data System (ADS)
Haddadi, S.; Bouzid, O.; Fromager, M.; Hasnaoui, A.; Harfouche, A.; Cagniot, E.; Forbes, A.; Aït-Ameur, K.
2018-04-01
Many laser applications utilise a focused laser beam having a single-lobed intensity profile in the focal plane, ideally with the highest possible on-axis intensity. Conventionally, this is achieved with the lowest-order Laguerre-Gaussian mode (LG00), the Gaussian beam, in a tight focusing configuration. However, tight focusing often involves significant spherical aberration due to the high numerical aperture of the systems involved, thus degrading the focal quality. Here, we demonstrate that a high-order radial LGp0 mode can be tailored to meet and in some instances exceed the performance of the Gaussian. We achieve this by phase rectification of the mode using a simple binary diffractive optic. By way of example, we show that the focusing of a rectified LG50 beam is almost insensitive to a spherical aberration coefficient of over three wavelengths, in contrast with the usual Gaussian beam for which the intensity of the focal spot is reduced by a factor of two. This work paves the way towards enhanced focal spots using structured light.
Casas, F J; Pascual, J P; de la Fuente, M L; Artal, E; Portilla, J
2010-07-01
This paper describes a comparative nonlinear analysis of low-noise amplifiers (LNAs) under different stimuli for use in astronomical applications. Wide-band Gaussian-noise input signals, together with the high gain values required, mean that figures of merit such as the 1 dB compression (1 dBc) point of amplifiers become crucial in the design process of radiometric receivers in order to guarantee linearity in their nominal operation. The typical method of obtaining the 1 dBc point is to use single-tone excitation signals to measure the nonlinear amplitude-to-amplitude (AM-AM) characteristic; but, as shown in the paper, in radiometers the nature of the wide-band Gaussian-noise excitation signals makes the amplifiers exhibit stronger nonlinearity than when single-tone excitation signals are used. Therefore, in order to analyze the suitability of the LNA's nominal operation, the 1 dBc point has to be obtained using realistic excitation signals. In this work, an analytical study of compression effects in amplifiers due to excitation signals composed of several tones is reported. Moreover, LNA nonlinear characteristics, such as AM-AM, total distortion, and power-to-distortion ratio, have been obtained by simulation and measurement with wide-band Gaussian-noise excitation signals. This kind of signal can be considered as a limiting case of a multitone signal when the number of tones is very high. The work is illustrated by means of the extraction of realistic nonlinear characteristics, through simulation and measurement, of a 31 GHz back-end module LNA used in the radiometer of the QUIJOTE (Q U I JOint TEnerife) CMB experiment.
Sparse covariance estimation in heterogeneous samples
Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian
2015-01-01
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189
Abbott, Lauren J; Stevens, Mark J
2015-12-28
A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
Current Status and Challenges of Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Gelaro, R.
2016-12-01
The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled Earth-system analyses.
Differentiating G-inflation from string gas cosmology using the effective field theory approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Minxi; Liu, Junyu; Lu, Shiyun
A characteristic signature of String Gas Cosmology is primordial power spectra for scalar and tensor modes that are almost scale-invariant, with a red tilt for scalar modes but a blue tilt for tensor modes. This feature, however, can also be realized in the so-called G-inflation model, in which Horndeski operators are introduced that lead to a blue tensor tilt by softly breaking the Null Energy Condition. In this article we search for potential observational differences between these two cosmologies by performing detailed perturbation analyses based on the Effective Field Theory approach. Our results show that, although both models produce blue-tilted tensor perturbations, they behave differently in three aspects. Firstly, String Gas Cosmology predicts a specific consistency relation between the index of the scalar modes n_s and that of the tensor modes n_t, which is hard to reproduce in G-inflation. Secondly, String Gas Cosmology typically predicts non-Gaussianities that are highly suppressed on observable scales, while G-inflation gives rise to observationally large non-Gaussianities because the kinetic terms in the action become important during inflation. However, after finely tuning the model parameters of G-inflation it is possible to obtain a blue tensor spectrum and negligible non-Gaussianities, with a degeneracy between the two models. This degeneracy can be broken by a third observable, namely the scale dependence of the nonlinearity parameter, which vanishes for G-inflation but has a blue tilt in the case of String Gas Cosmology. Therefore, we conclude that String Gas Cosmology is in principle observationally distinguishable from single-field inflationary cosmology, even allowing for modifications such as G-inflation.
Combining cluster number counts and galaxy clustering
NASA Astrophysics Data System (ADS)
Lacasa, Fabien; Rosenfeld, Rogerio
2016-08-01
The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.
NASA Astrophysics Data System (ADS)
Han, Minah; Jang, Hanjoo; Baek, Jongduk
2018-03-01
We investigate lesion detectability and its trends for different noise structures in single-slice and multislice CBCT images with anatomical background noise. Anatomical background noise is modeled using a power-law spectrum of breast anatomy. A spherical signal with a 2 mm diameter is used to model a lesion. CT projection data are acquired by forward projection and reconstructed by the Feldkamp-Davis-Kress algorithm. To generate different noise structures, two types of reconstruction filters (Hanning- and Ram-Lak-weighted ramp filters) are used in the reconstruction, and the transverse and longitudinal planes of the reconstructed volume are used for detectability evaluation. To evaluate single-slice images, the central slice, which contains the maximum signal energy, is used. To evaluate multislice images, the central nine slices are used. Detectability is evaluated using human and model observer studies. For the model observer, a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels is used. For all noise structures, detectability by a human observer is higher for multislice images than for single-slice images, and the degree of detectability increase in multislice images depends on the noise structure. Variation in detectability across different noise structures is reduced in multislice images, but detectability trends do not differ much between single-slice and multislice images. The CHO with D-DOG channels predicts detectability by a human observer well for both single-slice and multislice images.
On the numbers of images of two stochastic gravitational lensing models
NASA Astrophysics Data System (ADS)
Wei, Ang
2017-02-01
We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which indicate the amounts of images in the corresponding lens systems.
NASA Astrophysics Data System (ADS)
Libera, A.; de Barros, F.; Riva, M.; Guadagnini, A.
2016-12-01
Managing contaminated groundwater systems is an arduous task for multiple reasons. First, subsurface hydraulic properties are heterogeneous and the high costs associated with site characterization lead to data scarcity (therefore, model predictions are uncertain). Second, it is common for water agencies to schedule groundwater extraction through a temporal sequence of pumping rates to maximize the benefits to anthropogenic activities and minimize the environmental footprint of the withdrawal operations. The temporal variability in pumping rates and aquifer heterogeneity affect dilution rates of contaminant plumes and chemical concentration breakthrough curves (BTCs) at the well. While contaminant transport under steady-state pumping is widely studied, the manner in which a given time-varying pumping schedule affects contaminant plume behavior has been tackled only marginally. At the same time, most studies focus on the impact of Gaussian random hydraulic conductivity (K) fields on transport. Here, we systematically analyze how the random space function (RSF) model used to characterize K, in the presence of distinct pumping operations, affects the uncertainty of the concentration BTC at the operating well. We juxtapose Monte Carlo based numerical results associated with two models: (a) a recently proposed Generalized Sub-Gaussian model, which allows capturing non-Gaussian statistical scaling features of RSFs such as hydraulic conductivity, and (b) the commonly used Gaussian field approximation. Our novel results include an appraisal of the coupled effect of (a) the model employed to depict the random spatial variability of K and (b) the transient flow regime, as induced by a temporally varying pumping schedule, on the concentration BTC at the operating well. We systematically quantify the sensitivity of the uncertainty in the contaminant BTC to the RSF model adopted for K (non-Gaussian or Gaussian) in the presence of diverse well-pumping schedules. Results help determine conditions under which either of these two key factors prevails over the other.
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Hui
2018-05-01
The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent-wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent-wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is handling the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite-difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where the columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
Identifying stochastic oscillations in single-cell live imaging time series using Gaussian processes
Manning, Cerys; Rattray, Magnus
2017-01-01
Multiple biological processes are driven by oscillatory gene expression at different time scales. Pulsatile dynamics are thought to be widespread, and single-cell live imaging of gene expression has led to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in expression dynamics, which blurs the boundaries between aperiodic fluctuations and noisy oscillators. This underlies a new challenge to the experimentalist because neither intuition nor pre-existing methods work well for identifying oscillatory activity in noisy biological time series. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here, we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of non-parametric regression with Gaussian processes. Our method can distinguish oscillatory gene expression from random fluctuations of non-oscillatory expression in single-cell time series, despite peak-to-peak variability in the period and amplitude of single-cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in successfully classifying cells as oscillatory or non-oscillatory in data simulated from a simple genetic oscillator model and in experimental data. Analysis of bioluminescent live-cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a Hes1 promoter (10/19), which has previously been reported to oscillate, than by the constitutive MoMuLV 5' LTR (MMLV) promoter (0/25). The method can be applied to data from any gene network both to quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. It is publicly available as a MATLAB package. PMID:28493880
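The sketch below mirrors only the model-comparison idea on synthetic data, scoring an aperiodic against a (locally) periodic Gaussian process kernel by log marginal likelihood with scikit-learn; the published method itself is a MATLAB package built on mechanistic stochastic modelling, so this is a simplified proxy, not the authors' implementation.

```python
# Simplified proxy for the oscillation test: score an aperiodic kernel against a
# (locally) periodic kernel by GP log marginal likelihood on a synthetic trace.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(8)
t = np.linspace(0.0, 30.0, 120)[:, None]                   # hours
trace = np.sin(2 * np.pi * t.ravel() / 3.0) + 0.3 * rng.standard_normal(t.shape[0])

kernels = {
    "aperiodic": RBF(length_scale=2.0) + WhiteKernel(0.1),
    "periodic": RBF(length_scale=10.0) * ExpSineSquared(length_scale=1.0, periodicity=3.0)
                + WhiteKernel(0.1),
}
scores = {}
for name, kernel in kernels.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, trace)
    scores[name] = gp.log_marginal_likelihood_value_
print(scores)   # a clearly higher "periodic" score suggests oscillatory dynamics
```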
NASA Astrophysics Data System (ADS)
Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula
2018-01-01
A sub-threshold model formulation of the Gaussian Doped Double Gate JunctionLess (GD-DG-JL) FET including the source/drain depletion length is reported in the present work, under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of the Gaussian straggle, channel length, oxide and channel thickness, and high-k gate dielectric has been studied using extensive TCAD device simulation.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be derived efficiently. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm for the single-truncated case or a bounded-variable least-squares algorithm for the double-truncated case. I show that the case of independent uniform priors can be approximated using the TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike Bayesian MCMC-based approaches, the marginal pdfs, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
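The MAP step under the positivity-truncated Gaussian prior can be sketched with a non-negative least-squares solver: with unit data weights and an independent Gaussian prior of standard deviation sigma_m on slip, the MAP minimizes an augmented least-squares objective subject to slip >= 0. The Green's function matrix, data and sigma_m below are hypothetical stand-ins.

```python
# MAP slip under a positivity-truncated Gaussian prior via non-negative least squares:
# minimize ||G m - d||^2 + ||m||^2 / sigma_m^2 subject to m >= 0 (all inputs are
# hypothetical stand-ins, with unit data weights for simplicity).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
G = rng.standard_normal((40, 10))                    # Green's functions: displacement per unit slip
true_slip = np.clip(rng.normal(0.5, 0.5, 10), 0.0, None)
d = G @ true_slip + 0.05 * rng.standard_normal(40)   # observed displacements

sigma_m = 1.0                                        # prior standard deviation on slip
A = np.vstack([G, np.eye(10) / sigma_m])             # augment the system with the prior term
b = np.concatenate([d, np.zeros(10)])
map_slip, _ = nnls(A, b)                             # MAP estimate with slip >= 0
print(np.round(map_slip, 2))
```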
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvement in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
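A small sketch of the per-fragment decomposition step, assuming the intensities of one spectral fragment can be treated as a sample density over m/z (resampling m/z values proportionally to intensity is a common trick, not necessarily the paper's implementation); the fragment itself is synthetic.

```python
# Sketch: decompose one spectral fragment into a Gaussian mixture and report the peaks.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
mz = np.linspace(1000, 1100, 2000)
intensity = (np.exp(-0.5 * ((mz - 1030) / 2.0) ** 2)
             + 0.6 * np.exp(-0.5 * ((mz - 1065) / 3.0) ** 2)
             + 0.02 * rng.random(mz.size))              # toy fragment with two peaks + noise floor

# Draw m/z values with probability proportional to intensity, then fit the mixture.
p = intensity / intensity.sum()
sample = rng.choice(mz, size=20000, p=p)
gmm = GaussianMixture(n_components=2, random_state=0).fit(sample.reshape(-1, 1))

for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"peak at m/z {mu:7.2f}, sigma {np.sqrt(var):5.2f}, weight {w:4.2f}")
```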
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
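A toy illustration of the copula construction described above: a Gaussian AR(1) supplies the internal dynamics, and a cdf / inverse-cdf transform imposes a non-Gaussian marginal. A parametric gamma marginal stands in here for the nonparametric Bayesian marginal used in the paper, and the AR order and parameter values are arbitrary.

```python
# Sketch: Gaussian-copula transformed AR(1) with a gamma marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, phi = 2000, 0.8

# Latent Gaussian AR(1) with unit stationary variance.
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi ** 2) * rng.normal()

# Copula step: Gaussian cdf -> uniform -> target inverse cdf.
u = stats.norm.cdf(z)
x = stats.gamma(a=2.0, scale=1.5).ppf(u)     # observed series with gamma marginal

print("marginal skewness:", stats.skew(x).round(2))
print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1].round(2))
```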
Receiver deghosting in the t-x domain based on super-Gaussianity
NASA Astrophysics Data System (ADS)
Lu, Wenkai; Xu, Ziqiang; Fang, Zhongyu; Wang, Ruiliang; Yan, Chengzhi
2017-01-01
Deghosting methods in the time-space (t-x) domain have attracted a lot of attention because of their flexibility for various source/receiver configurations. Based on the well-known knowledge that the seismic signal has a super-Gaussian distribution, we present a Super-Gaussianity based Receiver Deghosting (SRD) method in the t-x domain. In our method, we denote the upgoing wave and its ghost (downgoing wave) as a single seismic signal, and express the relationship between the upgoing wave and its ghost using two ghost parameters: the sea surface reflection coefficient and the time-shift between the upgoing wave and its ghost. For a single seismic signal, we estimate these two parameters by maximizing the super-Gaussianity of the deghosted output, which is achieved by a 2D grid search method using an adaptively predefined discrete solution space. Since usually a large number of seismic signals are mixed together in a seismic trace, in the proposed method we divide the seismic trace into overlapping frames using a sliding time window with a step of one time sample, and consider each frame as a replacement for a single seismic signal. For a 2D seismic gather, we obtain two 2D maps of the ghost parameters. By assuming that these two parameters vary slowly in the t-x domain, we apply a 2D average filter to these maps, to improve their reliability further. Finally, these deghosted outputs are merged to form the final deghosted result. To demonstrate the flexibility of the proposed method for arbitrary variable depths of the receivers, we apply it to several synthetic and field seismic datasets acquired by variable depth streamer.
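A toy sketch of the per-frame grid search idea: candidate (reflection coefficient, time shift) pairs are tried, the frame is deghosted by stabilized frequency-domain inverse filtering, and the pair maximizing the kurtosis of the output (one common super-Gaussianity measure) is kept. The ghost operator d(t) = u(t) + r·u(t − τ), the regularization eps, and kurtosis as the criterion are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
dt, n = 0.002, 512
u = np.zeros(n); u[100] = 1.0; u[180] = -0.6           # sparse "upgoing" signal
r_true, tau_true = -0.9, 0.020                          # reflection coefficient, ghost delay (s)
d = u + r_true * np.roll(u, int(round(tau_true / dt))) + 0.01 * rng.normal(size=n)

freqs = np.fft.rfftfreq(n, dt)
D = np.fft.rfft(d)
eps = 0.1                                               # stabilizes the ghost notches

best = (-np.inf, None, None)
for r in np.linspace(-1.0, -0.5, 11):
    for tau in np.arange(0.010, 0.031, 0.002):
        ghost = 1.0 + r * np.exp(-2j * np.pi * freqs * tau)
        U = D * np.conj(ghost) / (np.abs(ghost) ** 2 + eps)
        k = kurtosis(np.fft.irfft(U, n))                # super-Gaussianity of deghosted frame
        if k > best[0]:
            best = (k, r, tau)

print("estimated (r, tau):", best[1], round(best[2], 3))
```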
Predictions for Proteins, RNAs and DNAs with the Gaussian Dielectric Function Using DelPhiPKa
Wang, Lin; Li, Lin; Alexov, Emil
2015-01-01
We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins from the PPD dataset and achieved an overall RMSD of 0.77. In particular, an RMSD of 0.55 was achieved for surface residues, while an RMSD of 1.1 was achieved for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa values. Investigations showed that predictions for most of the buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, the pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. PMID:26408449
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Non-Gaussian microwave background fluctuations from nonlinear gravitational effects
NASA Technical Reports Server (NTRS)
Salopek, D. S.; Kunstatter, G. (Editor)
1991-01-01
Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) Satellite makes a detection of the cosmic microwave-background temperature anisotropy delta T(sub CMB)/T(sub CMB). Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small-- it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(phi sub 1, phi sub 2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in delta T(sub CMB)/T(sub CMB) maps are suppressed as compared to the Gaussian case. The fluctuations are essentially scale-invariant.
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating the optimal scale factors used to obtain harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed vibrational harmonic frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
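A minimal sketch of the least-squares scale-factor idea underlying such tools: the factor λ minimizing Σ(λ·ZPE_calc − ZPE_ref)² has the closed form λ = Σ(calc·ref)/Σ(calc²). This is a general fact, not necessarily the exact procedure in FREQ, and the zero-point energies below are placeholders rather than the program's reference set.

```python
import numpy as np

zpe_calc = np.array([23.1, 45.7, 13.9, 31.2, 8.6])   # computed zero-point energies (placeholder)
zpe_ref  = np.array([22.5, 44.6, 13.6, 30.4, 8.4])   # benchmark zero-point energies (placeholder)

lam = np.sum(zpe_calc * zpe_ref) / np.sum(zpe_calc ** 2)   # least-squares optimal scale factor
rmse = np.sqrt(np.mean((lam * zpe_calc - zpe_ref) ** 2))
print(f"optimal scale factor: {lam:.4f}, RMSE after scaling: {rmse:.3f}")
```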
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how to simulate spatiotemporal characteristics of multisite daily precipitation from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for using a non-Gaussian flexible tool. This work proposes a reasonable compromise between simplicity and flexibility, avoiding model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-07-01
What is the "best" model? The answer to this question lies in part in the eyes of the beholder; nevertheless, a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk, k = 1, …, K, and helps identify which model is most supported by the observed data, Ỹ = (ỹ1, …, ỹn). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ | Mk), which acts as the normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ | Mk) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ | Mk) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
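A simplified sketch of the evidence-estimation idea: fit a Gaussian mixture to posterior samples and use it as a proposal for estimating p(Ỹ | M). Plain importance sampling is used here instead of the bridge-sampling refinement described above, and the conjugate Gaussian toy problem (with assumed prior and likelihood) is chosen only because its exact evidence is available for comparison.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
y = rng.normal(loc=1.0, scale=1.0, size=50)             # data, unit-variance Gaussian likelihood
prior = stats.norm(0.0, 10.0)                           # broad Gaussian prior on the mean

# Posterior is Gaussian here, so "MCMC samples" can be drawn exactly for the demo.
post_var = 1.0 / (1.0 / prior.var() + len(y))
theta_samples = rng.normal(post_var * y.sum(), np.sqrt(post_var), size=5000)

proposal = GaussianMixture(n_components=2, random_state=0).fit(theta_samples.reshape(-1, 1))
theta_q = proposal.sample(20000)[0].ravel()

log_lik = (-0.5 * ((y[None, :] - theta_q[:, None]) ** 2).sum(axis=1)
           - 0.5 * len(y) * np.log(2 * np.pi))
log_w = log_lik + prior.logpdf(theta_q) - proposal.score_samples(theta_q.reshape(-1, 1))
log_evidence = np.logaddexp.reduce(log_w) - np.log(len(theta_q))

# Exact evidence for this conjugate toy: y ~ N(0, I + tau^2 * 11'), tau^2 = prior variance.
cov = np.eye(len(y)) + prior.var() * np.ones((len(y), len(y)))
exact = stats.multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)
print("IS estimate vs exact log evidence:", round(log_evidence, 2), round(exact, 2))
```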
Analytical solutions for avalanche-breakdown voltages of single-diffused Gaussian junctions
NASA Astrophysics Data System (ADS)
Shenai, K.; Lin, H. C.
1983-03-01
Closed-form solutions of the potential difference between the two edges of the depletion layer of a single diffused Gaussian p-n junction are obtained by integrating Poisson's equation and equating the magnitudes of the positive and negative charges in the depletion layer. By using the closed form solution of the static Poisson's equation and Fulop's average ionization coefficient, the ionization integral in the depletion layer is computed, which yields the correct values of avalanche breakdown voltage, depletion layer thickness at breakdown, and the peak electric field as a function of junction depth. Newton's method is used for rapid convergence. A flowchart to perform the calculations with a programmable hand-held calculator, such as the TI-59, is shown.
A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.
Li, Bing; Cui, Wei; Wang, Bin
2015-09-16
Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their advantages of convenient application and independence from hardware devices. Unfortunately, RSSI values are prone to fluctuation under the influence of non-line-of-sight (NLOS) propagation in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes by a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
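A sketch of the GMM-plus-MDS pipeline on synthetic data: a Gaussian mixture is fitted to noisy RSSI samples for each link and the dominant component's mean is taken as a robust RSSI estimate, RSSI is converted to distance with an assumed log-distance path-loss model (parameters P0, n_pl are illustrative), and relative coordinates are recovered by MDS on the resulting distance matrix. Metric MDS and the final anchor-based coordinate transformation are simplifications relative to the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.manifold import MDS

rng = np.random.default_rng(5)
P0, n_pl = -40.0, 2.5                                   # RSSI at 1 m, path-loss exponent (assumed)
nodes = rng.uniform(0, 10, size=(5, 2))                 # true positions (unknown in practice)
n_nodes = len(nodes)

rssi_est = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        dist = np.linalg.norm(nodes[i] - nodes[j])
        clean = P0 - 10 * n_pl * np.log10(dist)
        samples = np.concatenate([clean + rng.normal(0, 2, 80),        # LOS samples
                                  clean - 12 + rng.normal(0, 4, 20)])  # NLOS-biased samples
        gmm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))
        rssi = gmm.means_.ravel()[np.argmax(gmm.weights_)]             # dominant component
        rssi_est[i, j] = rssi_est[j, i] = rssi

dist_est = np.where(np.eye(n_nodes, dtype=bool), 0.0,
                    10 ** ((P0 - rssi_est) / (10 * n_pl)))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist_est)
print(coords.round(2))                                   # relative coordinates, up to rotation/shift
```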
NASA Astrophysics Data System (ADS)
Kervella, P.; Mérand, A.; Perrin, G.; Coudé du Foresto, V.
2006-03-01
We present the results of long-baseline interferometric observations of the bright southern Cepheid ℓ Carinae in the infrared N (8-13 μm) and K (2.0-2.4 μm) bands, using the MIDI and VINCI instruments of the VLT Interferometer. We resolve in the N band a large circumstellar envelope (CSE) that we model with a Gaussian of 3 Rstar (≈500 R⊙ ≈ 2-3 AU) half width at half maximum. The signature of this envelope is also detected in our K band data as a deviation from a single limb darkened disk visibility function. The superimposition of a Gaussian CSE on the limb darkened disk model of the Cepheid star results in a significantly better fit of our VINCI data. The extracted CSE parameters in the K band are a half width at half maximum of 2 Rstar, comparable to the N band model, and a total brightness of 4% of the stellar photosphere. A possibility is that this CSE is linked to the relatively large mass loss rate of ℓ Car. Though its physical nature cannot be determined from our data, we discuss an analogy with the molecular envelopes of RV Tauri, red supergiants and Miras.
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problem of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Future constraints on angle-dependent non-Gaussianity from large radio surveys
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Shiraishi, Maresuke; Bartolo, Nicola; Bertacca, Daniele; Liguori, Michele; Matarrese, Sabino; Norris, Ray P.; Parkinson, David
2017-03-01
We investigate how well future large-scale radio surveys could measure different shapes of primordial non-Gaussianity; in particular we focus on angle-dependent non-Gaussianity arising from primordial anisotropic sources, whose bispectrum has an angle dependence between the three wavevectors that is characterized by Legendre polynomials PL and expansion coefficients cL. We provide forecasts for measurements of galaxy power spectrum, finding that Large-Scale Structure (LSS) data could allow measurements of primordial non-Gaussianity that would be competitive with, or improve upon, current constraints set by CMB experiments, for all the shapes considered. We argue that the best constraints will come from the possibility to assign redshift information to radio galaxy surveys, and investigate a few possible scenarios for the EMU and SKA surveys. A realistic (futuristic) modeling could provide constraints of fNL^loc ≈ 1 (0.5) for the local shape, fNL of O(10) (O(1)) for the orthogonal, equilateral and folded shapes, and cL=1 ≈ 80 (2), cL=2 ≈ 400 (10) for angle-dependent non-Gaussianity, showing that only futuristic galaxy surveys will be able to set strong constraints on these models. Nevertheless, the more futuristic forecasts show the potential of LSS analyses to considerably improve current constraints on non-Gaussianity, and so on models of the primordial Universe. Finally, we find the minimum requirements that would be needed to reach σ(cL=1) = 10, which can be considered as a typical (lower) value predicted by some (inflationary) models.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, R.; Dickerson, M.A.; Peterson, K.R.
Two numerical models for the calculation of air concentration and ground deposition of airborne effluent releases are compared. The Particle-in-Cell (PIC) model and the Straight-Line Airflow Gaussian model were used for the simulation. Two sites were selected for comparison: the Hudson River Valley, New York, and the area around the Savannah River Plant, South Carolina. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions, with better agreement between the two methods during neutral to unstable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model.
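A minimal sketch of the kind of straight-line Gaussian plume model compared above: ground-level concentration for a continuous point source with full ground reflection. The dispersion coefficients sigma_y and sigma_z are placeholders (in practice they come from stability-class curves such as Pasquill-Gifford), and the release parameters are assumed values.

```python
import numpy as np

def gaussian_plume(Q, u, H, y, z, sigma_y, sigma_z):
    """Concentration (g/m^3) for emission Q (g/s), wind u (m/s), stack height H (m)."""
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))   # image source for ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level concentration for an assumed release, roughly 1 km downwind.
print(gaussian_plume(Q=100.0, u=4.0, H=50.0, y=0.0, z=0.0, sigma_y=80.0, sigma_z=40.0))
```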
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution reduces to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
Bayesian spatial transformation models with applications in neuroimaging data
Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.
2013-01-01
Summary The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
Gaussian Finite Element Method for Description of Underwater Sound Diffraction
NASA Astrophysics Data System (ADS)
Huang, Dehua
A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.
Chinnaraja, D; Rajalakshmi, R; Srinivasan, T; Velmurugan, D; Jayabharathi, J
2014-04-24
A series of biologically active N-thiocarbamoyl pyrazoline derivatives has been synthesized using anhydrous potassium carbonate as the catalyst. All the synthesized compounds were characterized by FT-IR, (1)H NMR and (13)C NMR spectral studies, LCMS, CHN analysis and X-ray diffraction analysis (compound 7). In order to supplement the XRD parameters, molecular modelling was carried out using Gaussian 03W. From the optimized structure, the energy, dipole moment and HOMO-LUMO energies of all the systems were calculated. Copyright © 2014 Elsevier B.V. All rights reserved.
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinear and uncertain system properties; therefore, a conventional single model may be ill-suited. A local learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable sets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
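A sketch of the ensemble idea on synthetic data: several local GPR models are trained on different input-variable subsets, and their predictions for a new sample are weighted by the Gaussian likelihood of a recent reference measurement. This likelihood weighting is a simple stand-in for the Bayesian posterior weighting described above, and the variable subsets and reference point are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(120, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=len(X))

subsets = [[0, 1], [0, 2], [1, 2]]                      # candidate input-variable sets
models = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
          .fit(X[:, s], y) for s in subsets]

x_ref, y_ref = X[-1], y[-1]                             # reference sample used for weighting
x_new = np.array([0.3, -1.0, 0.5])

weights, preds = [], []
for s, m in zip(subsets, models):
    mu_ref, sd_ref = m.predict(x_ref[s].reshape(1, -1), return_std=True)
    weights.append(np.exp(-0.5 * ((y_ref - mu_ref[0]) / sd_ref[0]) ** 2) / sd_ref[0])
    preds.append(m.predict(x_new[s].reshape(1, -1))[0])

weights = np.array(weights) / np.sum(weights)
print("combined prediction:", np.dot(weights, preds).round(3))
```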
NASA Astrophysics Data System (ADS)
Abdullah, M.; Krishnan, Ganesan; Saliman, Tiffany; Fakaruddin Sidi Ahmad, M.; Bidin, Noriah
2018-03-01
A mirrorless refractometer was studied and analyzed using a quasi-Gaussian beam approach. The Fresnel equation for reflectivity at the interface between two media with different refractive indices was used to calculate the directional reflectivity, R. Various liquid samples with refractive indices from 1.3325 to 1.4657 were used. Experimentally, a fiber bundle probe with a concentric configuration of 16 receiving fibers and a single transmitting fiber was employed to verify the developed models. The sensor performance, in terms of sensitivity, linear range, and resolution, was analyzed and calculated. It is shown that the developed theoretical models are capable of providing quantitative guidance on the output of the sensor with high accuracy. The highest resolution of the sensor was 4.39 × 10-3 refractive index units, obtained by correlating the peak voltage with the refractive index. This resolution is sufficient for determining the specific refractive index increment of most polymer solutions and certain proteins, and also for monitoring bacterial growth. The accuracy, simplicity, and long-term effectiveness of the proposed sensor, together with its non-contact measurement, indicate good potential for commercialization.
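A minimal sketch of the Fresnel reflectivity that drives such a sensor, evaluated at normal incidence over the refractive-index range quoted above. Normal incidence is a simplification of the directional reflectivity treated in the paper, and the fiber index n1 = 1.45 is an assumed value.

```python
# Fresnel reflectivity at normal incidence: R = ((n1 - n2) / (n1 + n2))**2.
import numpy as np

n1 = 1.45                                        # assumed index of the fiber end face
n2 = np.linspace(1.3325, 1.4657, 5)              # sample refractive indices
R = ((n1 - n2) / (n1 + n2)) ** 2
for n, r in zip(n2, R):
    print(f"n = {n:.4f}  ->  R = {r:.2e}")
```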
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Vainstein, Mendeli H.; Lamb, Luis C.; Prado, Sandra D.
2013-03-01
We propose a novel probabilistic model that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability) which is updated during the tournament according to the results of previous games. The updated potential modifies a team future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of final standings of actual editions of the Brazilian tournament (Brasileirão) if the starting potential is the same for all teams. Other leagues such as the Italian (Calcio) and the Spanish (La Liga) tournaments have notoriously non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model with simple initial conditions. However, we show that by setting the initial abilities based on data from previous tournaments, our model is able to capture the stylized statistical features of double round robin system (DRRS) tournaments in general. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: here several teams have been crowned champion in previous editions corroborating that the champion typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. On the other hand, in the Italian and Spanish cases, only a few teams in recent history have won their league tournaments. These leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For the sake of completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and we show that the scores of the Brazilian tournament “Brasileirão” cannot be reproduced. This shows that the evolutionary aspects are not superfluous and play an important role which must be considered in other alternative models. Finally, we analyze the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single to a double peaked histogram of the final classification scores. An interesting scaling is presented for different sized tournaments.
Weakly anomalous diffusion with non-Gaussian propagators
NASA Astrophysics Data System (ADS)
Cressoni, J. C.; Viswanathan, G. M.; Ferreira, A. S.; da Silva, M. A. A.
2012-08-01
A poorly understood phenomenon seen in complex systems is diffusion characterized by Hurst exponent H≈1/2 but with non-Gaussian statistics. Motivated by such empirical findings, we report an exact analytical solution for a non-Markovian random walk model that gives rise to weakly anomalous diffusion with H=1/2 but with a non-Gaussian propagator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Yuli; Zou Xubo; Guo Guangcan
We investigate the economical Gaussian cloning of coherent states with known phase, which produces M copies from N input replicas and can be implemented with degenerate parametric amplifiers and beam splitters. The achievable single-copy fidelity is given by 2M√N / [√N(M − 1) + √((1 + N)(M² + N))], which is larger than the optimal fidelity of universal Gaussian cloning. The cloning machine presented here works without ancillary optical modes and can be regarded as the continuous-variable generalization of the economical cloning machine for qudits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahan, Luhluh K., E-mail: luhluhjahan@gmail.com; Chatterjee, Ashok
2016-05-23
The temperature and size dependence of the ground-state energy of a polaron in a Gaussian quantum dot have been investigated by using a variational technique. It is found that the ground-state energy increases with increasing temperature and decreases with the size of the quantum dot. Also, it is found that the ground-state energy is larger for a three-dimensional quantum dot as compared to a two-dimensional dot.
Preliminary Examination of Pulse Shapes From GLAS Ocean Returns
NASA Astrophysics Data System (ADS)
Swift, T. P.; Minster, B.
2003-12-01
We have examined GLAS data collected over the Pacific ocean during the commission phase of the ICESat mission, in an area where sea state is well documented. The data used for this preliminary analysis were acquired during two passes along track 95, on March 18 and 26 of 2003, along the stretch offshore southern California. These dates were chosen for their lack of cloud cover; large (4.0 m) and small (0.7 m) significant wave heights, respectively; and the presence of waves emanating from single distant Pacific storms. Cloud cover may be investigated using MODIS images (http://acdisx.gsfc.nasa.gov/data/dataset/MODIS/), while models of significant wave heights and wave vectors for offshore California are archived by the Coastal Data Information Program (http://cdip.ucsd.edu/cdip_htmls/models.shtml). We find that the shape of deep-ocean GLAS pulse returns is diagnostic of the state of the ocean surface. A calm surface produces near-Gaussian, single-peaked shot returns. In contrast, a rough surface produces blurred shot returns which often feature multiple peaks; these peaks are typically separated by total path lengths on the order of one meter. Gaussian curves fit to rough-water returns are therefore less reliable and lead to greater measurement error; outliers in the ocean surface elevation product are mostly the result of poorly fit low-energy shot returns. Additionally, beat patterns and aliasing artifacts may arise from the sampling of deep-ocean wave trains by GLAS footprints separated by 140m. The apparent wavelength of such patterns depends not only on the wave frequency, but also on the angle between the ICESat ground track and the azimuth of the wave crests. We present a preliminary analysis of such patterns which appears to be consistent with a simple geometrical model.
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions, so that it adapts sensitively to changing illumination. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is robust to spurious motion caused by waving trees and camera vibration.
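A small sketch of an online (EM-like) update for a single-Gaussian motion model: running mean and variance of the motion vectors are updated with a learning rate, and a vector is attributed to a vehicle only if it is unlikely under the background-motion Gaussian. The learning rate and the 2.5-sigma gate are conventional choices, not the paper's values.

```python
import numpy as np

class OnlineGaussian:
    def __init__(self, dim=2, alpha=0.05):
        self.mu = np.zeros(dim)          # running mean of motion vectors
        self.var = np.ones(dim)          # running variance
        self.alpha = alpha               # learning rate

    def update(self, v):
        self.mu = (1 - self.alpha) * self.mu + self.alpha * v
        self.var = (1 - self.alpha) * self.var + self.alpha * (v - self.mu) ** 2

    def is_vehicle(self, v, k=2.5):
        return bool(np.any(np.abs(v - self.mu) > k * np.sqrt(self.var)))

rng = np.random.default_rng(7)
model = OnlineGaussian()
for _ in range(500):                                  # background motion (e.g., waving trees)
    model.update(rng.normal(0.0, 0.3, size=2))

print(model.is_vehicle(np.array([0.1, -0.2])))        # background-like vector -> False
print(model.is_vehicle(np.array([4.0, 1.5])))         # fast, coherent motion -> True
```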
Diffusion of Super-Gaussian Profiles
ERIC Educational Resources Information Center
Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.
2007-01-01
The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference for Receiver Operating Characteristic (ROC) data. The model is a simple version of a general nonlinear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, the ARS requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
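A minimal sketch of the binormal ROC curve implied by a probit-link model with a zero-one covariate: ROC(t) = Φ(a + b·Φ⁻¹(t)), with the closed-form AUC Φ(a/√(1 + b²)) shown as a check. The parameter values are illustrative, not estimates from the paper's simulations.

```python
import numpy as np
from scipy.stats import norm

a, b = 1.2, 0.9                        # binormal intercept and slope (assumed values)
t = np.linspace(1e-4, 1 - 1e-4, 200)   # false-positive rate grid
roc = norm.cdf(a + b * norm.ppf(t))    # binormal ROC curve

auc = np.trapz(roc, t)                            # numerical AUC
auc_exact = norm.cdf(a / np.sqrt(1 + b ** 2))     # closed-form binormal AUC
print(round(auc, 3), round(auc_exact, 3))
```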
Chen, Tianju; Zhang, Jinzhi; Wu, Jinhu
2016-07-01
The kinetics and energy production of pyrolysis of a lignocellulosic biomass were investigated using a three-parallel Gaussian distribution method in this work. The pyrolysis experiment on pine sawdust was performed using a thermogravimetric-mass spectrometry (TG-MS) analyzer. A three-parallel Gaussian distributed activation energy model (DAEM) was used to describe the thermal decomposition behavior of the three components: hemicellulose, cellulose and lignin. The first, second and third pseudo-components represent the fractions of hemicellulose, cellulose and lignin, respectively. It was found that the model is capable of predicting the pyrolysis behavior of the pine sawdust. The activation energy distribution peaks for the three pseudo-components were centered at 186.8, 197.5 and 203.9 kJ mol(-1) for the pine sawdust, respectively. The evolution profiles of H2, CH4, CO, and CO2 were well predicted using the three-parallel Gaussian distribution model. In addition, the chemical composition of the bio-oil was obtained with a pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) instrument. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
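A minimal proximal-gradient sketch of ℓ1-regularized score matching for a centered Gaussian model with precision matrix K, using the loss J(K) = 0.5·tr(K S K) − tr(K) plus an ℓ1 penalty on the off-diagonal entries. This is a simplified illustration of the Gaussian case, not the paper's implementation; the step size, penalty, and true precision matrix are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
p, n = 5, 400
K_true = np.eye(p); K_true[0, 1] = K_true[1, 0] = 0.4          # sparse true precision matrix
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=n)
S = X.T @ X / n                                                 # sample covariance

lam, step = 0.05, 0.1
K = np.eye(p)
for _ in range(500):
    grad = 0.5 * (K @ S + S @ K) - np.eye(p)                    # gradient of 0.5*tr(KSK) - tr(K)
    K = K - step * grad
    off = ~np.eye(p, dtype=bool)
    K[off] = np.sign(K[off]) * np.maximum(np.abs(K[off]) - step * lam, 0.0)  # soft-threshold
    K = 0.5 * (K + K.T)                                         # keep symmetric

print(np.round(K, 2))                                           # sparse estimate of the precision
```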
Geographically weighted regression model on poverty indicator
NASA Astrophysics Data System (ADS)
Slamet, I.; Nugroho, N. F. T. A.; Muslich
2017-12-01
In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider a Gaussian kernel as the weighting function. GWR uses the diagonal matrix obtained by evaluating the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data, so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of Central Java province. Based on this research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. There are two categories of district that are influenced by different sets of significant factors.
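A small sketch of GWR with a Gaussian kernel: each location gets its own weighted least-squares fit, with weights w = exp(−0.5·(d/h)²) based on distance to the regression point. The coordinates, covariates, and bandwidth h below are synthetic placeholders, not the Central Java data.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50
coords = rng.uniform(0, 100, size=(n, 2))                    # location centroids (synthetic)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + two covariates
beta_true = np.array([10.0, 2.0, -1.5])
y = X @ beta_true + 0.01 * coords[:, 0] * X[:, 1] + rng.normal(0, 0.5, n)  # spatially varying effect

def gwr_coefficients(i, h=30.0):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)                          # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)         # local weighted least squares

print(gwr_coefficients(0).round(2))                          # local coefficients at location 0
print(gwr_coefficients(25).round(2))                         # local coefficients at location 25
```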
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
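A sketch of the comparison on simulated data: an additive-dosage model is fitted by ordinary least squares and by a robust (Huber) linear model using statsmodels, with heavy-tailed noise standing in for non-Gaussian expression residuals. The simulation setup and effect sizes are illustrative, not the paper's design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 300
dosage = rng.integers(0, 3, size=n).astype(float)        # 0/1/2 allele counts
covariate = rng.normal(size=n)                           # e.g., a known confounder
X = sm.add_constant(np.column_stack([dosage, covariate]))
y = 0.4 * dosage + 0.2 * covariate + rng.standard_t(df=2, size=n)   # heavy-tailed noise

ols_fit = sm.OLS(y, X).fit()
rlm_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS dosage effect:", ols_fit.params[1].round(3), "+/-", ols_fit.bse[1].round(3))
print("RLM dosage effect:", rlm_fit.params[1].round(3), "+/-", rlm_fit.bse[1].round(3))
```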
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
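A small sketch of the analysis strategy described above: an ARIMA model is fitted to an intermittent, heat-flux-like time series to strip trends and autocorrelation, and the non-Gaussian features of the residuals are then summarized by skewness and excess kurtosis. The synthetic series and the ARIMA order are illustrative choices, not the simulation data or model order used in the paper.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
n = 2000
flux = np.zeros(n)
for t in range(1, n):
    burst = rng.exponential(2.0) if rng.random() < 0.03 else 0.0   # intermittent transport bursts
    flux[t] = 0.7 * flux[t - 1] + rng.normal(0, 0.2) + burst

resid = ARIMA(flux, order=(2, 0, 1)).fit().resid                   # noise + trend removed
print("residual skewness:", round(stats.skew(resid), 2))
print("residual excess kurtosis:", round(stats.kurtosis(resid), 2))
```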
Spatially Controlled Relay Beamforming
NASA Astrophysics Data System (ADS)
Kalogerias, Dionysios
This thesis is about fusion of optimal stochastic motion control and physical layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication. Optimum placement of network nodes, which could potentially improve the quality of the communication, is not typically considered. However, in most practical settings in physical layer communications, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although it might be (modeled as) random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical system approach to relay beamforming is termed "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling in a rigorous, Bayesian framework. Experimentally accurate and, at the same time, technically precise channel modeling is absolutely essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models for describing the behavior of the log-scale magnitude of the wireless channel:
1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary, Gaussian stochastic field in continuous space and discrete time (say, for instance, time slots). Under such assumptions, spatial and temporal statistical interactions are determined by a set of time and space invariant parameters, which completely determine the mean and covariance of the underlying Gaussian measure. This model is relatively simple to describe, and can be sufficiently characterized, at least for our purposes, both statistically and topologically. Additionally, the model is rather versatile and there is existing experimental evidence supporting its practical applicability. Our contributions are summarized in properly formulating the whole spatiotemporal model in a completely rigorous mathematical setting, under a convenient measure theoretic framework. Such a framework greatly facilitates formulation of meaningful stochastic control problems, where the wireless channel field (or a function of it) can be regarded as a stochastic optimization surface.
2. Conditionally Gaussian Fields, when conditioned on a Markovian channel state: This is a completely novel approach to wireless channel modeling. In this approach, the communication medium is assumed to behave as a partially observable (or hidden) system, where a hidden, global, temporally varying underlying stochastic process, called the channel state, affects the spatial interactions of the actual channel magnitude, evaluated at any set of locations in the plane. More specifically, we assume that, conditioned on the channel state, the wireless channel constitutes an observable, conditionally Gaussian stochastic process. The channel state evolves in time according to a known, possibly non-stationary, non-Gaussian, low-dimensional Markov kernel.
Recognizing the intractability of general nonlinear state estimation, we advocate the use of grid-based approximate nonlinear filters as an effective and robust means for recursive tracking of the channel state. We also propose a sequential spatiotemporal predictor for tracking the channel gains at any point in time and space, providing real-time sequential estimates for the respective channel gain map. In this context, our contributions are manifold. Besides the introduction of the layered channel model previously described, this line of research has resulted in a number of general, asymptotic convergence results, advancing the theory of grid-based approximate nonlinear stochastic filtering. In particular, sufficient conditions ensuring asymptotic optimality are relaxed, and, at the same time, the mode of convergence is strengthened. Although the need for such results originated as an attempt to theoretically characterize the performance of the proposed approximate methods for statistical inference, in regard to the proposed channel modeling approach, they turn out to be of fundamental importance in the areas of nonlinear estimation and stochastic control. The experimental validation of the proposed channel model, as well as the related parameter estimation problem, termed "Markovian Channel Profiling (MCP)", fundamentally important for any practical deployment, are the subject of current, ongoing research. Second, adopting the first of the two aforementioned channel modeling approaches, we consider the spatially controlled relay beamforming problem for an AF network with a single source, a single destination, and multiple, controlled at will, relay nodes. (Abstract shortened by ProQuest.)
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
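As a rough illustration of the parent-Gaussian idea (not the paper's framework or code), the sketch below transforms a Gaussian AR(1) series into a gamma-distributed series by passing it through the Gaussian CDF and the target inverse CDF. The target marginal, the lag-1 parent correlation, and all numbers are hypothetical, and the paper's correlation transformation functions (which map a desired target correlation to the required parent correlation) are not implemented here.

```python
# Minimal sketch: simulate a gamma-distributed process with AR(1)-like
# persistence by back-transforming a "parent" Gaussian AR(1) series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, rho_z = 10_000, 0.8                    # length and lag-1 correlation of the parent Gaussian
target = stats.gamma(a=2.0, scale=1.5)    # hypothetical target marginal distribution

# Parent Gaussian AR(1) with unit variance
z = np.empty(n)
z[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1.0 - rho_z**2)
for t in range(1, n):
    z[t] = rho_z * z[t - 1] + eps[t]

# Back-transform the marginal: Gaussian CDF -> uniform -> target inverse CDF
x = target.ppf(stats.norm.cdf(z))

print("sample mean/var of x:", x.mean(), x.var())
print("lag-1 correlation of x:", np.corrcoef(x[:-1], x[1:])[0, 1])
```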
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows both to be treated as random variables to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), while those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra better fit the inputs to the recognizer. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
MOLECULAR GAS VELOCITY DISPERSIONS IN THE ANDROMEDA GALAXY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caldú-Primo, Anahi; Schruba, Andreas, E-mail: caldu@mpia.de, E-mail: schruba@mpe.mpg.de
In order to characterize the distribution of molecular gas in spiral galaxies, we study the line profiles of CO (1 – 0) emission in Andromeda, our nearest massive spiral galaxy. We compare observations performed with the IRAM 30 m single-dish telescope and with the CARMA interferometer at a common resolution of 23 arcsec ≈ 85 pc × 350 pc and 2.5 km s⁻¹. When fitting a single Gaussian component to individual spectra, the line profile of the single-dish data is a factor of 1.5 ± 0.4 larger than that of the interferometric data. This ratio in line widths is surprisingly similar to the ratios previously observed in two other nearby spirals, NGC 4736 and NGC 5055, but measured at ∼0.5–1 kpc spatial scale. In order to study the origin of the different line widths, we stack the individual spectra in five bins of increasing peak intensity and fit two Gaussian components to the stacked spectra. We find a unique narrow component of FWHM = 7.5 ± 0.4 km s⁻¹ visible in both the single-dish and the interferometric data. In addition, a broad component with FWHM = 14.4 ± 1.5 km s⁻¹ is present in the single-dish data, but cannot be identified in the interferometric data. We interpret this additional broad line width component detected by the single dish as a low-brightness molecular gas component that is extended on spatial scales >0.5 kpc, and thus filtered out by the interferometer. We search for evidence of line broadening by stellar feedback across a range of star formation rates but find no such evidence on ∼100 pc spatial scale when characterizing the line profile by a single Gaussian component.
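The narrow/broad decomposition described above can be illustrated with a generic least-squares fit of one and two Gaussian components to a line profile; the sketch below uses a synthetic spectrum with made-up amplitudes and FWHMs, not the IRAM/CARMA data.

```python
# Minimal sketch: fit single- and double-Gaussian models to a stacked line profile.
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    return gauss(v, a1, v1, s1) + gauss(v, a2, v2, s2)

v = np.linspace(-60.0, 60.0, 241)                       # velocity axis, km/s
rng = np.random.default_rng(1)
# Synthetic "stacked spectrum": narrow + broad component (FWHM = 2.355 * sigma)
profile = two_gauss(v, 1.0, 0.0, 7.5 / 2.355, 0.3, 0.0, 14.4 / 2.355) \
          + 0.02 * rng.standard_normal(v.size)

p1, _ = curve_fit(gauss, v, profile, p0=[1.0, 0.0, 5.0])
p2, _ = curve_fit(two_gauss, v, profile, p0=[1.0, 0.0, 3.0, 0.3, 0.0, 8.0])
print("single-Gaussian FWHM :", 2.355 * abs(p1[2]))
print("two-component FWHMs  :", 2.355 * abs(p2[2]), 2.355 * abs(p2[5]))
```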
Gaussian and Lognormal Models of Hurricane Gust Factors
NASA Technical Reports Server (NTRS)
Merceret, Frank
2009-01-01
A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at various heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides a probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold. The tool produces the probability from each model that the given threshold will be exceeded. This application does have its limits: the models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change the statistical characteristics of those winds. These models were developed along the Central Florida seacoast, and their results may not accurately extrapolate to inland areas, or even to coastal sites that are different from those used to build the models. Although this tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.
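A minimal sketch of the exceedance-probability calculation the tool performs, assuming hypothetical gust-factor statistics (the real models derive the mean and standard deviation of the gust factor from height and mean wind speed, which is not reproduced here):

```python
# Minimal sketch: probability that the peak wind exceeds a threshold, given a
# Gaussian or offset-lognormal model for the gust factor GF = peak / mean.
import math

def p_exceed_gaussian(mean_wind, threshold, mu_gf, sigma_gf):
    """P(peak > threshold) when GF is Gaussian with mean mu_gf, std sigma_gf."""
    gf_required = threshold / mean_wind
    z = (gf_required - mu_gf) / sigma_gf
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_exceed_offset_lognormal(mean_wind, threshold, mu_log, sigma_log, offset):
    """P(peak > threshold) when GF - offset is lognormal(mu_log, sigma_log)."""
    gf_required = threshold / mean_wind - offset
    if gf_required <= 0.0:
        return 1.0
    z = (math.log(gf_required) - mu_log) / sigma_log
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical inputs: 20 m/s mean wind, 30 m/s operational threshold
print(p_exceed_gaussian(20.0, 30.0, mu_gf=1.35, sigma_gf=0.10))
print(p_exceed_offset_lognormal(20.0, 30.0, mu_log=math.log(0.35), sigma_log=0.25, offset=1.0))
```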
Kang; Ih; Kim; Kim
2000-03-01
In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods, such as the random or field incidence approaches, often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, validating the proposed Gaussian function approach.
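As a sketch of the weighting idea, the code below averages an angle-dependent transmission coefficient over a Gaussian directional distribution of incident energy; the thin-panel mass law, the Gaussian width, and all parameter values are stand-ins and are not taken from the paper.

```python
# Minimal sketch: Gaussian-weighted angular average of a transmission
# coefficient, turned into a transmission loss in dB.
import numpy as np

RHO0, C0 = 1.21, 343.0          # air density (kg/m^3) and speed of sound (m/s)

def tau_mass_law(theta, freq, surface_density):
    """Oblique-incidence transmission coefficient of a limp single panel (mass law)."""
    omega = 2.0 * np.pi * freq
    return 1.0 / (1.0 + (omega * surface_density * np.cos(theta) / (2.0 * RHO0 * C0)) ** 2)

def stl_gaussian_weighted(freq, surface_density, sigma_deg=40.0):
    """Transmission loss with a hypothetical Gaussian directional weighting."""
    theta = np.linspace(0.0, np.pi / 2.0, 2001)
    weight = np.exp(-0.5 * (np.degrees(theta) / sigma_deg) ** 2)
    kernel = weight * np.sin(theta) * np.cos(theta)          # energy-weighted solid angle
    tau_avg = np.sum(tau_mass_law(theta, freq, surface_density) * kernel) / np.sum(kernel)
    return -10.0 * np.log10(tau_avg)

print(stl_gaussian_weighted(freq=1000.0, surface_density=10.0))   # dB, illustrative only
```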
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that in high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters: not only can the minimum variance be achieved with a fixed probe state, but the optimal state is also common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values and find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.
Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe
2014-03-01
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally chosen to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
Leading non-Gaussian corrections for diffusion orientation distribution function.
Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali
2014-02-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.
Leading Non-Gaussian Corrections for Diffusion Orientation Distribution Function
Jensen, Jens H.; Helpern, Joseph A.; Tabesh, Ali
2014-01-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed out of the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves upon the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. PMID:24738143
Abbott, Lauren J.; Stevens, Mark J.
2015-12-22
In this study, a coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil–globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
Bourlier, Christophe
2006-08-20
The emissivity of a stationary random rough surface is derived by taking into account multiple reflections and the shadowing effect. The model is applied to the ocean surface. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets that locally reflect the light in the specular direction. In particular, the emissivity contributions from zero, single, and double reflections are analytically calculated, and each contribution is studied numerically by considering a 1D sea surface observed in the near-infrared band. The model is also compared with results computed from a Monte Carlo ray-tracing method.
Modeling of dispersion near roadways based on the vehicle-induced turbulence concept
NASA Astrophysics Data System (ADS)
Sahlodin, Ali M.; Sotudeh-Gharebagh, Rahmat; Zhu, Yifang
A mathematical model is developed for dispersion near roadways by incorporating vehicle-induced turbulence (VIT) into Gaussian dispersion modeling using computational fluid dynamics (CFD). The model is based on the Gaussian plume equation, in which the roadway is regarded as a series of point sources. The Gaussian dispersion parameters are modified by simulation of the roadway using CFD in order to evaluate turbulent kinetic energy (TKE) as a measure of VIT. The model was evaluated against experimental carbon monoxide concentrations downwind of two major freeways reported in the literature. Good agreement was achieved between the model results and the literature data. A significant difference was observed between the model results with and without considering VIT. The difference is particularly large for data very close to the freeways. This model, after evaluation with additional data, may be used as a framework for predicting dispersion and deposition from any roadway for different traffic (vehicle type and speed) conditions.
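A minimal sketch of the point-source summation underlying the model; the CFD-modified dispersion parameters are replaced here by generic, illustrative Briggs-type curves, and all inputs are hypothetical.

```python
# Minimal sketch: sum Gaussian-plume contributions from a row of point sources
# approximating a roadway, at a single downwind receptor.
import numpy as np

def plume_point(q, u, x, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume from one point source (x = downwind distance, m)."""
    if x <= 0.0:
        return 0.0
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = np.exp(-0.5 * ((z - h) / sigma_z) ** 2) + np.exp(-0.5 * ((z + h) / sigma_z) ** 2)
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def roadway_concentration(receptor, source_y, q_per_m, u, h=0.5):
    """Sum contributions from a crosswind line of sources at x = 0 (wind along +x)."""
    xr, yr, zr = receptor
    dy = source_y[1] - source_y[0]
    conc = 0.0
    for ys in source_y:
        x = xr
        sigma_y = 0.08 * x * (1.0 + 0.0001 * x) ** -0.5   # illustrative rural curves
        sigma_z = 0.06 * x * (1.0 + 0.0015 * x) ** -0.5
        conc += plume_point(q_per_m * dy, u, x, yr - ys, zr, h, sigma_y, sigma_z)
    return conc

line = np.arange(-500.0, 500.0, 5.0)        # 1 km roadway discretized every 5 m
print(roadway_concentration((50.0, 0.0, 1.5), line, q_per_m=1e-3, u=2.0))
```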
NASA Astrophysics Data System (ADS)
Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.
2006-11-01
The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) J.-P. Piquemal et al. [J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene water dimers, are presented. All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15kcal/mol error for Coulomb and exchange, respectively. Timing results for single Coulomb energy-force calculations for (H2O)n, n =64, 128, 256, 512, and 1024, in periodic boundary conditions with PME and FFP at two different rms force tolerances are also presented. For the small and intermediate auxiliaries, PME shows faster times than FFP at both accuracies and the advantage of PME widens at higher accuracy, while for the largest auxiliary, the opposite occurs.
Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.
2007-01-01
The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) J.-P. Piquemal et al. [J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene water dimers, are presented. All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15 kcal/mol error for Coulomb and exchange, respectively. Timing results for single Coulomb energy-force calculations for (H2O)n, n=64, 128, 256, 512, and 1024, in periodic boundary conditions with PME and FFP at two different rms force tolerances are also presented. For the small and intermediate auxiliaries, PME shows faster times than FFP at both accuracies and the advantage of PME widens at higher accuracy, while for the largest auxiliary, the opposite occurs. PMID:17115732
NASA Astrophysics Data System (ADS)
Tyagi, Neha; Cherayil, Binny J.
2018-03-01
The increasingly widespread occurrence in complex fluids of particle motion that is both Brownian and non-Gaussian has recently been found to be successfully modeled by a process (frequently referred to as ‘diffusing diffusivity’) in which the white noise that governs Brownian diffusion is itself stochastically modulated by either Ornstein–Uhlenbeck dynamics or by two-state noise. But the model has so far not been able to account for an aspect of non-Gaussian Brownian motion that is also commonly observed: a non-monotonic decay of the parameter that quantifies the extent of deviation from Gaussian behavior. In this paper, we show that the inclusion of memory effects in the model—via a generalized Langevin equation—can rationalise this phenomenon.
Cosmic microwave background power asymmetry from non-Gaussian modulation.
Schmidt, Fabian; Hui, Lam
2013-01-04
Non-Gaussianity in the inflationary perturbations can couple observable scales to modes of much longer wavelength (even superhorizon), leaving as a signature a large-angle modulation of the observed cosmic microwave background power spectrum. This provides an alternative origin for a power asymmetry that is otherwise often ascribed to a breaking of statistical isotropy. The non-Gaussian modulation effect can be significant even for typical ~10⁻⁵ perturbations while respecting current constraints on non-Gaussianity if the squeezed limit of the bispectrum is sufficiently infrared divergent. Just such a strongly infrared-divergent bispectrum has been claimed for inflation models with a non-Bunch-Davies initial state, for instance. Upper limits on the observed cosmic microwave background power asymmetry place stringent constraints on the duration of inflation in such models.
Degeneracy of energy levels of pseudo-Gaussian oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iacob, Theodor-Felix; Iacob, Felix, E-mail: felix@physics.uvt.ro; Lute, Marina
2015-12-07
We study the main features of the spectral properties of isotropic radial pseudo-Gaussian oscillators, focusing on the degeneracy of the energy levels with respect to the orbital angular momentum quantum number. In previous work [6] we showed that the pseudo-Gaussian oscillators belong to the class of quasi-exactly solvable models and found an exact solution.
Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.
2018-07-01
Geostatistical analysis has been introduced over half a century ago to allow quantifying seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.
Bayesian spatial transformation models with applications in neuroimaging data.
Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G
2013-12-01
The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.
The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation
NASA Technical Reports Server (NTRS)
Campbell, Joel
2007-01-01
A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
The SMM model as a boundary value problem using the discrete diffusion equation.
Campbell, Joel
2007-12-01
A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
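As a hedged illustration of the Bessel-function form mentioned in both abstracts: for an unbounded, symmetric single-step mutation process in continuous time, the transition probability can be written with modified Bessel functions of the first kind. The sketch below uses that textbook expression and omits the reflecting boundary treated in the paper; the rate and time values are arbitrary.

```python
# Minimal sketch, not the paper's exact solution: for a symmetric single-step
# mutation process (up/down each at rate lambda/2, no boundary),
# P(n, t) = exp(-lambda*t) * I_n(lambda*t), with I_n the modified Bessel
# function of the first kind.
import numpy as np
from scipy.special import ive   # exponentially scaled I_n, numerically stable

def smm_free_probability(n, lam, t):
    """P(net repeat-length change = n after time t) for the unbounded symmetric SMM."""
    return ive(n, lam * t)      # ive(n, x) = exp(-x) * iv(n, x) for x > 0

lam, t = 1.0, 20.0
n_values = np.arange(-30, 31)
probs = np.array([smm_free_probability(n, lam, t) for n in n_values])
print("total probability (should be ~1):", probs.sum())
print("variance (should be ~lambda*t):", (n_values ** 2 * probs).sum())
```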
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1993-01-01
One of the crucial aspects of density perturbations produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.
NASA Astrophysics Data System (ADS)
Sakhr, Jamal; Nieminen, John M.
2018-03-01
Two decades ago, Wang and Ong, [Phys. Rev. A 55, 1522 (1997)], 10.1103/PhysRevA.55.1522 hypothesized that the local box-counting dimension of a discrete quantum spectrum should depend exclusively on the nearest-neighbor spacing distribution (NNSD) of the spectrum. In this Rapid Communication, we validate their hypothesis by deriving an explicit formula for the local box-counting dimension of a countably-infinite discrete quantum spectrum. This formula expresses the local box-counting dimension of a spectrum in terms of single and double integrals of the NNSD of the spectrum. As applications, we derive an analytical formula for Poisson spectra and closed-form approximations to the local box-counting dimension for spectra having Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE) spacing statistics. In the Poisson and GOE cases, we compare our theoretical formulas with the published numerical data of Wang and Ong and observe excellent agreement between their data and our theory. We also study numerically the local box-counting dimensions of the Riemann zeta function zeros and the alternate levels of GOE spectra, which are often used as numerical models of spectra possessing GUE and GSE spacing statistics, respectively. In each case, the corresponding theoretical formula is found to accurately describe the numerically computed local box-counting dimension.
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables in which they result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such non-Gaussian framework. In this study, we focus on the estimation of the grazing preferences parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normal distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L. : Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation : application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi :10.1016/j.jmarsys.2011.07.007 [2] Gelman A. : Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
Yang, Sejung; Lee, Byung-Uk
2015-01-01
In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
NASA Astrophysics Data System (ADS)
Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.
2012-04-01
The ionosphere is a very important part of space weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere in midlatitude regions can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proved to be a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as the model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO in terms of number of iterations, accuracy in parameter estimation and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated and the performance of mPSO is tested with respect to these bounds. For global models, the sample points that are used in optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as the Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction for various regional and global models. The normalized reconstruction error varies from 10⁻⁴ for constant surfaces to 10⁻³ for quadratic surfaces in regional models sampled with regional networks. Even for the case of a severe geomagnetic storm that affects measurements globally, with the IGS network the reconstruction error is on the order of 10⁻¹, even though individual parameters have higher normalized errors. The modified PSO technique proved to be a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Fiori, Aldo; Volpi, Elena; Zarlenga, Antonio; Bohling, Geoffrey C
2015-08-01
The impact of the logconductivity (Y=ln K) distribution fY on transport at the MADE site is analyzed. Our principal interest is in non-Gaussian fY characterized by heavier tails than the Gaussian. Both the logconductivity moments and fY itself are inferred, taking advantage of the detailed measurements of Bohling et al. (2012). The resulting logconductivity distribution displays heavier tails than the Gaussian, although the departure from Gaussianity is not significant. The effect of the logconductivity distribution on the breakthrough curve (BTC) is studied through an analytical, physically based model. It is found that the non-Gaussianity of the MADE logconductivity distribution does not strongly affect the BTC. Counterintuitively, assuming heavier tailed distributions for Y, with same variance, leads to BTCs which are more symmetrical than those for the Gaussian fY, with less pronounced preferential flow. Results indicate that the impact of strongly non-Gaussian, heavy tailed distributions on solute transport in heterogeneous porous formations can be significant, especially in the presence of high heterogeneity, resulting in reduced preferential flow and retarded peak arrivals. Copyright © 2015 Elsevier B.V. All rights reserved.
Determination of Cross-Sectional Area of Focused Picosecond Gaussian Laser Beam
NASA Technical Reports Server (NTRS)
Ledesma, Rodolfo; Fitz-Gerald, James; Palmieri, Frank; Connell, John
2018-01-01
Measurement of the waist diameter of a focused Gaussian beam at the 1/e² intensity, also referred to as spot size, is key to determining the fluence in laser processing experiments. Spot size measurements are also helpful to calculate the threshold energy and threshold fluence of a given material. This work reports an application of a conventional method, by analyzing single laser-ablated spots for different laser pulse energies, to determine the cross-sectional area of a focused Gaussian beam, which has a nominal pulse width of approx. 10 ps. Polished tungsten was used as the target material, due to its low surface roughness and low ablation threshold, to measure the beam waist diameter. From the ablative spot measurements, the ablation threshold fluence of the tungsten substrate was also calculated.
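A minimal sketch of the conventional spot-size analysis (commonly attributed to Liu), assuming the usual relation D² = 2 w₀² ln(E/E_th) between crater diameter and pulse energy; the measurement values below are invented, not the tungsten data from this work.

```python
# Minimal sketch: linear fit of squared crater diameter vs. log pulse energy
# yields the 1/e^2 beam waist radius w0 and the ablation threshold energy E_th.
import numpy as np

energy_uJ = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0])      # hypothetical pulse energies
diam_um = np.array([9.5, 13.2, 17.0, 20.3, 23.1, 26.4])     # hypothetical crater diameters

slope, intercept = np.polyfit(np.log(energy_uJ), diam_um ** 2, 1)
w0 = np.sqrt(slope / 2.0)                   # 1/e^2 beam waist radius (um)
e_threshold = np.exp(-intercept / slope)    # ablation threshold energy (uJ)
fluence_th = 2.0 * e_threshold / (np.pi * w0 ** 2)   # peak threshold fluence (uJ/um^2)

print(f"w0 = {w0:.1f} um, E_th = {e_threshold:.2f} uJ, F_th = {fluence_th:.3g} uJ/um^2")
```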
A Non-Gaussian Stock Price Model: Options, Credit and a Multi-Timescale Memory
NASA Astrophysics Data System (ADS)
Borland, L.
We review a recently proposed model of stock prices, based on a statistical feedback model that results in a non-Gaussian distribution of price changes. Applications to option pricing and the pricing of debt are discussed. A generalization to account for feedback effects over multiple timescales is also presented. This model reproduces most of the stylized facts (i.e., statistical anomalies) observed in real financial markets.
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning
The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
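A rough sketch of a Gaussian-process regression of PM2.5 on AOD plus coordinates, using scikit-learn rather than the paper's Bayesian hierarchical model; the data are synthetic and the kernel choice is a plain stand-in.

```python
# Minimal sketch: GP regression where residual spatial structure is absorbed
# by the covariance function over (AOD, lon, lat).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(42)
n = 300
lon, lat = rng.uniform(115, 122, n), rng.uniform(29, 35, n)   # hypothetical region
aod = rng.gamma(2.0, 0.3, n)
pm25 = 30 * aod + 5 * np.sin(lon) + 3 * np.cos(lat) + rng.normal(0, 3, n)   # synthetic target

X = np.column_stack([aod, lon, lat])
kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, pm25)
print("in-sample R^2:", gp.score(X, pm25))
```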
An optimal control approach to the design of moving flight simulators
NASA Technical Reports Server (NTRS)
Sivan, R.; Ish-Shalom, J.; Huang, J.-K.
1982-01-01
An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and a third, empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
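The LU-decomposition resampling idea can be sketched as follows: decorrelate the observed values with the Cholesky factor of a modeled covariance, bootstrap the approximately independent residuals, and re-correlate them. The exponential covariance model, its parameters, and the sample itself are placeholders, not the paper's data.

```python
# Minimal sketch: spatially correlated bootstrap resampling via Cholesky (LU) factors.
import numpy as np

rng = np.random.default_rng(7)

def exp_cov(coords, sill=1.0, rng_param=10.0, nugget=1e-6):
    """Exponential covariance matrix for a set of 2D locations (placeholder model)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / rng_param) + nugget * np.eye(len(coords))

coords = rng.uniform(0.0, 50.0, size=(60, 2))        # sample locations
L = np.linalg.cholesky(exp_cov(coords))
z = L @ rng.standard_normal(60)                      # one "observed" correlated sample

def bootstrap_resample(z, L, rng):
    white = np.linalg.solve(L, z)                    # decorrelate the sample
    idx = rng.integers(0, len(white), len(white))    # classical bootstrap resample
    return L @ white[idx]                            # re-impose the spatial correlation

resamples = np.array([bootstrap_resample(z, L, rng) for _ in range(200)])
print("average resample variance:", resamples.var(axis=1).mean())
```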
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
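A minimal one-dimensional sketch of the Gaussianization step using scipy's single-parameter Box-Cox transform (the paper works with multivariate generalizations of this family):

```python
# Minimal sketch: Gaussianize a skewed, unimodal posterior sample with a
# maximum-likelihood Box-Cox transformation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
chain = rng.lognormal(mean=0.0, sigma=0.6, size=20_000)   # skewed stand-in for an MCMC sample

transformed, lam = stats.boxcox(chain)    # lambda chosen by maximum likelihood
print("optimal Box-Cox lambda:", lam)
print("skewness before/after:", stats.skew(chain), stats.skew(transformed))
```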
Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis
2013-06-01
We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
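For concreteness, local-type non-Gaussianity of the kind referred to above is commonly written as Phi = phi + f_NL (phi² − ⟨phi²⟩) applied to a Gaussian field phi; the sketch below generates such a field on a grid with arbitrary amplitude and f_NL, purely as an illustration of the transformation (it does not reproduce the paper's PDF expressions).

```python
# Minimal sketch: quadratic local transformation of a Gaussian field.
import numpy as np

rng = np.random.default_rng(11)
phi = rng.standard_normal((256, 256)) * 1e-5     # Gaussian field with ~1e-5 amplitude
f_nl = 100.0                                     # arbitrary non-Gaussianity amplitude
phi_ng = phi + f_nl * (phi ** 2 - np.mean(phi ** 2))

skewness = np.mean((phi_ng - phi_ng.mean()) ** 3) / phi_ng.std() ** 3
print("skewness of the non-Gaussian field:", skewness)
```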
Gaussian-Beam Laser-Resonator Program
NASA Technical Reports Server (NTRS)
Cross, Patricia L.; Bair, Clayton H.; Barnes, Norman
1989-01-01
Gaussian Beam Laser Resonator Program models laser resonators by use of Gaussian-beam-propagation techniques. Used to determine radii of beams as functions of position in laser resonators. Algorithm used in program has three major components. First, ray-transfer matrix for laser resonator must be calculated. Next, initial parameters of beam calculated. Finally, propagation of beam through optical elements computed. Written in Microsoft FORTRAN (Version 4.01).
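A short sketch of the Gaussian-beam propagation technique the program is based on, written in Python rather than FORTRAN: the complex beam parameter q is pushed through ray-transfer (ABCD) matrices and the beam radius is read off at each step. The wavelength, waist, and element values are illustrative, not taken from the program.

```python
# Minimal sketch: propagate the complex beam parameter q through ABCD matrices.
import numpy as np

def q_from_waist(w0, wavelength):
    """Complex beam parameter at a waist (wavefront radius R = infinity)."""
    z_rayleigh = np.pi * w0 ** 2 / wavelength
    return 1j * z_rayleigh

def propagate(q, abcd):
    a, b, c, d = abcd.ravel()
    return (a * q + b) / (c * q + d)

def spot_radius(q, wavelength):
    """Beam radius from 1/q = 1/R - i*wavelength/(pi*w^2)."""
    return np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))

wavelength = 1.064e-6                                # m, illustrative
q = q_from_waist(w0=0.5e-3, wavelength=wavelength)

def free_space(length):
    return np.array([[1.0, length], [0.0, 1.0]])

def thin_lens(focal):
    return np.array([[1.0, 0.0], [-1.0 / focal, 1.0]])

for element in (free_space(0.5), thin_lens(0.25), free_space(0.3)):
    q = propagate(q, element)
    print("beam radius after element: %.3f mm" % (1e3 * spot_radius(q, wavelength)))
```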
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), although CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other direction. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
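A rough sketch of the fitting-and-testing step, using Python's scipy rather than the R packages the authors used, and synthetic data in place of the actual closing prices:
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Placeholder for a series of daily closing prices; here drawn from an NIG law
    prices = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=100.0, scale=1.5,
                                    size=1000, random_state=rng)

    # Maximum likelihood fit of the four NIG parameters
    a, b, loc, scale = stats.norminvgauss.fit(prices)

    # Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
    # (note: testing against fitted parameters makes the p-value optimistic)
    ks_stat, p_value = stats.kstest(prices, 'norminvgauss', args=(a, b, loc, scale))
    print("fitted (a, b, loc, scale):", (a, b, loc, scale))
    print("KS statistic %.3f, p-value %.3f" % (ks_stat, p_value))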
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities using scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
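The core surrogate construction, preserving the power spectrum while randomizing Fourier phases, can be sketched for a plain 1-D signal (the study applies a scale-dependent variant to spherical CMB maps):
    import numpy as np

    def phase_randomized_surrogate(x, rng):
        """Surrogate with the same power spectrum as x but shuffled Fourier phases."""
        X = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, X.size)
        phases[0] = 0.0                      # keep the mean untouched
        if x.size % 2 == 0:
            phases[-1] = 0.0                 # Nyquist bin must stay real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

    rng = np.random.default_rng(2)
    signal = np.cumsum(rng.standard_normal(4096))        # toy correlated signal
    surr = phase_randomized_surrogate(signal, rng)

    # Power spectra agree; higher-order correlations are destroyed
    print(np.allclose(np.abs(np.fft.rfft(signal)), np.abs(np.fft.rfft(surr))))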
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian process regression, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
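A schematic of this kind of model comparison, with synthetic stand-ins for the three input variables and the market impact cost, and with scikit-learn defaults rather than the tuned models of the paper:
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(500, 3))                       # placeholder predictors
    y = 0.4 * np.sqrt(X[:, 0]) + 0.2 * X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = {
        "GP": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
        "SVR": SVR(C=10.0, epsilon=0.01),
        "MLP": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "MAE: %.4f" % mean_absolute_error(y_te, model.predict(X_te)))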
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2016-04-01
This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the Gaussianity metric of GRACE equivalent water height (EWH) over the selected basins; the EWH kurtosis was selected as the metric of Gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have also been developed for sub-regional scale basins.
NASA Astrophysics Data System (ADS)
Friese, M. E. J.; Rubinsztein-Dunlop, H.; Heckenberg, N. R.; Dearden, E. W.
1996-12-01
A single-beam gradient trap could potentially be used to hold a stylus for scanning force microscopy. With a view to development of this technique, we modeled the optical trap as a harmonic oscillator and therefore characterized it by its force constant. We measured force constants and resonant frequencies for 1-4-μm-diameter polystyrene spheres in a single-beam gradient trap using measurements of backscattered light. Force constants were determined with both Gaussian and doughnut laser modes, with powers of 3 and 1 mW, respectively. Typical values for spring constants were measured to be between 10^-6 and 4 x 10^-6 N/m. The resonant frequencies of trapped particles were measured to be between 1 and 10 kHz, and the rms amplitudes of oscillations were estimated to be around 40 nm. Our results confirm that the use of the doughnut mode for single-beam trapping is more efficient in the axial direction.
Gaussian solitary waves and compactons in Fermi–Pasta–Ulam lattices with Hertzian potentials
James, Guillaume; Pelinovsky, Dmitry
2014-01-01
We consider a class of fully nonlinear Fermi–Pasta–Ulam (FPU) lattices, consisting of a chain of particles coupled by fractional power nonlinearities of order α>1. This class of systems incorporates a classical Hertzian model describing acoustic wave propagation in chains of touching beads in the absence of precompression. We analyse the propagation of localized waves when α is close to unity. Solutions varying slowly in space and time are searched with an appropriate scaling, and two asymptotic models of the chain of particles are derived consistently. The first one is a logarithmic Korteweg–de Vries (KdV) equation and possesses linearly orbitally stable Gaussian solitary wave solutions. The second model consists of a generalized KdV equation with Hölder-continuous fractional power nonlinearity and admits compacton solutions, i.e. solitary waves with compact support. When α is close to unity, we numerically establish the asymptotically Gaussian shape of exact FPU solitary waves with near-sonic speed and analytically check the pointwise convergence of compactons towards the limiting Gaussian profile. PMID:24808748
Chialvo, Ariel A.; Vlcek, Lukas
2014-11-01
We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compared the outcomes of the thermodynamic, microstructural, and polarization behavior.
Noise effects in nonlinear biochemical signaling
NASA Astrophysics Data System (ADS)
Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert
2012-01-01
It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show via an exact solution of the Gaussian model that, even when such nonphysical fluctuations are rare, the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.
The Gaussian Laser Angular Distribution in HYDRA's 3D Laser Ray Trace Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepke, Scott M.
In this note, the angular distribution of rays launched by the 3D LZR ray trace package is derived for Gaussian beams (npower=2) with bm model=±3. Beams with bm model=+3 have a nearly flat distribution, and beams with bm model=-3 have a nearly linear distribution when the spot size is large compared to the wavelength.
Zhang, Xiaoping; Nieforth, Keith; Lang, Jean-Marie; Rouzier-Panis, Regine; Reynes, Jacques; Dorr, Albert; Kolis, Stanley; Stiles, Mark R; Kinchelow, Tosca; Patel, Indravadan H
2002-07-01
Enfuvirtide (T-20) is the first of a novel class of human immunodeficiency virus (HIV) drugs that block gp41-mediated viral fusion to host cells. The objectives of this study were to develop a structural pharmacokinetic model that would adequately characterize the absorption and disposition of enfuvirtide pharmacokinetics after both intravenous and subcutaneous administration and to evaluate the dose proportionality of enfuvirtide pharmacokinetic parameters at a subcutaneous dose higher than that currently used in phase III studies. Twelve patients with HIV infection received 4 single doses of enfuvirtide separated by a 1-week washout period in an open-label, randomized, 4-way crossover fashion. The doses studied were 90 mg (intravenous) and 45 mg, 90 mg, and 180 mg (subcutaneous). Serial blood samples were collected up to 48 hours after each dose. Plasma enfuvirtide concentrations were measured with use of a validated liquid chromatography-tandem mass spectrometry method. Enfuvirtide plasma concentration-time data after subcutaneous administration were well described by an inverse Gaussian density function-input model linked to a 2-compartment open distribution model with first-order elimination from the central compartment. The model-derived mean pharmacokinetic parameters (+/-SD) were volume of distribution of the central compartment (3.8 +/- 0.8 L), volume of distribution of the peripheral compartment (1.7 +/- 0.6 L), total clearance (1.44 +/- 0.30 L/h), intercompartmental distribution (2.3 +/- 1.1 L/h), bioavailability (89% +/- 11%), and mean absorption time (7.26 hours, 8.65 hours, and 9.79 hours for the 45-mg, 90-mg, and 180-mg dose groups, respectively). The terminal half-life increased from 3.46 to 4.35 hours for the subcutaneous dose range from 45 to 180 mg. An inverse Gaussian density function-input model linked to a 2-compartment open distribution model with first-order elimination from the central compartment was appropriate to describe complex absorption and disposition kinetics of enfuvirtide plasma concentration-time data after subcutaneous administration to patients with HIV infection. Enfuvirtide was nearly completely absorbed from subcutaneous depot, and pharmacokinetic parameters were linear up to a dose of 180 mg in this study.
Robust Gaussian Graphical Modeling via l1 Penalization
Sun, Hokeun; Li, Hongzhe
2012-01-01
Gaussian graphical models have been widely used as an effective method for studying the conditional independence structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much each observation deviates, where the deviation of an observation is measured by its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where the nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical lasso. PMID:23020775
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
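For orientation, the standard Tofts model underlying the estimation is a convolution of the arterial input function with an exponential kernel; the sketch below is a generic discretization with an assumed toy input function, and does not reproduce the paper's Gaussian-process likelihood or coordinate-descent estimator.
    import numpy as np

    def tofts_concentration(t, cp, ktrans, kep):
        """Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) exp(-kep*(t-u)) du."""
        dt = t[1] - t[0]                      # assumes a uniform time grid
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: t.size] * dt

    t = np.arange(0, 300, 1.0)                                  # seconds
    cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)                # toy arterial input function
    ct = tofts_concentration(t, cp, ktrans=0.25 / 60, kep=0.8 / 60)
    print("peak tissue concentration: %.3f" % ct.max())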
Tests for Gaussianity of the MAXIMA-1 cosmic microwave background map.
Wu, J H; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D
2001-12-17
Gaussianity of the cosmological perturbations is one of the key predictions of standard inflation, but it is violated by other models of structure formation such as cosmic defects. We present the first test of the Gaussianity of the cosmic microwave background (CMB) on subdegree angular scales, where deviations from Gaussianity are most likely to occur. We apply the methods of moments, cumulants, the Kolmogorov test, the χ² test, and Minkowski functionals in eigen, real, Wiener-filtered, and signal-whitened spaces, to the MAXIMA-1 CMB anisotropy data. We find that the data, which probe angular scales between 10 arcmin and 5 deg, are consistent with Gaussianity. These results show consistency with the standard inflation and place constraints on the existence of cosmic defects.
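The first few of these statistics are easy to reproduce on a one-dimensional array of (whitened) map pixels; a minimal sketch with synthetic Gaussian data as a placeholder, leaving out the Minkowski functionals and the eigen/Wiener-space analyses:
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    pixels = rng.standard_normal(50000)              # stand-in for whitened map pixels

    z = (pixels - pixels.mean()) / pixels.std()
    print("skewness:", stats.skew(z))
    print("excess kurtosis:", stats.kurtosis(z))      # 0 for a Gaussian
    print("KS test vs N(0,1):", stats.kstest(z, 'norm'))

    # Chi-square test of a binned histogram against Gaussian expectations
    edges = np.linspace(-3.5, 3.5, 36)
    observed, _ = np.histogram(z, bins=edges)
    expected = z.size * np.diff(stats.norm.cdf(edges))
    chi2 = np.sum((observed - expected) ** 2 / expected)
    print("chi-square statistic:", chi2, "with approx. dof", observed.size - 1)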
Paint stripping with high power flattened Gaussian beams
NASA Astrophysics Data System (ADS)
Forbes, Andrew; du Preez, Neil C.; Belyi, Vladimir; Botha, Lourens R.
2009-08-01
In this paper we present results on improved paint stripping performance with an intra-cavity generated Flattened Gaussian Beam (FGB). A resonator with suitable diffractive optical elements was designed in order to produce a single mode flat-top like laser beam as the output. The design was implemented in a TEA CO2 laser outputting more than 5 J per pulse in the desired mode. The FGB showed improved performance in a paint stripping application due to its uniformity of intensity, and high energy extraction from the cavity.
Gaussian States Minimize the Output Entropy of One-Mode Quantum Gaussian Channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario; Giovannetti, Vittorio
2017-04-01
We prove the long-standing conjecture stating that Gaussian thermal input states minimize the output von Neumann entropy of one-mode phase-covariant quantum Gaussian channels among all the input states with a given entropy. Phase-covariant quantum Gaussian channels model the attenuation and the noise that affect any electromagnetic signal in the quantum regime. Our result is crucial to prove the converse theorems for both the triple trade-off region and the capacity region for broadcast communication of the Gaussian quantum-limited amplifier. Our result extends to the quantum regime the entropy power inequality that plays a key role in classical information theory. Our proof exploits a completely new technique based on the recent determination of the p →q norms of the quantum-limited amplifier [De Palma et al., arXiv:1610.09967]. This technique can be applied to any quantum channel.
Sound Speed of Primordial Fluctuations in Supergravity Inflation.
Hetz, Alexander; Palma, Gonzalo A
2016-09-02
We study the realization of slow-roll inflation in N=1 supergravities where inflation is the result of the evolution of a single chiral field. When there is only one flat direction in field space, it is possible to derive a single-field effective field theory parametrized by the sound speed c_{s} at which curvature perturbations propagate during inflation. The value of c_{s} is determined by the rate of bend of the inflationary path resulting from the shape of the F-term potential. We show that c_{s} must respect an inequality that involves the curvature tensor of the Kähler manifold underlying supergravity, and the ratio M/H between the mass M of fluctuations orthogonal to the inflationary path, and the Hubble expansion rate H. This inequality provides a powerful link between observational constraints on primordial non-Gaussianity and information about the N=1 supergravity responsible for inflation. In particular, the inequality does not allow for suppressed values of c_{s} (values smaller than c_{s}∼0.4) unless (a) the ratio M/H is of order 1 or smaller, and (b) the fluctuations of mass M affect the propagation of curvature perturbations by inducing on them a nonlinear dispersion relation during horizon crossing. Therefore, if large non-Gaussianity is observed, supergravity models of inflation would be severely constrained.
pKa predictions for proteins, RNAs, and DNAs with the Gaussian dielectric function using DelPhi pKa.
Wang, Lin; Li, Lin; Alexov, Emil
2015-12-01
We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins from the PPD dataset and achieved an overall RMSD of 0.77. In particular, an RMSD of 0.55 was achieved for surface residues, while an RMSD of 1.1 was obtained for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa's. Investigations showed that predictions for most of the buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, the pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. © 2015 Wiley Periodicals, Inc.
Vegetation Monitoring with Gaussian Processes and Latent Force Models
NASA Astrophysics Data System (ADS)
Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David
2017-04-01
Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem, where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function that allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.
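The basic GP retrieval step with predictive uncertainty, one of the advantages mentioned above, looks roughly like the following; the synthetic reflectances and target variable are placeholders for in situ biophysical measurements.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    X = rng.uniform(0, 1, size=(60, 4))                 # e.g. reflectances in 4 bands
    y = 3.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(60)   # e.g. LAI

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X, y)

    X_new = rng.uniform(0, 1, size=(5, 4))
    mean, std = gp.predict(X_new, return_std=True)      # predictive mean and 1-sigma band
    for m, s in zip(mean, std):
        print("prediction %.2f +/- %.2f (1 sigma)" % (m, s))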
Flexible link functions in nonparametric binary regression with Gaussian process priors.
Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K
2016-09-01
In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular, to model the latent structure in a binary regression model, allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which have either only a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
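One common way of writing such a generalized extreme value link (the exact parameterization and sign convention in the paper may differ) maps the linear predictor eta to a success probability through the GEV cumulative distribution function, with the shape parameter xi controlling the skewness:
    import numpy as np

    def gev_link_prob(eta, xi):
        """P(y=1 | eta) = 1 - F_GEV(-eta), with F_GEV(x) = exp(-(1 + xi*x)_+^(-1/xi)).

        This is one assumed parameterization of the GEV link; xi controls the
        skewness, and xi -> 0 recovers the complementary log-log link.
        """
        eta = np.asarray(eta, dtype=float)
        if abs(xi) < 1e-8:
            return 1.0 - np.exp(-np.exp(eta))          # Gumbel / cloglog limit
        base = 1.0 - xi * eta                          # support requires base > 0
        p = np.where(xi > 0, 1.0, 0.0) * np.ones_like(eta)   # limiting value off-support
        inside = base > 0
        p[inside] = 1.0 - np.exp(-base[inside] ** (-1.0 / xi))
        return p

    eta = np.linspace(-2, 2, 9)
    for xi in (-0.3, 0.0, 0.3):
        print("xi = %+.1f:" % xi, np.round(gev_link_prob(eta, xi), 3))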
NASA Astrophysics Data System (ADS)
Wolfsteiner, Peter; Breuer, Werner
2013-10-01
The assessment of fatigue load under random vibrations is usually based on load spectra. Typically, they are computed with counting methods (e.g. Rainflow) based on a time-domain signal. Alternatively, methods are available (e.g. Dirlik) enabling the estimation of load spectra directly from power spectral densities (PSDs) of the corresponding time signals; the knowledge of the time signal is then not necessary. These PSD-based methods have the enormous advantage that, if the signal to be assessed results for example from a finite-element-based vibration analysis, the simulation of PSDs in the frequency domain outmatches by far the simulation of time signals in the time domain in terms of computation time. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD-based simulation of vibrations, and also of the PSD-based load spectra estimation, is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases, usually only computationally intensive time-domain calculations produce accurate results. This paper presents a method dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation-time advantages. Essentially, it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures. Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The linearity assumption is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian vibrations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. Ito calculus [6], which is usually not part of commercial codes).
There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on the fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broad-banded loads. It is based on the knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As already described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering that the excitations on railway vehicles caused by the wheel-rail contact are highly non-Gaussian, the fast PSD analysis in the frequency domain cannot simply be combined with PSD-based load spectra prediction methods.
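For reference, the Gaussian-case ingredients of such PSD-based load spectrum methods are the spectral moments of the response PSD, from which zero-crossing and peak rates follow; a minimal sketch with a made-up PSD (the Dirlik weighting itself and the non-Gaussian decomposition proposed in the paper are not reproduced here):
    import numpy as np

    def spectral_moments(freq, psd, orders=(0, 1, 2, 4)):
        """Spectral moments m_k = int f**k * S(f) df of a one-sided PSD."""
        return {k: np.trapz(freq ** k * psd, freq) for k in orders}

    # Toy one-sided response PSD, purely illustrative (units e.g. (m/s^2)^2 / Hz)
    freq = np.linspace(0.1, 100.0, 2000)
    psd = 1.0 / (1.0 + ((freq - 20.0) / 5.0) ** 2)

    m = spectral_moments(freq, psd)
    rms = np.sqrt(m[0])
    nu_0 = np.sqrt(m[2] / m[0])          # expected rate of zero up-crossings (Hz)
    nu_p = np.sqrt(m[4] / m[2])          # expected rate of peaks (Hz)
    alpha = nu_0 / nu_p                  # irregularity factor (1 = narrow band)
    print("rms %.3f, zero-crossing rate %.1f Hz, peak rate %.1f Hz, alpha %.2f"
          % (rms, nu_0, nu_p, alpha))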
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K
2018-02-01
In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems
NASA Astrophysics Data System (ADS)
Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain
2016-08-01
In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the GBs that depend either on a paraxial or a far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case for which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection, etc.), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on an FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in-first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid the error propagation induced by sparks on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filtering within one pixel clock period, and is therefore suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
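The row-column separation used in the second step is easy to verify in software; the snippet below reproduces one scale of the filter with an arbitrary kernel size and sigma and checks it against a full 2-D convolution.
    import numpy as np
    from scipy.ndimage import convolve1d
    from scipy.signal import convolve2d

    def gaussian_kernel_1d(sigma, radius):
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    rng = np.random.default_rng(6)
    image = rng.uniform(size=(256, 256))
    k = gaussian_kernel_1d(sigma=2.0, radius=5)

    # Separable filtering: one 1-D pass along columns, one along rows
    separable = convolve1d(convolve1d(image, k, axis=0), k, axis=1)

    # Full 2-D convolution with the outer-product kernel gives the same result
    full = convolve2d(image, np.outer(k, k), mode='same', boundary='symm')
    print("max difference:", np.abs(separable - full).max())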
On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces
NASA Astrophysics Data System (ADS)
Fyodorov, Y. V.; Bouchaud, J.-P.
2007-12-01
An N-dimensional Gaussian landscape with multiscale, translation-invariant, logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the limit of high dimension, N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida's generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimension N ≥ 1.
Statistical turbulence theory and turbulence phenomenology
NASA Technical Reports Server (NTRS)
Herring, J. R.
1973-01-01
The application of deductive turbulence theory for validity determination of turbulence phenomenology at the level of second-order, single-point moments is considered. Particular emphasis is placed on the phenomenological formula relating the dissipation to the turbulence energy and the Rotta-type formula for the return to isotropy. Methods which deal directly with most or all the scales of motion explicitly are reviewed briefly. The statistical theory of turbulence is presented as an expansion about randomness. Two concepts are involved: (1) a modeling of the turbulence as nearly multipoint Gaussian, and (2) a simultaneous introduction of a generalized eddy viscosity operator.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?
Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R
2018-04-30
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
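The two-stage fixed-effect computation under discussion reduces to an inverse-variance weighted average of the per-study estimates; a minimal sketch with made-up study results:
    import numpy as np

    # Stage 1 output: per-study treatment-effect estimates and standard errors (made up)
    estimates = np.array([0.42, 0.31, 0.55, 0.12, 0.38])
    std_errors = np.array([0.15, 0.20, 0.25, 0.18, 0.22])

    # Stage 2: inverse-variance weighted (fixed-effect) pooling
    weights = 1.0 / std_errors ** 2
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    ci = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print("pooled effect %.3f (95%% CI %.3f to %.3f)" % (pooled, *ci))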
Biktashev, Vadim N
2014-04-01
We consider a simple mathematical model of gradual Darwinian evolution in continuous time and continuous trait space, due to intraspecific competition for common resource in an asexually reproducing population in constant environment, while far from evolutionary stable equilibrium. The model admits exact analytical solution. In particular, Gaussian distribution of the trait emerges from generic initial conditions.
NASA Astrophysics Data System (ADS)
Rychlik, Igor; Mao, Wengang
2018-02-01
The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method of statistical source separation of space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian distributed statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique relies upon projection pursuit, looking for data projections of enhanced interest. In order to accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The resulting residuals, after spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, representing an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated into the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated. This favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading non-Gaussian dyad of the model is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet stream and 2) the meridional jet-stream meandering. Triadic sources, coming from maximized third-order cross cumulants between pairwise uncorrelated components, reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. This kind of triadic synergy is accounted for by an information-theoretic measure: the interaction information. The dominant triad of the model occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary, and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
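The ICA step that ISA generalizes can be run with off-the-shelf tools on the leading principal components; the toy sketch below uses synthetic mixtures and does not reproduce the subspace grouping, the cross-cumulant contrasts, or the QG3 data of the study.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(7)
    # Toy "field": mixtures of two non-Gaussian sources plus Gaussian noise
    n_time, n_space = 2000, 50
    sources = np.column_stack([rng.laplace(size=n_time),
                               rng.uniform(-1, 1, n_time) ** 3])
    mixing = rng.standard_normal((2, n_space))
    field = sources @ mixing + 0.1 * rng.standard_normal((n_time, n_space))

    pcs = PCA(n_components=5).fit_transform(field)        # leading principal components
    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(pcs)                   # scalar non-Gaussian sources

    # Excess kurtosis of the recovered components (0 for a Gaussian)
    centered = components - components.mean(0)
    print(np.round((centered ** 4).mean(0) / components.var(0) ** 2 - 3, 2))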
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although the surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike Bayesian MCMC-based approaches, the marginal pdfs, means, variances and covariances are obtained independently of each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
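The MAP computations mentioned at the end map directly onto standard solvers; in the schematic below a random matrix stands in for the elastic Green's functions, and the prior covariance term (which in practice augments the design matrix and data vector) is ignored.
    import numpy as np
    from scipy.optimize import nnls, lsq_linear

    rng = np.random.default_rng(8)
    G = rng.standard_normal((40, 10))          # stand-in for elastic Green's functions
    true_slip = np.abs(rng.standard_normal(10))
    d = G @ true_slip + 0.05 * rng.standard_normal(40)

    # Single truncation (positivity): MAP via non-negative least squares
    slip_nnls, _ = nnls(G, d)

    # Double truncation (positivity and an upper bound): bounded-variable least squares
    res = lsq_linear(G, d, bounds=(0.0, 1.5))
    print("NNLS MAP:", np.round(slip_nnls, 2))
    print("BVLS MAP:", np.round(res.x, 2))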
NASA Astrophysics Data System (ADS)
Zhou, Weijun; Hong, Xueren; Xie, Baisong; Yang, Yang; Wang, Li; Tian, Jianmin; Tang, Rongan; Duan, Wenshan
2018-02-01
In order to generate high quality ion beams through a relatively uniform radiation pressure acceleration (RPA) of a common flat foil, a new scheme is proposed to overcome the curving of the target while it is irradiated by a single transversely Gaussian laser. In this scheme, two matched counterpropagating transversely Gaussian laser pulses, a main pulse and an auxiliary pulse, impinge on the foil target at the same time. It is found in two-dimensional (2D) particle-in-cell (PIC) simulations that, through the restraint of the auxiliary laser, the curving of the foil can be effectively suppressed. As a result, a high quality monoenergetic ion beam is generated through an efficient RPA of the foil target. For example, when two counterpropagating, circularly polarized, transversely Gaussian lasers with normalized amplitudes a1=120 and a2=30, respectively, impinge on the foil target at the same time, a 1.3 GeV monoenergetic proton beam with high collimation is finally obtained. Furthermore, the effects of different auxiliary-laser parameters on the ion acceleration are also investigated.
Studies on system and measuring method of far-field beam divergency in near field by Ronchi ruling
NASA Astrophysics Data System (ADS)
Zhou, Chenbo; Yang, Li; Ma, Wenli; Yan, Peiying; Fan, Tianquan; He, Shangfeng
1996-10-01
Up to now, a distance as large as seven Rayleigh ranges or more has been needed to measure the far-field Gaussian beam divergency. This method is very inconvenient for the determination of the output beam divergency of industrial products such as He-Ne lasers, and the measuring unit will occupy a large space. The measurement and the measuring accuracy will be greatly influenced by the environment. The application of the Ronchi ruling to the measurement of the far-field divergency of a Gaussian beam in the near field is analyzed in this paper. The theoretical research and the experiments show that this measuring method is convenient for industrial application. The measuring system consists of a precision mechanical unit which scans the Gaussian beam with a micro-displaced Ronchi ruling, a signal sampling system, a single-chip microcomputer data processing system and an electronic unit with microprinter output. The system is stable and its repeatability errors are low. The spot size and far-field divergency of a visible Gaussian laser beam can be measured with the system.
Foam morphology, frustration and topological defects in a Negatively curved Hele-Shaw geometry
NASA Astrophysics Data System (ADS)
Mughal, Adil; Schroeder-Turk, Gerd; Evans, Myfanwy
2014-03-01
We present preliminary simulations of foams and single bubbles confined in a narrow gap between parallel surfaces. Unlike previous work, in which the bounding surfaces are flat (the so-called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature. We demonstrate that the curvature of the bounding surfaces induces a geometric frustration in the preferred order of the foam. This frustration can be relieved by the introduction of topological defects (disclinations, dislocations and complex scar arrangements). We give a detailed analysis of these defects for foams confined in curved Hele-Shaw cells and compare our results with exotic honeycombs built by bees on surfaces of varying Gaussian curvature. Our simulations, while encompassing surfaces of constant Gaussian curvature (such as the sphere and the cylinder), focus on surfaces with negative Gaussian curvature, and in particular triply periodic minimal surfaces (such as the Schwarz P-surface and Schoen's Gyroid surface). We use the results from a sphere-packing algorithm to generate a Voronoi partition that forms the basis of a Surface Evolver simulation, which yields a realistic foam morphology.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
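To make the weighting concrete, the following is a minimal numerical sketch of the inverse-variance weighted least-squares retrieval for the slope method, assuming a homogeneous path so that the range-corrected log signal is linear in range; the function name and the synthetic noise model are illustrative, not the author's code.

```python
import numpy as np

def slope_method_wls(r, P, var_S):
    """Weighted least-squares fit of the range-corrected log signal
    S(r) = ln(r^2 P(r)) = b - 2*alpha*r, weighted by 1/var(S).
    Returns the retrieved extinction coefficient alpha and intercept b."""
    S = np.log(r**2 * P)
    A = np.column_stack([np.ones_like(r), -2.0 * r])  # design matrix for [b, alpha]
    AtW = A.T * (1.0 / var_S)                         # inverse-variance weights
    b, alpha = np.linalg.solve(AtW @ A, AtW @ S)      # weighted normal equations
    return alpha, b

# synthetic homogeneous atmosphere with range-dependent Gaussian noise on S
rng = np.random.default_rng(0)
r = np.linspace(200.0, 2000.0, 100)                   # range gates [m]
alpha_true, C = 1.0e-4, 1.0e8                         # extinction [1/m], system constant
P = C * np.exp(-2.0 * alpha_true * r) / r**2
var_S = (0.05 * (r / r[0]))**2                        # noise variance grows with range
P_noisy = np.exp(np.log(r**2 * P) + rng.normal(0.0, np.sqrt(var_S))) / r**2
print(slope_method_wls(r, P_noisy, var_S)[0])         # should be close to alpha_true
```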
A method for modelling peak signal statistics on a mobile satellite transponder
NASA Technical Reports Server (NTRS)
Bilodeau, Andre; Lecours, Michel; Pelletier, Marcel; Delisle, Gilles Y.
1990-01-01
A simulation method is proposed to model the duration and energy content of signal peaks in a mobile communication satellite operating in a Frequency Division Multiple Access (FDMA) mode, and to estimate those power peaks for a system where the channels are modeled as band-limited Gaussian noise, which is taken as a reasonable representation of Amplitude Commanded Single Sideband (ACSSB), Minimum Shift Keying (MSK), or Phase Shift Keying (PSK) modulated signals. The simulation results show that, under this hypothesis, the levels of the signal power peaks for 10 percent, 1 percent, and 0.1 percent of the time are well described by a Rayleigh law and that their duration is extremely short and inversely proportional to the total FDM system bandwidth.
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Chialvo, Ariel A.; Moucka, Filip; Vlcek, Lukas; ...
2015-03-24
Here we implemented the Gaussian charge-on-spring (GCOS) version of the original self-consistent-field implementation of the Gaussian Charge Polarizable (GCP) water model and tested its accuracy in representing the polarization behavior of the original model involving smeared charges and induced dipole moments. For that purpose we adapted the recently developed multiple-particle-move (MPM) Monte Carlo methods within the Gibbs and isochoric-isothermal ensembles for the efficient simulation of polarizable fluids. We also assessed the accuracy of the GCOS representation by a direct comparison of the resulting vapor-liquid phase envelope, microstructure, and relevant microscopic descriptors of water polarization along the orthobaric curve against the corresponding quantities from the actual GCP water model.
NASA Astrophysics Data System (ADS)
Martínez-Casado, R.; Vega, J. L.; Sanz, A. S.; Miret-Artés, S.
2007-08-01
The study of diffusion and low-frequency vibrational motions of particles on metal surfaces is of paramount importance; it provides valuable information on the nature of the adsorbate-substrate and substrate-substrate interactions. In particular, the experimental broadening observed in the diffusive peak with increasing coverage is usually interpreted in terms of a dipole-dipole-like interaction among adsorbates via extensive molecular dynamics calculations within the Langevin framework. Here we present an alternative way to interpret this broadening by means of a purely stochastic description, namely the interacting single-adsorbate approximation, where two noise sources are considered: (1) a Gaussian white noise accounting for the surface friction and temperature, and (2) a white shot noise replacing the interaction potential between adsorbates. Standard Langevin numerical simulations for flat and corrugated surfaces (with a separable potential) illustrate the dynamics of Na atoms on a Cu(100) surface, which fit fairly well with the analytical expressions derived from simple models (free particle and anharmonic oscillator) when the Gaussian approximation is assumed. A similar broadening is also expected for the frustrated translational mode peaks.
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
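The abstract names the alignment anisotropy R but does not give its formula, so the sketch below uses one plausible constant-time definition assumed purely for illustration: the eigenvalue anisotropy of the second-moment tensor of the centered Fourier magnitude, which is 0 for an isotropic spectrum and 1 for perfectly aligned fibers.

```python
import numpy as np

def alignment_anisotropy(image):
    """Assumed stand-in for the fiber alignment anisotropy R (not the authors'
    exact definition): eigenvalue anisotropy of the second-moment tensor of
    the centered Fourier-transform magnitude, in [0, 1]."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    F[F.shape[0] // 2, F.shape[1] // 2] = 0.0          # suppress the DC term
    ky, kx = np.indices(F.shape, dtype=float)
    ky -= F.shape[0] / 2.0
    kx -= F.shape[1] / 2.0
    w = F / F.sum()
    Mxx, Myy, Mxy = (w * kx * kx).sum(), (w * ky * ky).sum(), (w * kx * ky).sum()
    lam_small, lam_big = np.linalg.eigvalsh([[Mxx, Mxy], [Mxy, Myy]])
    return (lam_big - lam_small) / (lam_big + lam_small)

# toy check: perfectly aligned vertical "fibers" give a value close to 1
y, x = np.indices((128, 128))
print(alignment_anisotropy(np.sin(2.0 * np.pi * x / 8.0)))
```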
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chanyoung; Kim, Nam H.
Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lack of knowledge of the actual physics, so that conservativeness in the safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that the calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
In vitro tympanic membrane position identification with a co-axial fiber-optic otoscope
NASA Astrophysics Data System (ADS)
Sundberg, Mikael; Peebo, Markus; Strömberg, Tomas
2011-09-01
Otitis media diagnosis can be assisted by measuring the shape of the tympanic membrane. We have developed an ear speculum for an otoscope, including spatially distributed source and detector optical fibers, to generate source-detector intensity matrices (SDIMs), representing the curvature of surfaces. The surfaces measured were a model ear with a latex membrane and harvested temporal bones including intact tympanic membranes. The position of the tympanic membrane was shifted from retracted to bulging by air pressure and that of the latex membrane by water displacement. The SDIM was normalized utilizing both external (a sheared flat plastic cylinder) and internal references (neutral position of the membrane). Data was fitted to a two-dimensional Gaussian surface representing the shape by its amplitude and offset. Retracted and bulging surfaces were discriminated for the model ear by the sign of the Gaussian amplitude for both internal and external reference normalization. Tympanic membranes were separated after a two-step normalization: first to an external reference, adjusted for the distance between speculum and the surfaces, and second by comparison with an average normally positioned SDIM from tympanic membranes. In conclusion, we have shown that the modified otoscope can discriminate between bulging and retracted tympanic membranes in a single measurement, given a two-step normalization.
Transitions in a genetic transcriptional regulatory system under Lévy motion
Zheng, Yayun; Serdukova, Larissa; Duan, Jinqiao; Kurths, Jürgen
2016-01-01
Based on a stochastic differential equation model for a single genetic regulatory system, we examine the dynamical effects of noisy fluctuations, arising in the synthesis reaction, on the evolution of the transcription factor activator in terms of its concentration. The fluctuations are modeled by Brownian motion and α-stable Lévy motion. Two deterministic quantities, the mean first exit time (MFET) and the first escape probability (FEP), are used to analyse the transitions from the low to high concentration states. A shorter MFET or higher FEP in the low concentration region facilitates such a transition. We have observed that higher noise intensities and larger jumps of the Lévy motion shorten the MFET and thus facilitate transitions. The Lévy motion activates a transition from the low concentration region to the non-adjacent high concentration region, whereas Brownian motion cannot induce this phenomenon. There are optimal proportions of Gaussian and non-Gaussian noises, which maximise the quantities MFET and FEP for each concentration, when the total sum of noise intensities is kept constant. Because a weaker stability indicates a higher transition probability, a new geometric concept is introduced to quantify the basin stability of the low concentration region, characterised by the escaping behaviour. PMID:27411445
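A hedged Monte Carlo sketch of how a mean first exit time can be estimated for a one-dimensional model driven by both Brownian and α-stable Lévy noise; the double-well drift is a generic stand-in (not the genetic model of the paper) and paths that never exit are censored at t_max.

```python
import numpy as np
from scipy.stats import levy_stable

def mean_first_exit_time(x0, drift, eps_gauss, eps_levy, alpha, low, high,
                         dt=0.01, n_paths=500, t_max=20.0, seed=7):
    """Euler scheme for dX = drift(X) dt + eps_gauss dB_t + eps_levy dL_t^alpha.
    Returns the mean first exit time from [low, high]; paths that have not
    exited by t_max contribute t_max (a censoring bias accepted in this sketch)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    exit_time = np.full(n_paths, t_max)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(int(t_max / dt)):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        dL = levy_stable.rvs(alpha, 0.0, size=n_paths, random_state=rng) * dt**(1.0 / alpha)
        x = np.where(alive, x + drift(x) * dt + eps_gauss * dB + eps_levy * dL, x)
        exited = alive & ((x < low) | (x > high))
        exit_time[exited] = (k + 1) * dt
        alive &= ~exited
        if not alive.any():
            break
    return exit_time.mean()

# toy usage: escape from the left well of a double-well drift, over the barrier at 0
print(mean_first_exit_time(-1.0, lambda x: x - x**3, eps_gauss=0.2,
                           eps_levy=0.1, alpha=1.5, low=-2.0, high=0.0))
```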
Joint resonant CMB power spectrum and bispectrum estimation
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin
2016-02-01
We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
Experimental Observation of Two Features Unexpected from the Classical Theories of Rubber Elasticity
NASA Astrophysics Data System (ADS)
Nishi, Kengo; Fujii, Kenta; Chung, Ung-il; Shibayama, Mitsuhiro; Sakai, Takamasa
2017-12-01
Although the elastic modulus of a Gaussian chain network is thought to be successfully described by classical theories of rubber elasticity, such as the affine and phantom models, verification experiments are largely lacking owing to difficulties in precisely controlling the network structure. We prepared well-defined model polymer networks experimentally, and measured the elastic modulus G for a broad range of polymer concentrations and connectivity probabilities, p. In our experiment, we observed two features that were distinct from those predicted by classical theories. First, we observed the critical behavior G ∼ |p − pc|^1.95 near the sol-gel transition, where pc is the sol-gel transition point. This scaling law differs from the prediction of classical theories, but can be explained by an analogy between the electric conductivity of resistor networks and the elasticity of polymer networks. Furthermore, we found that the experimental G–p relations in the region above C* did not follow the affine or phantom theories. Instead, all the G/G0–p curves fell onto a single master curve when G was normalized by the elastic modulus at p = 1, G0. We show that the effective medium approximation for Gaussian chain networks explains this master curve.
Nuclear DNA contents of Echinochloa crus-galli and its Gaussian relationships with environments
NASA Astrophysics Data System (ADS)
Li, Dan-Dan; Lu, Yong-Liang; Guo, Shui-Liang; Yin, Li-Ping; Zhou, Ping; Lou, Yu-Xia
2017-02-01
Previous studies on plant nuclear DNA content variation and its relationships with environmental gradients produced conflicting results. We speculated that the relationships between the nuclear DNA content of a widely distributed species and environmental gradients might be non-linear if the species is sampled over a large geographical range. Echinochloa crus-galli (L.) P. Beauv. is a worldwide species, but the intraspecific variation of its nuclear DNA content has not been documented. Our objectives are: 1) to determine the range of intraspecific variation in nuclear DNA content of E. crus-galli, and 2) to test whether the nuclear DNA content of the species changes with environmental gradients following Gaussian models when its populations are sampled over a large geographical range. We collected seeds of 36 Chinese populations of E. crus-galli across a wide geographical gradient, and sowed them in a homogeneous field to obtain their offspring and determine their nuclear DNA content. We analyzed the relationships of the nuclear DNA content of these populations with latitude, longitude, and nineteen bioclimatic variables using Gaussian and linear models. (1) Nuclear DNA content varied from 2.113 to 2.410 pg among the 36 Chinese populations of E. crus-galli, with a mean value of 2.256 pg. (2) Gaussian correlations of nuclear DNA content (y) with geographical gradients were detected, following y = 2.2923·exp[−(x − 24.9360)²/(2·63.7945²)] with latitude (x) (r = 0.546, P < 0.001), and y = 2.2933·exp[−(x − 116.1801)²/(2·44.7450²)] with longitude (x) (r = 0.672, P < 0.001). (3) Among the nineteen bioclimatic variables, all except temperature isothermality and the precipitation of the wettest month, the wettest quarter and the warmest quarter were better fit to nuclear DNA content by Gaussian models than by linear models. There is intraspecific variation among the 36 Chinese populations of E. crus-galli, and Gaussian models can be used to fit the correlations of its nuclear DNA content with geographical and most bioclimatic gradients.
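The Gaussian response fits reported above can be reproduced in a few lines with scipy; the latitude/DNA-content pairs below are hypothetical placeholders, not the 36 measured populations.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_response(x, a, mu, sigma):
    """Gaussian response model y = a * exp(-(x - mu)^2 / (2 * sigma^2))."""
    return a * np.exp(-(x - mu)**2 / (2.0 * sigma**2))

# hypothetical (latitude, nuclear DNA content in pg) pairs for illustration only
lat = np.array([20.5, 23.1, 25.0, 28.4, 31.2, 34.7, 38.9, 42.3, 45.6])
dna = np.array([2.27, 2.29, 2.30, 2.28, 2.25, 2.22, 2.19, 2.16, 2.13])

popt, pcov = curve_fit(gaussian_response, lat, dna, p0=[2.3, 25.0, 60.0])
print(popt)   # compare with the reported latitude fit (mu ~ 24.94, sigma ~ 63.79)
```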
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes
NASA Astrophysics Data System (ADS)
Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.
2012-12-01
Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption. Specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance σ_d of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
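A stripped-down sketch of the dense grid search over period, assuming only a constant-plus-sinusoid weighted least-squares fit at each trial period (the Gaussian process component and the priors of the actual model are omitted); the toy light curve and names are illustrative.

```python
import numpy as np

def grid_period_search(t, y, yerr, periods):
    """For each trial period, fit constant + sin + cos by weighted linear least
    squares and return the period minimizing the chi-square."""
    w = 1.0 / yerr**2
    chi2 = np.empty(len(periods))
    for i, P in enumerate(periods):
        phase = 2.0 * np.pi * t / P
        A = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
        AtW = A.T * w
        beta = np.linalg.solve(AtW @ A, AtW @ y)
        chi2[i] = np.sum(w * (y - A @ beta)**2)
    return periods[np.argmin(chi2)]

# toy usage: sparsely sampled sinusoid with a true period of 310 days
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 40))
y = 18.0 + 1.5 * np.sin(2.0 * np.pi * t / 310.0) + rng.normal(0.0, 0.2, t.size)
grid = np.linspace(100.0, 1000.0, 5000)
print(grid_period_search(t, y, np.full(t.size, 0.2), grid))   # close to 310
```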
TRACING CO-REGULATORY NETWORK DYNAMICS IN NOISY, SINGLE-CELL TRANSCRIPTOME TRAJECTORIES.
Cordero, Pablo; Stuart, Joshua M
2017-01-01
The availability of gene expression data at the single cell level makes it possible to probe the molecular underpinnings of complex biological processes such as differentiation and oncogenesis. Promising new methods have emerged for reconstructing a progression 'trajectory' from static single-cell transcriptome measurements. However, it remains unclear how to adequately model the appreciable level of noise in these data to elucidate gene regulatory network rewiring. Here, we present a framework called Single Cell Inference of MorphIng Trajectories and their Associated Regulation (SCIMITAR) that infers progressions from static single-cell transcriptomes by employing a continuous parametrization of Gaussian mixtures in high-dimensional curves. SCIMITAR yields rich models from the data that highlight genes with expression and co-expression patterns that are associated with the inferred progression. Further, SCIMITAR extracts regulatory states from the implicated trajectory-evolving co-expression networks. We benchmark the method on simulated data to show that it yields accurate cell ordering and gene network inferences. Applied to the interpretation of a single-cell human fetal neuron dataset, SCIMITAR finds progression-associated genes in cornerstone neural differentiation pathways missed by standard differential expression tests. Finally, by leveraging the rewiring of gene-gene co-expression relations across the progression, the method reveals the rise and fall of co-regulatory states and trajectory-dependent gene modules. These analyses implicate new transcription factors in neural differentiation including putative co-factors for the multi-functional NFAT pathway.
Probing primordial non-Gaussianity via iSW measurements with SKA continuum surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier, E-mail: alvise@jhu.edu, E-mail: olivier.dore@caltech.edu; Bacon, David J.
The Planck CMB experiment has delivered the best constraints so far on primordial non-Gaussianity, ruling out early-Universe models of inflation that generate large non-Gaussianity. Although small improvements in the CMB constraints are expected, the next frontier of precision will come from future large-scale surveys of the galaxy distribution. The advantage of such surveys is that they can measure many more modes than the CMB—in particular, forthcoming radio surveys with the Square Kilometre Array will cover huge volumes. Radio continuum surveys deliver the largest volumes, but with the disadvantage of no redshift information. In order to mitigate this, we use two additional observables. First, the integrated Sachs-Wolfe effect—the cross-correlation of the radio number counts with the CMB temperature anisotropies—helps to reduce systematics on the large scales that are sensitive to non-Gaussianity. Second, optical data allows for cross-identification in order to gain some redshift information. We show that, while the single redshift bin case can provide a σ(f_NL) ∼ 20, and is therefore not competitive with current and future constraints on non-Gaussianity, a tomographic analysis could improve the constraints by an order of magnitude, even with only two redshift bins. A huge improvement is provided by the addition of high-redshift sources, so having cross-ID for high-z galaxies and an even higher-z radio tail is key to enabling very precise measurements of f_NL. We use Fisher matrix forecasts to predict the constraining power in the case of no redshift information and the case where cross-ID allows a tomographic analysis, and we show that the constraints do not improve much with 3 or more bins. Our results show that SKA continuum surveys could provide constraints competitive with CMB and forthcoming optical surveys, potentially allowing a measurement of σ(f_NL) ∼ 1 to be made. Moreover, these measurements would act as a useful check of results obtained with other probes at other redshift ranges with other methods.
NASA Astrophysics Data System (ADS)
Huang, D.; Liu, Y.
2014-12-01
The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined by use of long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with a mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window sizes, suggesting the increasing importance of subgrid variability with increasing grid sizes.
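The upscaling step can be illustrated with a short Monte Carlo sketch: average a nonlinear power-law process rate over subgrid PDFs that share the observed mean and standard deviation of liquid water content. The exponent and the numbers are illustrative, not the values used in the study.

```python
import numpy as np

def lognormal_params(mean, std):
    """Lognormal mu and sigma matching a given mean and standard deviation."""
    sigma2 = np.log(1.0 + (std / mean)**2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

def grid_average_rate(mean, std, exponent=2.47, n=200_000, seed=0):
    """Monte Carlo grid averages of a power-law microphysical rate q**exponent
    under two subgrid PDFs with identical mean and standard deviation of q."""
    rng = np.random.default_rng(seed)
    g = rng.normal(mean, std, n)
    g = g[g > 0.0]                                   # crude truncation at q = 0
    mu, sig = lognormal_params(mean, std)
    ln = rng.lognormal(mu, sig, n)
    return {"rate_at_mean": mean**exponent,
            "trunc_gaussian": np.mean(g**exponent),
            "lognormal": np.mean(ln**exponent)}

print(grid_average_rate(mean=0.3, std=0.2))          # nonlinearity makes the averages differ
```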
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with the classical PID and GA approaches.
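A minimal Gaussian estimation-of-distribution loop in the spirit of the abstract (a sketch, not the authors' FEGEDA, and without their fast learning rule): fit an axis-aligned Gaussian to the best individuals, resample, and re-inject the elite individual.

```python
import numpy as np

def gaussian_eda(objective, dim, pop_size=100, elite_frac=0.3, iters=200, seed=0):
    """Minimize `objective` with a simple Gaussian EDA plus elitism."""
    rng = np.random.default_rng(seed)
    mean, std = rng.uniform(-5.0, 5.0, dim), np.full(dim, 5.0)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        if best_x is not None:
            pop[0] = best_x                              # elitism
        fit = np.array([objective(x) for x in pop])
        order = np.argsort(fit)
        if fit[order[0]] < best_f:
            best_f, best_x = fit[order[0]], pop[order[0]].copy()
        elite = pop[order[: max(2, int(elite_frac * pop_size))]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12   # refit the Gaussian
    return best_x, best_f

print(gaussian_eda(lambda x: float(np.sum(x**2)), dim=10)[1])       # approaches 0
```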
Following a trend with an exponential moving average: Analytical results for a Gaussian model
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Serror, Jeremy
2014-01-01
We investigate how price variations of a stock are transformed into profits and losses (P&Ls) of a trend following strategy. In the frame of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with often small losses and less frequent but significant profits) is reminiscent of trend following strategies and less dependent on peculiarities of price variations. At short times, trend following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also compute explicitly annualized risk adjusted P&L and strategy turnover to account for transaction costs. We deduce the trend following optimal timescale and its dependence on both auto-correlation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
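A simple simulation sketch of the mechanism analyzed in the paper, assuming i.i.d. Gaussian daily returns and a sign-of-EMA position rule with a proportional transaction cost; all parameter values are illustrative.

```python
import numpy as np

def ema(x, timescale):
    """Exponential moving average with decay rate 1/timescale."""
    lam, out = 1.0 / timescale, np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = (1.0 - lam) * out[i - 1] + lam * x[i]
    return out

def trend_following_pnl(returns, timescale, cost=0.0):
    """Daily P&L of holding sign(EMA of past returns), with a proportional
    cost charged on every change of position."""
    position = np.sign(ema(returns, timescale)[:-1])     # trade on yesterday's signal
    turnover = np.abs(np.diff(position, prepend=0.0))
    return position * returns[1:] - cost * turnover

rng = np.random.default_rng(2)
r = 0.0002 + 0.01 * rng.standard_normal(5000)            # Gaussian model of daily returns
pnl = trend_following_pnl(r, timescale=20, cost=1e-4)
skew = np.mean((pnl - pnl.mean())**3) / pnl.std()**3
print(pnl.mean(), pnl.std(), skew)                       # P&L skewness is typically positive
```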
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
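The claim that linearly mixing GMM-distributed endmembers again yields a GMM can be sanity-checked by sampling; the two hypothetical materials, their mixture parameters, and the fixed abundance below are illustrative, not data from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
bands, n_pix = 5, 4000

def sample_endmember(means, covs, weights, n):
    """Draw n spectra from a small Gaussian mixture (per-material variability)."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])

# two hypothetical materials, each with a 2-component GMM of spectra
m1 = sample_endmember([np.full(bands, 0.20), np.full(bands, 0.35)],
                      [np.eye(bands) * 1e-4] * 2, [0.6, 0.4], n_pix)
m2 = sample_endmember([np.full(bands, 0.70), np.full(bands, 0.55)],
                      [np.eye(bands) * 1e-4] * 2, [0.5, 0.5], n_pix)

a = 0.3                                      # fixed abundance, for illustration
pixels = a * m1 + (1.0 - a) * m2             # linear mixing model

# under the GMM premise the mixed pixels should be well fit by a 2*2 = 4 component GMM
gmm4 = GaussianMixture(n_components=4, random_state=0).fit(pixels)
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
print(gmm4.bic(pixels), gmm1.bic(pixels))    # the 4-component fit should have lower BIC
```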
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
Transverse parton momenta in single inclusive hadron production in e+ e- annihilation processes
Boglione, M.; Gonzalez-Hernandez, J. O.; Taghavi, R.
2017-06-17
Here, we study the transverse momentum distributions of single inclusive hadron production in e+e- annihilation processes. Although the only available experimental data are scarce and quite old, we find that the fundamental features of transverse momentum dependent (TMD) evolution, historically addressed in Drell–Yan processes and, more recently, in semi-inclusive deep inelastic scattering processes, are visible in e+e- annihilations as well. Interesting effects related to its non-perturbative regime can be observed. We test two different parameterizations for the p⊥ dependence of the cross section: the usual Gaussian distribution and a power-law model. We find the latter to be more appropriate in describing this particular set of experimental data, over a relatively large range of p⊥ values. We use this model to map some of the features of the data within the framework of TMD evolution, and discuss the caveats of this and other possible interpretations, related to the one-dimensional nature of the available experimental data.
Annotating novel genes by integrating synthetic lethals and genomic information
Schöner, Daniel; Kalisch, Markus; Leisner, Christian; Meier, Lukas; Sohrmann, Marc; Faty, Mahamadou; Barral, Yves; Peter, Matthias; Gruissem, Wilhelm; Bühlmann, Peter
2008-01-01
Background Large scale screening for synthetic lethality serves as a common tool in yeast genetics to systematically search for genes that play a role in specific biological processes. Often the amounts of data resulting from a single large scale screen far exceed the capacities of experimental characterization of every identified target. Thus, there is need for computational tools that select promising candidate genes in order to reduce the number of follow-up experiments to a manageable size. Results We analyze synthetic lethality data for arp1 and jnm1, two spindle migration genes, in order to identify novel members in this process. To this end, we use an unsupervised statistical method that integrates additional information from biological data sources, such as gene expression, phenotypic profiling, RNA degradation and sequence similarity. Different from existing methods that require large amounts of synthetic lethal data, our method merely relies on synthetic lethality information from two single screens. Using a Multivariate Gaussian Mixture Model, we determine the best subset of features that assign the target genes to two groups. The approach identifies a small group of genes as candidates involved in spindle migration. Experimental testing confirms the majority of our candidates and we present she1 (YBL031W) as a novel gene involved in spindle migration. We applied the statistical methodology also to TOR2 signaling as another example. Conclusion We demonstrate the general use of Multivariate Gaussian Mixture Modeling for selecting candidate genes for experimental characterization from synthetic lethality data sets. For the given example, integration of different data sources contributes to the identification of genetic interaction partners of arp1 and jnm1 that play a role in the same biological process. PMID:18194531
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
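As a rough observation-driven stand-in (not the Bayesian GSARIMA model of the study), a negative-binomial GLM with a lagged log-count term and annual harmonics can be fitted with statsmodels; the simulated series and parameter values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_nb_count_model(y, period=12):
    """Negative-binomial GLM with one autoregressive (lagged log-count) term
    and a single annual harmonic, as a crude proxy for a seasonal count model."""
    df = pd.DataFrame({"y": y})
    df["lag_log"] = np.log1p(df["y"].shift(1))
    t = np.arange(len(df))
    df["sin"], df["cos"] = np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)
    df = df.dropna()
    X = sm.add_constant(df[["lag_log", "sin", "cos"]])
    return sm.GLM(df["y"], X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

# toy usage: a seasonal, autocorrelated monthly count series
rng = np.random.default_rng(4)
y = [5]
for t in range(1, 120):
    mu = np.exp(0.5 + 0.6 * np.log1p(y[-1]) + 0.8 * np.sin(2 * np.pi * t / 12))
    y.append(rng.negative_binomial(2, 2.0 / (2.0 + mu)))   # NB draw with mean mu
print(fit_nb_count_model(np.array(y)).params)
```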
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentiles values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter (ΛCDM). Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
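A per-parameter sketch of the Gaussianization idea using plain Box-Cox transforms (the paper's generalized transformations and the optimization of the full multivariate map are omitted); the skewed toy "posterior" is illustrative.

```python
import numpy as np
from scipy import stats

def gaussianize_boxcox(samples):
    """Marginal Box-Cox Gaussianization of an MCMC sample, one lambda per
    parameter: shift to positive values, transform, then standardize.
    Returns the transformed sample and the (shift, lambda) pairs."""
    out, params = np.empty_like(samples), []
    for j in range(samples.shape[1]):
        x = samples[:, j]
        shift = 1e-6 - x.min() if x.min() <= 0 else 0.0
        z, lam = stats.boxcox(x + shift)
        out[:, j] = (z - z.mean()) / z.std()
        params.append((shift, lam))
    return out, params

# toy usage: a strongly skewed two-parameter "posterior" sample
rng = np.random.default_rng(5)
chain = np.column_stack([rng.lognormal(0.0, 0.7, 20000), rng.gamma(2.0, 1.0, 20000)])
gauss_chain, _ = gaussianize_boxcox(chain)
print([round(float(stats.skew(gauss_chain[:, j])), 3) for j in range(2)])  # near 0
```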
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
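A compact sketch of the candidate-scoring and adaptive-update steps described above, assuming the R-wave times and PPG peak candidates are already available; the numbers and the simple exponential update rule are illustrative, not the authors' exact update.

```python
import numpy as np

def select_pulse_peak(candidate_times, r_time, rr_interval, mu, sigma):
    """Score candidates within one RR interval by a Gaussian probability on the
    RR-normalized transit time and return the most probable peak."""
    ptt_norm = (candidate_times - r_time) / rr_interval
    prob = np.exp(-0.5 * ((ptt_norm - mu) / sigma)**2)
    best = int(np.argmax(prob))
    return candidate_times[best], ptt_norm[best]

def update_gaussian(mu, sigma, new_value, rate=0.1):
    """Exponential update of the Gaussian parameters after each accepted peak,
    keeping the model adaptive to slow changes in the cardiac cycle."""
    mu = (1.0 - rate) * mu + rate * new_value
    sigma = (1.0 - rate) * sigma + rate * abs(new_value - mu)
    return mu, max(sigma, 1e-3)

# toy usage: three PPG peak candidates after an R-wave at t = 10.00 s, RR = 0.8 s
mu, sigma = 0.30, 0.05                              # prior normalized-PTT model
peak_t, ptt = select_pulse_peak(np.array([10.18, 10.25, 10.42]), 10.00, 0.8, mu, sigma)
mu, sigma = update_gaussian(mu, sigma, ptt)
print(peak_t, round(ptt, 3), round(mu, 3))          # the 10.25 s candidate wins
```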
Improved Scheme of Modified Gaussian Deconvolution for Reflectance Spectra of Lunar Soils
NASA Technical Reports Server (NTRS)
Hiroi, T.; Pieters, C. M.; Noble, S. K.
2000-01-01
In our continuing effort for deconvolving reflectance spectra of lunar soils using the modified Gaussian model, a new scheme has been developed, including a new form of continuum. All the parameters are optimized with certain constraints.
Burnette, Dylan T; Sengupta, Prabuddha; Dai, Yuhai; Lippincott-Schwartz, Jennifer; Kachar, Bechara
2011-12-27
Superresolution imaging techniques based on the precise localization of single molecules, such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), achieve high resolution by fitting images of single fluorescent molecules with a theoretical Gaussian to localize them with a precision on the order of tens of nanometers. PALM/STORM rely on photoactivated proteins or photoswitching dyes, respectively, which makes them technically challenging. We present a simple and practical way of producing point localization-based superresolution images that does not require photoactivatable or photoswitching probes. Called bleaching/blinking assisted localization microscopy (BaLM), the technique relies on the intrinsic bleaching and blinking behaviors characteristic of all commonly used fluorescent probes. To detect single fluorophores, we simply acquire a stream of fluorescence images. Fluorophore bleach or blink-off events are detected by subtracting from each image of the series the subsequent image. Similarly, blink-on events are detected by subtracting from each frame the previous one. After image subtractions, fluorescence emission signals from single fluorophores are identified and the localizations are determined by fitting the fluorescence intensity distribution with a theoretical Gaussian. We also show that BaLM works with a spectrum of fluorescent molecules in the same sample. Thus, BaLM extends single molecule-based superresolution localization to samples labeled with multiple conventional fluorescent probes.
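A schematic implementation of the BaLM idea, assuming well-isolated spots: difference consecutive frames to find bleach/blink-off events and fit each difference spot with a symmetric 2D Gaussian. Blink-on events would be handled analogously by reversing the subtraction; function names, window size, and thresholding are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, s, offset):
    """Symmetric 2D Gaussian used to localize a single-molecule event."""
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * s**2)) + offset).ravel()

def localize_off_events(stack, threshold):
    """Return sub-pixel (x, y) localizations of bleach/blink-off events found by
    subtracting each frame from the previous one (frame t minus frame t+1)."""
    locs = []
    for diff in stack[:-1] - stack[1:]:
        if diff.max() < threshold:
            continue
        cy, cx = np.unravel_index(np.argmax(diff), diff.shape)
        win = diff[cy - 3:cy + 4, cx - 3:cx + 4]
        if win.shape != (7, 7):                      # skip events at the image border
            continue
        yy, xx = np.mgrid[0:7, 0:7]
        try:
            popt, _ = curve_fit(gauss2d, (xx, yy), win.ravel(),
                                p0=[win.max(), 3.0, 3.0, 1.5, float(np.median(win))])
        except RuntimeError:
            continue
        locs.append((cx - 3 + popt[1], cy - 3 + popt[2]))
    return np.array(locs)
```

Real data would additionally need a threshold tuned to the camera noise and rejection of frames where several events overlap within one fitting window.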
Strong subadditivity for log-determinant of covariance matrices and its applications
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Simon, R.
2016-08-01
We prove that the log-determinant of the covariance matrix obeys the strong subadditivity inequality for arbitrary tripartite states of multimode continuous variable quantum systems. This establishes general limitations on the distribution of information encoded in the second moments of canonically conjugate operators. The inequality is shown to be stronger than the conventional strong subadditivity inequality for von Neumann entropy in a class of pure tripartite Gaussian states. We finally show that such an inequality implies a strict monogamy-type constraint for joint Einstein-Podolsky-Rosen steerability of single modes by Gaussian measurements performed on multiple groups of modes.
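For orientation, the inequality can be written in a form that mirrors the von Neumann strong subadditivity S_AB + S_BC ≥ S_ABC + S_B; the exact statement and normalization conventions should be checked against the paper, so the line below is an assumed paraphrase rather than a quotation.

```latex
% Log-determinant strong subadditivity for a tripartite covariance matrix
% \sigma_{ABC} with reductions \sigma_{AB}, \sigma_{BC}, \sigma_{B}
% (assumed form, mirroring the entropic inequality):
\ln\det\sigma_{AB} \;+\; \ln\det\sigma_{BC} \;\ge\; \ln\det\sigma_{ABC} \;+\; \ln\det\sigma_{B}
```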
Non-Markovianity in the collision model with environmental block
NASA Astrophysics Data System (ADS)
Jin, Jiasen; Yu, Chang-shui
2018-05-01
We present an extended collision model to simulate the dynamics of an open quantum system. In our model, the unit to represent the environment is, instead of a single particle, a block which consists of a number of environment particles. The introduced blocks enable us to study the effects of different strategies of system–environment interactions and states of the blocks on the non-Markovianities. We demonstrate our idea in the Gaussian channels of an all-optical system and derive a necessary and sufficient condition of non-Markovianity for such channels. Moreover, we show the equivalence of our criterion to the non-Markovian quantum jump in the simulation of the pure damping process of a single-mode field. We also show that the non-Markovianity of the channel working in the strategy that the system collides with environmental particles in each block in a certain order will be affected by the size of the block and the embedded entanglement and the effects of heating and squeezing the vacuum environmental state will quantitatively enhance the non-Markovianity.
The effective theory of shift-symmetric cosmologies
NASA Astrophysics Data System (ADS)
Finelli, Bernardo; Goon, Garrett; Pajer, Enrico; Santoni, Luca
2018-05-01
A shift symmetry is a ubiquitous ingredient in inflationary models, both in effective constructions and in UV-finite embeddings such as string theory. It has also been proposed to play a key role in certain Dark Energy and Dark Matter models. Despite the crucial role it plays in cosmology, the observable, model independent consequences of a shift symmetry are yet unknown. Here, assuming an exact shift symmetry, we derive these consequences for single-clock cosmologies within the framework of the Effective Field Theory of Inflation. We find an infinite set of relations among the otherwise arbitrary effective coefficients, which relate non-Gaussianity to their time dependence. For example, to leading order in derivatives, these relations reduce the infinitely many free functions in the theory to just a single one. Our Effective Theory of shift-symmetric cosmologies describes, among other systems, perfect and imperfect superfluids coupled to gravity and driven superfluids in the decoupling limit. Our results are the first step to determine observationally whether a shift symmetry is at play in the laws of nature and whether it is broken by quantum gravity effects.
On the robustness of the q-Gaussian family
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio; Rodríguez, Antonio; Tsallis, Constantino
2015-12-01
We introduce three deformations, called the α-, β- and γ-deformation respectively, of an N-body probabilistic model, first proposed by Rodríguez et al. (2008), having q-Gaussians as N → ∞ limiting probability distributions. The proposed α- and β-deformations are asymptotically scale-invariant, whereas the γ-deformation is not. We prove that, for both the α- and β-deformations, the resulting deformed triangles still have q-Gaussians as limiting distributions, with a value of q that is independent of the deformation parameter in the α-case and dependent on it in the β-case. In contrast, the γ-case, where we have used the celebrated Q-numbers and the Gauss binomial coefficients, yields other limiting probability distribution functions, outside the q-Gaussian family. These results suggest that scale invariance might play an important role regarding the robustness of the q-Gaussian family.
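For orientation, and in our own notation rather than the paper's, the q-Gaussian family referred to here is the one-parameter deformation of the Gaussian shown below.

```latex
% q-Gaussian density (our notation): beta > 0 sets the width, C_q is a
% q-dependent normalization, [.]_+ denotes the positive part, and the
% ordinary Gaussian is recovered in the limit q -> 1.
G_q(x) \;=\; \frac{\sqrt{\beta}}{C_q}\,
\bigl[\,1-(1-q)\,\beta x^{2}\,\bigr]_{+}^{\tfrac{1}{1-q}}
\;\;\xrightarrow{\;q\to 1\;}\;\;
\sqrt{\frac{\beta}{\pi}}\;e^{-\beta x^{2}} .
```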
Purity of Gaussian states: Measurement schemes and time evolution in noisy channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio
2003-07-01
We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.
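As a quick aside, the purity of an N-mode Gaussian state is a simple function of its covariance matrix. The sketch below is not the authors' code; it only illustrates the textbook relation μ = 1/(2^N √det σ) in the convention where the vacuum covariance is (1/2)I.

```python
# Purity of an N-mode Gaussian state from its 2N x 2N covariance matrix sigma,
# in the convention where the vacuum has sigma = (1/2) * I (so a single-mode
# thermal state with mean photon number nbar has sigma = (nbar + 1/2) * I).
import numpy as np

def gaussian_purity(sigma):
    sigma = np.asarray(sigma, dtype=float)
    n_modes = sigma.shape[0] // 2
    return 1.0 / (2 ** n_modes * np.sqrt(np.linalg.det(sigma)))

# Examples: the vacuum has purity 1; a thermal state with nbar = 1 has purity 1/3.
print(gaussian_purity(0.5 * np.eye(2)))   # -> 1.0
print(gaussian_purity(1.5 * np.eye(2)))   # -> ~0.333
```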
NASA Astrophysics Data System (ADS)
Gillen-Christandl, Katharina; Frazer, Travis D.
2017-04-01
The standing wave of two identical counter-propagating Gaussian laser beams constitutes a 1D array of bright spots that can serve as traps for single neutral atoms for quantum information operations. Detuning the frequency of one of the beams causes the array to start moving, effectively forming a conveyor belt for the qubits. Using a pair of nested Gaussian laser beams with different beam waists, however, produces a standing wave with a 1D array of dark spot traps confined in all dimensions. We have computationally explored the trap properties and limitations of this configuration and, by trading off trap depth and trap frequencies against the number of traps and the trap photon scattering rate, determined the laser powers and beam waists needed for useful 1D arrays of dark spot traps for trapping and transporting atomic qubits in neutral-atom quantum computing platforms.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study, Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in functional magnetic resonance imaging (fMRI) studies based on parametric statistical methods over the past couple of decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly as Gaussian, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that the heterogeneity and non-Gaussianity of the sACF of the first-level GLM residuals, as well as the temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes, motion, and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA) and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, a nearly Gaussian sACF, and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences under the assumption of Gaussian spatial noise. We further show that the method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored-noise models. Furthermore, the detection power of individual-subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM inferences.
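The PCA-based cleanup of first-level residuals can be sketched in a few lines. The fragment below is illustrative only and is not the authors' implementation; it assumes `residuals` is a (time points × voxels) array and that the number of components to remove, `n_remove`, has already been chosen by whatever criterion the method prescribes.

```python
# Illustrative sketch: remove the leading principal components from first-level
# GLM residuals, treating them as structured (unmodeled) noise.
import numpy as np
from sklearn.decomposition import PCA

def remove_leading_components(residuals, n_remove):
    pca = PCA(n_components=n_remove)
    scores = pca.fit_transform(residuals)      # (T, n_remove) component time-courses
    structured = scores @ pca.components_      # low-rank, mean-free structured part
    return residuals - structured              # residuals with leading PCs removed
```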
Semisupervised Gaussian Process for Automated Enzyme Search.
Mellor, Joseph; Grigoras, Ioana; Carbonell, Pablo; Faulon, Jean-Loup
2016-06-17
Synthetic biology is today harnessing the design of novel and greener biosynthesis routes for the production of added-value chemicals and natural products. The design of novel pathways often requires a detailed selection of enzyme sequences to import into the chassis at each of the reaction steps. To address such design requirements in an automated way, we present here a tool for exploring the space of enzymatic reactions. Given a reaction and an enzyme, the tool provides a probability estimate that the enzyme catalyzes the reaction. Our tool first considers the similarity of a reaction to known biochemical reactions with respect to signatures around their reaction centers. Signatures are defined based on chemical transformation rules by using extended connectivity fingerprint descriptors. A semisupervised Gaussian process model associated with the similar known reactions then provides the probability estimate, using information about both the reaction and the enzyme. These estimates were validated experimentally by applying the Gaussian process model to a newly identified metabolite in Escherichia coli in order to search for the enzymes catalyzing its associated reactions. Furthermore, we show with several pathway design examples how the ability to assign probability estimates to enzymatic reactions can assist bioengineering applications, providing experimental validation of our proposed approach. To the best of our knowledge, this is the first application of Gaussian processes dealing with biological sequences and chemicals, and the use of a semisupervised Gaussian process framework is also novel in the context of machine learning applied to bioinformatics. However, the ability of an enzyme to catalyze a reaction also depends on the affinity between the substrates of the reaction and the enzyme, which is generally quantified by the Michaelis constant KM. Therefore, we also demonstrate the use of Gaussian process regression to predict KM given a substrate-enzyme pair.
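The final KM-regression step lends itself to a brief sketch. The fragment below is a hedged illustration, not the published tool: it assumes a precomputed numeric feature matrix `X` for substrate-enzyme pairs (the paper's signature and sequence descriptors are not reproduced here) and measured log10(KM) values `y`, and it uses a plain supervised Gaussian process from scikit-learn rather than the semisupervised model described above.

```python
# Hedged sketch of Gaussian-process regression for KM prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_km_model(X, y):
    # RBF kernel over the pair descriptors plus a white-noise term for
    # measurement scatter; y is assumed to be log10(KM).
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)
    return gp

# Usage (X_train, log_km_train, X_new are assumed to be precomputed arrays):
# gp = fit_km_model(X_train, log_km_train)
# mean, std = gp.predict(X_new, return_std=True)   # predictive mean and uncertainty
```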
Gaussian noise and time-reversal symmetry in nonequilibrium Langevin models.
Vainstein, M H; Rubí, J M
2007-03-01
We show that in driven systems the Gaussian nature of the fluctuating force and time reversibility are equivalent properties. This result, together with the potential condition on the external force, drastically restricts the form of the probability distribution function, which can be shown to satisfy time-independent relations. We have corroborated this feature by explicitly analyzing a model for the stretching of a polymer and a model for a suspension of noninteracting Brownian particles in steady flow.
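For orientation, a generic driven Langevin model of the kind discussed here can be written (in our notation, not necessarily the authors') as follows.

```latex
% Overdamped Langevin dynamics in a potential V with an external drive F_ext
% and a Gaussian white noise xi of strength D (friction coefficient gamma):
\gamma\,\dot x(t) = -\partial_x V(x) + F_{\mathrm{ext}}(x,t) + \xi(t),
\qquad
\langle \xi(t)\rangle = 0,\qquad
\langle \xi(t)\,\xi(t')\rangle = 2D\,\delta(t-t').
```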
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Campbell, G. S.; Ganzer, V. M.; Joppa, R. G.
1974-01-01
A method is described for generating time histories which model the frequency content and certain non-Gaussian probability characteristics of atmospheric turbulence, including the large gusts and patchy nature of turbulence. Methods for producing the time histories by either analog or digital computation are described. A STOL airplane was programmed into a 6-degree-of-freedom flight simulator, and turbulence time histories from several atmospheric turbulence models were introduced. The pilots' reactions are described.
Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models
NASA Astrophysics Data System (ADS)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
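The paper describes an analytic Gaussian fit; as a simpler hedged illustration of the numerical step it replaces, the sketch below fits the standard Gaussian parametrization C(q) = 1 + λ exp(−R_out² q_out² − R_side² q_side² − R_long² q_long²) to a sampled correlation function, with q given in fm⁻¹ so the radii come out in fm.

```python
# Illustrative numerical Gaussian fit for HBT radii (not the analytic algorithm
# described in the abstract). `q` is an (N, 3) array of (q_out, q_side, q_long)
# points in fm^-1 and `C` the correlation function evaluated at those points.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_correlator(q, lam, r_out, r_side, r_long):
    q_out, q_side, q_long = q[:, 0], q[:, 1], q[:, 2]
    return 1.0 + lam * np.exp(-(r_out * q_out) ** 2
                              - (r_side * q_side) ** 2
                              - (r_long * q_long) ** 2)

def fit_hbt_radii(q, C):
    p0 = (1.0, 5.0, 5.0, 5.0)   # starting guesses: lambda ~ 1, radii ~ 5 fm
    popt, pcov = curve_fit(gaussian_correlator, q, C, p0=p0)
    return popt                  # (lambda, R_out, R_side, R_long)
```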
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it has several limitations in reproducing highly interferential features. Some of these can be managed by allowing the paraxial quantities to be complex-valued, which gives rise to localized solutions known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well defined, the adopted complexification introduces additional initial conditions. While these are usually chosen according to strategies tailored to specific applications, we have implemented a Gabor frame method that initializes a reasonable number of paraxial Gaussian beams in an application-independent way. Since this method applies to a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared with the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. The sparsity of the proposed solution is also discussed.
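For reference, the fundamental paraxial Gaussian beam underlying such beam summations has the standard form below (our notation, quoted for orientation only).

```latex
% Paraxial Gaussian beam: w(z) is the beam radius, R(z) the wavefront curvature
% radius, z_R = pi w_0^2 / lambda the Rayleigh range, zeta(z) the Gouy phase.
E(r,z) \;\propto\; \frac{w_0}{w(z)}\,
\exp\!\left[-\frac{r^{2}}{w(z)^{2}}\right]
\exp\!\left[\,i k z + i\frac{k r^{2}}{2R(z)} - i\,\zeta(z)\right],
\qquad
w(z) = w_0\sqrt{1+\Bigl(\tfrac{z}{z_R}\Bigr)^{2}},\quad
R(z) = z\Bigl[1+\Bigl(\tfrac{z_R}{z}\Bigr)^{2}\Bigr],\quad
\zeta(z) = \arctan\tfrac{z}{z_R}.
```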
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yi; Xue, Wei, E-mail: yw366@cam.ac.uk, E-mail: wei.xue@sissa.it
We study the tilt of the primordial gravitational wave spectrum. A hint of a blue tilt is found from analyzing the BICEP2 and POLARBEAR data. Motivated by this, we explore the possibilities of blue tensor spectra from very-early-universe cosmology models, including null-energy-condition-violating inflation, inflation with general initial conditions, and string gas cosmology. For the simplest G-inflation, a blue tensor spectrum also implies a blue scalar spectrum. In general, inflation models with blue tensor spectra indicate large non-Gaussianities. On the other hand, string gas cosmology predicts a blue tensor spectrum with highly Gaussian fluctuations. If further experiments do confirm the blue tensor spectrum, non-Gaussianity becomes a distinguishing test between inflation and its alternatives.
NASA Astrophysics Data System (ADS)
Liu, Xin; Sanner, Nicolas; Sentis, Marc; Stoian, Razvan; Zhao, Wei; Cheng, Guanghua; Utéza, Olivier
2018-02-01
Single-shot Gaussian-Bessel laser beams of 1 ps pulse duration, 0.9 μm core size, and 60 μm depth of focus are used for drilling micro-channels on the front side of fused silica under ambient conditions. Channels ablated at different pulse energies are fully characterized by AFM and post-processing polishing procedures. We identify experimental energy conditions (typically 1.5 µJ) suitable for fabricating non-tapered channels with a mean diameter of 1.2 µm and a length of 40 μm while maintaining the utmost quality of the front opening of the channels. In addition, by further applying an accurate post-polishing procedure, channels with high surface quality and moderate aspect ratios down to a few units become accessible, which is of interest for the surface micro-structuring of materials, with the prospect of further scalability toward metamaterial specifications.
Phonon arithmetic in a trapped ion system
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Lv, Dingshun; Lu, Yao; An, Shuoming; Zhang, Jing-Ning; Nha, Hyunchul; Kim, M. S.; Kim, Kihwan
2016-04-01
Single-quantum-level operations are important tools for manipulating a quantum state. Annihilation or creation of a single particle translates a quantum state into another by subtracting or adding a particle, in a way that depends on how many are already in the given state. The operations are probabilistic, and their success rate has so far been low in experimental realizations. Here we experimentally demonstrate (near-)deterministic addition and subtraction of a bosonic particle, in particular a phonon of ionic motion in a harmonic potential. We realize the operations by coupling phonons to an auxiliary two-level system and applying transitionless adiabatic passage. We show repeated application of the operations on various initial states and demonstrate, by reconstruction of the density matrices, that the operations preserve coherences. We observe the transformation of a classical state into a highly non-classical one, and of a Gaussian state into a non-Gaussian one, by applying a sequence of operations deterministically.
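The "arithmetic" referred to here is governed by the standard bosonic ladder-operator relations, whose amplitudes depend on the occupation number n:

```latex
% Standard bosonic creation/annihilation relations underlying phonon
% addition and subtraction (the amplitudes depend on the occupation n):
\hat a^{\dagger}\,|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad
\hat a\,|n\rangle = \sqrt{n}\,|n-1\rangle .
```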
A generalized non-Gaussian consistency relation for single field inflation
NASA Astrophysics Data System (ADS)
Bravo, Rafael; Mooij, Sander; Palma, Gonzalo A.; Pradenas, Bastián
2018-05-01
We show that a perturbed inflationary spacetime, driven by a canonical single scalar field, is invariant under a special class of coordinate transformations together with a field reparametrization of the curvature perturbation in co-moving gauge. This transformation may be used to derive the squeezed limit of the 3-point correlation function of the co-moving curvature perturbations, valid even when these do not freeze after horizon crossing. This leads to a generalized version of Maldacena's non-Gaussian consistency relation, in the sense that the squeezed limit of the bispectrum is completely determined by spacetime diffeomorphisms. Just as for the standard consistency relation, this result may be understood as a consequence of how long-wavelength modes modulate those of shorter wavelengths. The relation allows one to derive the well-known violation of the consistency relation encountered in ultra-slow-roll inflation, where curvature perturbations grow exponentially after horizon crossing.
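For context, one common form of the standard (Maldacena) single-field consistency relation being generalized here is, in our conventions, the following.

```latex
% Squeezed limit of the curvature bispectrum in single-field slow-roll
% inflation; P_zeta is the power spectrum and n_s the scalar spectral index
% (quoted in one common convention, for orientation only):
\lim_{k_1\to 0}\,
\langle \zeta_{\mathbf k_1}\zeta_{\mathbf k_2}\zeta_{\mathbf k_3}\rangle
= -(2\pi)^3\,\delta^{(3)}(\mathbf k_1+\mathbf k_2+\mathbf k_3)\,
(n_s-1)\,P_\zeta(k_1)\,P_\zeta(k_3).
```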
Matching optics for Gaussian beams
NASA Technical Reports Server (NTRS)
Gunter, William D. (Inventor)
1991-01-01
A system of matching optics for Gaussian beams is described. The matching optics system is positioned between a light beam emitter (such as a laser) and the input optics of a second optical system, whereby the output from the light beam emitter is converted into an optimum input for the succeeding parts of the second optical system. The matching optics arrangement combines a light beam emitter, such as a laser, with a movable afocal lens pair (telescope) and a single movable lens placed in the laser's output beam. The single movable lens serves as an input to the telescope. If desired, a second lens, which may be fixed, is positioned in the beam before the adjustable lens to serve as an input processor to the movable lens. The system provides the ability to choose waist diameter and position independently and to achieve the desired values with two simple adjustments that do not require iteration.
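The standard way to compute such matching conditions is the complex-beam-parameter (ABCD) formalism. The sketch below is not the patented arrangement itself, just a hedged illustration of that bookkeeping for a free-space gap followed by a thin lens; the example numbers are arbitrary.

```python
# Complex-beam-parameter (ABCD) propagation of a Gaussian beam.
# q = z + i*z_R, with z measured from the waist and z_R = pi * w0**2 / lambda.
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate(q, *elements):
    """Apply ABCD elements in the order the beam meets them: q' = (Aq+B)/(Cq+D)."""
    M = np.eye(2)
    for el in elements:
        M = el @ M
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (A * q + B) / (C * q + D)

def waist_from_q(q, wavelength):
    """Return (waist radius, distance from the current plane to the waist)."""
    z_r = q.imag
    w0 = np.sqrt(wavelength * z_r / np.pi)
    return w0, -q.real

# Example: 1 mm waist HeNe beam, lens of f = 200 mm placed 300 mm downstream.
lam = 633e-9
w0 = 1e-3
q0 = 1j * np.pi * w0 ** 2 / lam              # beam parameter at the waist
q1 = propagate(q0, free_space(0.3), thin_lens(0.2))
print(waist_from_q(q1, lam))                 # new waist size and its location
```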
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
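A small numerical check of the square-root rule stated above (under the small-detection-radius approximation, and not taken from the paper): for a unit-width 1D Gaussian target and detection radius ε, the expected number of trials E(s) ≈ (1/2ε) ∫ p(x)/g_s(x) dx is minimized when the Gaussian search width is s = √2.

```python
# Expected number of trials when searching for a 1D Gaussian target of width
# sigma_t with a Gaussian search distribution of width s, in the small-eps
# approximation E(s) ~ (1/(2*eps)) * Int p(x)/g_s(x) dx. The minimum sits at
# s = sqrt(2)*sigma_t, i.e. g proportional to the square root of the target density.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def expected_trials(s, sigma_t=1.0, eps=0.01):
    integrand = lambda x: norm.pdf(x, scale=sigma_t) / norm.pdf(x, scale=s)
    value, _ = quad(integrand, -8 * sigma_t, 8 * sigma_t)
    return value / (2 * eps)

for s in (1.1, 1.3, np.sqrt(2.0), 1.6, 2.0):
    print(f"search width {s:.3f}: expected trials ~ {expected_trials(s):.0f}")
```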
Single-shot measurement of nonlinear absorption and nonlinear refraction.
Jayabalan, J; Singh, Asha; Oak, Shrikant M
2006-06-01
A single-shot method for the measurement of nonlinear optical absorption and refraction is described and analyzed. The spatial intensity variation of an elliptical Gaussian beam, in conjunction with an array detector, is the key element of this method. The advantages of this single-shot technique were demonstrated by measuring two-photon absorption and free-carrier absorption in GaAs, as well as the nonlinear refractive index of CS2, using a modified optical Kerr setup.
Koch, Peter; Ruebel, Felix; Bartschke, Juergen; L'huillier, Johannes A
2015-11-20
We demonstrate a continuous-wave single-frequency laser at 671.1 nm based on a high-power, 888 nm pumped Nd:YVO4 ring laser at 1342.2 nm. Unidirectional operation of the fundamental ring laser is achieved with the injection-locking technique. A Nd:YVO4 microchip laser serves as the injecting seed source, providing a tunable single-frequency power of up to 40 mW. The ring laser emits a single-frequency power of 17.2 W with a Gaussian beam profile and a beam propagation factor of M2 < 1.1. A 60-mm-long periodically poled MgO-doped LiNbO3 crystal is used to generate the second harmonic in a single-pass scheme. Up to 5.7 W at 671.1 nm with a Gaussian-shaped beam profile and a beam propagation factor of M2 < 1.2 are obtained, which is approximately twice the power of previously reported lasers. This work opens possibilities in cold-atom experiments with lithium, allowing the use of larger ensembles in magneto-optical traps or higher diffraction orders in atomic beam interferometers.
Chaudret, Robin; Gresh, Nohad; Narth, Christophe; Lagardère, Louis; Darden, Thomas A; Cisneros, G Andrés; Piquemal, Jean-Philip
2014-09-04
We demonstrate, as a proof of principle, the capabilities of a novel hybrid MM'/MM polarizable force field to integrate short-range quantum effects into molecular mechanics (MM) through the use of Gaussian electrostatics. This leads to a further gain in accuracy in the representation of the first coordination shell of metal ions. The approach uses advanced electrostatics and couples two point-dipole polarizable force fields, namely the Gaussian electrostatic model (GEM), a density-fitting-based model that uses fitted electronic densities to evaluate nonbonded interactions, and SIBFA (sum of interactions between fragments ab initio computed), which resorts to distributed multipoles. To understand the benefits of Gaussian electrostatics, we first evaluate the accuracy of GEM, a purely density-based Gaussian electrostatics model, on a test Ca(II)-H2O complex. GEM is shown to further improve the agreement of MM polarization with ab initio reference results. Indeed, GEM introduces nonclassical effects by modeling the short-range quantum behavior of electric fields and therefore enables a straightforward (and selective) inclusion of the sole overlap-dependent exchange-polarization repulsive contribution by means of a Gaussian damping function acting on the GEM fields. The S/G-1 scheme is then introduced. By limiting the use of Gaussian electrostatics to metal centers only, it is shown to capture the dominant quantum effects at play in the metal coordination sphere. S/G-1 accurately reproduces ab initio total interaction energies within closed-shell metal complexes, including the separate contributions of induction, polarization, and charge transfer. Applications of the method are provided for various systems, including the HIV-1 NCp7-Zn(II) metalloprotein. S/G-1 is then extended to heavy metal complexes. Tested on Hg(II) water complexes, S/G-1 is shown to accurately model polarization up to the quadrupolar response level. This opens up the possibility of embodying explicit scalar relativistic effects in molecular mechanics thanks to the direct transferability of ab initio pseudopotentials. Therefore, incorporating a GEM-like electron density for a metal cation enables the introduction of nonambiguous short-range quantum effects within any point-dipole-based polarizable force field without the need for extensive parametrization.