A method for selective excitation of Ince-Gaussian modes in an end-pumped solid-state laser
NASA Astrophysics Data System (ADS)
Lei, J.; Hu, A.; Wang, Y.; Chen, P.
2014-12-01
A method for selective excitation of Ince-Gaussian modes is presented. The method is based on the spatial distributions of Ince-Gaussian modes and on transverse mode selection theory. Significant diffraction loss is introduced into a resonator by placing opaque lines at the zero-intensity positions, and this loss allows a specific mode to be excited; we call this method "loss control." We study the method by numerical simulation of a half-symmetric laser resonator. The simulated field is represented by its angular spectrum of plane waves, and its evolution is calculated with the two-dimensional fast Fourier transform algorithm as it passes through the optical elements and propagates back and forth in the resonator. The output lasing modes of our method have an overlap of over 90% with the target Ince-Gaussian modes. The method will be beneficial to further study of the properties and potential applications of Ince-Gaussian modes.
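The angular-spectrum propagation step described above can be sketched in a few lines. This is a generic illustration, not the authors' code; the grid size, wavelength, beam waist, and propagation distance below are arbitrary assumed values.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, dz):
    """Propagate a complex optical field a distance dz in free space
    using the angular spectrum of plane waves and the 2-D FFT."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2 * np.pi) ** 2 * (FX**2 + FY**2)
    prop = kz_sq > 0                          # keep propagating components only
    H = np.where(prop, np.exp(1j * np.sqrt(np.where(prop, kz_sq, 0.0)) * dz), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters: 1064 nm beam, 10 um grid, 0.5 mm waist, 10 cm hop
n, dx = 256, 10e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (0.5e-3) ** 2)
out = angular_spectrum_propagate(beam, dx, 1.064e-6, 0.1)
```

Because the transfer function has unit modulus for all propagating components, a single hop conserves energy while the beam diffracts slightly; iterating such hops between mirrors and loss masks is the essence of a Fox-Li-style resonator simulation.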
Vilseck, Jonah Z.; Kostal, Jakub; Tirado-Rives, Julian; Jorgensen, William L.
2015-01-01
Hybrid quantum mechanics and molecular mechanics (QM/MM) computer simulations have become an indispensable tool for studying chemical and biological phenomena for systems too large to treat with quantum mechanics alone. For several decades, semi-empirical QM methods have been used in QM/MM simulations. However, with increased computational resources, the introduction of ab initio and density functional methods into on-the-fly QM/MM simulations is increasingly preferred. This adaptation can be accomplished with a program interface that tethers independent QM and MM software packages. This report introduces such an interface for the BOSS and Gaussian programs, featuring modification of BOSS to request QM energies and partial atomic charges from Gaussian. A customizable C-shell linker script facilitates the inter-program communication. The BOSS–Gaussian interface also provides convenient access to Charge Model 5 (CM5) partial atomic charges for multiple purposes, including QM/MM studies of reactions. In this report, the BOSS–Gaussian interface is applied to a nitroaldol (Henry) reaction and two methyl transfer reactions in aqueous solution. Improved agreement with experiment is found when free-energy surfaces are determined with MP2/CM5 QM/MM simulations rather than with the semiempirical methods employed in previously reported investigations. PMID:26311531
Vilseck, Jonah Z; Kostal, Jakub; Tirado-Rives, Julian; Jorgensen, William L
2015-10-15
Hybrid quantum mechanics and molecular mechanics (QM/MM) computer simulations have become an indispensable tool for studying chemical and biological phenomena for systems too large to treat with QM alone. For several decades, semiempirical QM methods have been used in QM/MM simulations. However, with increased computational resources, the introduction of ab initio and density functional methods into on-the-fly QM/MM simulations is increasingly preferred. This adaptation can be accomplished with a program interface that tethers independent QM and MM software packages. This report introduces such an interface for the BOSS and Gaussian programs, featuring modification of BOSS to request QM energies and partial atomic charges from Gaussian. A customizable C-shell linker script facilitates the interprogram communication. The BOSS-Gaussian interface also provides convenient access to Charge Model 5 (CM5) partial atomic charges for multiple purposes, including QM/MM studies of reactions. In this report, the BOSS-Gaussian interface is applied to a nitroaldol (Henry) reaction and two methyl transfer reactions in aqueous solution. Improved agreement with experiment is found when free-energy surfaces are determined with MP2/CM5 QM/MM simulations rather than with the semiempirical methods used in previously reported investigations. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.
2016-06-01
In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even for large-size problems by separating the time, space, and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
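Sampling from a Gaussian kernel density estimate, as used above for the random variables of the decomposition, amounts to resampling a data point and adding kernel noise. A minimal 1-D sketch, where the bandwidth rule and the bimodal test data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_kde(data, n_samples):
    """Draw from a 1-D Gaussian KDE of `data`: pick a data point at
    random, then add Gaussian kernel noise (Scott's-rule bandwidth)."""
    data = np.asarray(data, dtype=float)
    h = data.std() * data.size ** (-1.0 / 5.0)       # Scott's rule of thumb
    idx = rng.integers(0, data.size, size=n_samples)
    return data[idx] + rng.normal(scale=h, size=n_samples)

# Bimodal, clearly non-Gaussian "observations"
data = np.concatenate([rng.normal(-3, 0.5, 400), rng.normal(3, 0.5, 600)])
samples = sample_gaussian_kde(data, 5000)
```

The generated samples reproduce the bimodal shape of the data, which a fitted single Gaussian could not.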
A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.
Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian
2017-05-01
This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak position is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters from the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested on simulated and measured spectra from an energy X-ray spectrometer and compared with some existing methods. The results show that the proposed method can directly estimate the peak intensity of EDXRF independently of the background, and can also effectively distinguish overlapping peaks in an EDXRF spectrum.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
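The two-step recipe above — spectral shaping of white Gaussian noise, then a memoryless transform through the Gaussian CDF into the target inverse CDF — can be sketched as follows. The Gaussian spectral filter and the negative-exponential target pdf are illustrative choices, not the paper's examples.

```python
import numpy as np
from scipy import special

rng = np.random.default_rng(1)

def simulate_2d(n, spec_filter, target_icdf):
    """White Gaussian noise -> colored Gaussian noise (spectral shaping)
    -> pointwise CDF transform to the target amplitude distribution."""
    white = rng.standard_normal((n, n))
    shaped = np.fft.ifft2(np.fft.fft2(white) * spec_filter).real
    shaped /= shaped.std()                                  # unit-variance Gaussian field
    u = 0.5 * (1.0 + special.erf(shaped / np.sqrt(2.0)))    # Gaussian CDF -> uniform
    return target_icdf(u)

# Illustrative target: negative-exponential intensity pdf (fully developed speckle)
n = 128
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))                # Gaussian spectral filter
field = simulate_2d(n, H, lambda u: -np.log1p(-np.clip(u, 0.0, 1.0 - 1e-12)))
```

The marginal distribution is exactly the target (here exponential with unit mean), while the correlation structure is inherited, slightly distorted by the nonlinearity, from the spectral filter — the paper's caveat that the method is "an engineering approach".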
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified ASD is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the cumulative distribution function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD; the method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
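A 1-D sketch of the same idea, hedged as an illustration rather than the author's algorithm: a Gaussian realization with the prescribed ASD is built by inverse FFT with random phases, then pushed through the monotonic CDF-to-CDF map. The band-limited ASD and the Laplace target pdf are assumed examples.

```python
import numpy as np
from scipy.stats import norm, laplace

rng = np.random.default_rng(2)

def non_gaussian_history(n, asd):
    """Zero-mean history with a prescribed ASD shape and Laplace pdf:
    (1) Gaussian realization via random-phase inverse FFT,
    (2) monotonic zero-memory transform through the two CDFs."""
    phases = np.exp(2j * np.pi * rng.random(n // 2 + 1))
    spec = np.sqrt(asd) * phases
    spec[0] = 0.0                                  # enforce zero mean
    x = np.fft.irfft(spec, n)
    x /= x.std()
    # monotonic map: Gaussian CDF -> target (Laplace) inverse CDF
    return laplace.ppf(norm.cdf(x))

n = 4096
f = np.fft.rfftfreq(n)
asd = np.where((f > 0.05) & (f < 0.2), 1.0, 1e-6)  # band-limited ASD
y = non_gaussian_history(n, asd)
```

Because the map is monotonic, zero-crossings and the locations of peaks are preserved, so the spectrum is only mildly distorted while the marginal pdf becomes exactly Laplace (heavier-tailed than Gaussian).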
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome current brute-force parameter optimization methods in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass introduced here. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of describing edge, topology, and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel parameter optimization. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.
NASA Astrophysics Data System (ADS)
Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing
2018-05-01
We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. The method enables Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data, replayed in simulation.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals, and to determine which method is best based on the characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was affected only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
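The parametric and non-parametric approaches compared above can be sketched for a 95% reference interval; the Gaussian analyte with mean 100 and SD 10 is an assumed example, and the robust approach is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def ri_parametric(x):
    """Parametric 95% reference interval: mean +/- 1.96 SD (assumes Gaussian data)."""
    m, s = x.mean(), x.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def ri_nonparametric(x):
    """Non-parametric 95% reference interval: 2.5th and 97.5th percentiles."""
    return np.percentile(x, 2.5), np.percentile(x, 97.5)

# Gaussian analyte: the two estimates should roughly agree
x = rng.normal(100.0, 10.0, 500)
lo_p, hi_p = ri_parametric(x)
lo_n, hi_n = ri_nonparametric(x)
```

On Gaussian data both methods target the same limits (here 80.4 and 119.6), but the parametric limits are more precise; on skewed data the parametric limits become biased, which is the trade-off the study quantifies.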
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a model based on a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods; we then propose an improved importance sampling algorithm, called linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS-GHM is very promising.
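The core idea of importance sampling with a Gaussian importance function can be sketched on a toy two-node network; the prior, likelihood, and evidence value below are assumed for illustration and do not come from the paper's test models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy hybrid network: Gaussian prior on x, Gaussian likelihood for the
# observed evidence y = 2.0 with noise sd 0.5 (all values illustrative)
def unnormalized_posterior(x):
    return stats.norm.pdf(x, 0.0, 1.0) * stats.norm.pdf(2.0, x, 0.5)

# Linear-Gaussian importance function roughly matched to the posterior
mu_q, sd_q = 1.6, 0.5
xs = rng.normal(mu_q, sd_q, 20000)
w = unnormalized_posterior(xs) / stats.norm.pdf(xs, mu_q, sd_q)
posterior_mean = np.sum(w * xs) / np.sum(w)    # self-normalized estimate
```

By conjugacy the exact posterior here is Gaussian with mean 1.6, so a well-matched importance function yields low-variance weights; adaptively learning that importance function from previous samples is the feature the paper emphasizes.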
NASA Astrophysics Data System (ADS)
Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang
2018-03-01
Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A review of the literature indicated a lack of effective methods for simulating random rough bioparticle surfaces. In this study, a new method combining a Gaussian distribution, the Fourier transform, the spectrum method, and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and root mean square (σ) parameters. The new method proposed in this study shows notable superiority over conventional methods for simulating randomly rough foulant bioparticles. The ease, facility, and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
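A standard spectral-method sketch of a Gaussian random rough surface, governed by exactly the two parameters highlighted above (correlation length and rms height); this is a generic construction, not the authors' implementation, and the parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

def rough_surface(n, sigma, corr_len):
    """Gaussian random rough surface (spectral method): filter white
    Gaussian noise with the square root of a Gaussian power spectrum,
    then rescale so the rms height equals sigma. corr_len is in grid units."""
    f = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(f, f)
    H = np.exp(-((np.pi * corr_len) ** 2) * (FX**2 + FY**2) / 2.0)
    noise = rng.standard_normal((n, n))
    z = np.fft.ifft2(np.fft.fft2(noise) * H).real
    return sigma * z / z.std()       # enforce the requested rms height

z = rough_surface(256, sigma=0.5, corr_len=10.0)
```

Larger `corr_len` produces smoother, longer-wavelength undulations; larger `sigma` scales the height fluctuations — the same two knobs (l and σ) the abstract identifies as critical.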
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Campbell, G. S.; Ganzer, V. M.; Joppa, R. G.
1974-01-01
A method is described for generating time histories which model the frequency content and certain non-Gaussian probability characteristics of atmospheric turbulence, including the large gusts and patchy nature of turbulence. Methods for generating time histories using either analog or digital computation are described. A STOL airplane was programmed into a 6-degree-of-freedom flight simulator, and turbulence time histories from several atmospheric turbulence models were introduced. The pilots' reactions are described.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
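The dense-grid search over the multimodal period likelihood can be illustrated with a deliberately simplified stand-in: a plain least-squares sinusoid fit per grid period, omitting the Gaussian process component entirely. All data below (true period, sampling window, noise level) are assumed values, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(6)

def grid_period_search(t, y, periods):
    """Dense-grid period search: fit mean + sine + cosine at each trial
    period and keep the period with the smallest residual sum of squares."""
    best_p, best_rss = periods[0], np.inf
    for p in periods:
        X = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * t / p),
                             np.cos(2 * np.pi * t / p)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = np.sum((y - X @ beta) ** 2)
        if rss < best_rss:
            best_p, best_rss = p, rss
    return best_p

# Sparse, irregular sampling of a sinusoid with true period 333 days
t = np.sort(rng.uniform(0.0, 2000.0, 60))
y = 2.0 * np.sin(2 * np.pi * t / 333.0) + rng.normal(0.0, 0.3, t.size)
p_hat = grid_period_search(t, y, np.linspace(100.0, 600.0, 2000))
```

The grid guards against the optimizer stalling in a secondary mode of the likelihood; the paper's refinement is to evaluate a full GP marginal likelihood, rather than a least-squares criterion, at each grid point.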
Simulation of time series by distorted Gaussian processes
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1977-01-01
A distorted stationary Gaussian process can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, J. Berian, E-mail: berian@berkeley.edu
2012-05-20
We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity that appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.
Kang; Ih; Kim; Kim
2000-03-01
In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function representing the directional distribution of incident energy on the wall of a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, which validates the proposed Gaussian function approach.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
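The kind of simulation described above is easy to reproduce in outline. A minimal sketch using only the Shapiro-Wilk test (one of the study's four tests), with illustrative trial counts rather than the study's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def shapiro_rejection_rate(sampler, n, trials=200, alpha=0.05):
    """Fraction of size-n samples for which Shapiro-Wilk rejects normality."""
    rejections = 0
    for _ in range(trials):
        _, p = stats.shapiro(sampler(n))
        rejections += p < alpha
    return rejections / trials

# At n = 30: a Gaussian parent is rejected ~alpha of the time, while a
# lognormal parent is rejected far more often, though not in every trial
rate_gauss = shapiro_rejection_rate(lambda n: rng.normal(size=n), 30)
rate_lognormal = shapiro_rejection_rate(lambda n: rng.lognormal(size=n), 30)
```

Every lognormal sample that escapes rejection is one that would, under the guidelines, be handed to a parametric RI method despite its skewed parent — the failure mode the study quantifies.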
Non-Gaussian Methods for Causal Structure Learning.
Shimizu, Shohei
2018-05-22
Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences, including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because they explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are obtained, which means that the data distribution may contain more information than the covariance matrix can capture. Thus, many new methods have recently been proposed that use the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables, under assumptions, even in the presence of unobserved common causes. This feature is in contrast to other approaches. A simulated example is also provided.
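A toy demonstration of the LiNGAM-style idea for two variables: regress each variable on the other, and prefer the direction whose residual looks independent of its regressor. The crude squared-value correlation used below as a dependence proxy is an illustrative simplification of the independence measures used in actual LiNGAM estimators, and the uniform-noise data are assumed.

```python
import numpy as np

rng = np.random.default_rng(8)

def dependence(z, r):
    """Crude dependence proxy: correlation between squared values.
    It is near zero when regressor z and residual r are independent."""
    return abs(np.corrcoef(z**2, r**2)[0, 1])

def causal_direction(x, y):
    """Pick the direction whose OLS residual looks independent of the
    regressor -- the key idea behind LiNGAM-style pairwise discovery."""
    bxy = np.cov(x, y)[0, 1] / np.var(x)   # slope for model x -> y
    byx = np.cov(x, y)[0, 1] / np.var(y)   # slope for model y -> x
    dep_fwd = dependence(x, y - bxy * x)
    dep_bwd = dependence(y, x - byx * y)
    return "x->y" if dep_fwd < dep_bwd else "y->x"

# Works only because the noise is non-Gaussian (here: uniform)
x = rng.uniform(-1, 1, 5000)
y = 0.8 * x + rng.uniform(-1, 1, 5000)
direction = causal_direction(x, y)
```

With Gaussian noise both directions would yield residuals uncorrelated with, and in fact independent of, the regressor, so the two models would be indistinguishable — exactly the limitation of covariance-only methods that the paper describes.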
Gaussian vs non-Gaussian turbulence: impact on wind turbine loads
NASA Astrophysics Data System (ADS)
Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.
2014-12-01
In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Proper Orthogonal Decomposition in which non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain exactly the same second-order statistics, including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (as in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with both the non-Gaussian and the newly constructed Gaussian fields, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields.
The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimates, on the other hand, non-Gaussian effects tend to increase the overall dynamical load and hence can be of importance in wind energy load estimation.
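The Gaussianization idea above (keep the second-order statistics, discard everything else) can be sketched in a few lines of NumPy. This is not the authors' POD-based transform of full LES fields; it is a simplified 1-D illustration in which a Gaussian surrogate is drawn from the empirical mean and covariance of a non-Gaussian sample, so the surrogate matches the second-order statistics by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_surrogate(x, rng):
    """Return Gaussian random vectors with (approximately) the same
    mean and covariance as the input sample matrix x.

    x : (n_samples, n_points) array of realizations of a random field.
    """
    mu = x.mean(axis=0)
    c = np.cov(x, rowvar=False)                    # empirical covariance
    # small jitter keeps the Cholesky factorization well conditioned
    L = np.linalg.cholesky(c + 1e-10 * np.eye(c.shape[0]))
    z = rng.standard_normal(x.shape)
    return mu + z @ L.T                            # Gaussian, same 2nd-order stats

# strongly non-Gaussian field: exponentiated correlated noise
n, m = 4000, 32
base = rng.standard_normal((n, m))
field = np.exp(0.5 * np.cumsum(base, axis=1) / np.sqrt(np.arange(1, m + 1)))
surr = gaussian_surrogate(field, rng)

# covariances agree to within sampling error, skewness collapses
err = np.abs(np.cov(surr, rowvar=False) - np.cov(field, rowvar=False)).max()
```

The skewness of the surrogate drops to near zero while its covariance matches the original, which is exactly the isolation of non-Gaussian effects that the study relies on.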
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate the periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in the period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
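A minimal sketch of the grid-search step, assuming a squared-exponential kernel with fixed (not quasi-Newton-optimized) hyperparameters and a profiled sinusoid-plus-offset mean; the synthetic "light curve" and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(t, y, freq, ell=50.0, amp=0.1, noise=0.05):
    """Profile log-likelihood of a sinusoid-plus-GP model at one trial
    frequency.  The squared-exponential kernel hyperparameters are held
    fixed here, whereas the paper re-optimizes them at each grid point."""
    phi = np.column_stack([np.sin(2 * np.pi * freq * t),
                           np.cos(2 * np.pi * freq * t),
                           np.ones_like(t)])
    d = t[:, None] - t[None, :]
    C = amp**2 * np.exp(-0.5 * (d / ell)**2) + noise**2 * np.eye(len(t))
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(phi.T @ Ci @ phi, phi.T @ Ci @ y)  # GLS sinusoid fit
    r = y - phi @ beta
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ Ci @ r + logdet)

# sparsely, irregularly sampled light curve with a true period of 300 d
t = np.sort(rng.uniform(0.0, 1500.0, 60))
y = 0.8 * np.sin(2 * np.pi * t / 300.0) + 0.05 * rng.standard_normal(60)

freqs = np.linspace(1 / 1000, 1 / 100, 2000)      # dense frequency grid
period = 1.0 / freqs[np.argmax([log_like(t, y, f) for f in freqs])]
```

Scanning frequency rather than period keeps the grid density uniform where aliasing is worst, which is the same reason the paper grids in frequency.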
Gaussian theory for spatially distributed self-propelled particles
NASA Astrophysics Data System (ADS)
Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza
2016-12-01
Obtaining a reduced description in terms of particle and momentum flux densities starting from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set to zero all Fourier modes of the orientation distribution above a given number. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the directions of the particles. First, by means of simulations of the microscopic model, we verify that the distribution of individual directions is well fitted by a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with the results of the particle simulations. We find that the global polarization in the GA exhibits hysteresis in its dependence on the noise intensity, showing qualitatively the same behavior as the particle simulations. Moreover, the two global polarizations agree perfectly for low noise intensities, and the spatiotemporal structures of the GA are also in agreement with the simulations. We conclude that the GA shows qualitative agreement over a wide range of noise intensities. In particular, for low noise intensities the agreement with simulations is better than that of other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields with category fields obtained by discrete simulation methods such as multipoint, sequential indicator, and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRFs), which enable the conditional distributions at any point to be computed without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be computed efficiently by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range, and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Our approach therefore makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
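The coding-set (checkerboard) update can be illustrated on a small untruncated torus GMRF; the precision parameters below are illustrative, and the acceptance/rejection step needed in the truncated case is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def neighbour_sum(x):
    """4-neighbour sum on a torus (the 'convolution' of the abstract)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1))

def gibbs_sweep(x, kappa, beta, rng):
    """One sweep of simultaneous Gibbs updates on the two checkerboard
    coding sets of the GMRF with full conditionals
        x_ij | rest ~ N(beta/kappa * sum of 4 neighbours, 1/kappa).
    Sites of one colour share no neighbours, so each colour can be
    updated in a single vectorized step."""
    ii, jj = np.indices(x.shape)
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        mean = beta / kappa * neighbour_sum(x)
        noise = rng.standard_normal(x.shape) / np.sqrt(kappa)
        x[mask] = mean[mask] + noise[mask]
    return x

n, kappa, beta = 64, 1.0, 0.24     # kappa > 4*beta keeps the precision SPD
x = rng.standard_normal((n, n))
for _ in range(200):
    x = gibbs_sweep(x, kappa, beta, rng)

# short-scale correlation is restored quickly, as the abstract notes
xc = x - x.mean()
lag1 = np.mean(xc * np.roll(xc, 1, 0)) / np.var(x)
```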
Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G
2017-07-01
This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. 
It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where the columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
An algorithm for separation of mixed sparse and Gaussian sources
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both non-Gaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute the estimates of the non-Gaussian sources. It is therefore desirable to have methods for mixed ICA/PCA that can separate mixtures of Gaussian and non-Gaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated-estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the non-Gaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition. PMID:28414814
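MIPReSt itself ranks components by reproducibility across subsampled decompositions; as a much smaller hedged sketch, the snippet below shows only the underlying separation step: a hand-rolled one-unit FastICA recovering a single sparse (Laplace) source hidden among Gaussian sources after PCA whitening. All sizes and the tanh contrast are illustrative choices, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

def whiten(x):
    """PCA (ZCA) whitening: rotate/scale rows of x to unit covariance."""
    x = x - x.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(x @ x.T / x.shape[1])
    return (u / np.sqrt(s)) @ u.T @ x

def fastica_unit(z, rng, iters=300):
    """One-unit FastICA fixed-point iteration (tanh nonlinearity) on
    whitened data z; converges to a maximally non-Gaussian direction."""
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wx = w @ z
        w = (z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx)**2).mean() * w
        w /= np.linalg.norm(w)
    return w

# one sparse (Laplace) source hidden among two Gaussian sources
n = 20000
s_sparse = rng.laplace(size=n)
sources = np.vstack([s_sparse, rng.standard_normal(n), rng.standard_normal(n)])
x = rng.standard_normal((3, 3)) @ sources          # random mixing

z = whiten(x)
w = fastica_unit(z, rng)
recovered = w @ z
corr = abs(np.corrcoef(recovered, s_sparse)[0, 1])
```

The Gaussian sources span a subspace in which no direction is preferred by the contrast function, which is why ICA alone cannot resolve them and a PCA basis must be kept for that subspace.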
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). Methods for computing the stochastic response probability density functions (PDFs) and the extinction time statistics are developed. A Taylor expansion is used to handle the non-polynomial nonlinear terms of the model when deriving the moment equations with Gaussian closure, which the STGA needs in order to compute the one-step transition probabilities. The work is validated against direct Monte Carlo simulations. We present the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and the noise intensities on the steady-state PDFs are discussed. It is also found that the effects of the noise intensities on the extinction time statistics are opposite to their effects on the limiting probability distributions of the surviving species.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and the crossing level. Using the predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed, and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (PSD). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique, and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal PSDs. The adaptation uses two autoregressive processes to simulate the extrema due to each mode and superposes the two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal PSDs having the frequency of one mode at least 2.5 times that of the other mode.
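The autoregressive synthesis idea can be sketched with the simplest family member, an AR(1) process whose extrema are then extracted directly; the paper's technique targets higher-order ARMA fits to a unimodal PSD, so this is only an illustration of the mechanics:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar1(phi, n, rng):
    """AR(1) synthesis x[k] = phi*x[k-1] + e[k].  The innovation variance
    is scaled so the process has unit variance and lag-1 autocorrelation
    equal to phi."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    sigma_e = np.sqrt(1.0 - phi**2)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + sigma_e * rng.standard_normal()
    return x

x = simulate_ar1(0.7, 200_000, rng)
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]

# extrema (local maxima) of the synthesized process -- the quantity the
# rainflow stress-range moments E[R^m] actually depend on
interior = x[1:-1]
peaks = interior[(interior > x[:-2]) & (interior > x[2:])]
```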
NASA Astrophysics Data System (ADS)
Wolfsteiner, Peter; Breuer, Werner
2013-10-01
The assessment of fatigue load under random vibrations is usually based on load spectra. Typically these are computed with counting methods (e.g. rainflow counting) applied to a time-domain signal. Alternatively, methods are available (e.g. Dirlik's) that estimate load spectra directly from the power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD-based methods have the enormous advantage that, if for example the signal to be assessed results from a finite-element vibration analysis, computing PSDs in the frequency domain is far faster than simulating time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of PSD-based simulation of vibrations, and also of PSD-based load-spectra estimation, is their limitation to Gaussian-distributed time signals. Deviations from the Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases, usually only computation-intensive time-domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with realistic statistical properties that is still able to use the efficient PSD approach with its computation-time advantages. Essentially, it is based on a decomposition of the non-Gaussian signal into Gaussian-distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and for the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without resorting to inaccurate standard procedures.
Furthermore, the basic intention is to design a general, integrated method that is not just able to analyse a single load case over a small time interval, but can generate representative PSD and CPSD spectra that replace extensive measured loads in the time domain without losing the accuracy necessary for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications; the impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that are also Gaussian distributed. A non-Gaussian distribution in the excitation signal also produces a non-Gaussian response, with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. the Ito calculus [6], which is usually not part of commercial codes). There are several well-established procedures for predicting fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broad-banded loads.
It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering the highly non-Gaussian excitations that the wheel-rail contact produces on railway vehicles, the fast PSD analysis in the frequency domain cannot simply be combined with load-spectra prediction methods for PSDs.
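The spectral moments that PSD-based estimators such as Dirlik's formula consume can be computed directly from a one-sided PSD; the flat narrow-band PSD below is a toy input, not a rail-vehicle spectrum:

```python
import numpy as np

def spectral_moments(f, psd, orders=(0, 1, 2, 4)):
    """Spectral moments m_i = integral of f**i * S(f) df -- the basic
    inputs to PSD-based load-spectra estimators such as Dirlik's."""
    out = {}
    for i in orders:
        g = f**i * psd
        out[i] = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(f)))  # trapezoid rule
    return out

# narrow-band example: flat PSD between 9 and 11 Hz
f = np.linspace(0.0, 20.0, 2001)
psd = np.where((f >= 9.0) & (f <= 11.0), 1.0, 0.0)
m = spectral_moments(f, psd)

# irregularity factor near 1 marks a narrow-band (single-mode) process;
# bimodal PSDs push it well below 1, which is where a decomposition
# into separate Gaussian parts becomes attractive
alpha2 = m[2] / np.sqrt(m[0] * m[4])
```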
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, making the changepoint detection methods applicable to non-Gaussian data series such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased detection power in comparison with the corresponding original (untransformed) methods, and that the transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
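A small sketch of the Box-Cox step, with lambda chosen by the standard profile log-likelihood on a grid; the Gamma-distributed sample stands in for daily precipitation amounts:

```python
import numpy as np

rng = np.random.default_rng(5)

def box_cox(x, lam):
    """Box-Cox power transform for strictly positive data."""
    if abs(lam) < 1e-12:
        return np.log(x)
    return (x**lam - 1.0) / lam

def best_lambda(x, grid=np.linspace(-2.0, 2.0, 81)):
    """Choose lambda by the standard Box-Cox profile log-likelihood
    (normal likelihood of the transformed data plus Jacobian term)."""
    n = len(x)
    logx_sum = np.log(x).sum()
    def ll(lam):
        y = box_cox(x, lam)
        return -0.5 * n * np.log(y.var()) + (lam - 1.0) * logx_sum
    return max(grid, key=ll)

# positively skewed stand-in for daily precipitation amounts
x = rng.gamma(shape=0.8, scale=5.0, size=5000)
lam = best_lambda(x)
y = box_cox(x, lam)

skew_before = (((x - x.mean()) / x.std())**3).mean()
skew_after = (((y - y.mean()) / y.std())**3).mean()
```

After the transform the Gaussian-based changepoint tests can be applied to `y` as if it were normally distributed, which is the whole point of the integration the abstract describes.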
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions of parametric tests (such as ANOVAs) is that variables are normally distributed, so it is widely acknowledged that the normality assumption is not met. This paper presents different procedures for normalizing data sampled from an Ex-Gaussian distribution in such a way that they become suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results.
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.
García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G
2017-08-01
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view, and the inference therefore takes place directly in that space. To cope with the computational demands of Gaussian processes, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to calculate half-count images from already acquired data, White and Lawson published a method based on Poisson resampling, verified experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated, based on a Poisson and on a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100; only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images: it correctly simulates the statistical properties, also in the case of rounding off of the images.
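Binomial thinning is one way to express the Poisson resampling idea (thinning a Poisson(c) count with keep-probability 1/2 yields an exact Poisson(c/2) count); the Gaussian redrawing alternative discussed in the comment is included for contrast. Counts and image size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def half_count_resample(img, rng):
    """Poisson resampling: keep each acquired count with probability 1/2,
    i.e. binomial thinning of the full-count image.  Thinning a Poisson(c)
    count gives an exact Poisson(c/2) count, so the half-count image
    retains correct Poisson statistics."""
    return rng.binomial(img, 0.5)

def half_count_redraw_gaussian(img, rng):
    """Gaussian redrawing: approximate Poisson(c/2) by N(c/2, c/2),
    rounded and clipped to non-negative integers."""
    return np.clip(np.rint(rng.normal(img / 2.0, np.sqrt(img / 2.0))), 0, None)

full = rng.poisson(100.0, size=(256, 256))     # flood-source-like image
half = half_count_resample(full, rng)
halfg = half_count_redraw_gaussian(full, rng)

ratio_mean = half.mean() / full.mean()
ratio_var = half.var() / full.var()            # both should be near 0.5
```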
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for processing mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian whose parameters describe the location, height and width of the corresponding peak. The authors' own implementation of the expectation-maximisation (EM) algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer, a tool originally designed to generate sets of spectra with defined parameters.
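A plain sample-based EM fit of a two-component 1-D Gaussian mixture illustrates the peak model (one weight, location and width per Gaussian); the cited work fits binned spectra with its own EM variant, so treat this as a generic sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def em_gmm_1d(x, k, iters=200):
    """Plain EM for a 1-D Gaussian mixture: each component's weight,
    mean and width plays the role of one spectral peak."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out init
    sig = np.full(k, x.std() / k)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = (x[:, None] - mu) / sig
        p = w * np.exp(-0.5 * d**2) / (sig * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and widths
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk)
    return w, mu, sig

# two well-separated synthetic 'peaks'
x = np.concatenate([rng.normal(100.0, 2.0, 3000), rng.normal(130.0, 3.0, 1500)])
w, mu, sig = em_gmm_1d(x, 2)
order = np.argsort(mu)
```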
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Borland, M.
Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.
AUTONOMOUS GAUSSIAN DECOMPOSITION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
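The derivative-spectroscopy seeding idea can be sketched without the machine-learning part: smooth the spectrum, then take downward zero-crossings of the first derivative (with negative curvature and a height cut) as initial guesses for the Gaussian centres. The smoothing width, noise level and threshold below are arbitrary illustrative choices, not AGD's trained values:

```python
import numpy as np

rng = np.random.default_rng(8)

def gaussian(x, a, c, s):
    return a * np.exp(-0.5 * ((x - c) / s)**2)

def initial_guesses(x, y, smooth_width=15):
    """Derivative-spectroscopy seed finder: smooth the spectrum, then
    keep points where dy/dx changes sign from + to -, the curvature is
    negative, and the smoothed amplitude clears a height threshold."""
    k = np.arange(-3 * smooth_width, 3 * smooth_width + 1)
    kernel = np.exp(-0.5 * (k / smooth_width)**2)
    ys = np.convolve(y, kernel / kernel.sum(), mode="same")
    d1 = np.gradient(ys, x)
    d2 = np.gradient(d1, x)
    idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
                   & (ys[:-1] > 0.1 * ys.max()))[0]
    return x[idx]

# synthetic two-component spectrum with mild noise
x = np.linspace(-50.0, 50.0, 2001)
y = (gaussian(x, 1.0, -15.0, 4.0) + gaussian(x, 0.6, 12.0, 6.0)
     + 0.01 * rng.standard_normal(x.size))
centres = initial_guesses(x, y)
```

These seeds would then be handed to a least-squares fitter; AGD's contribution is learning how aggressively to smooth and threshold, which this sketch hard-codes.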
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, R.; Dickerson, M.A.; Peterson, K.R.
Two numerical models for the calculation of air concentration and ground deposition of airborne effluent releases are compared: the Particle-in-Cell (PIC) model and the Straight-Line Airflow Gaussian model. Two sites were selected for comparison: the Hudson River Valley, New York, and the area around the Savannah River Plant, South Carolina. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions, with better agreement between the two methods during neutral to unstable conditions. In addition, because it accounts for exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model.
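For reference, the straight-line Gaussian plume model being compared has a closed form; the sketch below uses Briggs-style rural dispersion coefficients for a single illustrative stability class, not the meteorological inputs of the study:

```python
import numpy as np

def gaussian_plume(x, y, z, q=1000.0, u=5.0, h=50.0):
    """Straight-line Gaussian plume concentration with ground reflection:
        C = q/(2*pi*u*sy*sz) * exp(-y^2/2sy^2)
            * [exp(-(z-h)^2/2sz^2) + exp(-(z+h)^2/2sz^2)]
    for release rate q, wind speed u and effective stack height h.
    Dispersion coefficients sy, sz follow Briggs-style rural power laws
    for one illustrative near-neutral stability class."""
    sy = 0.11 * x / np.sqrt(1 + 0.0001 * x)
    sz = 0.08 * x / np.sqrt(1 + 0.0002 * x)
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

# ground-level centreline concentration 1 km downwind of a 50 m stack
c0 = gaussian_plume(1000.0, 0.0, 0.0)
```

The model's inability to represent flow reversal is visible in the formula itself: concentration is defined only for positive downwind distance x, which is exactly the limitation the PIC comparison highlights.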
Gaussian Accelerated Molecular Dynamics: Theory, Implementation, and Applications
Miao, Yinglong; McCammon, J. Andrew
2018-01-01
A novel Gaussian Accelerated Molecular Dynamics (GaMD) method has been developed for simultaneous unconstrained enhanced sampling and free energy calculation of biomolecules. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of the biomolecules. Furthermore, by constructing a boost potential that follows a Gaussian distribution, accurate reweighting of GaMD simulations is achieved via cumulant expansion to the second order. The free energy profiles obtained from GaMD simulations allow us to identify distinct low energy states of the biomolecules and characterize biomolecular structural dynamics quantitatively. In this chapter, we present the theory of GaMD, its implementation in the widely used molecular dynamics software packages (AMBER and NAMD), and applications to the alanine dipeptide biomolecular model system, protein folding, biomolecular large-scale conformational transitions and biomolecular recognition. PMID:29720925
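The boost potential and its second-order cumulant reweighting can be sketched as follows (a toy illustration, not the AMBER/NAMD implementation; the "boost samples" are synthetic and Gaussian by construction, the regime GaMD aims for, in which the second-order cumulant expansion is exact):

```python
import numpy as np

BETA = 1.0 / (0.001987 * 300.0)  # 1/(kB*T) in (kcal/mol)^-1 at 300 K

def gamd_boost(V, E, k):
    """GaMD harmonic boost: dV = 0.5*k*(E - V)^2 when V < E, else 0."""
    return np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

def cumulant2_reweight(dV, beta=BETA):
    """Second-order cumulant approximation to <exp(beta*dV)>, exact when
    the boost potential dV follows a Gaussian distribution."""
    return np.exp(beta * dV.mean() + 0.5 * beta**2 * dV.var())

rng = np.random.default_rng(1)
# Synthetic boost samples, Gaussian by construction (mean 2, sd 0.5 kcal/mol)
dV = rng.normal(2.0, 0.5, 1_000_000)

direct = np.mean(np.exp(BETA * dV))   # brute-force exponential average
approx = cumulant2_reweight(dV)       # cumulant expansion to 2nd order
print(direct, approx)
```

Direct exponential averaging is notoriously noisy in practice; the near-Gaussian boost distribution is precisely what makes the cumulant estimate accurate.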
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly stochastic property: it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation (INLA), and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
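The doubly stochastic construction can be written down directly (a 1-D toy with an assumed squared-exponential covariance, not from the paper): exponentiate a latent Gaussian process to get an intensity, then draw Poisson counts given that intensity. The resulting counts are overdispersed relative to a plain Poisson model:

```python
import numpy as np

def simulate_lgcp_counts(rng, n_cells=400, mu=1.0, sigma2=1.0, ell=5.0):
    """Simulate grid-cell counts from a 1-D log-Gaussian Cox process:
    a latent GP gives the log-intensity; counts are Poisson given it."""
    x = np.arange(n_cells, dtype=float)
    # Squared-exponential covariance for the latent GP, plus jitter
    K = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    g = rng.multivariate_normal(np.full(n_cells, mu), K + 1e-8 * np.eye(n_cells))
    return rng.poisson(np.exp(g))

rng = np.random.default_rng(2)
counts = np.concatenate([simulate_lgcp_counts(rng) for _ in range(50)])
# Doubly stochastic => variance well above the mean
# (a plain Poisson process would have variance equal to the mean)
print(counts.mean(), counts.var())
```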
Nonlinear scalar forcing based on a reaction analogy
NASA Astrophysics Data System (ADS)
Daniel, Don; Livescu, Daniel
2017-11-01
We present a novel reaction analogy (RA) based forcing method for generating stationary passive scalar fields in incompressible turbulence. The new method can produce more general scalar PDFs (e.g. double-delta) than current methods, while ensuring that the scalar fields remain bounded, unlike existing forcing methodologies that can potentially violate naturally existing bounds. Such features are useful for generating initial fields in non-premixed combustion or for studying non-Gaussian scalar turbulence. The RA method mathematically models hypothetical chemical reactions that convert reactants in a mixed state back into their pure unmixed components. Various types of chemical reactions are formulated and the corresponding mathematical expressions derived. For large values of the scalar dissipation rate, the method produces statistically steady double-delta scalar PDFs. Gaussian scalar statistics are recovered for small values of the scalar dissipation rate. In contrast, classical forcing methods consistently produce unimodal Gaussian scalar fields. The ability of the new method to produce fully developed scalar fields is discussed using 256^3, 512^3, and 1024^3 periodic box simulations.
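A minimal 1-D caricature of the idea (my own toy, not the paper's formulation): let diffusion mix a periodic scalar while a cubic "unmixing reaction" drives it back toward the pure states +/-1. The scalar stays bounded and its PDF becomes double-delta-like when the reaction dominates:

```python
import numpy as np

def mix_react(phi, kappa, r, dt, steps):
    """Explicit Euler integration of phi_t = kappa * phi_xx
    + r * phi * (1 - phi^2) on a periodic 1-D grid (dx = 1).
    The cubic term pushes the scalar toward the bounds +/-1 without
    ever overshooting them (for small enough dt)."""
    for _ in range(steps):
        lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)
        phi = phi + dt * (kappa * lap + r * phi * (1.0 - phi**2))
    return phi

rng = np.random.default_rng(3)
phi0 = np.clip(0.5 * rng.standard_normal(256), -1.0, 1.0)
phi = mix_react(phi0, kappa=0.2, r=5.0, dt=0.01, steps=4000)
# Strong "reaction": values cluster near +/-1 (double-delta-like PDF)
print(phi.min(), phi.max(), np.abs(phi).mean())
```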
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2014-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. In shales, however, substantial hydrogen is present in both solid and fluid phases, and signals from both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, it can produce physically unrealistic responses such as signal or porosity overcall and relaxation times too short to be determined with the applied instrument settings. We apply a new simultaneous Gaussian-exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where sample relaxation consists of both Gaussian and exponential decays, for example in the materials, medical, and food sciences.
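The essence of a simultaneous Gaussian-plus-exponential decay model can be sketched as a two-component least-squares fit (a toy forward model with made-up amplitudes and time constants; the actual SGE inversion solves for full relaxation-time distributions):

```python
import numpy as np
from scipy.optimize import curve_fit

def sge_decay(t, a_g, t2g, a_e, t2e):
    """Two-component model: Gaussian decay (solid-like signal) plus
    exponential decay (fluid-like signal)."""
    return a_g * np.exp(-((t / t2g) ** 2)) + a_e * np.exp(-t / t2e)

# Synthetic noiseless decay with hypothetical parameters
t = np.linspace(0.0, 5.0, 2001)
y = sge_decay(t, a_g=0.6, t2g=0.1, a_e=0.4, t2e=1.5)

popt, _ = curve_fit(sge_decay, t, y, p0=[0.5, 0.2, 0.5, 1.0])
print(popt)  # recovers the generating parameters
```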
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with the huge training sets that are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions and extract iso-surfaces from the resulting scalar fields (the continuous occupancy maps) using marching cubes. In this way, we are able to build two types of map representation within a single Gaussian process framework. Experimental results with 2-D simulated data show that the accuracy of our approximate method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate the method with 3-D real data to show its feasibility in large-scale environments.
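The partitioning idea can be sketched with a 1-D local GP regression (clusters are simple intervals here, standing in for the paper's coarse-to-fine clustering, and GP classification is replaced by regression to keep the sketch short):

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def local_gp_predict(x, y, xs, n_clusters=8, noise=0.1):
    """Local GP regression: partition training and test points into
    intervals and solve a small GP per interval, so the cost is a sum of
    O(n_i^3) solves instead of one O(n^3) solve."""
    edges = np.linspace(x.min(), x.max(), n_clusters + 1)
    idx_tr = np.clip(np.digitize(x, edges) - 1, 0, n_clusters - 1)
    idx_te = np.clip(np.digitize(xs, edges) - 1, 0, n_clusters - 1)
    mu = np.empty_like(xs)
    for i in range(n_clusters):
        tr, te = idx_tr == i, idx_te == i
        if not te.any():
            continue
        K = rbf(x[tr], x[tr]) + noise**2 * np.eye(tr.sum())
        mu[te] = rbf(xs[te], x[tr]) @ np.linalg.solve(K, y[tr])
    return mu

rng = np.random.default_rng(11)
x = np.sort(rng.uniform(0.0, 10.0, 400))
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)
xs = np.linspace(0.5, 9.5, 200)
mu = local_gp_predict(x, y, xs)
print(np.abs(mu - np.sin(xs)).max())
```

Each interval is solved independently, which is what removes the cubic bottleneck; the price is possible mild discontinuities at cluster boundaries.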
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D
2016-10-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings.
Response of MDOF strongly nonlinear systems to fractional Gaussian noises.
Deng, Mao-Lin; Zhu, Wei-Qiu
2016-08-01
In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.
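Fractional Gaussian noise itself, the driving process above, is easy to sample exactly for short records (a generic Cholesky construction, unrelated to the paper's averaging derivation; for Hurst exponent H = 1/2 it reduces to white noise):

```python
import numpy as np

def fgn_cholesky(rng, n, hurst):
    """Sample unit-variance fractional Gaussian noise (fGn) of length n by
    Cholesky factorization of its Toeplitz autocovariance matrix.
    Exact but O(n^3), so only practical for modest n."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2.0 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    C = gamma[np.abs(k[:, None] - k[None, :]).astype(int)]
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(4)
x = fgn_cholesky(rng, 2000, hurst=0.75)
# Persistent noise (H > 1/2): lag-1 correlation is positive,
# theoretically 0.5 * (2**(2*H) - 2) ~ 0.41 for H = 0.75
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(x.std(), r1)
```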
Cost effectiveness of the stream-gaging program in North Dakota
Ryan, Gerald L.
1989-01-01
This report documents results of a cost-effectiveness study of the stream-gaging program in North Dakota. It is part of a nationwide evaluation of the stream-gaging program of the U.S. Geological Survey. One phase of evaluating cost effectiveness is to identify less costly alternative methods of simulating streamflow records. Statistical or hydrologic flow-routing methods were used as alternative methods to simulate streamflow records for 21 combinations of gaging stations from the 94-gaging-station network. Accuracy of the alternative methods was sufficient to consider discontinuing only one gaging station. Operation of the gaging-station network was evaluated by using the associated uncertainty in streamflow records. The evaluation was limited to the nonwinter operation of 29 gaging stations in eastern North Dakota. The current (1987) travel routes and measurement frequencies require a budget of about $248,000 and result in an average equivalent Gaussian spread in streamflow records of 16.5 percent. Changes in routes and measurement frequencies optimally could reduce the average equivalent Gaussian spread to 14.7 percent. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget would increase the optimal average equivalent Gaussian spread from 14.7 to 20.4 percent, and a $400,000 budget could decrease it to 5.8 percent.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are first investigated, and the discrepancy arising from the Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on existing analytical spectral methods are selected to express the statistical distribution of mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of the Dirlik and Tovo-Benasciutti formulas are suitable for the separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment under non-Gaussian bimodal tension responses. Using time-domain simulation as a benchmark, its accuracy is further validated with a numerical case study of a moored semi-submersible platform.
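Of the formulas mentioned, Dirlik's rainflow-amplitude PDF is purely spectral-moment based and compact enough to sketch (standard published coefficients; the PSD below is a hypothetical flat band standing in for a real tension spectrum):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dirlik_pdf(z, m0, m1, m2, m4):
    """Dirlik's empirical rainflow-amplitude PDF in the normalized
    amplitude z = s / (2*sqrt(m0)), from spectral moments m0, m1, m2, m4.
    The three terms integrate to d1, d2, d3, which sum to 1."""
    xm = (m1 / m0) * np.sqrt(m2 / m4)
    a2 = m2 / np.sqrt(m0 * m4)                 # bandwidth parameter alpha_2
    d1 = 2.0 * (xm - a2**2) / (1.0 + a2**2)
    r = (a2 - xm - d1**2) / (1.0 - a2 - d1 + d1**2)
    d2 = (1.0 - a2 - d1 + d1**2) / (1.0 - r)
    d3 = 1.0 - d1 - d2
    q = 1.25 * (a2 - d3 - d2 * r) / d1         # algebraically equal to 1.25*d1
    return (d1 / q * np.exp(-z / q)
            + d2 * z / r**2 * np.exp(-(z**2) / (2.0 * r**2))
            + d3 * z * np.exp(-(z**2) / 2.0))

# Hypothetical flat band-limited PSD standing in for a tension spectrum
f = np.linspace(0.0, 2.0, 4001)
S = np.where((f >= 0.5) & (f <= 1.5), 1.0, 0.0)
m0, m1, m2, m4 = (trap(f**n * S, f) for n in (0, 1, 2, 4))

z = np.linspace(0.0, 12.0, 6001)
p = dirlik_pdf(z, m0, m1, m2, m4)
print(trap(p, z))  # normalization check: should be very close to 1
```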
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
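Inverse-variance-weighted straight-line fitting, the scheme identified above as equivalent to the MLE, is compact enough to sketch (synthetic data with a made-up range-dependent noise model; y stands for the range-corrected log signal, whose slope is -2 times the extinction coefficient):

```python
import numpy as np

def weighted_slope_fit(r, y, var):
    """Straight-line fit y = a + b*r with weights 1/var, which is the MLE
    under independent Gaussian noise of known variance."""
    w = 1.0 / var
    W = w.sum()
    rbar = (w * r).sum() / W
    ybar = (w * y).sum() / W
    b = (w * (r - rbar) * (y - ybar)).sum() / (w * (r - rbar) ** 2).sum()
    return ybar - b * rbar, b

rng = np.random.default_rng(5)
r = np.linspace(0.1, 2.0, 200)            # range, km (hypothetical)
alpha_true, y0_true = 0.4, 1.0            # extinction 0.4 km^-1
var = (0.02 * (1.0 + r)) ** 2             # noise grows with range
y = y0_true - 2.0 * alpha_true * r + rng.normal(0.0, np.sqrt(var))

a, b = weighted_slope_fit(r, y, var)
print(a, -0.5 * b)                        # intercept and retrieved extinction
```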
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark of the measurement matrix is a good index for studying it. However, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: a direct search and a dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of the two methods. Numerical results showed that the dual-tree search was more efficient than the direct search, especially for matrices with nearly as many rows as columns.
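The direct search mentioned above amounts to finding the smallest linearly dependent column subset; a brute-force version is simple, if exponential (my own generic sketch, not the paper's implementation):

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Brute-force spark: size of the smallest set of linearly dependent
    columns of A (NP-hard in general; feasible only for small matrices)."""
    m, n = A.shape
    for k in range(1, min(m, n) + 2):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # all columns independent (only possible when n <= m)

# Columns (1,0), (0,1), (1,1): any two independent, all three dependent
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(spark(A))  # -> 3

# A Gaussian random matrix with m < n has spark m + 1 with probability 1
G = np.random.default_rng(6).standard_normal((3, 5))
print(spark(G))  # -> 4
```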
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. Until now, there has been no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation are proposed. After Varimax prerotation, the first method identifies very small slopes of successive loadings; the corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes (EPP) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes (GEP) rotation and of EPP rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. In comparison with existing methods, Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap, whereas a substantially reduced misallocation occurred with the EPP, EPP Promax, GEP, and GEP Promax rotations. Misallocation of variance can thus be minimized by means of the new rotation methods; making use of information on the temporal order of the loadings may allow for improvements in the rotation of temporal PCA components.
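The computational core shared by the EPP and GEP variants, the orthogonal Procrustes rotation toward a target matrix, has a closed-form SVD solution (generic sketch, not the authors' full pipeline):

```python
import numpy as np

def orthogonal_procrustes(A, T):
    """Orthogonal Procrustes rotation: the orthogonal R minimizing
    ||A @ R - T||_F is U @ Vt, where U, S, Vt = svd(A.T @ T)."""
    U, _, Vt = np.linalg.svd(A.T @ T)
    return U @ Vt

rng = np.random.default_rng(7)
A = rng.standard_normal((20, 4))            # e.g. a loading matrix
# Build a random orthogonal rotation as the "truth"
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
T = A @ Q                                   # target = rotated loadings

R = orthogonal_procrustes(A, T)
print(np.abs(R - Q).max())                  # recovers the true rotation
```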
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
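The Tofts dual-compartment forward model underlying the estimation can be sketched as a convolution (illustrative time units and parameter values; the paper's Gaussian-process inference layer is not reproduced here):

```python
import numpy as np

def tofts_ct(t, ktrans, kep, cp):
    """Tofts model tissue concentration:
    Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep*(t - tau)) dtau,
    evaluated by discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.linspace(0.0, 5.0, 1000)             # minutes (hypothetical)
cp = np.ones_like(t)                        # constant plasma input, for checking
ct = tofts_ct(t, ktrans=0.25, kep=0.5, cp=cp)
# For constant Cp the analytic solution is Ktrans/kep * (1 - exp(-kep*t))
ct_exact = 0.25 / 0.5 * (1.0 - np.exp(-0.5 * t))
print(np.abs(ct - ct_exact).max())
```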
The effect of optically active turbulence on Gaussian laser beams in the ocean
NASA Astrophysics Data System (ADS)
Nootz, G.; Matt, S.; Jarosz, E.; Hou, W.
2016-02-01
Motivated by their high resolution and data-transfer potential, optical imaging and communication methods are being intensely investigated for marine applications. The majority of research focuses on overcoming the strong scattering of light by particles present in the ocean. However, when operating in very clear water, the limiting factor for such applications can be the strongly forward-biased scattering from optically active turbulent layers. For this presentation, the effect of optically active turbulence on focused Gaussian beams has been studied in the field, in a controlled laboratory test tank, and by numerical simulations. For the field experiments, a telescoping rigid underwater sensor structure (TRUSS) was deployed in the Bahamas, equipped with a diffractive optics element projecting a matrix of beams towards a fast beam profiler. Image processing techniques are used to extract the beam wander and beam breathing. The results are compared to theoretical values for the optical turbulence strength derived from the temperature microstructure measured at the test site. Laboratory and simulated experiments are carried out in a physical and a numerical Rayleigh-Benard convection turbulence tank of the same geometry. A focused Gaussian laser beam is propagated through the test tank and recorded with a camera from the back side of a diffuser. Similarly, a focused Gaussian beam is propagated numerically by means of the split-step Fourier method through the simulated turbulence environment. Results will be presented for weak to moderate turbulence, as most typical of oceanic conditions. Conclusions about the effect on optical imaging and communication applications will be discussed.
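In vacuum, the split-step Fourier method reduces to applying the angular-spectrum transfer function per step (a free-space sketch; a turbulence simulation would insert random phase screens between steps, omitted here). Checking the simulated width against the analytic Gaussian diffraction law validates the propagator:

```python
import numpy as np

def split_step_vacuum(u, dx, wavelength, dz, steps):
    """Propagate a 2-D complex field with the paraxial angular-spectrum
    (split-step Fourier) method in free space."""
    n = u.shape[0]
    k = 2.0 * np.pi / wavelength
    kx2 = (2.0 * np.pi * np.fft.fftfreq(n, d=dx)) ** 2
    H = np.exp(-1j * (kx2[:, None] + kx2[None, :]) / (2.0 * k) * dz)
    for _ in range(steps):
        u = np.fft.ifft2(np.fft.fft2(u) * H)
    return u

wavelength, w0 = 532e-9, 1e-3               # 532 nm, 1 mm waist (hypothetical)
n, dx = 512, 5e-5
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / w0**2)         # Gaussian beam at its waist

z = 20.0                                    # propagation distance, m
u = split_step_vacuum(u0, dx, wavelength, dz=z / 10, steps=10)

I = np.abs(u) ** 2
# 1/e^2 intensity radius: <r^2> = w^2 / 2 for a Gaussian beam
w_sim = np.sqrt(2.0 * (I * (X**2 + Y**2)).sum() / I.sum())
zR = np.pi * w0**2 / wavelength             # Rayleigh range
w_expected = w0 * np.sqrt(1.0 + (z / zR) ** 2)
print(w_sim, w_expected)
```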
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for impulse noise alone and for Gaussian noise alone. PMID:28692667
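The ROAD statistic that ROADGI builds on is straightforward to implement (generic sketch; ROADGI itself adds a Gaussian-noise-aware refinement not reproduced here):

```python
import numpy as np

def road(img, m=4):
    """Rank-Ordered Absolute Differences: for each pixel, the sum of the m
    smallest absolute differences to its 8 neighbours. Impulse-corrupted
    pixels differ from most neighbours, so their ROAD value is large."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="reflect")
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diffs.append(np.abs(img - p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]))
    d = np.sort(np.stack(diffs), axis=0)
    return d[:m].sum(axis=0)

rng = np.random.default_rng(8)
img = 128 + rng.normal(0.0, 5.0, (64, 64))   # smooth region + mild Gaussian noise
img[32, 32] = 255.0                          # one salt impulse
r = road(img)
print(r[32, 32], np.median(r))               # impulse stands far above the rest
```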
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
Recovering dark-matter clustering from galaxies with Gaussianization
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Neyrinck, Mark; Norberg, Peder; Cole, Shaun
2016-04-01
The Gaussianization transform has been proposed as a method to remove the issues of scale-dependent galaxy bias and non-linearity from galaxy clustering statistics, but these benefits have yet to be thoroughly tested for realistic galaxy samples. In this paper, we test the effectiveness of the Gaussianization transform for different galaxy types by applying it to realistic simulated blue and red galaxy samples. We show that in real space, the shapes of the Gaussianized power spectra of both red and blue galaxies agree with that of the underlying dark matter, with the initial power spectrum, and with each other to smaller scales than do the statistics of the usual (untransformed) density field. However, we find that the agreement in the Gaussianized statistics breaks down in redshift space. We attribute this to the fact that red and blue galaxies exhibit very different fingers of god in redshift space. After applying a finger-of-god compression, the agreement on small scales between the Gaussianized power spectra is restored. We also compare the Gaussianization transform to the clipped galaxy density field and find that while both methods are effective in real space, they have more complicated behaviour in redshift space. Overall, we find that Gaussianization can be useful in recovering the shape of the underlying dark-matter power spectrum to k ~ 0.5 h Mpc^-1 and of the initial power spectrum to k ~ 0.4 h Mpc^-1 in certain cases at z = 0.
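A rank-order Gaussianization (a common way to realize the transform; details may differ from the paper's) maps field values monotonically onto Gaussian quantiles, preserving the spatial rank structure while forcing a Gaussian one-point PDF:

```python
import numpy as np
from scipy.stats import norm, skew

def gaussianize(field):
    """Rank-order Gaussianization: monotonically map field values onto
    Gaussian quantiles. Spatial pattern (ranks) is preserved; the
    one-point PDF becomes Gaussian by construction."""
    flat = field.ravel()
    ranks = flat.argsort().argsort()                 # 0 .. n-1
    quantiles = (ranks + 0.5) / flat.size            # avoid 0 and 1 exactly
    return norm.ppf(quantiles).reshape(field.shape)

rng = np.random.default_rng(9)
# Lognormal mock "density" field: strongly skewed, like galaxy density
delta = rng.lognormal(mean=0.0, sigma=1.0, size=(128, 128))
g = gaussianize(delta)
print(skew(delta.ravel()), skew(g.ravel()))          # large vs ~0
```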
From plane waves to local Gaussians for the simulation of correlated periodic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de
2016-08-28
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus and smoothing the functions so that they are faithfully represented by a plane wave basis with only a moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis and the Gaussian basis is used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or cannot be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, the neon solid, and water adsorption on a LiH surface, at the level of second-order Møller-Plesset perturbation theory.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other, more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
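For a single-mode Duffing-type model under white noise, conventional equivalent linearization reduces to a scalar fixed-point iteration (a textbook-style sketch with arbitrary parameter values, not the paper's plate model):

```python
import numpy as np

def equivalent_linearization(k, c, eps, S0, tol=1e-12, max_iter=200):
    """Equivalent linearization of a Duffing-type SDOF oscillator
    x'' + c*x' + k*(x + eps*x^3) = w(t), with w white noise of (two-sided)
    spectral density S0. Under the Gaussian-response assumption the cubic
    stiffness is replaced by k_eq = k*(1 + 3*eps*var), while the linear
    system gives var = pi*S0 / (c*k_eq); iterate to self-consistency."""
    var = np.pi * S0 / (c * k)            # start from the linear solution
    for _ in range(max_iter):
        k_eq = k * (1.0 + 3.0 * eps * var)
        var_new = np.pi * S0 / (c * k_eq)
        if abs(var_new - var) < tol:
            break
        var = var_new
    return var, k_eq

var, k_eq = equivalent_linearization(k=1.0, c=0.1, eps=0.5, S0=0.01)
print(var, k_eq)
```

The fixed point solves the quadratic 3*eps*var^2 + var = pi*S0/(c*k), so the iteration can be checked against the closed-form root.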
Yang, Hao; Cheng, Jian; Chen, Mingjun; Wang, Jian; Liu, Zhichao; An, Chenhui; Zheng, Yi; Hu, Kehui; Liu, Qi
2017-07-24
In high power laser systems, precision micro-machining is an effective method to mitigate laser-induced surface damage growth on potassium dihydrogen phosphate (KDP) crystal. Repaired surfaces with smooth spherical and Gaussian contours can alleviate the light-field modulation caused by damage sites. To obtain the optimal repair-structure parameters, finite element method (FEM) models for simulating the light intensification caused by mitigation pits on the rear KDP surface were established. The light intensity modulation of these repair profiles was compared by changing the structure parameters. The results indicate that the modulation is mainly caused by mutual interference between the reflected and incident light on the rear surface. Owing to total internal reflection, the light intensity enhancement factors (LIEFs) of the spherical and Gaussian mitigation pits sharply increase when the width-depth ratios are near 5.28 and 3.88, respectively. To achieve the optimal mitigation effect, width-depth ratios greater than 5.3 and 4.3 should be applied to the spherical and Gaussian repaired contours, respectively. In particular, for width-depth ratios greater than 5.3, the spherical repaired contour is preferred to achieve lower light intensification. The laser damage test shows that when the width-depth ratios are larger than 5.3, the spherical repaired contour presents higher laser damage resistance than the Gaussian repaired contour, which agrees well with the simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Movahed, M. Sadegh; Khosravi, Shahram, E-mail: m.s.movahed@ipm.ir, E-mail: khosravi@ipm.ir
2011-03-01
In this paper we study the footprint of cosmic strings, as topological defects formed in the very early universe, on the cosmic microwave background radiation. We develop the method of level-crossing analysis in the context of the well-known Kaiser-Stebbins phenomenon for exploring the signature of cosmic strings. We simulate a Gaussian map by using the best-fit parameters given by WMAP-7 and then superimpose cosmic string effects on it as incoherent and active fluctuations. In order to investigate the capability of our method to detect cosmic strings for various values of the tension, Gμ, a simulated pure Gaussian map is compared with one including cosmic strings. Based on the level-crossing analysis, superimposed cosmic strings with Gμ ≳ 4 × 10^-9 could be detected in a simulated map without instrumental noise at resolution R = 1'. In the presence of the anticipated instrumental noise, the lower bound increases to Gμ ≳ 5.8 × 10^-9.
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation on a Gaussian spot is often conducted to obtain the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. To improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different signal-to-noise ratio (SNR) conditions. In addition, two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + κσn as the threshold, where μn and σn are the mean value and deviation of the background noise. First, their impact on the detection error under various SNR conditions is simulated to determine how to set the value of κ or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
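TmCoG is simple to sketch (a synthetic spot with arbitrary parameters; the point is that thresholding suppresses the bias that background noise induces in the plain CoG):

```python
import numpy as np

def tm_cog(img, m=0.2):
    """TmCoG: threshold at m * (peak intensity), subtract the threshold,
    then take the centre of gravity of the remaining signal.
    With m = 0 this degenerates to a plain CoG (negative noise values
    are still clipped at zero)."""
    w = np.clip(img - m * img.max(), 0.0, None)
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

rng = np.random.default_rng(10)
n, x0, y0, sigma = 64, 20.3, 40.7, 4.0
ys, xs = np.indices((n, n))
spot = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma**2))
noisy = spot + rng.normal(0.0, 0.01, (n, n))     # mild background noise

cx_raw, cy_raw = tm_cog(noisy, m=0.0)            # no threshold: noise-biased
cx_thr, cy_thr = tm_cog(noisy, m=0.2)            # thresholded
err_raw = np.hypot(cx_raw - x0, cy_raw - y0)
err_thr = np.hypot(cx_thr - x0, cy_thr - y0)
print(err_raw, err_thr)
```

The clipped background acts like a mass spread over the whole frame, dragging the unthresholded centroid toward the frame centre; a threshold of a few tens of percent of the peak removes it.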
NASA Astrophysics Data System (ADS)
Rytchkov, D. S.
2017-11-01
The paper presents the results of a study of the dependence of the backscattering enhancement (BSE) factor of vortex Laguerre-Gaussian beams propagating on monostatic location paths in the atmosphere on the optical turbulence intensity. A split-step numerical simulation method of laser beam propagation was used to obtain BSE factor values for a laser beam propagated on a monostatic location path in the turbulent atmosphere and reflected from a diffuse target. It is shown that the BSE factor of the averaged intensity of a backscattered vortex laser beam of any topological charge is less than the BSE factor of a backscattered Gaussian beam under arbitrary turbulent conditions.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher-order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
Gaussian Filtering with Tapered Oil-Filled Photonic Bandgap Fibers
NASA Astrophysics Data System (ADS)
Brunetti, A. C.; Scolari, L.; Weirich, J.; Eskildsen, L.; Bellanca, G.; Bassi, P.; Bjarklev, A.
2008-10-01
A tunable Gaussian filter based on a tapered oil-filled photonic crystal fiber is demonstrated. The filter is centered at λ = 1364 nm with a bandwidth (FWHM) of 237 nm. Tunability is achieved by changing the temperature of the filter. A shift of 210 nm in the central wavelength has been observed when increasing the temperature from 25 °C to 100 °C. The measurements are compared to a simulated spectrum obtained by means of a vectorial Beam Propagation Method model.
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon
2017-05-01
A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three-sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
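The three rules named above can be sketched directly on residuals (an illustrative toy only: the ensemble-regression step of the study is omitted, and the contamination level and spike magnitude are hypothetical):

```python
import numpy as np

def outliers_3sigma(x):
    """Three-sigma rule: flag points more than 3 standard deviations from the mean."""
    mu, sd = x.mean(), x.std()
    return np.abs(x - mu) > 3 * sd

def outliers_iqr(x, k=1.5):
    """IQR rule: flag points beyond the Tukey fences q1 - k*IQR, q3 + k*IQR."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def outliers_mad(x, k=3.0):
    """MAD rule: 1.4826 scales the MAD to the std of a Gaussian."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return np.abs(x - med) > k * 1.4826 * mad

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 970)
spikes = rng.normal(0.0, 1.0, 30) + 12.0   # 3% gross outliers
x = np.concatenate([clean, spikes])
```

The 3σ rule inflates its own threshold when outliers contaminate the variance estimate, which is why the robust IQR and MAD variants are preferred at high outlier levels.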
Testing local anisotropy using the method of smoothed residuals I — methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appleby, Stephen; Shafieloo, Arman, E-mail: stephen.appleby@apctp.org, E-mail: arman@apctp.org
2014-03-01
We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (Supernovae). In this short note we focus on some details regarding the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.
NASA Technical Reports Server (NTRS)
Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew
2013-01-01
High cycle fatigue of metals typically occurs through long-term exposure to time-varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and the stress response. In real life, however, it is common to experience non-Gaussian acceleration input, and this can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world, turbulent boundary layer pressure fluctuations in the aerospace sector, or, more generally, wind, wave, or high-amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time-invariant system. Finally, an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density, and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimation and a method to accelerate shaker tests by generating high-kurtosis, non-Gaussian drive signals.
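The tendency of an LTI system to pull the response kurtosis back toward the Gaussian value of 3 can be sketched numerically (an assumption-laden toy: Laplace noise stands in for a generic leptokurtic load, and a 32-tap moving average for the frequency response function):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis; a Gaussian signal gives 3."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

rng = np.random.default_rng(2)
u = rng.laplace(0.0, 1.0, 200_000)   # leptokurtic input: Laplace kurtosis is 6
h = np.ones(32) / 32                 # FIR moving-average stand-in for the FRF
y = np.convolve(u, h, mode="valid")  # response of the LTI system

k_in = kurtosis(u)
k_out = kurtosis(y)
```

Averaging many weakly dependent samples Gaussianizes the output, so the response kurtosis lands close to 3 even though the input kurtosis is near 6; this is the effect the paper's input/output kurtosis algorithm quantifies for general filters.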
Robust Gaussian Graphical Modeling via l1 Penalization
Sun, Hokeun; Li, Hongzhe
2012-01-01
Summary Gaussian graphical models have been widely used as an effective method for studying the conditional independence structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much each observation deviates, where the deviation of an observation is measured by its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where the nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical lasso. PMID:23020775
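The core identity the method builds on, that zeros in the concentration (precision) matrix encode conditional independence, can be sketched as follows (illustrative only; the chain-graph precision matrix is hypothetical, and no robustification or l1 penalty is applied):

```python
import numpy as np

rng = np.random.default_rng(3)

# Chain graph X1 - X2 - X3: X1 and X3 are conditionally independent
# given X2, so the (1,3) entry of the precision matrix is exactly zero.
theta = np.array([[2.0, -1.0,  0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0,  2.0]])
cov = np.linalg.inv(theta)

x = rng.multivariate_normal(np.zeros(3), cov, size=100_000)
theta_hat = np.linalg.inv(np.cov(x, rowvar=False))  # sample precision matrix
```

The estimated precision recovers the structure: the (1,3) entry hovers near zero while the true edges (1,2) and (2,3) show strongly negative entries; penalized estimators such as the one above aim to set such near-zero entries exactly to zero.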
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. Although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the "spooky effect," in which arbitrary PHD mass shifting occurs in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that adaptively adjusts the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse of dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes and then smooths the mapped images axially to reduce the image noise level. Compared to those obtained by the conventional axial-direction smoothing method, the images obtained by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequencies of 0.5, 0.7, and 1.0 × the Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than the conventional method.
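The conventional baseline, Gaussian-weighted smoothing along the axial (plane) direction only, can be sketched in NumPy (the volume, noise level, and wrap-around edge handling are hypothetical simplifications, and the paper's elastic-mapping step is omitted):

```python
import numpy as np

def axial_gaussian_smooth(vol, sigma=1.0, radius=2):
    """Gaussian-weighted smoothing along axis 0 (the interplane, axial direction)."""
    z = np.arange(-radius, radius + 1)
    w = np.exp(-z**2 / (2 * sigma**2))
    w /= w.sum()
    out = np.zeros_like(vol, dtype=float)
    for k, wk in zip(z, w):
        out += wk * np.roll(vol, k, axis=0)  # neighboring planes; wraps at edges
    return out

rng = np.random.default_rng(4)
# Hypothetical 9-plane "uniform phantom" volume with additive noise
planes = np.full((9, 16, 16), 10.0) + rng.normal(0.0, 1.0, (9, 16, 16))
smoothed = axial_gaussian_smooth(planes)
```

Averaging each plane with its neighbors reduces the per-plane noise standard deviation by roughly the root of the sum of squared weights, at the cost of axial resolution, which is exactly the trade-off the elastic-mapping variant is designed to soften.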
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often expensive and becomes impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
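Simulating a stationary Gaussian process and counting overshoots of a level can be sketched with an AR(1) surrogate (a hypothetical choice of autocorrelation function; the study used several):

```python
import numpy as np

rng = np.random.default_rng(5)
n, phi = 100_000, 0.9
# AR(1): a stationary Gaussian process with unit marginal variance
# and autocorrelation rho(k) = phi**k
e = rng.normal(0.0, np.sqrt(1 - phi**2), n)
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def upcrossings(x, u):
    """Count overshoots: samples below level u followed by a sample above it."""
    above = x > u
    return int(np.count_nonzero(~above[:-1] & above[1:]))

counts = {u: upcrossings(x, u) for u in (0.0, 1.0, 2.0)}
```

As expected for a zero-mean, unit-variance Gaussian process, the overshoot count falls sharply as the level moves away from the mean, which is the mean/variance dependence the paper characterizes.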
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach: the discretized field is interpreted as data providing information about an unknown real physical field, and the scheme seeks to conserve this information as the field evolves in time. Such an information-theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the Gaussian integrals that appear. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses that of finite-difference schemes at the same resolution. The IFD scheme, however, has to be correctly informed about the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes such as spectral Fourier-Galerkin methods. We discuss the implications of the approximations made.
Denoising of polychromatic CT images based on their own noise properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
Purpose: Because of its high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since a CT scan exposes patients to radiation, however, dose reduction has recently been recognized as an important issue in CT imaging. Low-dose CT, though, increases noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space-variant noise statistics, which are uniquely determined according to the system geometry and the scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution in the image domain. Since many high-performance denoising algorithms are available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance-stabilizing transform. The authors then apply a block-matching 4D algorithm that is optimized for noise reduction in Gaussian images, and reconvert the result to obtain the final denoised image.
To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm delivers considerably better performance than the existing denoising algorithms.
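The variance-stabilizing step can be illustrated with the classical Anscombe transform for Poisson noise (an analogy only, not the authors' CT-specific conversion): after the transform, the noise variance is close to 1 regardless of the signal level, so a single Gaussian denoiser can be applied everywhere.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> approx unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(6)
variances = []
for lam in (5.0, 20.0, 80.0):          # signal-dependent noise levels
    counts = rng.poisson(lam, 100_000)  # raw variance equals lam here
    variances.append(float(anscombe(counts).var()))
```

The raw counts have variance 5, 20, and 80 respectively, yet all three transformed variances sit near 1, which is the homogenization that makes a Gaussian-optimized filter such as BM4D applicable.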
Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion
NASA Astrophysics Data System (ADS)
Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan
2016-08-01
The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on minimizing the sum of squared errors (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion of the gain estimate from the SSE to the maximum correntropy. The result of a test on simulated data demonstrates that our method has higher accuracy and faster convergence than KLL's and Chae's methods. The method effectively suppresses noise, especially in the case of typical non-Gaussian noise, and the computing time of our algorithm is the shortest.
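The difference between an SSE criterion and a maximum-correntropy criterion can be sketched on a toy location-estimation problem (hypothetical data; the paper applies the criterion to flat-field gain estimation, not to a simple mean):

```python
import numpy as np

def correntropy_location(x, sigma=1.0, iters=50):
    """Maximum-correntropy location estimate via half-quadratic (fixed-point) iterations.

    Maximizes sum(exp(-(x - theta)**2 / (2 sigma**2))); outliers get
    exponentially small weights and barely influence the estimate.
    """
    theta = np.median(x)  # robust starting point
    for _ in range(iters):
        w = np.exp(-(x - theta)**2 / (2 * sigma**2))
        theta = np.sum(w * x) / np.sum(w)
    return theta

rng = np.random.default_rng(7)
inliers = rng.normal(5.0, 0.5, 900)
outliers = rng.normal(40.0, 5.0, 100)  # heavy non-Gaussian contamination
x = np.concatenate([inliers, outliers])

sse_est = float(x.mean())              # minimum-SSE (least-squares) estimate
mcc_est = float(correntropy_location(x))
```

The least-squares estimate is dragged several units toward the contamination, while the correntropy estimate stays on the inlier center, which mirrors the robustness the paper reports against non-Gaussian noise.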
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2018-01-01
We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale, such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible, and we recommend that these diagnostic tests be run as part of future analyses.
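The "destroy Gaussian correlations, then measure what remains" idea can be sketched with a Cholesky whitening step (illustrative only; the paper's test statistics are more elaborate, and the non-Gaussian pair below is a hypothetical construction):

```python
import numpy as np

def whiten(d):
    """Remove Gaussian (second-order) correlations via the inverse Cholesky factor."""
    d = d - d.mean(axis=0)
    L = np.linalg.cholesky(np.cov(d, rowvar=False))
    return np.linalg.solve(L, d.T).T

def sq_corr(d):
    """Correlation of squared entries: nonzero values flag non-Gaussian dependence."""
    return float(np.corrcoef(d[:, 0]**2, d[:, 1]**2)[0, 1])

rng = np.random.default_rng(8)
n = 50_000
g = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], n)  # Gaussian pair
s = rng.normal(0.0, 1.0, n)
ng = np.column_stack([s, s**2 - 1])  # uncorrelated, yet strongly dependent

r_gauss = sq_corr(whiten(g))
r_nongauss = sq_corr(whiten(ng))
```

After whitening, the genuinely Gaussian pair shows no residual dependence, while the non-Gaussian pair, whose linear correlation is already zero, still exhibits a strong correlation between squares; this is the kind of remaining dependence the diagnostic is built to flag.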
Propagation of elliptic-Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Deng, Dongmei; Guo, Qi
2011-10-01
The propagation of elliptic-Gaussian beams is studied in strongly nonlocal nonlinear media. The elliptic-Gaussian beams and elliptic-Gaussian vortex beams are obtained analytically and numerically. The patterns of the elegant Ince-Gaussian and the generalized Ince-Gaussian beams vary periodically when the input power equals the critical power. The stability is verified by perturbing the initial beam with noise. By simulating the propagation of elliptic-Gaussian beams in liquid crystal, we find that when the mode order is not too high, quasi-elliptic-Gaussian soliton states exist.
Analysis of Digital Communication Signals and Extraction of Parameters.
1994-12-01
Fast Fourier Transform (FFT). The correlation methods utilize modified time-frequency distributions, one of which is based on the Wigner-Ville Distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlation due to the Duschinsky rotation, is strongly believed to be in a complexity class similar to that of Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling sharing a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among the various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.
White, Alexander James; Tretiak, Sergei; Mozyrsky, Dima V.
2016-04-25
Accurate simulation of the non-adiabatic dynamics of molecules in excited electronic states is key to understanding molecular photo-physical processes. Here we present a novel method, based on a semiclassical approximation, that is as efficient as the commonly used mean-field Ehrenfest or ad hoc surface-hopping methods and properly accounts for interference and decoherence effects. The method is an extension of Heller's thawed Gaussian wave-packet dynamics that includes coupling between potential energy surfaces. By studying several standard test problems we demonstrate that the accuracy of the method can be systematically improved while maintaining high efficiency. The method is suitable for investigating the role of quantum coherence in the non-adiabatic dynamics of many-atom molecules.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
Saltzman, Erica J; Schweizer, Kenneth S
2006-12-01
Brownian trajectory simulation methods are employed to fully establish the non-Gaussian fluctuation effects predicted by our nonlinear Langevin equation theory of single-particle activated dynamics in glassy hard-sphere fluids. The consequences of stochastic mobility fluctuations associated with the space-time complexities of the transient localization and barrier hopping processes have been determined. The incoherent dynamic structure factor was computed for a range of wave vectors and becomes of an increasingly non-Gaussian form for volume fractions beyond the (naive) ideal mode coupling theory (MCT) transition. The non-Gaussian parameter (NGP) amplitude increases markedly with volume fraction and is well described by a power law in the maximum restoring force of the nonequilibrium free energy profile. The time scale associated with the NGP peak becomes much smaller than the alpha relaxation time for systems characterized by significant entropic barriers. An alternate non-Gaussian parameter that probes the long-time alpha relaxation process displays a different shape, peak intensity, and time scale of its maximum. However, a strong correspondence between the classic and alternate NGP amplitudes is predicted, which suggests a deep connection between the early and final stages of cage escape. Strong space-time decoupling emerges at high volume fractions, as indicated by a nondiffusive wave vector dependence of the relaxation time and growth of the translation-relaxation decoupling parameter. Displacement distributions exhibit non-Gaussian behavior at intermediate times, evolving into a strongly bimodal form with slow and fast subpopulations at high volume fractions. Qualitative and semiquantitative comparisons of the theoretical results with colloid experiments, ideal MCT, and multiple simulation studies are presented.
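The non-Gaussian parameter used above can be sketched in 1D (an illustrative toy: hypothetical two-population displacements stand in for the slow/fast subpopulations of the bimodal regime):

```python
import numpy as np

def alpha2_1d(dx):
    """Classic 1D non-Gaussian parameter: <dx^4> / (3 <dx^2>^2) - 1.

    Zero for a Gaussian displacement distribution; positive when the
    distribution has heavier tails (e.g. coexisting slow/fast particles).
    """
    return float(np.mean(dx**4) / (3.0 * np.mean(dx**2)**2) - 1.0)

rng = np.random.default_rng(9)
free = rng.normal(0.0, 1.0, 100_000)       # homogeneous Brownian steps

# Two-population displacements mimic dynamic heterogeneity
slow = rng.normal(0.0, 0.3, 50_000)
fast = rng.normal(0.0, 2.0, 50_000)
hetero = np.concatenate([slow, fast])

a2_free = alpha2_1d(free)
a2_hetero = alpha2_1d(hetero)
```

The homogeneous Gaussian steps give an NGP near zero, while the slow/fast mixture gives a large positive value, which is the signature of activated, heterogeneous dynamics that the theory quantifies.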
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Bo; Fox, Rodney O.; Feng, Heng
An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. These results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.
Simulations of Gaussian electron guns for RHIC electron lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pikin, A.
Simulations of two versions of the electron gun for RHIC electron lens are presented. The electron guns have to generate an electron beam with Gaussian radial profile of the electron beam density. To achieve the Gaussian electron emission profile on the cathode we used a combination of the gun electrodes and shaping of the cathode surface. Dependence of electron gun performance parameters on the geometry of electrodes and the margins for electrodes positioning are presented.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented in any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that in different scenarios the region of highest intensity, and thus the greatest heating, can shift from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
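The Gaussian-optics rule such a technique must enforce can be stated compactly. The following sketch (illustrative only; the function and parameter names are mine, not the authors') evaluates the beam radius w(z) = w0·sqrt(1 + (z/zR)^2), which yields the diffraction-limited waist that a purely ray-based Monte Carlo model misses:

```python
import math

def beam_radius(z, w0, wavelength, n=1.0):
    """Gaussian beam 1/e^2 radius at distance z from the waist w0,
    in a medium of refractive index n."""
    zR = math.pi * w0**2 * n / wavelength  # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / zR) ** 2)
```

At z = 0 the radius is the waist w0, and at one Rayleigh range it has grown by a factor of sqrt(2), rather than collapsing to a geometric point at the focus.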
Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Remazeilles, M.; Chluba, J.
2018-07-01
Correlations between cosmic microwave background (CMB) temperature, polarization, and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, and PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, the thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contamination from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance of placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. As with B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.
Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Remazeilles, M.; Chluba, J.
2018-04-01
Correlations between cosmic microwave background (CMB) temperature, polarization and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, the thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contamination from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance of placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. As with B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.
Probing Primordial Non-Gaussianity with Weak-lensing Minkowski Functionals
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi; Nishimichi, Takahiro
2012-11-01
We study the cosmological information contained in the Minkowski functionals (MFs) of weak gravitational lensing convergence maps. We show that the MFs provide strong constraints on the local-type primordial non-Gaussianity parameter fNL. We run a set of cosmological N-body simulations and perform ray-tracing simulations of weak lensing to generate 100 independent convergence maps of a 25 deg2 field of view for fNL = -100, 0, and 100. We perform a Fisher analysis to study the degeneracy with other cosmological parameters such as the dark energy equation-of-state parameter w and the fluctuation amplitude σ8. We use fully nonlinear covariance matrices evaluated from 1000 ray-tracing simulations. For upcoming wide-field observations such as those from the Subaru Hyper Suprime-Cam survey with a proposed survey area of 1500 deg2, the primordial non-Gaussianity can be constrained at the level of fNL ~ 80 and w ~ 0.036 by weak-lensing MFs. If simply scaled by the effective survey area, a 20,000 deg2 lensing survey using the Large Synoptic Survey Telescope will yield constraints of fNL ~ 25 and w ~ 0.013. We show that these constraints can be further improved by a tomographic method using source galaxies in multiple redshift bins.
NASA Astrophysics Data System (ADS)
Richings, Gareth W.; Habershon, Scott
2018-04-01
We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.
NASA Astrophysics Data System (ADS)
Guda, A. A.; Guda, S. A.; Soldatov, M. A.; Lomachenko, K. A.; Bugaev, A. L.; Lamberti, C.; Gawelda, W.; Bressler, C.; Smolentsev, G.; Soldatov, A. V.; Joly, Y.
2016-05-01
The finite difference method (FDM) implemented in the FDMNES software [Phys. Rev. B, 2001, 63, 125120] was revised. Thorough analysis shows that the calculated FDM matrix consists of about 96% zero elements, so a sparse solver is more suitable for the problem than traditional Gaussian elimination restricted to the diagonal neighbourhood. We tried several iterative sparse solvers, and the direct MUMPS solver with METIS ordering turned out to be the best. Compared to the Gaussian solver, the present method is up to 40 times faster and allows XANES simulations of complex systems on personal computers. We show the applicability of the software to the metal-organic [Fe(bpy)3]2+ complex in both the low-spin state and the high-spin state populated after laser excitation.
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and those obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
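The gap described above can be seen with a toy comparison. The sketch below (a generic Hoeffding/Chernoff-style tail bound versus the heuristic Gaussian approximation; it is not the paper's improved bound) computes the deviation term each approach allows for a binomial count at a given failure probability:

```python
import math
from statistics import NormalDist

def hoeffding_deviation(n, eps):
    """Deviation t with P(|X - np| >= t) <= 2 exp(-2 t^2 / n) = eps
    (Hoeffding's inequality, valid for any p)."""
    return math.sqrt(0.5 * n * math.log(2.0 / eps))

def gaussian_deviation(n, p, eps):
    """Deviation under the heuristic Gaussian (CLT) approximation."""
    return NormalDist().inv_cdf(1.0 - eps / 2.0) * math.sqrt(n * p * (1.0 - p))
```

For realistic block sizes and failure probabilities the rigorous bound is noticeably looser than the Gaussian one, which is exactly the gap a tighter Chernoff-type analysis tries to close.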
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
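The sparsity device behind the NNGP can be sketched briefly. The code below (an illustration assuming Euclidean distance and a fixed ordering of the locations; it is not the authors' implementation) builds, for each location, the set of its m nearest previously ordered neighbors. Conditioning each location only on its small neighbor set is what yields the sparse precision matrices the abstract describes:

```python
import numpy as np

def neighbor_sets(coords, m):
    """For each location i, indices of its (at most) m nearest neighbors
    among locations 0..i-1 under the given ordering."""
    coords = np.asarray(coords, dtype=float)
    sets = [np.array([], dtype=int)]  # the first location has no neighbors
    for i in range(1, len(coords)):
        d = np.linalg.norm(coords[:i] - coords[i], axis=1)
        sets.append(np.argsort(d)[: min(m, i)])
    return sets
```

Each conditioning set has at most m entries, so the joint density factors into small local pieces regardless of the total number of locations.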
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O.; Gelfand, Alan E.
2018-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online. PMID:29720777
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: contrast, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which limits their performance. A novel method that combines all three HVS mechanisms is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target, named the attention point, in order to further enhance the dim small target. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
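The contrast mechanism above is commonly implemented as a Difference-of-Gaussians filter. A minimal sketch (the kernel size and sigmas are illustrative choices, not the paper's):

```python
import numpy as np

def dog_kernel(size, sigma1, sigma2):
    """Difference-of-Gaussians band-pass kernel: narrow Gaussian minus
    wide Gaussian, each normalized to unit sum so the result sums to ~0
    and flat backgrounds are suppressed."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g1 = np.exp(-(xx**2 + yy**2) / (2.0 * sigma1**2))
    g2 = np.exp(-(xx**2 + yy**2) / (2.0 * sigma2**2))
    g1 /= g1.sum()
    g2 /= g2.sum()
    return g1 - g2
```

Convolving an infrared frame with such a kernel responds strongly to small bright blobs (positive center, negative surround) while cancelling smooth background, which is why a bank of DOG filters serves as a contrast front end.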
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
The single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of the received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector reduces computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors accurately demodulate the SPAD-based PAM signals.
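The GAT step can be sketched in one line. The following is the standard generalized Anscombe transform for unit gain (the paper's exact parametrization may differ); it maps a Poisson count corrupted by Gaussian noise of standard deviation sigma to a variable with approximately unit variance:

```python
import math

def gat(x, sigma):
    """Generalized Anscombe transform, unit gain: stabilizes the variance
    of Poisson-Gaussian data so an AWGN-style detector can be applied."""
    arg = x + 3.0 / 8.0 + sigma**2
    return 2.0 * math.sqrt(arg) if arg > 0.0 else 0.0
```

With sigma = 0 this reduces to the classical Anscombe transform 2·sqrt(x + 3/8) for pure Poisson noise, which is the limit relevant to an ideal photon-counting SPAD.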
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under localization uncertainty, as arises in resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. We propose approximating the analytically intractable posterior predictive statistics by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques are carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for the localization uncertainty. Finally, we apply the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and from a temperature field of an outdoor swimming pool, to provide proof-of-concept tests and to evaluate the proposed schemes in real situations. In both the simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
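The Monte Carlo option above can be sketched for a 1-D toy field. All hyperparameters below (RBF kernel, length scale, noise level) are invented for illustration; the idea is simply to average the GP posterior mean over samples of the uncertain sensor locations instead of plugging in a single noisy location estimate:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between 1-D location vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def mc_gp_mean(x_est, loc_std, y, x_query, n_mc=200, noise=0.1, seed=0):
    """GP predictive mean at x_query, Monte Carlo averaged over Gaussian
    perturbations of the estimated sensor locations x_est."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_mc):
        x = x_est + rng.normal(0.0, loc_std, size=len(x_est))  # sampled locations
        K = rbf(x, x) + noise**2 * np.eye(len(x))
        preds.append(rbf(x_query, x) @ np.linalg.solve(K, y))
    return np.mean(preds, axis=0)
```

Each Monte Carlo draw is a full GP prediction with perturbed inputs, so the averaged mean reflects the extra spread induced by the localization error.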
A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.
Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie
2018-01-01
In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted to an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimization algorithm, the Cuckoo search (CS) algorithm, is used. To improve its performance on this identification problem, the Gaussian-Mixture Distribution (GMD) and GMD sequences are introduced into the standard CS algorithm. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Zhou, Nan; Wang, Jian
2018-05-23
Bessel-Gaussian beams have distinct properties of suppressed diffraction divergence and self-reconstruction. In this paper, we propose and simulate a metasurface-assisted orbital angular momentum (OAM) carrying Bessel-Gaussian laser. The laser can be regarded as a Fabry-Perot cavity formed by one partially transparent output plane mirror and the other metasurface-based reflector mirror. The gain medium of Nd:YVO4 enables lasing at 1064 nm with an 808 nm laser serving as the pump. The sub-wavelength structure of the metasurface facilitates flexible spatial light manipulation. The compact metasurface-based reflector provides the combined phase functions of an axicon and a spherical mirror. By appropriately selecting the size of the output mirror and inserting a mode-selection element in the laser cavity, different orders of OAM-carrying Bessel-Gaussian lasing modes are achievable. The lasing Bessel-Gaussian 0, Bessel-Gaussian 01+, Bessel-Gaussian 02+ and Bessel-Gaussian 03+ modes have high fidelities of ~0.889, ~0.889, ~0.881 and ~0.879, respectively. The metasurface fabrication tolerance and the dependence of threshold power and output lasing power on the length of the gain medium, the beam radius of the pump and the transmittance of the output mirror are also discussed. The obtained results show successful implementation of a metasurface-assisted OAM-carrying Bessel-Gaussian laser with favorable performance. The metasurface-assisted OAM-carrying Bessel-Gaussian laser may find wide OAM-enabled communication and non-communication applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruixing; Yang, LV; Xu, Kele
Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), components with coherent high frequency in the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is acquired. We propose a novel method for deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF - a Gaussian shape and a donut shape - to convolve the original image in order to simulate the process of scanning imaging. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that yields similar deconvolution results. Calculation of the tight-focusing process using a radially polarized beam shows that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut to the Gaussian shape is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut with size smaller than our critical value is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, which is expected to have practical applications in high-resolution imaging of biological samples.
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. ?? 1988 International Association for Mathematical Geology.
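The asymmetric triangular specification in the second method maps directly onto a standard sampler. A minimal sketch (the bounds and mode below are invented numbers, for illustration only): each Monte Carlo iteration would draw surface ages this way before re-estimating the calibration parameters.

```python
import random

def sample_ages(younger, older, mode, n, seed=0):
    """Draw n surface ages from an asymmetric triangular distribution
    with 'absolute' younger/older bounds and a most-probable mode."""
    rng = random.Random(seed)
    return [rng.triangular(younger, older, mode) for _ in range(n)]
```

All draws stay inside the absolute bounds, and the asymmetry of the mode relative to the bounds encodes the geologist's best estimate.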
NASA Astrophysics Data System (ADS)
Siu-Siu, Guo; Qingxuan, Shi
2017-03-01
In this paper, single-degree-of-freedom (SDOF) systems subjected to combined Gaussian white noise and Gaussian/non-Gaussian colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations, governed by four coupled first-order differential equations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables becomes four-dimensional (4-D), and the solution procedure and computer program become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. In addition, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions, especially in the tail regions of the PDFs. Moreover, the statistical parameter of the mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis. We hope the present work provides insight into the investigation of structures under random loading.
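The filtered-white-noise construction above can be sketched with a simple Euler-Maruyama loop. The filter parameters below are invented for illustration and are not the paper's; the point is that driving a second-order linear filter with Gaussian white noise produces a colored (band-limited) process while keeping the augmented state Markovian:

```python
import math
import random

def colored_noise(n, dt, omega0=5.0, zeta=0.3, intensity=1.0, seed=1):
    """Colored noise as the output x of a second-order filter
    x'' + 2*zeta*omega0*x' + omega0^2*x = w(t), with w white Gaussian noise."""
    rng = random.Random(seed)
    x, v, out = 0.0, 0.0, []
    for _ in range(n):
        w = rng.gauss(0.0, math.sqrt(intensity / dt))  # discretized white noise
        a = w - 2.0 * zeta * omega0 * v - omega0**2 * x
        x += v * dt  # Euler-Maruyama update of the filter state (x, v)
        v += a * dt
        out.append(x)
    return out
```

Treating (x, v) as additional state variables alongside the structural displacement and velocity is exactly the augmentation that turns the colored-noise problem into a white-noise-driven 4-D one.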
2011-04-01
experiments was performed using an artificial neural network to try to capture the nonlinearities. The radial Gaussian artificial neural network system..."Modeling Blast-Wave Propagation using Artificial Neural Network Methods", in International Journal of Advanced Engineering Informatics, Elsevier
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis and to measure deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Lyα) forests, and on observational data of the Lyα forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data is determined and it is shown that clusters exist on scales as large as at least 20 h^{-1} Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples is calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history.
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}. Scale-scale correlations for two samples of QSO Ly-α forest absorption spectra are computed. Lastly, higher-order statistics are developed to detect deviations from Gaussian behavior. These higher-order statistics are necessary to fully characterize the Ly-α forests because the usual second-order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectrum. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
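The DWT power spectrum described in this thesis abstract can be sketched with the Haar basis (an assumption; the abstract does not name the wavelet used): the power at each scale is the mean squared detail coefficient, computed recursively without any FFT.

```python
import math

def haar_dwt_power(signal):
    """Mean squared Haar detail coefficient at each dyadic scale,
    finest scale first. Length should be a power of two."""
    s = list(signal)
    powers = []
    while len(s) >= 2:
        details = [(s[2 * i] - s[2 * i + 1]) / math.sqrt(2.0)
                   for i in range(len(s) // 2)]
        s = [(s[2 * i] + s[2 * i + 1]) / math.sqrt(2.0)
             for i in range(len(s) // 2)]
        powers.append(sum(d * d for d in details) / len(details))
    return powers
```

Because each coefficient is local in position as well as scale, incomplete or geometrically complex samples can simply be masked out coefficient by coefficient, which is the advantage over a Fourier estimator claimed above.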
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included:
* Mathematica® Code
  * Generic Chaotic Simulator
  * Vector Differential Operators
  * NLS Explorer
* C++ Code
  * C++ Lambda Functions for Real Calculus
  * Accelerometer Data Processor
  * Simple Predictor-Corrector Integrator
  * Solving the BVP with the Shooting Method
  * Linear Hyperbolic PDE Solver
  * Linear Elliptic PDE Solver
  * Method of Lines for a Set of the NLS Equations
* C# Code
  * Iterative Equation Solver
  * Simulated Annealing: A Function Minimum
  * Simple Nonlinear Dynamics
  * Nonlinear Pendulum Simulator
  * Lagrangian Dynamics Simulator
  * Complex-Valued Crowd Attractor Dynamics
* Freeform Fortran Code
  * Lorenz Attractor Simulator
  * Complex Lorenz Attractor
  * Simple SGE Soliton
  * Complex Signal Presentation
  * Gaussian Wave Packet
  * Hermitian Matrices
  * Euclidean L2-Norm
  * Vector/Matrix Operations
* Plain C-Code: Levenberg-Marquardt Optimizer
* Free Basic Code: 2D Crowd Dynamics with 3000 Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We adopted double and triple Gaussian models as the lateral distribution in order to account for large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored in a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R. Zhu 2013]. Results: Comparison of the absolute doses calculated by the double Gaussian model with those measured at the center of the SOBP shows that the difference increases up to 3.5% in the high-energy region, because large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When employing triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling of the dose distribution employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.
Reversible wavefront shaping between Gaussian and Airy beams by mimicking gravitational field
NASA Astrophysics Data System (ADS)
Wang, Xiangyang; Liu, Hui; Sheng, Chong; Zhu, Shining
2018-02-01
In this paper, we experimentally demonstrate reversible wavefront shaping by mimicking a gravitational field. A gradient-index micro-structured optical waveguide with a special refractive index profile was constructed whose effective index satisfies a gravitational field profile. Inside the waveguide, an incident broad Gaussian beam is first transformed into an accelerating beam, and the generated accelerating beam gradually changes back into a Gaussian beam afterwards. To validate our experiment, we performed full-wave continuum simulations that agree with the experimental results. Furthermore, a theoretical model was established to describe the evolution of the laser beam based on Landau's method, showing that the accelerating beam behaves like the Airy beam in the small range in which the linear potential approaches zero. To our knowledge, such a reversible wavefront shaping technique has not been reported before.
Chialvo, Ariel A.; Moucka, Filip; Vlcek, Lukas; ...
2015-03-24
Here we implemented the Gaussian charge-on-spring (GCOS) version of the original self-consistent field implementation of the Gaussian Charge Polarizable (GCP) water model and tested its accuracy in representing the polarization behavior of the original model involving smeared charges and induced dipole moments. For that purpose we adapted the recently developed multiple-particle-move (MPM) Monte Carlo methods within the Gibbs and isochoric-isothermal ensembles for the efficient simulation of polarizable fluids. We also assessed the accuracy of the GCOS representation by a direct comparison of the resulting vapor-liquid phase envelope, microstructure, and relevant microscopic descriptors of water polarization along the orthobaric curve against the corresponding quantities from the actual GCP water model.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes the outputs of interest are spatial fields, leading to high-dimensional output spaces. Although Gaussian process emulation has been used satisfactorily to compute faithful and inexpensive approximations of complex simulators, it has mostly been applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
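The combined dimension-reduction-plus-emulation idea can be illustrated with a toy numpy-only sketch: PCA (via SVD) compresses the high-dimensional output field, and one interpolating GP maps inputs to each retained principal-component score. All names, defaults, and the noise-free GP are our assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of input points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

class ReducedGPEmulator:
    """Toy surrogate: PCA on the output field, one GP per retained score."""

    def __init__(self, n_components=2, length_scale=1.0, jitter=1e-8):
        self.k, self.ell, self.jitter = n_components, length_scale, jitter

    def fit(self, X, Y):
        self.X_, self.mean_ = X, Y.mean(axis=0)
        Yc = Y - self.mean_
        _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
        self.Vt_ = Vt[:self.k]                       # reduced output basis
        Z = Yc @ self.Vt_.T                          # low-dimensional scores
        K = rbf_kernel(X, X, self.ell) + self.jitter * np.eye(len(X))
        self.alpha_ = np.linalg.solve(K, Z)          # GP weights per score
        return self

    def predict(self, Xnew):
        Ks = rbf_kernel(Xnew, self.X_, self.ell)
        return Ks @ self.alpha_ @ self.Vt_ + self.mean_
```

The surrogate then predicts a whole spatial field per input at negligible cost compared with the original simulator.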
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique for identifying gravitational waves under Gaussian noise, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, in which the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect each individual model's skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
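The baseline BMA predictive density, before the particle-filter/mixture refinement discussed above, is just a weighted average of member pdfs. A minimal sketch assuming Gaussian kernels (the simplest of the standard choices):

```python
import numpy as np

def bma_pdf(y, forecasts, sigmas, weights):
    """BMA predictive density at points y: a weighted average of Gaussian
    pdfs centered on the (possibly bias-corrected) member forecasts."""
    y = np.asarray(y, dtype=float)[..., None]     # broadcast over members
    f, s, w = map(np.asarray, (forecasts, sigmas, weights))
    comp = np.exp(-0.5 * ((y - f) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return comp @ w                               # mixture density
```

With weights summing to one, the mixture integrates to one; the paper's refinement replaces the fixed Gaussian kernel with an evolving conditional pdf per member.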
Dynamical Casimir Effect for Gaussian Boson Sampling.
Peropadre, Borja; Huh, Joonsuk; Sabín, Carlos
2018-02-28
We show that the Dynamical Casimir Effect (DCE), realized on two multimode coplanar waveguide resonators, implements a Gaussian boson sampler (GBS). The appropriate choice of the mirror acceleration that couples both resonators translates into the desired initial Gaussian state and many-boson interference in a boson sampling network. In particular, we show that the proposed quantum simulator naturally performs a classically hard task known as scattershot boson sampling. Our result unveils the unprecedented computational power of the DCE and paves the way for using the DCE as a resource for quantum simulation.
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology for analyzing non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing the extraction of the non-Gaussian features of the PDFs. It is shown that the non-Gaussian tails of the PDFs from first-principles gyrokinetic simulations agree with an analytical estimate based on a two-fluid model.
Recent Applications of Higher-Order Spectral Analysis to Nonlinear Aeroelastic Phenomena
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hajj, Muhammad R.; Dunn, Shane; Strganac, Thomas W.; Powers, Edward J.; Stearman, Ronald
2005-01-01
Recent applications of higher-order spectral (HOS) methods to nonlinear aeroelastic phenomena are presented. Applications include the analysis of data from a simulated nonlinear pitch-and-plunge apparatus and from F-18 flight flutter tests. A MATLAB model of Texas A&M University's Nonlinear Aeroelastic Testbed Apparatus (NATA) is used to generate aeroelastic transients at various conditions, including limit cycle oscillations (LCO). The Gaussian or non-Gaussian nature of the transients is investigated, related to HOS methods, and used to identify levels of increasing nonlinear aeroelastic response. Royal Australian Air Force (RAAF) F/A-18 flight flutter test data are presented and analyzed. The data include high-quality measurements of forced responses and LCO phenomena. Standard power spectral density (PSD) techniques and HOS methods are applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
NASA Astrophysics Data System (ADS)
Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping
2015-01-01
As one of the critical stages of a very large scale integration (VLSI) fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, full 3-D chemically amplified resist simulation is not widely adopted during postlayout optimization due to its long run-time and huge memory usage. An efficient method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features, based on the Sylvester equation and the Abbe principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
NASA Astrophysics Data System (ADS)
Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.
2017-12-01
The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In preparation for setting up a calibration strategy for a variably saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework will allow us to address to some extent the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. By assimilating all available data in a single step, this method iteratively updates each member of an initial ensemble of stochastic parameter realizations until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction, computed from approximations of covariance matrices, is most efficient when the ensemble realizations are multi-Gaussian. As shown by comparing the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution. 
Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to determine how best to leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian or multipoint simulations that is as conceptually consistent as possible. The performance of the algorithm, including additional steps to help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis or resampling of facies from the training image using updated local probability constraints, will be assessed.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant to the optics community are presented and discussed. Outside this parameter range the agreement degrades gracefully but the spectrum does not distort in shape. Although we demonstrate the method here for stationary random processes, we see no reason why it could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
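The single-pass procedure described above can be sketched directly: whiten, filter, restandardize, then map through the Gaussian CDF and the target inverse CDF. Function names, the example low-pass filter, and the exponential target are our assumptions, not the authors' code:

```python
import math
import numpy as np

def colored_nongaussian(n, spectral_filter, target_icdf, rng):
    """One-pass inverse-transform sketch: (1) white Gaussian noise,
    (2) impose the spectral content by FFT filtering, (3) restandardize,
    (4) push through the Gaussian CDF and the target inverse CDF so the
    marginal follows the desired distribution."""
    w = rng.standard_normal(n)
    W = np.fft.rfft(w)
    H = spectral_filter(np.fft.rfftfreq(n))      # amplitude filter
    g = np.fft.irfft(W * H, n)
    g = (g - g.mean()) / g.std()                 # unit-variance colored Gaussian
    u = 0.5 * (1.0 + np.vectorize(math.erf)(g / math.sqrt(2.0)))
    u = np.clip(u, 1e-12, 1.0 - 1e-12)           # guard the open interval
    return target_icdf(u)

# Demo with assumed targets: unit-mean exponential marginal, low-pass spectrum.
rng = np.random.default_rng(1)
x = colored_nongaussian(
    16384,
    spectral_filter=lambda f: 1.0 / np.sqrt(1.0 + (f / 0.1) ** 2),
    target_icdf=lambda u: -np.log1p(-u),         # inverse CDF of Exp(1)
    rng=rng,
)
```

As the paper notes, the final nonlinear mapping perturbs the imposed spectrum slightly, which is why the single pass is an engineering approximation rather than an exact construction.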
Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid
2017-02-01
The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), longitudinal double differential (LDD) and normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on the non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals as the maximum voluntary contraction (MVC) varied from 10% to 100% in steps of 10%. The effects of the recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. First, the stationarity of the sEMG signals was tested by the runs, reverse arrangements (RA) and modified reverse arrangements (MRA) tests. The non-Gaussianity was then characterised with bicoherence and kurtosis, and the non-linearity level was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, sEMG signals detected using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals are less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with increasing IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms, and it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model on the known values of a subset of parameters, producing a tool that can predict unknown parameters from a model conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality: it can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
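The conditioning operation at the heart of this functionality is standard multivariate-Gaussian algebra applied per mixture component. The following is our own minimal re-derivation for illustration, not the XDGMM library's API:

```python
import numpy as np

def condition_gmm(weights, means, covs, known_idx, known_vals):
    """Condition a Gaussian mixture on known values of a subset of its
    dimensions. Returns reweighted mixture weights plus the conditional
    mean and covariance of each component over the remaining dimensions."""
    dim = means.shape[1]
    free_idx = [i for i in range(dim) if i not in known_idx]
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Saa = S[np.ix_(known_idx, known_idx)]
        Sab = S[np.ix_(known_idx, free_idx)]
        Sbb = S[np.ix_(free_idx, free_idx)]
        d = known_vals - mu[known_idx]
        Sinv = np.linalg.inv(Saa)
        new_mu.append(mu[free_idx] + Sab.T @ Sinv @ d)
        new_cov.append(Sbb - Sab.T @ Sinv @ Sab)
        # reweight by each component's likelihood of the observed subset
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * Saa))
        new_w.append(w * np.exp(-0.5 * d @ Sinv @ d) / norm)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)
```

Sampling the free dimensions from the reweighted conditional mixture is then what produces, e.g., supernova parameters consistent with observed host properties.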
Separating Gravitational Wave Signals from Instrument Artifacts
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Cornish, Neil J.
2010-01-01
Central to the gravitational wave detection problem is the challenge of separating features in the data produced by astrophysical sources from features produced by the detector. Matched filtering provides an optimal solution for Gaussian noise, but in practice, transient noise excursions or "glitches" complicate the analysis. Detector diagnostics and coincidence tests can be used to veto many glitches which may otherwise be misinterpreted as gravitational wave signals. The glitches that remain can lead to long tails in the matched filter search statistics and drive up the detection threshold. Here we describe a Bayesian approach that incorporates a more realistic model for the instrument noise allowing for fluctuating noise levels that vary independently across frequency bands, and deterministic "glitch fitting" using wavelets as "glitch templates", the number of which is determined by a trans-dimensional Markov chain Monte Carlo algorithm. We demonstrate the method's effectiveness on simulated data containing low amplitude gravitational wave signals from inspiraling binary black hole systems, and simulated non-stationary and non-Gaussian noise comprised of a Gaussian component with the standard LIGO/Virgo spectrum, and injected glitches of various amplitude, prevalence, and variety. Glitch fitting allows us to detect significantly weaker signals than standard techniques.
Martin, Daniel R; Matyushov, Dmitry V
2012-08-30
We show that electrostatic fluctuations of the protein-water interface are globally non-Gaussian. The electrostatic component of the optical transition energy (energy gap) in a hydrated green fluorescent protein is studied here by classical molecular dynamics simulations. The distribution of the energy gap displays a high excess in the breadth of electrostatic fluctuations over the prediction of the Gaussian statistics. The energy gap dynamics include a nanosecond component. When simulations are repeated with frozen protein motions, the statistics shifts to the expectations of linear response and the slow dynamics disappear. We therefore suggest that both the non-Gaussian statistics and the nanosecond dynamics originate largely from global, low-frequency motions of the protein coupled to the interfacial water. The non-Gaussian statistics can be experimentally verified from the temperature dependence of the first two spectral moments measured at constant-volume conditions. Simulations at different temperatures are consistent with other indicators of the non-Gaussian statistics. In particular, the high-temperature part of the energy gap variance (second spectral moment) scales linearly with temperature and extrapolates to zero at a temperature characteristic of the protein glass transition. This result, violating the classical limit of the fluctuation-dissipation theorem, leads to a non-Boltzmann statistics of the energy gap and corresponding non-Arrhenius kinetics of radiationless electronic transitions, empirically described by the Vogel-Fulcher-Tammann law.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) is measured in low carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal is investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples. This result has been validated against Monte Carlo simulation. To ensure measurement sensitivity, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) has been used to optimize the sensor's magnetic core.
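A deliberately crude stand-in for the two-Gaussian fitting step can show how ΔG is obtained: grid-search the two peak centers with a fixed, assumed width, solving the amplitudes by linear least squares. This is our sketch, not the authors' fitting procedure:

```python
import numpy as np

def peak_gap(x, y, sigma, center_grid):
    """Fit a two-Gaussian model to an MBN-like profile by grid-searching
    the two peak centers (fixed width sigma) with amplitudes solved by
    linear least squares; return the peak gap dG = c2 - c1."""
    best = (np.inf, 0.0)
    for i, c1 in enumerate(center_grid):
        for c2 in center_grid[i + 1:]:
            # design matrix: one column per fixed-center Gaussian
            A = np.stack([np.exp(-0.5 * ((x - c) / sigma) ** 2)
                          for c in (c1, c2)], axis=1)
            amp, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = np.sum((A @ amp - y) ** 2)
            if resid < best[0]:
                best = (resid, c2 - c1)
    return best[1]
```

In practice a nonlinear least-squares fit over centers, widths, and amplitudes would replace the grid search, but the extracted quantity, the gap between the two fitted peak centers, is the same.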
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus additional information processing capacity, we obtain two complementary logic gates in a two-dimensional toggle switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results contribute to improving the computational power and functionality of the networks.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
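The Gaussian pseudolikelihood criterion for a working covariance can be sketched in a few lines; larger values favor the candidate model. This is our minimal formulation of the standard formula, with a demo comparing independence and exchangeable working covariances on residuals simulated with exchangeable correlation (all names and parameter values are illustrative):

```python
import numpy as np

def gaussian_pseudolikelihood(residuals, V):
    """Gaussian pseudolikelihood of clustered residuals (one cluster per
    row) under a working covariance V; larger values are preferred."""
    k = V.shape[0]
    _, logdet = np.linalg.slogdet(V)
    Vinv = np.linalg.inv(V)
    quad = np.einsum('ij,jk,ik->', residuals, Vinv, residuals)
    n = residuals.shape[0]
    return -0.5 * (n * (k * np.log(2.0 * np.pi) + logdet) + quad)

# Demo: residuals drawn with exchangeable correlation rho = 0.5.
rng = np.random.default_rng(2)
k, rho, n = 4, 0.5, 200
V_exch = (1.0 - rho) * np.eye(k) + rho * np.ones((k, k))
resid = rng.standard_normal((n, k)) @ np.linalg.cholesky(V_exch).T
ll_ind = gaussian_pseudolikelihood(resid, np.eye(k))
ll_exch = gaussian_pseudolikelihood(resid, V_exch)
```

Here the criterion correctly prefers the exchangeable working covariance that generated the residuals.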
[Spectral properties of light migration in apple fruit tissue].
Sun, Teng-Fei; Zhang, Teng-Teng; Zheng, Tian-Tian; Cao, Zeng-Hui; Zhang, Jun
2013-11-01
The present paper simulates the migration of 632 and 750 nm Gaussian laser beams in apple fruit tissue using the Monte Carlo method, and investigates the spectral properties of absorption and scattering. It is shown that the particular energy distribution of a Gaussian beam influences the diffusion of the laser light in the tissue. The reflection, absorption and transmittance at 750 nm are lower, so more photons interact with the tissue interior and can more clearly convey information from within the tissue. The transmission characteristics of infrared light are therefore relatively strong in biological tissue, which makes it convenient for studying biological tissue.
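The core Monte Carlo sampling scheme can be illustrated with a heavily simplified 1-D slab model: exponential free paths, absorption by weight attenuation, isotropic rescattering. The paper's simulation is 3-D with a Gaussian source profile; everything below (geometry, coefficients, termination threshold) is an illustrative assumption:

```python
import numpy as np

def slab_monte_carlo(mu_a, mu_s, thickness, n_photons, rng):
    """1-D Monte Carlo photon migration in a tissue slab: sample exponential
    free paths, deposit the absorbed weight fraction at each interaction,
    rescatter isotropically. Returns absorbed/transmitted/reflected
    weight fractions."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    absorbed = transmitted = reflected = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0        # depth, direction cosine, weight
        while True:
            z += cos_t * (-np.log(1.0 - rng.random()) / mu_t)  # free path
            if z < 0.0:
                reflected += w
                break
            if z > thickness:
                transmitted += w
                break
            absorbed += w * (1.0 - albedo)  # deposit absorbed fraction
            w *= albedo
            if w < 1e-4:                    # terminate negligible weights
                break
            cos_t = 2.0 * rng.random() - 1.0  # isotropic rescatter
    return (absorbed / n_photons, transmitted / n_photons,
            reflected / n_photons)
```

Wavelength dependence enters through mu_a and mu_s: lower absorption and scattering at 750 nm shifts weight from the reflected/absorbed fractions toward transmission, as the abstract describes.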
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference for Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a binary (zero-one) covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov chain Monte Carlo (MCMC) method, carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
Li, Zhaoyang; Kurita, Takashi; Miyanaga, Noriaki
2017-10-20
Zigzag and non-zigzag beam waist shifts in a multiple-pass zigzag slab amplifier are investigated based on the propagation of a Gaussian beam. Different incident angles in the zigzag and non-zigzag planes would introduce a direction-dependent waist-shift-difference, which distorts the beam quality in both the near- and far-fields. The theoretical model and analytical expressions of this phenomenon are presented, and intensity distributions in the two orthogonal planes are simulated and compared. A geometrical optics compensation method by a beam with 90° rotation is proposed, which not only could correct the direction-dependent waist-shift-difference but also possibly average the traditional thermally induced wavefront-distortion-difference between the horizontal and vertical beam directions.
Decoupling of rotational and translational diffusion in supercooled colloidal fluids
Edmond, Kazem V.; Elsesser, Mark T.; Hunter, Gary L.; Pine, David J.; Weeks, Eric R.
2012-01-01
We use confocal microscopy to directly observe 3D translational and rotational diffusion of tetrahedral clusters, which serve as tracers in colloidal supercooled fluids. We find that as the colloidal glass transition is approached, translational and rotational diffusion decouple from each other: Rotational diffusion remains inversely proportional to the growing viscosity whereas translational diffusion does not, decreasing by a much lesser extent. We quantify the rotational motion with two distinct methods, finding agreement between these methods, in contrast with recent simulation results. The decoupling coincides with the emergence of non-Gaussian displacement distributions for translation whereas rotational displacement distributions remain Gaussian. Ultimately, our work demonstrates that as the glass transition is approached, the sample can no longer be approximated as a continuum fluid when considering diffusion. PMID:23071311
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by combining ideas from the EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
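The kernel-regression building block of the procedure can be illustrated with a minimal Nadaraya-Watson smoother; this is a sketch of one ingredient only, not the authors' full EM estimation.

```python
import math

def kernel_smooth(x, y, x0, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel
    of bandwidth h: a weighted average with weights decaying in |x - x0|."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
```

In the mixture setting, weights of this form are further multiplied by posterior class responsibilities inside each EM iteration.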
Modelling the excitation field of an optical resonator
NASA Astrophysics Data System (ADS)
Romanini, Daniele
2014-06-01
Assuming the paraxial approximation, we derive efficient recursive expressions for the projection coefficients of a Gaussian beam over the Gauss--Hermite transverse electro-magnetic (TEM) modes of an optical cavity. While previous studies considered cavities with cylindrical symmetry, our derivation accounts for "simple" astigmatism and ellipticity, which allows us to deal with more realistic optical systems. The resulting expansion of the Gaussian beam over the cavity TEM modes provides accurate simulation of the excitation field distribution inside the cavity, in transmission, and in reflection. In particular, this requires including counter-propagating TEM modes, usually neglected in textbooks. As an illustrative application to a complex case, we simulate reentrant cavity configurations where Herriott spots are obtained at the cavity output. We show that the case of an astigmatic cavity is also easily modelled. To our knowledge, such relevant applications are usually treated under the simplified geometrical optics approximation, or using heavier numerical methods.
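In one transverse dimension, the projection coefficients can be sketched by brute-force numerical integration against Hermite-Gauss modes (for checking purposes only; the paper's point is that recursive expressions avoid such integrals). The waist values, offset, and integration grid below are illustrative.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) by the three-term recurrence."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, 2.0 * x
    for k in range(2, n + 1):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * (k - 1) * h0
    return h1

def tem_mode(n, x, w):
    """Normalized 1D Hermite-Gauss mode of waist w."""
    norm = math.sqrt(math.sqrt(2.0 / math.pi) / (2.0 ** n * math.factorial(n) * w))
    return norm * hermite(n, math.sqrt(2.0) * x / w) * math.exp(-(x / w) ** 2)

def overlap(n, w, w_in, dx=0.0, L=6.0, N=4000):
    """Projection coefficient of an input Gaussian (waist w_in, offset dx)
    onto the n-th cavity mode, by trapezoidal integration on [-L, L]."""
    step = 2.0 * L / N
    s = 0.0
    for i in range(N + 1):
        x = -L + i * step
        wgt = 0.5 if i in (0, N) else 1.0
        s += wgt * tem_mode(n, x, w) * tem_mode(0, x - dx, w_in)
    return s * step
```

A mode-matched beam couples entirely into TEM0, while a lateral offset excites the odd modes, the basic mechanism behind cavity mode-matching diagnostics.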
THE DISTRIBUTION OF COOK’S D STATISTIC
Muller, Keith E.; Mok, Mario Chen
2013-01-01
Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, we present the implementation of a fast estimator for the full bispectrum of a 3D particle distribution. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra is measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, thereby providing a quantitative probe of 3D structure formation. Our measured bispectra are determined by ~50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
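The unconditional core of the Fourier integral method, drawing random phases and scaling by the square root of the target spectrum, can be sketched in 1D as follows. The conditioning step by iterative phase identification is omitted, and a naive O(n²) DFT stands in for the FFT that specsim would use.

```python
import cmath
import math
import random

def spectral_sim(n, spectrum, seed=0):
    """Unconditional spectral simulation on a 1D grid of size n: random
    phases, amplitudes sqrt(spectrum(k)), Hermitian symmetry so the
    inverse transform is real-valued."""
    rng = random.Random(seed)
    coef = [0j] * n  # coef[0] = 0 gives a zero-mean field
    for k in range(1, n // 2):
        phase = rng.uniform(0.0, 2.0 * math.pi)
        c = math.sqrt(spectrum(k)) * cmath.exp(1j * phase)
        coef[k] = c
        coef[n - k] = c.conjugate()  # Hermitian symmetry
    field = []
    for x in range(n):
        s = sum(coef[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        field.append(s.real / n)
    return field
```

By construction the simulated field is real with zero mean, and its covariance is controlled by the chosen spectrum.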
Simulation of separated flow past a bluff body using Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ghia, K. N.; Ghia, U.; Osswald, G. A.; Liu, C. A.
1987-01-01
Two-dimensional flow past a bluff body is presently simulated on the basis of an analysis that employs the incompressible, unsteady Navier-Stokes equations in terms of vorticity and stream function. The fully implicit, time-marching, alternating-direction, implicit-block Gaussian elimination used is a direct method with second-order spatial accuracy; this allows it to avoid the introduction of any artificial viscosity. Attention is given to the simulation of flow past a circular cylinder with and without symmetry, requiring the use of either the half or the full cylinder, respectively.
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2-fold and 62.2-fold with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing, interpolated by the proposed method, would additionally shorten the acquisition time by a factor of 8.
A Gaussian Approximation Potential for Silicon
NASA Astrophysics Data System (ADS)
Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor
We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
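Stripped to a single scalar descriptor, the Gaussian process regression underlying GAP looks like ordinary kernel ridge regression. Below is a sketch with an illustrative RBF kernel and a tiny dense solver; it is not the published potential, which regresses atomic energies on SOAP descriptors.

```python
import math

def rbf(a, b, ell):
    """Squared-exponential (RBF) kernel with length scale ell."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def gp_predict(xs, ys, x_star, ell=1.0, noise=1e-8):
    """GP posterior mean at x_star: solve (K + noise*I) a = y by Gaussian
    elimination, then predict k_*^T a."""
    n = len(xs)
    M = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    a = list(ys)
    for c in range(n):                      # forward elimination
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            a[r] -= f * a[c]
    for c in range(n - 1, -1, -1):          # back substitution
        a[c] = (a[c] - sum(M[c][k] * a[k] for k in range(c + 1, n))) / M[c][c]
    return sum(a[i] * rbf(xs[i], x_star, ell) for i in range(n))
```

With near-zero noise the predictor interpolates the training data and relaxes to the prior mean (zero) far from it, the behaviour that makes GP fits well-controlled between reference configurations.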
NASA Astrophysics Data System (ADS)
Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.
Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
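What makes Gaussian basis functions convenient for SBFM is that their ray integrals are available in closed form. For the isotropic special case (the paper uses general bivariate Gaussians), the line integral depends only on the distance from the Gaussian center to the ray:

```python
import math

def ray_integral(A, x0, y0, sigma, px, py, dx, dy):
    """Closed-form line integral of the isotropic bivariate Gaussian
    A*exp(-r^2 / (2 sigma^2)) along the ray through (px, py) with
    direction (dx, dy)."""
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # perpendicular distance from the Gaussian center to the line
    d = abs((x0 - px) * uy - (y0 - py) * ux)
    return A * sigma * math.sqrt(2.0 * math.pi) * math.exp(-d * d / (2.0 * sigma * sigma))
```

Because each basis function maps to a measured ray-integral concentration analytically, the simulated-annealing fit never needs numerical quadrature along the optical paths.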
Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows
NASA Astrophysics Data System (ADS)
Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2017-11-01
In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). By simulating two problems, particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT), the convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results depend on the techniques used to extract statistics for the particle phase. For HIT, converging both Eulerian-Eulerian methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges in extracting converged, parameter-independent statistics due to the presence of shocks in the particle phase. This work was supported by the National Science Foundation and the National Energy Technology Laboratory.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
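The two stages of STEM filtering, Gaussian smoothing followed by EM (Richardson-Lucy) deconvolution, can be sketched in 1D; the kernel width and iteration count below are illustrative, and the real method operates in four dimensions on PET time series.

```python
import math

def gauss_kernel(sigma, radius=None):
    """Normalized, truncated Gaussian kernel."""
    radius = radius or int(3 * sigma)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(x, k):
    """1D convolution with edge clamping."""
    r, n = len(k) // 2, len(x)
    return [sum(k[j + r] * x[min(max(i + j, 0), n - 1)] for j in range(-r, r + 1))
            for i in range(n)]

def stem_filter(x, sigma, n_iter):
    """STEM-style sketch: Gaussian smooth, then a few Richardson-Lucy (EM)
    deconvolution iterations with the same kernel to restore signal
    frequencies suppressed by the initial filter."""
    k = gauss_kernel(sigma)
    smoothed = convolve(x, k)
    est = list(smoothed)
    for _ in range(n_iter):
        blurred = convolve(est, k)
        ratio = [s / max(b, 1e-12) for s, b in zip(smoothed, blurred)]
        corr = convolve(ratio, k)  # kernel is symmetric, so no flip needed
        est = [e * c for e, c in zip(est, corr)]
    return est
```

Constant regions pass through unchanged, while a smoothed impulse is partially re-sharpened by the EM iterations, the trade-off between variance reduction and bias described in the abstract.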
A method for modeling laterally asymmetric proton beamlets resulting from collimation
Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.
2015-01-01
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
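The BEV fluence model can be sketched as a 2D Gaussian with independent sigmas on either side of the maximum along each axis. The function below is an illustrative reading of the (σx1, σx2, σy1, σy2, μx, μy) parameterization, without the depth-dependent corrections or divergence terms.

```python
import math

def asym_gauss_2d(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Asymmetric Gaussian fluence in the beam's eye view: sigma sx1
    applies for x < mu_x and sx2 for x >= mu_x, and likewise in y, so a
    trimmer can sharpen one side of the beamlet independently."""
    sx = sx1 if x < mu_x else sx2
    sy = sy1 if y < mu_y else sy2
    return math.exp(-0.5 * ((x - mu_x) / sx) ** 2) * math.exp(-0.5 * ((y - mu_y) / sy) ** 2)
```

A smaller sigma on the trimmed side reproduces the laterally sharpened penumbra that motivates the DCS.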
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the methods did not produce relative peak areas that were statistically different from each other. Nevertheless, performance was improved relative to the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
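One of the compared techniques, Gaussian fitting of a sampled peak, has a classic three-point closed form: fitting a Gaussian through the logarithms of three samples reduces to a parabola fit. A sketch, assuming unit sample spacing and a positive middle sample that is the largest of the three:

```python
import math

def gaussian_peak_interp(y1, y2, y3):
    """Three-point Gaussian interpolation: given samples at x = -1, 0, 1
    (y2 the peak sample), fit a Gaussian and return the sub-sample peak
    offset from x = 0."""
    l1, l2, l3 = math.log(y1), math.log(y2), math.log(y3)
    return 0.5 * (l1 - l3) / (l1 - 2.0 * l2 + l3)
```

For samples drawn from an exact Gaussian the recovered offset is exact, which is why this estimator performs well on smooth chromatographic peaks.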
On an additive partial correlation operator and nonparametric estimation of graphical models.
Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu
2016-09-01
We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.
Divergence-free approach for obtaining decompositions of quantum-optical processes
NASA Astrophysics Data System (ADS)
Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.
2018-02-01
Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrary large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at its respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to simulation of continuous-variable channels.
Detection methods for non-Gaussian gravitational wave stochastic backgrounds
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2003-04-01
A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. 
Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence the genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as the glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more of the believed aspects of the viral demographic histories than does the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
On the Bar Pattern Speed Determination of NGC 3367
NASA Astrophysics Data System (ADS)
Gabbasov, R. F.; Repetto, P.; Rosado, M.
2009-09-01
An important dynamical parameter of barred galaxies is the bar pattern speed, Ω_P. Among the several methods used for determining Ω_P, the Tremaine-Weinberg method has the advantages of model independence and accuracy. In this work, we apply the method to a simulated bar, including gas dynamics, and study the effect of two-dimensional spectroscopy data quality on the robustness of the method. We added white noise and a Gaussian random field to the data and measured the corresponding errors in Ω_P. We found that a signal-to-noise ratio in surface density of ~5 introduces errors of ~20% for the Gaussian noise, while for the white noise the corresponding errors reach ~50%. At the same time, the velocity field is less sensitive to contamination. On the basis of this study, we applied the method to the spiral galaxy NGC 3367 using Hα Fabry-Pérot interferometry data. We found Ω_P = 43 ± 6 km s-1 kpc-1 for this galaxy.
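The Tremaine-Weinberg method itself reduces to a ratio of luminosity-weighted averages along a slit parallel to the major axis: Ω_P sin i = ⟨V⟩/⟨X⟩. A discrete sketch, assuming uniform sampling along the slit:

```python
def tremaine_weinberg(surface_density, velocity, x):
    """TW pattern speed from one slit: the ratio of the surface-density-
    weighted line-of-sight velocity to the surface-density-weighted
    position along the slit (inclination factor sin i left folded in)."""
    num = sum(s * v for s, v in zip(surface_density, velocity))
    den = sum(s * xi for s, xi in zip(surface_density, x))
    return num / den
```

As a consistency check, a rigidly rotating pattern with v = Ω·x returns Ω exactly, independent of the surface-density weights.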
Poisson denoising on the sphere
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.
2009-08-01
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform that aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS thus consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...) and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In this article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then performed to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
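The simplest variance stabilizing transform for Poisson data is the Anscombe transform, 2·sqrt(x + 3/8); MS-VSTS builds a multi-scale version of this idea. A sketch verifying the stabilization numerically (the Knuth Poisson sampler and the sample sizes are illustrative):

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def anscombe(x):
    """Anscombe VST: 2*sqrt(x + 3/8) maps Poisson counts to roughly
    unit-variance, approximately Gaussian values."""
    return 2.0 * math.sqrt(x + 0.375)

def stabilized_variance(lam, n, seed=0):
    """Sample variance of Anscombe-transformed Poisson(lam) draws; it
    should be close to 1 regardless of lam (for lam not too small)."""
    rng = random.Random(seed)
    z = [anscombe(poisson_draw(lam, rng)) for _ in range(n)]
    m = sum(z) / n
    return sum((v - m) ** 2 for v in z) / n
```

The raw Poisson variance grows with the mean, but after the transform it sits near 1 at every intensity, which is what allows Gaussian-style thresholding of the wavelet coefficients.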
Lin, Chuan-Kai; Wang, Sheng-De
2004-11-01
A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, is equivalent to an expansion in rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, even small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, the effects of the approximation errors of the ridge Gaussian neural networks can be attenuated to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
Aberration analysis and calculation in system of Gaussian beam illuminates lenslet array
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Zhou, Ping; Su, Tianquan; Feng, Yun; Zhao, Yuejin
2014-09-01
Low-order aberration was found when a focused Gaussian beam was imaged by a Kodak KAI-16000 image detector, which is integrated with a lenslet array. The effect of the focused Gaussian beam and a numerical calculation of the aberration are presented in this paper. First, we set up a model of the optical imaging system based on a previous experiment: a focused Gaussian beam passed through a pinhole and was received by the Kodak KAI-16000 image detector, whose lenslet-array microlenses were exactly focused on the sensor surface. Then, we illustrated the characteristics of the focused Gaussian beam and the effect on the aberration of the relative position of the Gaussian beam waist and the front spherical surface of the microlenses. Finally, we analyzed the main component of the low-order aberration and calculated the spherical aberration caused by the lenslet array according to the results of the above two steps. Our theoretical calculations show that the numerical simulation agrees well with the experimental result. Our results prove that spherical aberration is the main component, making up about 93.44% of the 48 nm error demonstrated in the previous experiment. The spherical aberration is inversely proportional to the divergence distance between the microlens and the waist, and directly proportional to the Gaussian beam waist radius.
An optimal control approach to the design of moving flight simulators
NASA Technical Reports Server (NTRS)
Sivan, R.; Ish-Shalom, J.; Huang, J.-K.
1982-01-01
An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying data are statistically independent, it usually ignores sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to the Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source at the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise, and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments
2016-01-07
Nonlinear solvers for high-intensity focused ultrasound with application to cancer treatment, AIMS, Palo Alto, 2012. Density µ(t) at mode 0 for scattering of a plane Gaussian pulse from a sphere. Two crucial components of the highly-efficient, general-purpose wave simulator we envision are reliable, low-cost methods for truncating
Gaussian Finite Element Method for Description of Underwater Sound Diffraction
NASA Astrophysics Data System (ADS)
Huang, Dehua
A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.
Shear viscosity for a heated granular binary mixture at low density.
Montanero, José María; Garzó, Vicente
2003-02-01
The shear viscosity for a heated granular binary mixture of smooth hard spheres at low density is analyzed. The mixture is heated by the action of an external driving force (Gaussian thermostat) that exactly compensates for cooling effects associated with the dissipation of collisions. The study is made from the Boltzmann kinetic theory, which is solved by using two complementary approaches. First, a normal solution of the Boltzmann equation via the Chapman-Enskog method is obtained up to first order in the spatial gradients. The mass, heat, and momentum fluxes are determined and the corresponding transport coefficients identified. As in the free cooling case [V. Garzó and J. W. Dufty, Phys. Fluids 14, 1476 (2002)], practical evaluation requires a Sonine polynomial approximation, and here it is mainly illustrated in the case of the shear viscosity. Second, to check the accuracy of the Chapman-Enskog results, the Boltzmann equation is numerically solved by means of the direct simulation Monte Carlo method. The simulation is performed for a system under uniform shear flow, using the Gaussian thermostat to control inelastic cooling. The comparison shows an excellent agreement between theory and simulation over a wide range of values of the restitution coefficients and the parameters of the mixture (masses, concentrations, and sizes).
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple models to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step from the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique to PF methods, splitting the system into one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined. Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7].
In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
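A fixed-weight caricature of Gaussian sum propagation in one dimension can make the representation concrete (EKF-style linearization per component; the adaptive weight update of Ref. [2] is deliberately omitted, and the dynamic model f is our own toy choice):

```python
import math

# A Gaussian sum approximation of a 1-D pdf: a list of (weight, mean, variance).
# Each component is pushed through x' = f(x) + w, w ~ N(0, Q), by linearizing f
# about the component mean (one EKF-style step per component).

def f(x):
    return x + 0.1 * math.sin(x)        # mildly nonlinear toy dynamics

def df(x):
    return 1.0 + 0.1 * math.cos(x)

def propagate(components, Q):
    out = []
    for w, m, P in components:
        F = df(m)
        out.append((w, f(m), F * P * F + Q))   # weight kept fixed here
    return out

components = [(0.3, -1.0, 0.2), (0.5, 0.0, 0.1), (0.2, 1.5, 0.3)]
components = propagate(components, Q=0.05)

mix_mean = sum(w * m for w, m, _ in components)
total_w = sum(w for w, _, _ in components)
```

The adaptive scheme of Ref. [2] would additionally re-optimize the weights so the mixture tracks the true forecast pdf; here they simply stay normalized.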
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. 
The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
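The Gaussian Fisher forecast used as the baseline above can be sketched for a toy straight-line model (the model, noise level and design points are illustrative, not the lensing setup):

```python
# Gaussian Fisher forecast for m(x) = a*x + b with independent noise sigma per
# point: F_ij = sum_k (dm/dtheta_i)(dm/dtheta_j) / sigma^2. The marginal
# 1-sigma error on the slope is sqrt((F^-1)_aa), which for this model reduces
# to the textbook result sigma / sqrt(sum (x - xbar)^2).

def fisher_line(xs, sigma):
    s2 = sigma * sigma
    Faa = sum(x * x for x in xs) / s2    # dm/da = x
    Fab = sum(xs) / s2                   # dm/db = 1
    Fbb = len(xs) / s2
    det = Faa * Fbb - Fab * Fab
    err_a = (Fbb / det) ** 0.5           # marginal errors from F^-1
    err_b = (Faa / det) ** 0.5
    return err_a, err_b

xs = [0.1 * i for i in range(21)]
err_a, err_b = fisher_line(xs, sigma=0.05)

xbar = sum(xs) / len(xs)
analytic = 0.05 / sum((x - xbar) ** 2 for x in xs) ** 0.5
```

The non-linear mode-coupling correction proposed in the paper amounts to replacing the diagonal power-spectrum covariance entering such a forecast with one measured from the mock surveys.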
Iwabuchi, Sadahiro; Kakazu, Yasuhiro; Koh, Jin-Young; Harata, N Charles
2014-02-15
Images in biomedical imaging research are often affected by non-specific background noise. This poses a serious problem when the noise overlaps with specific signals to be quantified, e.g. for their number and intensity. A simple and effective means of removing background noise is to prepare a filtered image that closely reflects background noise and to subtract it from the original unfiltered image. This approach is in common use, but its effectiveness in identifying and quantifying synaptic puncta has not been characterized in detail. We report on our assessment of the effectiveness of isolating punctate signals from diffusely distributed background noise using one variant of this approach, "Difference of Gaussians (DoG)", which is based on a Gaussian filter. We evaluated immunocytochemically stained, cultured mouse hippocampal neurons as an example, and provided the rationale for choosing specific parameter values for individual steps in detecting glutamatergic nerve terminals. The intensity and width of the detected puncta were proportional to those obtained by manual fitting of two-dimensional Gaussian functions to the local information in the original image. DoG was compared with the rolling-ball method, using biological data and numerical simulations. Both methods removed background noise, but differed slightly with respect to their efficiency in discriminating neighboring peaks, as well as their susceptibility to high-frequency noise and variability in object size. DoG will be useful in detecting punctate signals, once its characteristics are examined quantitatively by experimenters. Copyright © 2013 Elsevier B.V. All rights reserved.
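A minimal 1-D sketch of the DoG idea (the kernel sizes and the synthetic punctum/background are our assumptions; the paper works on 2-D micrographs):

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    r, n = len(kernel) // 2, len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)   # clamp at the borders
            acc += kv * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma_small, sigma_large, radius=25):
    a = convolve(signal, gaussian_kernel(sigma_small, radius))
    b = convolve(signal, gaussian_kernel(sigma_large, radius))
    return [x - y for x, y in zip(a, b)]

# A narrow punctum (sigma ~ 2 samples) riding on a broad diffuse background.
n = 200
signal = [math.exp(-0.5 * ((i - 100) / 2.0) ** 2)          # punctum at index 100
          + 0.5 * math.exp(-0.5 * ((i - 80) / 30.0) ** 2)  # diffuse background
          for i in range(n)]

filtered = dog(signal, sigma_small=1.0, sigma_large=8.0)
peak = max(range(n), key=lambda i: filtered[i])
```

The wide-kernel blur approximates the background, so subtracting it flattens the diffuse component while the punctum survives almost intact.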
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and calculate the maximum absolute and mean square errors for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
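The qualitative finding can be reproduced with a toy closed-form least-squares fit, using standard Cauchy draws as a stand-in for heavy-tailed Lévy noise (the noise levels, seed and functions are our choices, not the paper's test set):

```python
import math, random

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, closed form.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

rng = random.Random(1)
xs = [i / 199.0 for i in range(200)]
true_a, true_b = 2.0, 1.0

# Gaussian noise at a 5% level.
ys_gauss = [true_a * x + true_b + rng.gauss(0.0, 0.05) for x in xs]
# Heavy tails, illustrated by standard Cauchy noise via inverse-CDF sampling,
# scaled by the same nominal level.
ys_cauchy = [true_a * x + true_b + 0.05 * math.tan(math.pi * (rng.random() - 0.5))
             for x in xs]

a_g, b_g = fit_line(xs, ys_gauss)
a_c, b_c = fit_line(xs, ys_cauchy)

res_g = max(abs(y - (a_g * x + b_g)) for x, y in zip(xs, ys_gauss))
res_c = max(abs(y - (a_c * x + b_c)) for x, y in zip(xs, ys_cauchy))
```

The Gaussian fit recovers the slope tightly, while the heavy-tailed sample almost surely contains outliers that inflate the maximum residual, mirroring the paper's conclusion that least squares degrades on non-Gaussian noise.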
Single-frequency Ince-Gaussian mode operations of laser-diode-pumped microchip solid-state lasers.
Ohtomo, Takayuki; Kamikariya, Koji; Otsuka, Kenju; Chu, Shu-Chun
2007-08-20
Various single-frequency Ince-Gaussian mode oscillations have been achieved in laser-diode-pumped microchip solid-state lasers, including LiNdP4O12 (LNP) and Nd:GdVO4, by adjusting the azimuthal symmetry of the short laser resonator. Ince-Gaussian modes formed by astigmatic pumping have been reproduced by numerical simulation.
Gaussian Decomposition of Laser Altimeter Waveforms
NASA Technical Reports Server (NTRS)
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
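The inflection-point initialization described above can be sketched as follows (second differences stand in for the smoothed second derivative; the amplitude fit, importance ranking and Levenberg-Marquardt refinement are omitted, and the synthetic waveform is ours):

```python
import math

def inflection_components(y, dx):
    """Initial Gaussian estimates from a waveform's inflection points.
    A Gaussian's inflection points sit at mu +/- sigma, so each consecutive
    pair of second-derivative sign changes yields one (center, width) guess."""
    d2 = [y[i + 1] - 2.0 * y[i] + y[i - 1] for i in range(1, len(y) - 1)]
    flips = [i for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]
    comps = []
    for k in range(0, len(flips) - 1, 2):
        lo, hi = flips[k], flips[k + 1]
        comps.append(((lo + hi) / 2.0 * dx, (hi - lo) / 2.0 * dx))
    return comps

# Synthetic two-return waveform: surfaces at 3.0 and 7.0, widths 0.5 and 0.8.
dx = 0.05
xs = [i * dx for i in range(200)]
y = [math.exp(-0.5 * ((x - 3.0) / 0.5) ** 2)
     + 0.6 * math.exp(-0.5 * ((x - 7.0) / 0.8) ** 2) for x in xs]

comps = inflection_components(y, dx)
```

On this clean waveform the two components and their widths are recovered directly; on noisy data the smoothing and ranking steps described in the abstract become essential.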
NASA Astrophysics Data System (ADS)
Zelisko, Matthew; Ahmadpoor, Fatemeh; Gao, Huajian; Sharma, Pradeep
2017-08-01
The dominant deformation behavior of two-dimensional materials (bending) is primarily governed by just two parameters: bending rigidity and the Gaussian modulus. These properties also set the energy scale for various important physical and biological processes such as pore formation, cell fission and generally, any event accompanied by a topological change. Unlike the bending rigidity, the Gaussian modulus is, however, notoriously difficult to evaluate via either experiments or atomistic simulations. In this Letter, recognizing that the Gaussian modulus and edge tension play a nontrivial role in the fluctuations of a 2D material edge, we derive closed-form expressions for edge fluctuations. Combined with atomistic simulations, we use the developed approach to extract the Gaussian modulus and edge tension at finite temperatures for both graphene and various types of lipid bilayers. Our results possibly provide the first reliable estimate of this elusive property at finite temperatures and appear to suggest that earlier estimates must be revised. In particular, we show that, if previously estimated properties are employed, the graphene-free edge will exhibit unstable behavior at room temperature. Remarkably, in the case of graphene, we show that the Gaussian modulus and edge tension even change sign at finite temperatures.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose of this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations.
The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths, and that the prediction accuracy of the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
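The role of the extra kernel component can be illustrated with normalized 2-D radial Gaussians (the weights and sigmas below are invented for illustration and are not fitted beam parameters):

```python
import math

# Lateral dose kernel as a weighted sum of 2-D radial Gaussians. With weights
# summing to one and each component normalized as exp(-r^2/2s^2)/(2*pi*s^2),
# the kernel integrates to unity; a wide third component supplies the low-dose
# halo that a double Gaussian underestimates.

def kernel(r, comps):
    return sum(w * math.exp(-0.5 * (r / s) ** 2) / (2.0 * math.pi * s * s)
               for w, s in comps)

double = [(0.9, 0.4), (0.1, 1.2)]                  # (weight, sigma in cm)
triple = [(0.88, 0.4), (0.09, 1.2), (0.03, 3.0)]   # extra wide halo term

def radial_integral(comps, rmax=20.0, n=4000):
    # Midpoint-rule check that integral(kernel * 2*pi*r dr) = 1.
    dr = rmax / n
    return sum(kernel((i + 0.5) * dr, comps) * 2.0 * math.pi * (i + 0.5) * dr * dr
               for i in range(n))

I2 = radial_integral(double)
I3 = radial_integral(triple)
tail_ratio = kernel(2.5, triple) / kernel(2.5, double)   # halo region
```

Both kernels carry the same total dose, but the triple Gaussian places measurably more of it at large radius, which is where the cumulative effect over thousands of spots shows up.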
NASA Astrophysics Data System (ADS)
Hu, D. L.; Liu, X. B.
Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, and especially the dynamic stability, of systems under parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, the second-order expansions of the moment Lyapunov exponent are obtained, which agree well with the results obtained by Monte Carlo simulation. Finally, the effects of the noise and of parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This will result in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate which possible advantages a non-linear, edge-preserving postfilter could have on lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catté filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
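A 1-D sketch of the Perona-Malik scheme considered in the study (the parameter values are our own; the study applies 3-D filters to SPECT volumes):

```python
# One-dimensional Perona-Malik anisotropic diffusion: the diffusivity
# g(s) = 1 / (1 + (s/K)^2) shuts down smoothing across strong gradients
# (edges) while small fluctuations diffuse away.

def perona_malik(u, K=0.3, dt=0.2, steps=10):
    def g(s):
        return 1.0 / (1.0 + (s / K) ** 2)
    u = list(u)
    for _ in range(steps):
        nxt = []
        for i in range(len(u)):
            right = u[i + 1] - u[i] if i + 1 < len(u) else 0.0  # reflecting ends
            left = u[i - 1] - u[i] if i > 0 else 0.0
            nxt.append(u[i] + dt * (g(right) * right + g(left) * left))
        u = nxt
    return u

# Step edge at the middle plus a small alternating ripple.
n = 40
u0 = [(0.0 if i < n // 2 else 1.0) + (0.05 if i % 2 == 0 else -0.05)
      for i in range(n)]
u = perona_malik(u0)

def half_means(v):
    return sum(v[:n // 2]) / (n // 2), sum(v[n // 2:]) / (n // 2)

def ripple_var(v, a, b):
    seg = v[a:b]
    m = sum(seg) / len(seg)
    return sum((x - m) ** 2 for x in seg) / len(seg)

lo0, hi0 = half_means(u0)
lo1, hi1 = half_means(u)
```

The ripple is annihilated while the step contrast is essentially preserved, which is exactly the edge-preserving behavior the LROC study set out to evaluate against a plain Gaussian postfilter.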
Cigar-shaped quarkonia under strong magnetic field
NASA Astrophysics Data System (ADS)
Suzuki, Kei; Yoshida, Tetsuya
2016-03-01
Heavy quarkonia in a homogeneous magnetic field are analyzed by using a potential model with constituent quarks. To obtain anisotropic wave functions and corresponding eigenvalues, the cylindrical Gaussian expansion method is applied, where the anisotropic wave functions are expanded by a Gaussian basis in the cylindrical coordinates. Deformation of the wave functions and the mass shifts of the S-wave heavy quarkonia (ηc, J/ψ, ηc(2S), ψ(2S) and bottomonia) are examined for a wide range of external magnetic field. The spatial structure of the wave functions changes drastically as adjacent energy levels cross each other. Possible observables in heavy-ion collision experiments and future lattice QCD simulations are also discussed.
Foam morphology, frustration and topological defects in a Negatively curved Hele-Shaw geometry
NASA Astrophysics Data System (ADS)
Mughal, Adil; Schroeder-Turk, Gerd; Evans, Myfanwy
2014-03-01
We present preliminary simulations of foams and single bubbles confined in a narrow gap between parallel surfaces. Unlike previous work, in which the bounding surfaces are flat (the so-called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature. We demonstrate that the curvature of the bounding surfaces induces a geometric frustration in the preferred order of the foam. This frustration can be relieved by the introduction of topological defects (disclinations, dislocations and complex scar arrangements). We give a detailed analysis of these defects for foams confined in curved Hele-Shaw cells and compare our results with exotic honeycombs, built by bees on surfaces of varying Gaussian curvature. Our simulations, while encompassing surfaces of constant Gaussian curvature (such as the sphere and the cylinder), focus on surfaces with negative Gaussian curvature and in particular triply periodic minimal surfaces (such as the Schwarz P-surface and Schoen's Gyroid surface). We use the results from a sphere-packing algorithm to generate a Voronoi partition that forms the basis of a Surface Evolver simulation, which yields a realistic foam morphology.
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are attractive because they provide the whole conditional distribution at each forecast step and thus better represent the temporal variability. The question addressed in this paper is: how can spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for a flexible non-Gaussian tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model that merges a vector generalized linear model (VGLM, as a probabilistic regression tool) with the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
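The core of a nonparametric bootstrap for multisite series can be sketched as follows (a minimal illustration, not the paper's VGLM-coupled scheme): resampling whole multi-site days preserves the cross-site dependence exactly, with no Gaussian assumption. The synthetic "precipitation" data and all parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_sites = 2000, 5
# synthetic skewed, zero-inflated field with cross-site correlation
cov = 0.5 + 0.5 * np.eye(n_sites)          # off-diagonal 0.5, diagonal 1.0
z = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_days)
precip = np.where(z > 0.3, np.expm1(z), 0.0)   # wet/dry occurrence + skewed amounts

# nonparametric bootstrap: resample whole days (rows), never single sites,
# so the spatial structure of each day is carried over intact
boot = precip[rng.integers(0, n_days, size=n_days)]

print(np.corrcoef(precip.T)[0, 1], np.corrcoef(boot.T)[0, 1])  # nearly equal
```

Because entire days are resampled, both the at-site marginals and the inter-site correlation of the bootstrap sample track the original series; a Gaussian-copula generator would instead impose a parametric dependence form.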
Persistent homology and non-Gaussianity
NASA Astrophysics Data System (ADS)
Cole, Alex; Shiu, Gary
2018-03-01
In this paper, we introduce the topological persistence diagram as a statistic for Cosmic Microwave Background (CMB) temperature anisotropy maps. A central concept in 'Topological Data Analysis' (TDA), the idea of persistence is to represent a data set by a family of topological spaces. One then examines how long topological features 'persist' as the family of spaces is traversed. We compute persistence diagrams for simulated CMB temperature anisotropy maps featuring various levels of primordial non-Gaussianity of local type. Postponing the analysis of observational effects, we show that persistence diagrams are more sensitive to local non-Gaussianity than previous topological statistics, including the genus and Betti number curves, and can constrain ΔfNL^loc = 35.8 at the 68% confidence level on the simulation set, compared to ΔfNL^loc = 60.6 for the Betti number curves. Given the resolution of our simulations, we expect applying persistence diagrams to observational data will give constraints competitive with those of the Minkowski Functionals. This is the first in a series of papers in which we plan to apply TDA to different shapes of non-Gaussianity in the CMB and Large Scale Structure.
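The notion of a persistence diagram can be made concrete with a minimal 0-dimensional example on a 1-D signal (the paper works with 2-D CMB maps and higher-dimensional features; this sketch only shows the birth-death bookkeeping of the sublevel-set filtration, via union-find):

```python
import numpy as np

def persistence_0d(f):
    """0-dim persistence pairs of a 1-D signal under sublevel-set filtration.
    Each local minimum births a component; a component dies when it merges
    with an older (deeper) one (the 'elder rule')."""
    order = np.argsort(f)                 # sweep samples from low to high
    parent, birth, pairs = {}, {}, []
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in order:
        i = int(i)
        parent[i], birth[i] = i, float(f[i])
        for j in (i - 1, i + 1):          # merge with already-seen neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri       # ri is the older component
                pairs.append((birth[rj], float(f[i])))   # younger one dies here
                parent[rj] = ri
    pairs.append((birth[find(int(order[0]))], float("inf")))  # essential class
    return sorted((b, d) for b, d in pairs if d > b)

bars = persistence_0d(np.array([0.0, 2.0, 1.0, 3.0, 0.5, 2.5]))
print(bars)   # [(0.0, inf), (0.5, 3.0), (1.0, 2.0)]
```

Long bars (large death minus birth) are robust features; the diagram of bars is the statistic whose sensitivity to non-Gaussianity the paper exploits.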
NASA Astrophysics Data System (ADS)
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2018-05-01
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
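The Kaiser-Squires direct inversion mentioned above is simple enough to sketch on a flat-sky grid (the Fourier conventions below are assumptions; with no mask and no noise the inversion is exact up to the unconstrained k = 0 mode):

```python
import numpy as np

# Flat-sky Kaiser-Squires: in Fourier space gamma = D * kappa with
# D = (k1^2 - k2^2 + 2j*k1*k2)/k^2 and |D| = 1 for k != 0, so the
# maskless, noiseless inversion is simply kappa = conj(D) * gamma.
n = 64
rng = np.random.default_rng(1)
kappa = rng.normal(size=(n, n))
kappa -= kappa.mean()                       # the k = 0 mode is unconstrained

k1, k2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
ksq = k1**2 + k2**2
ksq[0, 0] = 1.0                             # avoid division by zero at k = 0
D = ((k1**2 - k2**2) + 2j * k1 * k2) / ksq

gamma = np.fft.ifft2(D * np.fft.fft2(kappa))                    # forward: mass -> shear
kappa_ks = np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma)).real   # KS inversion

print(np.max(np.abs(kappa_ks - kappa)))     # recovery up to numerical round-off
```

Masks and shape noise break this exactness in practice, which is what motivates the Wiener filter and GLIMPSE alternatives compared in the paper.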
NASA Astrophysics Data System (ADS)
Semchishen, Vladimir A.; Mrochen, Michael; Seminogov, Vladimir N.; Panchenko, Vladislav Y.; Seiler, Theo
1998-04-01
Purpose: The increasing interest in a homogeneous Gaussian light beam profile for applications in ophthalmology, e.g. photorefractive keratectomy (PRK), calls for simple optical systems with low energy losses. Therefore, we developed the Light Shaping Beam Homogenizer (LSBH), working from the UV up to the mid-IR. Method: The irregular microlens structure on a quartz surface was fabricated by using photolithography, chemical etching and chemical polishing processes. This created a three-dimensional structure on the quartz substrate characterized, in the case of a Gaussian beam, by a random distribution of the tilts of the individual irregularities. The LSBH was realized for the 193 nm and the 2.94 micrometer wavelengths. Simulation results obtained by 3-D analysis for an arbitrary incident light beam were compared to experimental results. Results: The correlation to a numerical Gaussian fit is better than 94%, with high uniformity, for an incident beam with an intensity modulation of nearly 100%. In the far field the cross section of the beam always shows rotational symmetry. Transmittance and damage threshold of the LSBH depend only on the substrate characteristics. Conclusions: Considering our experimental and simulation results, it is possible to control the angular distribution of the beam intensity after the LSBH with higher efficiency than with diffractive or holographic optical elements.
NASA Astrophysics Data System (ADS)
Zhou, Bing; Greenhalgh, S. A.
2011-01-01
We present an extension of the 3-D spectral element method (SEM), called the Gaussian quadrature grid (GQG) approach, to simulate frequency-domain seismic waves in 3-D heterogeneous anisotropic media involving complex free-surface topography and/or sub-surface geometry. It differs from the conventional SEM in two ways. The first is the replacement of the hexahedral element mesh with 3-D Gaussian quadrature abscissae to directly sample the physical properties or model parameters. This gives a point-gridded model which more exactly and easily matches the free-surface topography and/or any sub-surface interfaces. It does not require that the topography be highly smooth, a condition required in the curved finite difference method and the spectral method. The second is the derivation of a complex-valued elastic tensor expression for the perfectly matched layer (PML) model parameters for a general anisotropic medium, whose imaginary parts are determined by the PML formulation rather than having to choose a specific class of viscoelastic material. Furthermore, the new formulation is much simpler than the time-domain-oriented PML implementation. The specified imaginary parts of the density and elastic moduli are valid for arbitrary anisotropic media. We give two numerical solutions in full-space homogeneous, isotropic and anisotropic media, respectively, and compare them with the analytical solutions, as well as demonstrate the effectiveness of the PML model parameters. In addition, we perform numerical simulations for 3-D seismic waves in a heterogeneous, anisotropic model incorporating a free-surface ridge topography and validate the results against the 2.5-D modelling solution, demonstrating the capability of the approach to handle realistic situations.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method, in conjunction with the finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
Benchmark results for few-body hypernuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffino, Fabrizio Ferrari; Lonardoni, Diego; Barnea, Nir
2017-03-16
Here, the Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev–Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer–Usmani potential, and a hyperon-nucleon interaction simulating the scattering phase shifts given by NSC97f. The range of applicability of the NSHH method is briefly discussed.
Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z
2017-03-01
Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has been heretofore impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show the increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
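As a sketch of GP regression used as a smoother (with a fixed squared-exponential kernel and hand-picked hyperparameters; the paper's method instead adapts the smoothing voxel by voxel and uses a scalable implementation):

```python
import numpy as np

def gp_smooth(t, y, length=0.5, sigma_f=1.0, sigma_n=0.3):
    """GP posterior mean at the input points: K (K + sigma_n^2 I)^{-1} y."""
    d = t[:, None] - t[None, :]
    K = sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)   # squared-exponential kernel
    alpha = np.linalg.solve(K + sigma_n**2 * np.eye(t.size), y)
    return K @ alpha

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 120)                 # stand-in for one voxel's time axis
truth = np.sin(2 * np.pi * t / 4)          # slow "activity" signal
y = truth + 0.3 * rng.normal(size=t.size)  # noisy observations
yhat = gp_smooth(t, y)

print(np.std(y - truth), np.std(yhat - truth))   # smoothing reduces the error
```

Unlike a fixed-width Gaussian filter, the effective smoothing of a GP follows from the kernel and noise level, which is what makes per-voxel adaptation natural in the paper's setting.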
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Shen, C; Tian, Z
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which could be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution of a real pencil beam is hence a linear superposition of the doses of those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%.
Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
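The superposition-plus-fitting idea can be sketched in one dimension (all beam shapes and numbers below are invented; the real method fits depth dose and lateral profiles with a conjugate-gradient scheme):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy commissioning: the depth dose of a "real" beam is a Gaussian-weighted
# superposition of precomputed ideal (monoenergetic) pencil-beam doses;
# the mean energy mu and spread sigma are fitted to match a measurement.
depth = np.linspace(0.0, 30.0, 301)
energies = np.linspace(80.0, 120.0, 41)
# made-up ideal depth-dose curves: a bump whose range scales with energy
ideal = np.exp(-0.5 * ((depth[None, :] - energies[:, None] / 4.0)) ** 2)

def dose(_depth, mu, sigma):
    w = np.exp(-0.5 * ((energies - mu) / sigma) ** 2)
    w /= w.sum()                           # normalized Gaussian energy weights
    return w @ ideal                       # superposed depth-dose curve

measured = dose(depth, 100.0, 3.0)         # synthetic "measurement"
(mu, sigma), _ = curve_fit(dose, depth, measured, p0=[90.0, 5.0])
print(mu, sigma)                           # recovers roughly 100 and 3
```

The optimizer only ever touches the weights, so the expensive MC transport of the ideal beams is done once up front, which is the point of the paper's formulation.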
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
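The bounded-support point can be illustrated directly (a hedged toy, not the paper's mixture models): for data confined to [0, 1], such as methylation beta-values, a beta distribution typically attains a higher likelihood than a Gaussian. The synthetic data below are made up; the beta parameters are fitted by the method of moments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic bounded, skewed "methylation" values on [0, 1]
x = np.clip(rng.beta(0.5, 4.0, size=2000), 1e-6, 1 - 1e-6)

m, v = x.mean(), x.var()
common = m * (1.0 - m) / v - 1.0           # method-of-moments beta fit
a, b = m * common, (1.0 - m) * common

ll_beta = stats.beta.logpdf(x, a, b).sum()           # beta log-likelihood
ll_gauss = stats.norm.logpdf(x, m, x.std()).sum()    # Gaussian log-likelihood
print(ll_beta > ll_gauss)                  # True: beta respects the support
```

A Gaussian places probability mass outside [0, 1] and cannot follow the boundary pile-up, which is why support-aware models cluster such data better in the paper's experiments.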
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
Robustifying blind image deblurring methods by simple filters
NASA Astrophysics Data System (ADS)
Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing
2016-07-01
The state-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to present a robust BID framework that can robustify existing BID methods against high levels of Gaussian noise and/or non-Gaussian noise. Experiments on images in the presence of Gaussian noise, impulse noise (salt-and-pepper noise and random-valued noise) and mixed Gaussian-impulse noise, and on a real-world blurry and noisy image, show that the proposed method estimates sharper kernels and better images, and does so faster, than other methods.
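The flavor of the "simple filters" strategy can be sketched as a robust prefilter (illustrative only, not the paper's exact pipeline): denoise the observation with a cheap outlier-robust filter, so the downstream kernel-estimation step never sees the impulse noise.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
# smooth synthetic image standing in for a blurry observation
img = np.fromfunction(lambda i, j: np.sin(i / 5.0) * np.cos(j / 7.0), (64, 64))

noisy = img.copy()
mask = rng.random(img.shape) < 0.05        # 5% salt-and-pepper outliers
noisy[mask] = rng.choice([-1.0, 1.0], size=int(mask.sum()))

filtered = median_filter(noisy, size=3)    # simple, impulse-robust prefilter

rmse = lambda a: np.sqrt(np.mean((a - img) ** 2))
print(rmse(noisy), rmse(filtered))         # RMSE drops sharply after filtering
```

A median filter removes impulse outliers almost completely at the cost of mild blurring, which a subsequent BID step tolerates far better than the outliers themselves.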
Gaussian Accelerated Molecular Dynamics in NAMD.
Pang, Yui Tik; Miao, Yinglong; Wang, Yi; McCammon, J Andrew
2017-01-10
Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for "unconstrained" enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD feature and validated it on three model systems: alanine dipeptide, the chignolin fast-folding protein, and the M3 muscarinic G protein-coupled receptor (GPCR). For alanine dipeptide, while conventional molecular dynamics (cMD) simulations performed for 30 ns are poorly converged, GaMD simulations of the same length yield free energy profiles that agree quantitatively with those of 1000 ns cMD simulation. Further GaMD simulations have captured folding of the chignolin and binding of the acetylcholine (ACh) endogenous agonist to the M3 muscarinic receptor. The reweighted free energy profiles are used to characterize the protein folding and ligand binding pathways quantitatively. GaMD implemented in the scalable NAMD is widely applicable to enhanced sampling and free energy calculations of large biomolecules.
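The GaMD boost and its second-order cumulant reweighting can be sketched on synthetic potential-energy samples (a schematic, not an MD simulation; all numbers below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(-5.0, 1.0, size=10_000)     # stand-in potential-energy samples
beta = 1.0
E, k = V.max(), 0.1                        # threshold energy and force constant

# GaMD boost: harmonic in V below the threshold E, zero above it,
# so the boost dV is approximately Gaussian-distributed
dV = np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

exact = np.log(np.mean(np.exp(beta * dV)))               # exact reweighting factor
cumulant2 = beta * dV.mean() + 0.5 * beta**2 * dV.var()  # 2nd-order cumulant expansion
print(exact, cumulant2)                    # close, because dV is near-Gaussian
```

With the plain aMD boost, dV is broad and non-Gaussian, so exponential reweighting is noisy; the harmonic GaMD boost keeps dV narrow and near-Gaussian, which is why truncating the cumulant expansion at second order recovers accurate free energy profiles.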
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca
Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It is shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.
Huang, Wenzhu; Zhen, Tengkun; Zhang, Wentao; Zhang, Fusheng; Li, Fang
2015-01-01
Static strain can be detected by measuring a cross-correlation of reflection spectra from two fiber Bragg gratings (FBGs). However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs). The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs' reflection spectra, and a cross third-order cumulant is applied to the Hilbert-transformed spectra to obtain a group of noise-vanished signals from which the wavelength difference of the two FBG-FPs can be accurately calculated. The benefit of these processes is that Gaussian noise in the spectra can be suppressed completely in theory, so a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theoretical model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory conditions is achieved, a higher resolution than that of the traditional cross-correlation method. PMID:25923938
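The traditional cross-correlation baseline that this algorithm improves on can be sketched as follows (spectral shapes and values are illustrative, not the experimental FBG parameters): locate the wavelength difference between two reflection spectra from the peak of their cross-correlation.

```python
import numpy as np

wl = np.linspace(1549.0, 1551.0, 2001)     # wavelength grid, 1 pm step (assumed)
spec = lambda c: np.exp(-0.5 * ((wl - c) / 0.05) ** 2)   # toy reflection peak

rng = np.random.default_rng(0)
s1 = spec(1550.00) + 0.01 * rng.normal(size=wl.size)     # Gaussian noise floor
s2 = spec(1550.12) + 0.01 * rng.normal(size=wl.size)     # shifted by +0.12 nm

xcorr = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
lag = np.argmax(xcorr) - (wl.size - 1)     # peak position in samples
shift = -lag * (wl[1] - wl[0])             # convert lag to wavelength units
print(round(shift, 3))                     # ≈ 0.12 nm
```

At this noise level the correlation peak is easy to locate; the paper's point is that with much weaker signals, residual Gaussian noise biases the peak position, whereas third-order cumulants of non-Gaussian (Hilbert-transformed) signals suppress Gaussian noise in theory.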
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost of a single evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency, and error control. Such a method treats the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
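The general trend-plus-GP structure can be illustrated with a simplified sketch: a single polynomial trend (standing in for the paper's nested polynomial composition, which is an assumption of this sketch) is fitted first, and a zero-mean GP with a squared-exponential covariance interpolates the residuals.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # squared-exponential covariance between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_tr, y_tr, x_te, deg=2, ell=0.2, jitter=1e-6):
    # universal-kriging flavour: polynomial mean, GP on the residuals
    coeffs = np.polyfit(x_tr, y_tr, deg)
    res = y_tr - np.polyval(coeffs, x_tr)
    K = rbf(x_tr, x_tr, ell) + jitter * np.eye(len(x_tr))
    alpha = np.linalg.solve(K, res)
    return np.polyval(coeffs, x_te) + rbf(x_te, x_tr, ell) @ alpha

x = np.linspace(0.0, 1.0, 20)
y = x ** 3 + 0.1 * np.sin(12 * x)         # "expensive code" stand-in
xt = np.linspace(0.0, 1.0, 101)
pred = gp_predict(x, y, xt)
err = np.max(np.abs(pred - (xt ** 3 + 0.1 * np.sin(12 * xt))))
```

A richer mean function shifts work from the covariance to the trend, which is exactly what helps when very few code evaluations are available.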
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
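The recursive structure at the heart of co-kriging is y_high(x) ≈ ρ·y_low(x) + δ(x): a surrogate of the cheap model is fitted first, then the few expensive runs are regressed on it. In this sketch, polynomial fits stand in for the Gaussian-process surrogates (an assumption made only to keep the example short); the low-to-high recursion is the point being illustrated.

```python
import numpy as np

def f_low(x):   # cheap, biased low-fidelity model
    return np.sin(8.0 * x)

def f_high(x):  # expensive high-fidelity model
    return 1.2 * np.sin(8.0 * x) + 0.3 * x

x_low = np.linspace(0.0, 1.0, 40)    # many cheap evaluations
x_high = np.linspace(0.0, 1.0, 6)    # few expensive evaluations

low_fit = np.polyfit(x_low, f_low(x_low), 9)   # low-fidelity surrogate

# regress high-fidelity data on the low-fidelity surrogate plus a linear correction
A = np.vstack([np.polyval(low_fit, x_high), np.ones_like(x_high), x_high]).T
rho, c0, c1 = np.linalg.lstsq(A, f_high(x_high), rcond=None)[0]

x_test = np.linspace(0.0, 1.0, 100)
pred = rho * np.polyval(low_fit, x_test) + c0 + c1 * x_test
err = np.max(np.abs(pred - f_high(x_test)))
```

Six expensive runs suffice here because the cheap model already carries the oscillatory structure; the recursion only has to learn the scale ρ and a smooth correction.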
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, can be attributed to deviations from the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
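The mechanism that makes t-distribution calibration robust can be shown on a toy location problem: writing the t-distribution as a scale mixture of Gaussians, the EM E-step produces weights w_i = (ν+1)/(ν + r_i²/s²) that down-weight outliers. This is a simplified scalar analogue of the ECM calibration above, not the full Levenberg-Marquardt solver.

```python
import numpy as np

# EM-style robust location estimate under a Student's t noise model.
# nu is the (assumed known) degrees of freedom.
def t_robust_mean(x, nu=3.0, iters=50):
    mu = np.median(x)
    s2 = np.var(x)
    for _ in range(iters):
        r2 = (x - mu) ** 2
        w = (nu + 1.0) / (nu + r2 / s2)   # E-step: latent scale weights
        mu = np.sum(w * x) / np.sum(w)    # M-step: weighted mean
        s2 = np.sum(w * r2) / len(x)      # M-step: weighted scale
    return mu

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 500)
outliers = rng.normal(50.0, 1.0, 25)      # 5% gross outliers (e.g. interference)
data = np.concatenate([clean, outliers])

mu_ls = data.mean()         # Gaussian (least-squares) estimate, pulled toward outliers
mu_t = t_robust_mean(data)  # t-model estimate, resistant to outliers
```

The least-squares estimate is biased by roughly the outlier fraction times the outlier offset, while the t-based estimate stays near the true value.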
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could take a complex form, such as a multimodal one, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may incur unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues.
The multimodal posterior of the history-matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the same problem.
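Why a Gaussian-mixture proposal handles multimodality can be shown with a toy 1-D importance-sampling example: a two-component mixture proposal covers both modes of a bimodal target, whereas a single Gaussian centred on one mode would miss the other. This is a stripped-down illustration of the proposal-distribution idea, not the paper's adaptive surrogate-refinement algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def target_pdf(x):  # assumed bimodal "posterior": equal mixture of N(-3,1), N(3,1)
    return 0.5 * norm_pdf(x, -3.0, 1.0) + 0.5 * norm_pdf(x, 3.0, 1.0)

# GMM proposal roughly matched to the two modes, slightly over-dispersed
means = np.array([-3.0, 3.0]); sigmas = np.array([1.5, 1.5]); wts = np.array([0.5, 0.5])
comp = rng.choice(2, size=50_000, p=wts)
xs = rng.normal(means[comp], sigmas[comp])

prop = wts[0] * norm_pdf(xs, means[0], sigmas[0]) + wts[1] * norm_pdf(xs, means[1], sigmas[1])
w_is = target_pdf(xs) / prop        # importance weights
w_is /= w_is.sum()

post_mean = np.sum(w_is * xs)       # exact value: 0
post_m2 = np.sum(w_is * xs ** 2)    # exact second moment: 10
```

Both posterior moments are recovered accurately because the proposal has mass on both modes; a unimodal Gaussian proposal would give badly degenerate weights.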
Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.
Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng
2013-11-20
Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.
Beam distribution reconstruction simulation for electron beam probe
NASA Astrophysics Data System (ADS)
Feng, Yong-Chun; Mao, Rui-Shi; Li, Peng; Kang, Xin-Cai; Yin, Yan; Liu, Tong; You, Yao-Yao; Chen, Yu-Cong; Zhao, Tie-Cheng; Xu, Zhi-Guo; Wang, Yan-Yu; Yuan, You-Jin
2017-07-01
An electron beam probe (EBP) is a detector which makes use of a low-intensity, low-energy electron beam to measure the transverse profile, bunch shape, beam neutralization and beam wake field of an intense beam with small dimensions. While it can be applied to many aspects, we limit our analysis to beam distribution reconstruction. This kind of detector is almost non-interceptive for all of the beam and does not disturb the machine environment. In this paper, we present the theoretical aspects behind this technique for beam distribution measurement and some simulation results of the detector involved. First, a method to obtain a parallel electron beam is introduced and a simulation code is developed. An EBP as a profile monitor for dense beams is then simulated using the fast scan method for various target beam profiles, including KV distribution, waterbag distribution, parabolic distribution, Gaussian distribution and halo distribution. Profile reconstruction from the deflected electron beam trajectory is implemented and compared with the actual profile, and the expected agreement is achieved. Furthermore, in addition to the fast scan, a slow (step-by-step) scan is considered, which lowers the hardware requirements for the radio-frequency deflector. We calculate the three-dimensional electric field of a Gaussian distribution and simulate the electron motion in this field. In addition, a fast scan along the target beam direction and slow scan across the beam are also presented, and can provide a measurement of longitudinal distribution as well as transverse profile simultaneously. As an example, simulation results for the China Accelerator Driven Sub-critical System (CADS) and High Intensity Heavy Ion Accelerator Facility (HIAF) are given. Finally, a potential system design for an EBP is described.
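The physical core of the reconstruction can be sketched for the simplest case: a probe electron passing a long round beam with a Gaussian transverse charge density receives a transverse kick proportional to (1 − exp(−b²/2σ²))/b at impact parameter b. Scanning b and inverting this relation recovers the rms beam size σ. All constants are set to 1 and the fit is brute-force, both assumptions of this sketch.

```python
import numpy as np

# transverse kick vs impact parameter for a round Gaussian line charge
# (Gauss's law for the enclosed charge; prefactors dropped)
def kick(b, sigma):
    return (1.0 - np.exp(-b ** 2 / (2.0 * sigma ** 2))) / b

sigma_true = 0.8
b_scan = np.linspace(0.05, 4.0, 200)        # step-by-step (slow) scan
measured = kick(b_scan, sigma_true)

# one-parameter brute-force fit of the scanned deflection curve
trial = np.linspace(0.1, 2.0, 1000)
resid = [np.sum((kick(b_scan, s) - measured) ** 2) for s in trial]
sigma_fit = trial[int(np.argmin(resid))]
```

Because the kick curve is a monotone family in σ, the scan determines the beam size uniquely; the full simulation generalizes this to arbitrary profiles and 3-D fields.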
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Mathias, Gerald; Tavan, Paul
2014-03-01
We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. 
A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.
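The Born limit that the reaction-field theory above generalizes is easy to evaluate numerically: ΔG = −(1 − 1/ε)·q²/(8πε₀a) for a single ion of charge q and cavity radius a in a continuum of permittivity ε. The ionic radius below is an assumed, sodium-like value chosen only for illustration.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CHG = 1.602176634e-19   # elementary charge, C
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def born_energy_kj_mol(q_e, radius_nm, eps=78.4):
    # Born solvation free energy of an ion in a dielectric continuum
    a = radius_nm * 1e-9
    dg = -(1.0 - 1.0 / eps) * (q_e * E_CHG) ** 2 / (8.0 * math.pi * EPS0 * a)
    return dg * N_A / 1000.0   # J per ion -> kJ/mol

dg_na = born_energy_kj_mol(1.0, 0.17)   # ~ -400 kJ/mol scale for a small cation
```

The hundreds-of-kJ/mol magnitude of this single-ion term is why accurate continuum electrostatics matters so much for protein conformational energetics.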
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
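Random-projection hashing in its simplest form projects a feature vector onto the rows of a random matrix and binarizes the result. The sketch below replaces the paper's Fisher-criterion row selection and bimodal-GMM quantization with plain median thresholding (assumptions of this sketch), to show why small feature perturbations yield small Hamming distances while unrelated vectors land near n_bits/2.

```python
import numpy as np

rng = np.random.default_rng(12)
dim, n_bits = 256, 32
P = rng.normal(size=(n_bits, dim))   # random projection matrix (user-specific key)

def face_hash(feat, P):
    # project, then binarize around the median projection value
    proj = P @ feat
    return (proj > np.median(proj)).astype(np.uint8)

feat = rng.normal(size=dim)                  # enrolled feature vector
noisy = feat + 0.05 * rng.normal(size=dim)   # same user, slight variation
other = rng.normal(size=dim)                 # different user

h0, h1, h2 = face_hash(feat, P), face_hash(noisy, P), face_hash(other, P)
d_same = int(np.sum(h0 != h1))   # small Hamming distance
d_diff = int(np.sum(h0 != h2))   # near n_bits / 2
```

Selecting projection rows by a discriminability criterion, as the paper does, widens exactly this gap between genuine and impostor Hamming distances.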
Two-Relaxation-Time Lattice Boltzmann Method for Advective-Diffusive-Reactive Transport
NASA Astrophysics Data System (ADS)
Yan, Z.; Hilpert, M.
2016-12-01
The lattice Boltzmann method (LBM) has been applied to study a wide range of reactive transport in porous and fractured media. The single-relaxation-time (SRT) LBM, employing a single relaxation time, is the most popular LBM because it is simple to understand and implement. Nevertheless, the SRT LBM may suffer from numerical instability for small values of the relaxation time. By contrast, the multiple-relaxation-time (MRT) LBM, employing multiple relaxation times, can improve numerical stability through tuning of those relaxation times, but the complexity of implementing this method restricts its applications. The two-relaxation-time (TRT) LBM, which employs two relaxation times, combines the advantages of the SRT and MRT LBMs: it can produce simulations with better accuracy and stability than the SRT variant and is easier to implement than the MRT variant. This work evaluated the numerical accuracy and stability of the TRT method by comparing simulation results with analytical solutions of Gaussian hill transport and Taylor dispersion under different advective velocities. The accuracy generally increased with the tunable relaxation time τ, while the stability first increased and then decreased as τ increased, indicating an optimal τ for numerical stability. The free selection of τ enabled the TRT LBM to simulate Gaussian hill transport and Taylor dispersion under relatively high advective velocity, under which the SRT LBM suffered from numerical instability. Finally, the TRT method was applied to study contaminant degradation by chemotactic microorganisms in porous media, which served as a representative reactive-transport problem in this study, and correctly predicted the evolution of microorganisms and the degradation of contaminants for different transport scenarios.
To sum up, the TRT LBM produced simulation results with good accuracy and stability for various advective-diffusive-reactive transport through tuning the relaxation time τ, illustrating its potential to study various biogeochemical behaviors in the subsurface environment.
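A minimal TRT sketch for the Gaussian-hill benchmark above: 1-D diffusion on a D1Q3 lattice with zero advection. The moving populations are split into symmetric (even) and antisymmetric (odd) parts, relaxed with two separate times; the odd time sets the diffusivity D = c_s²(τ_o − ½), while the even time is the free stability parameter. All parameter values below are illustrative assumptions.

```python
import numpy as np

N, steps = 400, 500
cs2 = 1.0 / 3.0
tau_o, tau_e = 1.0, 2.0 / 3.0
D = cs2 * (tau_o - 0.5)                         # = 1/6 in lattice units

x = np.arange(N)
C0 = np.exp(-0.5 * ((x - N / 2) / 10.0) ** 2)   # initial Gaussian hill
w = np.array([2 / 3, 1 / 6, 1 / 6])             # D1Q3 weights, c = 0, +1, -1
f = w[:, None] * C0[None, :]                    # start at equilibrium

for _ in range(steps):
    C = f.sum(axis=0)
    feq = w[:, None] * C[None, :]
    fp = 0.5 * (f[1] + f[2])                    # symmetric (even) part
    fm = 0.5 * (f[1] - f[2])                    # antisymmetric (odd) part
    ep = 0.5 * (feq[1] + feq[2])
    em = 0.5 * (feq[1] - feq[2])                # zero here (no advection)
    fp += (ep - fp) / tau_e                     # even relaxation
    fm += (em - fm) / tau_o                     # odd relaxation sets D
    f[0] += (feq[0] - f[0]) / tau_e
    f[1], f[2] = fp + fm, fp - fm
    f[1] = np.roll(f[1], 1)                     # stream right
    f[2] = np.roll(f[2], -1)                    # stream left

C = f.sum(axis=0)
var0 = np.sum(C0 * (x - N / 2) ** 2) / C0.sum()
var_t = np.sum(C * (x - N / 2) ** 2) / C.sum()
D_meas = (var_t - var0) / (2 * steps)           # variance grows as 2*D*t
mass_err = abs(C.sum() - C0.sum())
```

Mass is conserved to machine precision and the measured diffusivity matches the analytic value, which is the Gaussian-hill test the abstract describes.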
Experimental Method of Generating Electromagnetic Gaussian Schell-model Beams
2015-03-26
attracted special attention for the potential use in free-space optical communications, imaging through turbulence, and remote sensing applications [11... successful experiment demonstrated a reduction in scintillation of a completely unpolarized EGSM beam propagated through simulated atmospheric turbulence [1... propagate through the atmosphere using either an atmospheric phase wheel or using additional SLMs to display atmospheric phase screens. Further, the source
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
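The quantity being estimated above is the sample autocovariance γ̂(h) = (1/n)Σ(x_t − x̄)(x_{t+h} − x̄). The complete-data version is sketched below on a simulated AR(1) series with a known autocovariance (the censored-data imputation of the paper is not reproduced here).

```python
import numpy as np

def autocov(x, h):
    # biased (1/n) sample autocovariance at lag h
    n = len(x)
    xb = x.mean()
    return np.sum((x[: n - h] - xb) * (x[h:] - xb)) / n

rng = np.random.default_rng(3)
# AR(1) with phi = 0.6 and unit innovations:
# gamma(h) = phi^h / (1 - phi^2), so gamma(0) = 1.5625, gamma(1) = 0.9375
phi, n = 0.6, 100_000
e = rng.normal(size=n)
xs = np.empty(n)
xs[0] = e[0]
for t in range(1, n):
    xs[t] = phi * xs[t - 1] + e[t]

g0, g1 = autocov(xs, 0), autocov(xs, 1)
```

When observations are censored, each censored x_t in the products above is unknown, which is the gap the Gaussian imputation method fills.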
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
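The surrogate-based failure-probability workflow can be sketched end to end: evaluate an "expensive" limit-state function g(x) at a few design points, fit a cheap surrogate (a polynomial here, standing in for the Gaussian process of the paper, which is an assumption of this sketch), and run Monte Carlo on the surrogate only, counting g(x) < 0 as failure.

```python
import numpy as np

def g_expensive(x):
    # toy limit state: the system fails when x > 2
    return 2.0 - x

rng = np.random.default_rng(4)
x_design = np.linspace(-4.0, 4.0, 9)                  # few "expensive" calls
coeff = np.polyfit(x_design, g_expensive(x_design), 1)  # cheap surrogate

samples = rng.normal(size=1_000_000)                  # x ~ N(0, 1)
p_fail = np.mean(np.polyval(coeff, samples) < 0.0)
# exact answer: P(x > 2) = 1 - Phi(2) ~ 0.02275
```

The experimental-design question the paper addresses is where to place the few expensive calls so that the surrogate is accurate precisely near the failure boundary g = 0.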
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: 1) an extended Kalman filter (EKF) augmented with Markov states, and 2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
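The "Markov states" used to augment the EKF are typically first-order Gauss-Markov processes, b_{k+1} = e^{−Δt/τ} b_k + w_k, whose steady-state variance fixes the driving-noise variance q = σ²(1 − e^{−2Δt/τ}). The sketch below simulates one such bias state with illustrative parameters (the correlation time and variance are assumptions, not mission values).

```python
import numpy as np

rng = np.random.default_rng(5)
dt, tau = 1.0, 50.0
phi = np.exp(-dt / tau)
sigma_ss = 1.0                              # target steady-state std of the bias
q = sigma_ss ** 2 * (1.0 - phi ** 2)        # driving-noise variance

n = 100_000
wn = rng.normal(0.0, np.sqrt(q), n)
b = np.empty(n)
b[0] = rng.normal(0.0, sigma_ss)            # start in the stationary distribution
for k in range(1, n):
    b[k] = phi * b[k - 1] + wn[k]           # slowly varying systematic error

var_emp = b.var()                           # should be close to sigma_ss**2
```

Appending b to the filter state with this propagation model lets the KF estimate and subtract the slowly varying star-tracker error instead of treating it as white noise.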
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin
2009-01-01
We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the Kalman filter algorithm followed by Green's function inversion reduces the standard deviation of the source inversion error by 50% to 95%.
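The Green's function inversion in its simplest linear form: each column of G holds the model response (at the observation locations) to a unit pulse from one source, so observations satisfy y = G s + noise, and the source vector is recovered by least squares. Dimensions and noise level below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_src = 60, 4
G = rng.normal(size=(n_obs, n_src))    # precomputed unit-source responses
s_true = np.array([3.0, -1.0, 0.5, 2.0])
y = G @ s_true + 0.05 * rng.normal(size=n_obs)   # noisy observations

s_hat, *_ = np.linalg.lstsq(G, y, rcond=None)    # Green's function inversion
err = np.max(np.abs(s_hat - s_true))
```

Assimilating first effectively replaces y with an analysis that has smaller, spatially smoothed errors, which is where the 50-95% error reduction at intermediate observation densities comes from.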
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Therefore, simulation studies using spherical targets are conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare, by volume rendering, the shape and area of the cerebral cortex segmented by the K-Means method and by the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
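The GMM part of the hybrid method can be sketched in 1-D: a two-component Gaussian mixture is fitted to pixel intensities by EM, and each pixel is assigned to the class with the highest posterior responsibility. The kernel-density initialization of the paper is replaced by simple quantile starts, and the intensity model is synthetic, both assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
bg = rng.normal(1.0, 0.3, 4000)       # background intensities
fg = rng.normal(4.0, 0.5, 1000)       # hot-lesion intensities
x = np.concatenate([bg, fg])

mu = np.quantile(x, [0.25, 0.9])      # crude starts for the two means
sd = np.array([x.std(), x.std()])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior responsibilities of each component for each pixel
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = pi * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = r.argmax(axis=1)             # 1 = lesion class
```

Unlike K-Means, the responsibilities give soft class probabilities and per-class variances, which is what allows the sharper activity boundaries reported above.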
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
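The quasi-periodic covariance commonly used for rotational modulation has the form k(Δt) = A·exp(−Δt²/(2l²) − Γ·sin²(πΔt/P)): a strictly periodic term damped by a squared-exponential envelope, so the signal can drift slowly in phase and amplitude. The sketch below builds the Gram matrix and draws one simulated light curve from the GP prior; all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def qp_kernel(t1, t2, A=1.0, l=20.0, G=1.0, P=5.0):
    # quasi-periodic kernel: periodic term times squared-exponential envelope
    dt = t1[:, None] - t2[None, :]
    return A * np.exp(-dt ** 2 / (2 * l ** 2) - G * np.sin(np.pi * dt / P) ** 2)

t = np.linspace(0.0, 50.0, 200)
K = qp_kernel(t, t) + 1e-6 * np.eye(len(t))      # jitter for numerical stability

rng = np.random.default_rng(8)
lc = np.linalg.cholesky(K) @ rng.normal(size=len(t))   # one simulated light curve
```

Because P enters the kernel directly as a hyperparameter, its posterior can be sampled by MCMC while marginalizing over A, l, and Γ, which is the inference strategy described above.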
A novel approach to assess the treatment response using Gaussian random field in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Mengdie; Guo, Ning; Hu, Guangshu
2016-02-15
Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, early assessment of treatment response, especially before any anatomically apparent changes after treatment, becomes an urgent clinical need. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random field (GRF) to provide a statistically more meaningful scale to evaluate therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero-mean, unit-variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128² pixels) with lesions of various diameters (2–6 mm), multiple tumor-to-background contrasts (3–10), and different changes in intensity (6.25%–30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs.
The formula for the false positive rate (FPR) estimation was developed for the proposed therapy response assessment utilizing a local average method based on the random field. The accuracy of the estimation was validated in terms of Euclidean distance and correlation coefficient. Results: It is shown that the performance of therapy response assessment is significantly improved by the introduction of variance, with a higher area under the curve (97.3%) than SUVmean (91.4%) and SUVmax (82.0%). In addition, the FPR estimation serves as a good prediction for the specificity of the proposed method, consistent with the simulation outcome, with a correlation coefficient of ∼1. Conclusions: In this work, the authors developed a method to evaluate therapy response from PET images, which were modeled as a Gaussian random field. The digital phantom simulations demonstrated that the proposed method achieved a large reduction in statistical variability by incorporating knowledge of the variance of the original Gaussian random field. The proposed method has the potential to enable prediction of early treatment response and shows promise for application to clinical practice. In future work, the authors will report on the robustness of the estimation theory for application to clinical practice of therapy response evaluation, which pertains to binary discrimination tasks at a fixed location in the image, such as the detection of small, weak lesions.
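The normalization at the heart of the GRF test is simple to sketch: each pixel of the post-therapy image is standardized by the mean and standard deviation of the pre-therapy image, so that under the null hypothesis the result is a zero-mean, unit-variance Gaussian field, and a responding lesion appears as a strong deviation. Noise level and lesion geometry below are synthetic assumptions (independent pixel noise stands in for the reconstructed-image variance model).

```python
import numpy as np

rng = np.random.default_rng(9)
shape = (128, 128)
mean_pre = np.full(shape, 100.0)        # known pre-therapy mean image
sigma = 5.0                             # per-pixel noise standard deviation

post_null = mean_pre + sigma * rng.normal(size=shape)   # no response
post_resp = post_null.copy()
post_resp[60:68, 60:68] -= 30.0         # responding lesion: uptake drops

z_null = (post_null - mean_pre) / sigma   # ~ N(0,1) field under H0
z_resp = (post_resp - mean_pre) / sigma

lesion_z = z_resp[60:68, 60:68].mean()    # strongly negative where uptake fell
```

Thresholding the z-map (with GRF theory supplying the multiple-comparison-corrected threshold) is what replaces the raw percentage-change-of-SUV criterion.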
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
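A correlation-based SNR estimator of the kind used in such waveform-level simulations can be sketched directly: for y = a·x + n with n white Gaussian and x the reference waveform, the squared correlation coefficient satisfies ρ² = SNR/(1+SNR), so SNR̂ = ρ²/(1−ρ²). The signal model and SNR value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
x = rng.normal(size=n)                      # reference (transmitted) waveform
snr_true = 4.0                              # linear scale, ~6 dB
y = x + rng.normal(size=n) / np.sqrt(snr_true)   # measured waveform

rho = np.corrcoef(x, y)[0, 1]
snr_hat = rho ** 2 / (1.0 - rho ** 2)       # correlation-based SNR estimate
```

With the SNR estimated this way, the mapping from SNR to error probability then gives the digital-system performance figure the abstract refers to.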
Near grazing scattering from non-Gaussian ocean surfaces
NASA Technical Reports Server (NTRS)
Kim, Yunjin; Rodriguez, Ernesto
1993-01-01
We investigate the behavior of electromagnetic waves scattered from non-Gaussian ocean surfaces at near grazing incidence. Even though the scattering mechanisms at moderate incidence angles are relatively well understood, the same is not true for near grazing rough surface scattering. However, from experimental ocean scattering data, it has been observed that the backscattering cross section of a horizontally polarized wave can be as large as its vertical counterpart at near grazing incidence. In addition, these returns are highly intermittent in time. There have been some suggestions that these unexpected effects may come from shadowing or feature scattering. Using numerical scattering simulations, it can be shown that the horizontal backscattering cannot be larger than the vertical one for Gaussian surfaces. The main objective of this study is to gain a clear understanding of the scattering mechanisms underlying near grazing ocean scattering. In order to evaluate the backscattering cross section from ocean surfaces at near grazing incidence, both hydrodynamic modeling of the ocean surface and an accurate near grazing scattering theory are required. For the surface modeling, we generate Gaussian surfaces from the ocean surface power spectrum, which is derived using several experimental data sets. Then, weakly nonlinear large-scale ocean surfaces are generated following Longuet-Higgins. In addition, the modulation of small waves by large waves is included using the conservation of wave action. For surface scattering, we use the Method of Moments (MOM) to calculate the backscattering from scattering patches with the two-scale shadowing approximation. The differences between Gaussian and non-Gaussian surface scattering at near grazing incidence are presented.
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
NASA Astrophysics Data System (ADS)
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc⁻¹, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc⁻¹.
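The Hamiltonian Monte Carlo idea behind the reconstruction can be illustrated on a toy Gaussian target: leapfrog-integrate Hamiltonian dynamics with a refreshed momentum, then Metropolis-accept on the energy change. The 2D precision matrix, step size, and trajectory length below are hypothetical choices, not the paper's cosmological setup:

```python
import numpy as np

rng = np.random.default_rng(7)

def grad_neglogp(q, prec):
    return prec @ q                      # gradient of 0.5 * q^T P q

def hmc_step(q, prec, eps=0.15, n_leap=20):
    p = rng.normal(size=q.shape)         # momentum refresh
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leap):              # leapfrog integration
        p_new -= 0.5 * eps * grad_neglogp(q_new, prec)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_neglogp(q_new, prec)
    # Metropolis accept/reject on the change in the Hamiltonian.
    h_old = 0.5 * q @ prec @ q + 0.5 * p @ p
    h_new = 0.5 * q_new @ prec @ q_new + 0.5 * p_new @ p_new
    return q_new if rng.uniform() < np.exp(h_old - h_new) else q

prec = np.array([[2.0, 0.6], [0.6, 1.0]])   # precision matrix of the toy target
q = np.zeros(2)
samples = []
for i in range(3000):
    q = hmc_step(q, prec)
    if i >= 500:                             # discard burn-in
        samples.append(q)
samples = np.array(samples)
cov_est = np.cov(samples.T)
cov_true = np.linalg.inv(prec)
```

The sampled covariance should approach the inverse of the precision matrix, which is the essential check that the dynamics explore the correct posterior.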
Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation
Miao, Yinglong; Feher, Victoria A; McCammon, J Andrew
2015-08-11
A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows a Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using a cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively. PMID:26300708
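The GaMD-style boost potential and its second-order cumulant reweighting can be sketched as follows. A harmonic boost is applied wherever the potential lies below a threshold energy E, and the reweighting factor is approximated by the first two cumulants of the boost. The double-well potential, threshold E, and force constant k are illustrative stand-ins for a real biomolecular energy surface:

```python
import numpy as np

rng = np.random.default_rng(1)

def V(x):
    return x ** 4 - 4 * x ** 2        # illustrative double-well model potential

x = rng.uniform(-2.2, 2.2, 100000)    # mock samples along a 1D coordinate
E, k = 0.0, 0.2                        # threshold energy and force constant (assumed)

# GaMD-style harmonic boost, applied only below the threshold energy E.
dV = np.where(V(x) < E, 0.5 * k * (E - V(x)) ** 2, 0.0)

# Second-order cumulant expansion of ln<exp(beta * dV)> used for reweighting.
beta = 1.0
log_reweight = beta * dV.mean() + 0.5 * beta ** 2 * dV.var()
```

Because the boost is a smooth quadratic function of the energy gap, its distribution stays close to Gaussian, which is what makes the two-term cumulant expansion accurate.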
NASA Astrophysics Data System (ADS)
Zheng, Guo; Wang, Jue; Wang, Lin; Zhou, Muchun; Chen, Yanru; Song, Minmin
2018-03-01
The scintillation index of pseudo-Bessel-Gaussian Schell-mode (PBGSM) beams propagating through atmospheric turbulence is analyzed with the help of wave-optics simulation, owing to the analytic difficulties. It is found that in the strong fluctuation regime, PBGSM beams are more resistant to the turbulence with appropriate parameters β and δ; however, the opposite holds in the weak fluctuation regime. Our simulation results indicate that PBGSM beams may be applied to free-space optical (FSO) communication systems only when the turbulence is strong or the propagation distance is long.
Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic
NASA Astrophysics Data System (ADS)
Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie
2018-02-01
As one of the typical wide band-gap semiconductor materials, CdZnTe has high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The signal generated by a CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse with a small pulse width to remove noise and improve the energy resolution in the downstream nuclear spectrometry data acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in our simulations. The simulation results showed that decreasing the shaping time τs-k shortens the falling time of the output pulse and yields a faster response, and that the undershoot is removed when the ratio of the input resistors is set to 1:2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using a low-noise voltage-feedback operational amplifier, the LMH6628. An experimental detection platform was built using the precision pulse generator CAKE831 to produce pulses equivalent to the signal of a CdZnTe semiconductor detector. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width (FWHM) of 200 ns, and that the output pulse of each stage is consistent with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce event loss caused by pile-up in the CdZnTe semiconductor detector and effectively improve the energy resolution.
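A discrete-time sketch of multi-stage pseudo-Gaussian (CR-RCⁿ) shaping: one differentiating high-pass stage followed by several integrating low-pass stages turns a detector-like exponential pulse into a near-Gaussian pulse. The stage count, time constants, and input decay below are illustrative, not the circuit values of the paper:

```python
import numpy as np

def rc_lowpass(x, tau):
    """First-order RC integrator stage (simple forward-difference model, tau in samples)."""
    y = np.zeros_like(x)
    a = 1.0 / (1.0 + tau)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def cr_highpass(x, tau):
    """First-order CR differentiator stage (simple forward-difference model)."""
    y = np.zeros_like(x)
    a = tau / (1.0 + tau)
    for i in range(1, len(x)):
        y[i] = a * (y[i - 1] + x[i] - x[i - 1])
    return y

# Detector-like input: a step at sample 10 with a slow exponential decay.
n = 2000
x = np.zeros(n)
x[10:] = np.exp(-np.arange(n - 10) / 500.0)

# One CR stage followed by four RC stages -> pseudo-Gaussian output pulse.
tau = 20.0
y = cr_highpass(x, tau)
for _ in range(4):
    y = rc_lowpass(y, tau)

peak = int(np.argmax(y))   # peak arrives roughly n_stages * tau after the edge
```

Shortening `tau` narrows the output pulse, mirroring the abstract's observation that a smaller shaping time gives a faster response.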
Fractal propagation method enables realistic optical microscopy simulations in biological tissues
Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.
2017-01-01
Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore, a novel flux prediction method was developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat at a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
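Fitting a Gaussian mixture model to a flux profile can be sketched with a basic expectation-maximization (EM) iteration. The 1D mock data and component parameters below are illustrative assumptions (a real heliostat flux profile would be 2D):

```python
import numpy as np

rng = np.random.default_rng(2)

# Mock "flux profile" samples: a mixture of two 1D Gaussians standing in for a
# non-elliptical flux distribution (all parameters illustrative).
data = np.concatenate([rng.normal(-1.0, 0.4, 3000),
                       rng.normal(1.5, 0.7, 2000)])

# Two-component EM for a 1D Gaussian mixture.
w = np.array([0.5, 0.5])           # mixture weights
mu = np.array([-2.0, 2.0])         # component means (initial guess)
var = np.array([1.0, 1.0])         # component variances (initial guess)
for _ in range(200):
    # E-step: responsibilities of each component for each sample.
    pdf = w * np.exp(-0.5 * (data[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk
```

The fitted means and weights can then be stored in a database, as the abstract describes, and looked up for fast flux prediction.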
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2006-01-01
The abilities of cloud-resolving models (CRMs) with the double-Gaussian based and the single-Gaussian based third-order closures (TOCs) to simulate shallow cumuli and their transition to deep convective clouds are compared in this study. The single-Gaussian based TOC is fully prognostic (FP), while the double-Gaussian based TOC is partially prognostic (PP): the latter predicts only three important third-order moments, while the former predicts all the third-order moments. A shallow cumulus case is simulated by single-column versions of the FP and PP TOC models. The PP TOC greatly improves the simulation of shallow cumulus over the FP TOC by producing more realistic cloud structures. Large differences between the FP and PP TOC simulations appear in the second- and third-order moments in the cloud layer, and are related mainly to the underestimate of the cloud height in the FP TOC simulation. Sensitivity experiments and analysis of the probability density functions (PDFs) used in the TOCs show that both turbulence-scale condensation and higher-order moments are important for realistic simulations of boundary-layer shallow cumuli. A shallow to deep convective cloud transition case is also simulated by the 2-D versions of the FP and PP TOC models. Both CRMs can capture the transition from shallow cumuli to deep convective clouds. The PP simulations produce more and deeper shallow cumuli than the FP simulations, but the FP simulations produce larger and wider convective clouds than the PP simulations. The temporal evolutions of cloud and precipitation are closely related to the turbulent transport, the cold pool, and the cloud-scale circulation. The large amount of turbulent mixing associated with the shallow cumuli slows down the increase of the convective available potential energy and inhibits the early transition to deep convective clouds in the PP simulation.
When the deep convective clouds fully develop and the precipitation is produced, the cold pools produced by the evaporation of the precipitation are not favorable to the formation of shallow cumuli.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
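The Fourier-synthesis generator recommended above can be sketched as follows: a prescribed (here flat) amplitude spectrum is paired with uniformly random phases and inverse-transformed into a real sequence, which is approximately Gaussian by the central limit theorem. The length and normalization below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fourier synthesis: flat amplitude spectrum, uniformly random phases.
n = 601                                   # sequence length (as in the study above)
nf = n // 2 + 1
amp = np.ones(nf)                         # prescribed (white) amplitude spectrum
phase = rng.uniform(0, 2 * np.pi, nf)
spec = amp * np.exp(1j * phase)
spec[0] = 0.0                             # zero the DC bin -> zero-mean sequence

x = np.fft.irfft(spec, n)                 # real sequence; ~Gaussian by the CLT
x = x / x.std()                           # normalize to unit variance
```

Shaping `amp` instead of leaving it flat would prescribe an arbitrary power spectral density, e.g. an atmospheric turbulence spectrum.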
León, Larry F; Cai, Tianxi
2012-04-01
In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power of detecting misspecification while at the same time controlling the size of the test.
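The simulation-based check can be mimicked in miniature: an observed cumulative-sum process is compared against computer-generated realizations of a zero-mean Gaussian process obtained by perturbing the residuals with independent standard normals, and a sup-norm statistic yields a p-value. This is a simplified stand-in for the paper's Kaplan-Meier-based construction; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "robust" residuals and their observed cumulative-sum process.
n = 200
resid = rng.normal(size=n)
obs = np.cumsum(resid - resid.mean())

# Null realizations: multiply residuals by independent standard normals
# (a wild-bootstrap-style perturbation) and re-form the cumulative sums.
sup_obs = np.abs(obs).max()
sups = []
for _ in range(500):
    g = rng.normal(size=n)
    null = np.cumsum(resid * g - (resid * g).mean())
    sups.append(np.abs(null).max())

# Fraction of null realizations at least as extreme as the observed process.
p_value = np.mean(np.array(sups) >= sup_obs)
```

A small p-value would indicate that the observed trend exceeds natural variation, which is the objective assessment the abstract describes.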
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagesh, S Setlur; Rana, R; Russ, M
Purpose: CMOS-based aSe detectors have two advantages over CsI-TFT-based flat panels: higher spatial sampling due to smaller pixel size, and the decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone-beam computed tomography, clinical fluoroscopy, and angiography. In this work, a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected by focal-spot blur, a set of high-resolution images of a stent (FRED from Microvention, Inc.) was first acquired using a 75 µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. The averaged image was then blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selecting too wide a function caused severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pinhole with a high-resolution detector. This spread function can be used to deblur input images acquired at the corresponding magnifications to correct for the focal-spot blur. For CBCT applications, the magnification of specific objects can be obtained from initial reconstructions and then corrected for focal-spot blurring to improve resolution. Similarly, if object magnification can be determined, such a correction may be applied in fluoroscopy and angiography.
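Threshold-based inverse filtering of a known Gaussian blur can be sketched in the Fourier domain: divide by the blur's transfer function only where its magnitude exceeds a threshold, and zero the rest to avoid noise blow-up. The image, kernel width, and threshold below are illustrative, not the stent data of the study:

```python
import numpy as np

n = 64
img = np.zeros((n, n))
img[28:36, 30:34] = 1.0                    # simple high-contrast test object

# Gaussian blur kernel (focal-spot stand-in) and its transfer function.
yy, xx = np.mgrid[:n, :n]
r2 = (xx - n // 2) ** 2 + (yy - n // 2) ** 2
psf = np.exp(-r2 / (2 * 2.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))

# Simulate the blurred acquisition (circular convolution via FFT).
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Threshold-based inverse filter: invert only well-conditioned frequencies.
mask = np.abs(H) > 1e-3
Hinv = np.zeros_like(H)
Hinv[mask] = 1.0 / H[mask]
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * Hinv))

err_blur = np.abs(blurred - img).max()
err_rest = np.abs(restored - img).max()
```

As in the abstract, the restoration quality hinges on matching the deconvolution kernel to the true blur; an over-wide kernel would amplify the poorly conditioned frequencies and produce artifacts.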
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey
2015-04-15
Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of 2% beam energy measurement uncertainty and possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV VHEE beams. Central axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and it ranged from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and 0.286° beam angular spread corresponded to a maximum 3.0% and 3.8% difference in depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences likely caused by imperfect beam steering and subsequent charge loss revealed the importance of accurate VHEE beam control and diagnostics.
NASA Astrophysics Data System (ADS)
Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki; Kawakami, Kazunori; Kikuchi, Keiichi; Miki, Hitoshi; Mochizuki, Teruhito; Ikezoe, Junpei
2001-10-01
The purpose of this study was to present an application of a novel denoising technique for improving the accuracy of cerebral blood flow (CBF) images generated from dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI). The method presented in this study was based on anisotropic diffusion (AD). The usefulness of this method was firstly investigated using computer simulations. We applied this method to patient data acquired using a 1.5 T MR system. After a bolus injection of Gd-DTPA, we obtained 40-50 dynamic images with a 1.32-2.08 s time resolution in 4-6 slices. The dynamic images were processed using the AD method, and then the CBF images were generated using pixel-by-pixel deconvolution analysis. For comparison, the CBF images were also generated with or without processing the dynamic images using a median or Gaussian filter. In simulation studies, the standard deviation of the CBF values obtained after processing by the AD method was smaller than that of the CBF values obtained without any processing, while the mean value agreed well with the true CBF value. Although the median and Gaussian filters also reduced image noise, the mean CBF values were considerably underestimated compared with the true values. Clinical studies also suggested that the AD method was capable of reducing the image noise while preserving the quantitative accuracy of CBF images. In conclusion, the AD method appears useful for denoising DSC-MRI, which will make the CBF images generated from DSC-MRI more reliable.
The Gaussian CLs method for searches of new physics
Qian, X.; Tan, A.; Ling, J. J.; ...
2016-04-23
Here we describe a method based on the CLs approach to present results in searches of new physics, under the condition that the relevant parameter space is continuous. Our method relies on a class of test statistics developed for non-nested hypotheses testing problems, denoted by ΔT, which has a Gaussian approximation to its parent distribution when the sample size is large. This leads to a simple procedure of forming exclusion sets for the parameters of interest, which we call the Gaussian CLs method. Our work provides a self-contained mathematical proof for the Gaussian CLs method that explicitly outlines the required conditions. These conditions are milder than those required by Wilks' theorem to set confidence intervals (CIs). We illustrate the Gaussian CLs method in an example of searching for a sterile neutrino, where the CLs approach was rarely used before. We also compare data analysis results produced by the Gaussian CLs method and various CI methods to showcase their differences.
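When ΔT is approximately Gaussian under both hypotheses, the CLs construction reduces to a ratio of normal tail probabilities. The means, width, and observed value below are hypothetical numbers chosen for illustration:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical Gaussian approximations of the DeltaT distribution under the
# signal+background and background-only hypotheses (shared width for simplicity).
mu_sb, mu_b, sigma = -4.0, 4.0, 2.0
t_obs = 1.0                                  # hypothetical observed statistic

cl_sb = 1.0 - phi((t_obs - mu_sb) / sigma)   # tail probability under signal+background
cl_b = 1.0 - phi((t_obs - mu_b) / sigma)     # tail probability under background-only
cl_s = cl_sb / cl_b                          # the CLs ratio

excluded_95 = cl_s < 0.05                    # exclusion at 95% CLs
```

Scanning `mu_sb` over the parameter space and collecting the points with `cl_s < 0.05` yields the exclusion set, which is the procedure the abstract formalizes.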
A method for modelling peak signal statistics on a mobile satellite transponder
NASA Technical Reports Server (NTRS)
Bilodeau, Andre; Lecours, Michel; Pelletier, Marcel; Delisle, Gilles Y.
1990-01-01
A simulation method is proposed to model the peak duration and energy content of signal peaks in a mobile communication satellite operating in a Frequency Division Multiple Access (FDMA) mode, and an estimate of those power peaks is presented for a system whose channels are modeled as band-limited Gaussian noise, taken as a reasonable representation of Amplitude Companded Single Sideband (ACSSB), Minimum Shift Keying (MSK), or Phase Shift Keying (PSK) modulated signals. The simulation results show that, under this hypothesis, the levels of the signal power peaks exceeded for 10 percent, 1 percent, and 0.1 percent of the time are well described by a Rayleigh law, and that their duration is extremely short and inversely proportional to the total FDM system bandwidth.
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
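The FFT approach can be sketched via spectral factorization: at each frequency the cross-spectral density matrix is Cholesky-factored and applied to complex Gaussian variates before an inverse FFT. The toy example below uses two series and a frequency-independent cross-spectral matrix, a simplification of the thesis' general six-series method:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 4096
# Target cross-spectral density matrix (constant over frequency for simplicity).
S = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(S)                 # spectral factor: L @ L.T == S

# Complex Gaussian variates for each frequency bin, colored by L.
nf = n // 2 + 1
z = (rng.normal(size=(2, nf)) + 1j * rng.normal(size=(2, nf))) / np.sqrt(2)
spec = L @ z
spec[:, 0] = 0.0                          # zero DC -> zero-mean series

series = np.fft.irfft(spec, n, axis=1)    # two correlated Gaussian time series

corr = np.corrcoef(series[0], series[1])[0, 1]
```

With a frequency-dependent `S(f)`, the Cholesky factor would be computed per bin, reproducing arbitrary auto- and cross-spectra; the zero-lag correlation here should recover the prescribed 0.6.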
A method for modeling laterally asymmetric proton beamlets resulting from collimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.
2015-03-15
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX, and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam's eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong's fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
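The asymmetric Gaussian BEV fluence described above, with a separate sigma on each side of the peak along each axis, can be sketched as follows (all parameter values are illustrative, not fitted values from the paper):

```python
import numpy as np

def asym_gauss_1d(u, mu, s_lo, s_hi):
    """1D Gaussian with different sigmas below and above the peak location mu."""
    s = np.where(u < mu, s_lo, s_hi)
    return np.exp(-0.5 * ((u - mu) / s) ** 2)

def bev_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Separable asymmetric Gaussian fluence in the beam's eye view."""
    return asym_gauss_1d(x, mu_x, sx1, sx2) * asym_gauss_1d(y, mu_y, sy1, sy2)

# Evaluate on a grid; peak at (1.0, -0.5) mm with asymmetric x-sigmas.
x = np.linspace(-10, 10, 201)
y = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(x, y)
F = bev_fluence(X, Y, 1.0, -0.5, 2.0, 3.0, 1.5, 1.5)

# Asymmetry check: two points equidistant from the peak along x.
f_left = float(bev_fluence(-1.0, -0.5, 1.0, -0.5, 2.0, 3.0, 1.5, 1.5))
f_right = float(bev_fluence(3.0, -0.5, 1.0, -0.5, 2.0, 3.0, 1.5, 1.5))
```

Because the sigma on the +x side (3.0) is larger than on the −x side (2.0), the profile falls off more slowly to the right of the peak, which is the lateral asymmetry a trimmer introduces.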
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
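The plane-wave superposition idea can be sketched by summing random 1D waves over a fixed set of directions. The direction count, mode count, and power-law amplitude decay below are illustrative, and the paper's wavelet Monte Carlo machinery for each 1D wave is omitted:

```python
import numpy as np

rng = np.random.default_rng(8)

# Approximate an isotropic 2D Gaussian random field as a superposition of
# random 1D plane waves over a fixed finite set of directions.
n_dir, n_modes, n = 16, 32, 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)

field = np.zeros((n, n))
for j in range(n_dir):
    theta = np.pi * j / n_dir                    # direction of plane wave j
    u = X * np.cos(theta) + Y * np.sin(theta)    # coordinate along that direction
    for k in range(1, n_modes + 1):
        amp = k ** (-5.0 / 6.0)                  # power-law-like amplitude decay
        a, b = rng.normal(size=2)                # Gaussian random amplitudes
        field += amp * (a * np.cos(k * u) + b * np.sin(k * u))
field /= np.sqrt(n_dir * n_modes)                # keep the overall variance O(1)
```

Each gridpoint is a sum of many independent Gaussian contributions, so the resulting field is Gaussian with approximately isotropic power-law statistics.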
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
NASA Astrophysics Data System (ADS)
Nie, Yongming; Li, Xiujian; Qi, Junli; Ma, Haotong; Liao, Jiali; Yang, Jiankun; Hu, Wenhua
2012-03-01
Based on a refractive beam shaping system, the transformation of a quasi-Gaussian beam into a dark hollow Gaussian beam by a phase-only liquid crystal spatial light modulator (LC-SLM) is proposed. According to the principles of energy conservation and constant optical path length, the phase distribution of the aspheric lens and the phase-only LC-SLM can modulate the wave-front properly to generate the hollow beam. The numerical simulation results indicate that the dark hollow intensity distribution of the output shaped beam is maintained well over a certain propagation distance, during which the dark region does not shrink, whereas that of an ideal hollow Gaussian beam does. With a carefully designed phase modulation profile loaded into the LC-SLM, the experimental results indicate that the dark hollow intensity distribution of the output shaped beam is maintained well even at distances well beyond 550 mm from the LC-SLM, in agreement with the numerical simulation results.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruban, V. P., E-mail: ruban@itp.ac.ru
2015-05-15
The nonlinear dynamics of an obliquely oriented wave packet on a sea surface is analyzed analytically and numerically for various initial parameters of the packet in relation to the problem of so-called rogue waves. Within the Gaussian variational ansatz applied to the corresponding (1+2)-dimensional hyperbolic nonlinear Schrödinger equation (NLSE), a simplified Lagrangian system of differential equations is derived that describes the evolution of the coefficients of the real and imaginary quadratic forms appearing in the Gaussian. This model provides a semi-quantitative description of the process of nonlinear spatiotemporal focusing, which is one of the most probable mechanisms of rogue wave formation in random wave fields. The system of equations is integrated in quadratures, which allows one to better understand the qualitative differences between the linear and nonlinear focusing regimes of a wave packet. Predictions of the Gaussian model are compared with the results of direct numerical simulation of fully nonlinear long-crested waves.
Chialvo, Ariel A.; Vlcek, Lukas
2014-11-01
We present a detailed derivation of the complete set of expressions required to implement an Ewald summation approach for handling the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force, and the corresponding virial. Moreover, we show how the derived expressions converge to the known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction-field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.
Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS
2013-09-30
solves the Navier-Stokes equations under the Boussinesq approximation (Fringer et al., 2006). The formulation is based on the method outlined by ... stratified systems. Figure 4 shows a nonhydrostatic isopycnal simulation of oscillatory flow in a continuously stratified fluid over a Gaussian sill. This ... Modeling the Earth System, Boulder (invited). Sankaranarayanan, S., and Fringer, O. B., 2013, "Dynamics of barotropic low-frequency fluctuations in
A Gaussian method to improve work-of-breathing calculations.
Petrini, M F; Evans, J N; Wall, M A; Norman, J R
1995-01-01
The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case includes not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting a Gaussian curve to the variable P, which is obtained from the difference between the esophageal pressure change and the airway pressure change during each breath. In spontaneously breathing adults the normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that lie more than two standard deviations from the mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while producing a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 J/L (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp, to 0.60 J/L, but the GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction. (ABSTRACT TRUNCATED AT 250 WORDS)
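The elimination step described above can be sketched in a few lines; this is a minimal illustration on synthetic data, using a sample mean and standard deviation in place of the paper's Gaussian-curve fit, with all pressure values invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-breath values of P (esophageal minus airway pressure change):
# normal breaths cluster around a common mean; coughs are large outliers.
normal_P = rng.normal(10.0, 1.0, size=50)
cough_P = np.array([25.0, 30.0])            # simulated non-breathing maneuvers
P = np.concatenate([normal_P, cough_P])

def gaussian_breath_filter(P, n_sigma=2.0):
    """Keep breaths whose P lies within n_sigma standard deviations of the
    estimated Gaussian mean (the paper's cutoff is two standard deviations)."""
    mu, sigma = P.mean(), P.std(ddof=1)
    return np.abs(P - mu) <= n_sigma * sigma

keep = gaussian_breath_filter(P)            # normal breaths kept, coughs dropped
```

The work of breathing would then be computed only over the retained breaths.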
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
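The core superstatistical mechanism described above, Gaussian motion conditional on a randomly drawn transport parameter yielding a non-Gaussian marginal, can be illustrated with a short sketch; the exponential diffusivity distribution below is an invented example, not the paper's memory-kernel parametrisation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000

# Each trajectory gets its own random diffusivity D; conditional on D the
# displacement is Gaussian with variance 2*D. For exponentially distributed D
# the marginal displacement distribution is Laplace (exponential tails).
D = rng.exponential(1.0, size=n)
dx = rng.normal(scale=np.sqrt(2.0 * D), size=n)

# Excess kurtosis: 0 for a Gaussian, 3 for a Laplace distribution.
kurt = np.mean(dx**4) / np.mean(dx**2) ** 2 - 3.0
```

The positive excess kurtosis signals the non-Gaussian shape even though each individual trajectory is Gaussian given its parameter.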
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution calculated. Plots are produced of error versus filter length; and from these plots the most accurate length filters determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
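The inverse-filter construction described above can be sketched with NumPy. The Gaussian response width, grid size, and use of circular (FFT-based) convolution are illustrative assumptions, and the data here are noise-free, so the deconvolution should recover the input almost exactly:

```python
import numpy as np

n = 64
x = np.arange(n)

# Assumed narrow Gaussian response function, normalized to unit area
response = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
response /= response.sum()

# A peak-type input, circularly convolved with the response to form the data
signal = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)
data = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(response)))

# Inverse filter: inverse DFT of the reciprocal of the response transform
inv_filter = np.real(np.fft.ifft(1.0 / np.fft.fft(response)))

# Deconvolution: circular convolution of the data with the inverse filter
recovered = np.real(np.fft.ifft(np.fft.fft(data) * np.fft.fft(inv_filter)))

err = np.max(np.abs(recovered - signal))   # tiny for noise-free data
```

With noisy data the reciprocal spectrum amplifies high-frequency noise, which is why the study truncates the filter length and applies Morrison's smoothing before deconvolution.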
NASA Astrophysics Data System (ADS)
Djoko, Martin; Kofane, T. C.
2018-06-01
We investigate the propagation characteristics and stabilization of generalized-Gaussian pulses in highly nonlinear homogeneous media with higher-order dispersion terms. The optical pulse propagation has been modeled by the higher-order (3+1)-dimensional cubic-quintic-septic complex Ginzburg-Landau [(3+1)D CQS-CGL] equation. We have used the variational method to find a set of differential equations characterizing the variation of the pulse parameters in fiber-optic links. The variational equations we obtained have been integrated numerically by means of the fourth-order Runge-Kutta (RK4) method, which also allows us to investigate the evolution of the generalized-Gaussian beam and the pulse evolution along a doped optical fiber. We have then solved the original nonlinear (3+1)D CQS-CGL equation with the split-step Fourier method (SSFM) and compared the results with those obtained using the variational approach. A good agreement between the analytical and numerical methods is observed. The evolution of the generalized-Gaussian beam shows oscillatory propagation, and bell-shaped dissipative optical bullets are obtained under certain parameter values in both the anomalous and normal chromatic dispersion regimes. Using the natural control parameter of the solution as it evolves, namely the total energy Q, our numerical simulations reveal the existence of 3D stable vortex dissipative light bullets, 3D stable spatiotemporal optical solitons, and stationary and pulsating optical bullets, depending on the initial input condition used (symmetric or elliptic).
Brightness analysis of an electron beam with a complex profile
NASA Astrophysics Data System (ADS)
Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi
2018-05-01
We propose a novel analysis method to extract the bright core part of an electron beam with a complex phase-space profile. This method is useful for evaluating the performance of simulation data of a linear accelerator (linac), such as an x-ray free-electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared with that of a Gaussian beam in a synchrotron. In this analysis, the brightness of the undulator radiation is calculated and the core of the electron beam is determined by maximizing that brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data that was not expressible by a set of slice parameters. FEL simulations showed that the FEL intensity was well preserved even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.
Lewicki, Jennifer L.; Bergfeld, Deborah; Cardellini, Carlo; Chiodini, Giovanni; Granieri, Domenico; Varley, Nick; Werner, Cynthia A.
2005-01-01
We present a comparative study of soil CO2 flux (FCO2) measured by five groups (Groups 1–5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1–5 measured FCO2 using the accumulation chamber method at 5-m spacing within a 900 m2 grid during a morning (AM) period. These measurements were repeated by Groups 1–3 during an afternoon (PM) period. Measured FCO2 ranged from 218 to 14,719 g m−2 day−1. The variability of the five measurements made at each grid point ranged from ±5 to 167%. However, the arithmetic means of fluxes measured over the entire grid and the associated total CO2 emission rate estimates varied between groups by only ±22%. All three groups that made PM measurements reported an 8–19% increase in total emissions over the AM results. Based on a comparison of measurements made during AM and PM times, we argue that this change is due in large part to natural temporal variability of gas flow, rather than to measurement error. In order to estimate the mean FCO2 and associated CO2 emission rate of one data set and to map the spatial FCO2 distribution, we compared six geostatistical methods: arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods. While the total CO2 emission rates estimated using the different techniques varied by only ±4.4%, the FCO2 maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of FCO2, but a variety of geostatistical methods are appropriate for estimating the total CO2 emission rate from a study area, which is a primary goal in volcano monitoring research.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space ([74])
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Generation of singular optical beams from fundamental Gaussian beam using Sagnac interferometer
NASA Astrophysics Data System (ADS)
Naik, Dinesh N.; Viswanathan, Nirmal K.
2016-09-01
We propose a simple free-space optics recipe for the controlled generation of optical vortex beams with a vortex dipole or a single charge vortex, using an inherently stable Sagnac interferometer. We investigate the role played by the amplitude and phase differences in generating higher-order Gaussian beams from the fundamental Gaussian mode. Our simulation results reveal how important the control of both the amplitude and the phase difference between superposing beams is to achieving optical vortex beams. The creation of a vortex dipole from null interference is unveiled through the introduction of a lateral shear and a radial phase difference between two out-of-phase Gaussian beams. A stable and high quality optical vortex beam, equivalent to the first-order Laguerre-Gaussian beam, is synthesized by coupling lateral shear with linear phase difference, introduced orthogonal to the shear between two out-of-phase Gaussian beams.
Anomalous and non-Gaussian diffusion in Hertzian spheres
NASA Astrophysics Data System (ADS)
Ouyang, Wenze; Sun, Bin; Sun, Zhiwei; Xu, Shenghua
2018-09-01
By means of molecular dynamics simulations, we study non-Gaussian diffusion in a fluid of Hertzian spheres. The time-dependent non-Gaussian parameter, an indicator of dynamic heterogeneity, increases with increasing temperature. When the temperature is high enough, the dynamic heterogeneity becomes very significant, and it seems counterintuitive that the maximum of the non-Gaussian parameter and the position of its peak decrease monotonically with increasing density. By fitting the curves of the self-intermediate scattering function, we find that the characteristic relaxation time τα is, surprisingly, not coupled with the time τmax at which the non-Gaussian parameter reaches its maximum. The intriguing features of non-Gaussian diffusion at high enough temperatures can be associated with the weakly correlated mean-field behavior of Hertzian spheres. In particular, the time τmax is nearly inversely proportional to the density at extremely high temperatures.
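The non-Gaussian parameter used above as the heterogeneity indicator can be estimated directly from particle displacements; the sketch below uses synthetic displacements (invented scales, not the paper's simulation data) to show that it vanishes for Gaussian diffusion and is positive for a heterogeneous mixture of slow and fast particles:

```python
import numpy as np

rng = np.random.default_rng(1)

def alpha2(displacements):
    """3D non-Gaussian parameter alpha2 = 3<r^4>/(5<r^2>^2) - 1;
    zero for Gaussian displacements, positive for heterogeneous dynamics."""
    r2 = np.sum(displacements**2, axis=1)
    return 3.0 * np.mean(r2**2) / (5.0 * np.mean(r2) ** 2) - 1.0

# Gaussian (homogeneous Brownian-like) displacements
gauss = rng.normal(size=(200_000, 3))
a2_gauss = alpha2(gauss)

# Heterogeneous dynamics: mixture of slow and fast populations
slow = rng.normal(scale=0.5, size=(100_000, 3))
fast = rng.normal(scale=2.0, size=(100_000, 3))
a2_mixed = alpha2(np.vstack([slow, fast]))
```

In a simulation the same estimator would be evaluated at each lag time t to obtain the curve whose peak defines τmax.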
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, the various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations reduce the average error in the geometry (averaged over the 18 cases) by a factor of about two compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
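The distinction drawn above reduces to one line of arithmetic: a multilevel energy is a linear combination of component-level energies, with unit (±1) coefficients in Gaussian-2/3 and optimized noninteger coefficients in the multicoefficient methods. A minimal sketch with illustrative numbers only (not actual energies or coefficients from any of these methods):

```python
def multilevel_energy(component_energies, coefficients):
    """Composite energy E = sum_i c_i * E_i over the component calculations."""
    assert len(component_energies) == len(coefficients)
    return sum(c * e for c, e in zip(coefficients, component_energies))

# Gaussian-2/3 style: levels added and subtracted with unit coefficients
e_unit = multilevel_energy([-1.00, -1.10, -1.05], [1.0, 1.0, -1.0])

# Multicoefficient style: empirically optimized noninteger coefficients
e_mc = multilevel_energy([-1.00, -1.10, -1.05], [0.3, 1.2, -0.5])
```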
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Karacan, C Özgen; Olea, Ricardo A
2018-03-01
Chemical properties of coal largely determine coal handling, processing, beneficiation methods, and design of coal-fired power plants. Furthermore, these properties impact coal strength, coal blending during mining, as well as coal's gas content, which is important for mining safety. In order for these processes and quantitative predictions to be successful, safer, and economically feasible, it is important to determine and map chemical properties of coals accurately in order to infer these properties prior to mining. Ultimate analysis quantifies principal chemical elements in coal. These elements are C, H, N, S, O, and, depending on the basis, ash, and/or moisture. The basis for the data is determined by the condition of the sample at the time of analysis, with an "as-received" basis being the closest to sampling conditions and thus to the in-situ conditions of the coal. The parts determined or calculated as the result of ultimate analyses are compositions, reported in weight percent, and pose the challenges of statistical analyses of compositional data. The treatment of parts using proper compositional methods may be even more important in mapping them, as most mapping methods carry uncertainty due to partial sampling as well. In this work, we map the ultimate analyses parts of the Springfield coal from an Indiana section of the Illinois basin, USA, using sequential Gaussian simulation of isometric log-ratio transformed compositions. We compare the results with those of direct simulations of compositional parts. We also compare the implications of these approaches in calculating other properties using correlations to identify the differences and consequences. Although the study here is for coal, the methods described in the paper are applicable to any situation involving compositional data and its mapping.
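An isometric log-ratio transform of the kind applied above can be sketched as follows; the sequential pivot (Helmert-type) basis used below is one common choice and an assumption, since the text does not specify which basis the study uses:

```python
import numpy as np

def ilr(parts):
    """Isometric log-ratio transform of a D-part composition into D-1
    unconstrained coordinates, using sequential (pivot) balances:
    ilr_i = sqrt(i/(i+1)) * ln(gmean(x_1..x_i) / x_{i+1})."""
    z = np.log(np.asarray(parts, dtype=float))
    D = z.shape[-1]
    coords = []
    for i in range(1, D):
        g = z[..., :i].mean(axis=-1)   # log of geometric mean of first i parts
        coords.append(np.sqrt(i / (i + 1.0)) * (g - z[..., i]))
    return np.stack(coords, axis=-1)

# e.g. an invented 4-part composition standing in for ultimate-analysis parts
coords = ilr([0.70, 0.05, 0.01, 0.24])
```

The transform is scale invariant, so closed (sum-to-one) and unclosed weight-percent data give the same coordinates; after simulation in transformed space, values are mapped back with the inverse ilr before mapping.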
Demers, Hendrix; Ramachandra, Ranjan; Drouin, Dominique; de Jonge, Niels
2012-01-01
Lateral profiles of the electron probe of scanning transmission electron microscopy (STEM) were simulated at different vertical positions in a micrometers-thick carbon sample. The simulations were carried out using the Monte Carlo method in the CASINO software. A model was developed to fit the probe profiles, consisting of the sum of a Gaussian function describing the central peak of the profile and two exponential-decay functions describing its tail. Calculations were performed to investigate the fraction of unscattered electrons as a function of the vertical position of the probe in the sample. Line scans were also simulated over gold nanoparticles at the bottom of a carbon film to calculate the achievable resolution as a function of the sample thickness and the number of electrons. The resolution was shown to be noise limited for film thicknesses of less than 1 μm; probe broadening limited the resolution for thicker films. The validity of the simulation method was verified by comparing simulated data with experimental data. The simulation method can be used as a quantitative method to predict STEM performance or to interpret STEM images of thick specimens. PMID:22564444
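The fitted profile model (a Gaussian core plus two exponential-decay tails) can be written down directly; the parameter values below are purely illustrative, not the fitted values from the study:

```python
import numpy as np

def probe_profile(r, a_g, sigma, a_1, l_1, a_2, l_2):
    """Radial probe-profile model: a Gaussian central peak plus two
    exponential decays describing the scattered-electron tail."""
    return (a_g * np.exp(-0.5 * (r / sigma) ** 2)
            + a_1 * np.exp(-r / l_1)
            + a_2 * np.exp(-r / l_2))

r = np.linspace(0.0, 50.0, 501)
# Invented parameters: narrow core, short- and long-range tails
profile = probe_profile(r, a_g=1.0, sigma=1.0,
                        a_1=0.05, l_1=5.0, a_2=0.01, l_2=20.0)
```

In practice the six parameters would be obtained by least-squares fitting of this function to the simulated profiles at each depth.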
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known among physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give its error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm, and we highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that the method is very promising.
Electronically nonadiabatic wave packet propagation using frozen Gaussian scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondorskiy, Alexey D., E-mail: kondor@sci.lebedev.ru; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp
2015-09-21
We present an approach that allows one to employ the adiabatic wave packet propagation technique and semiclassical theory to treat nonadiabatic processes by using trajectory hopping. The approach generates a bunch of hopping trajectories and provides all the additional information needed to incorporate the effect of nonadiabatic coupling into the wave packet dynamics. This provides an interface between a general adiabatic frozen Gaussian wave packet propagation method and the trajectory surface hopping technique. The basic idea suggested in [A. D. Kondorskiy and H. Nakamura, J. Chem. Phys. 120, 8937 (2004)] is revisited and complemented in the present work by the elaboration of efficient numerical algorithms. We combine our approach with the adiabatic Herman-Kluk frozen Gaussian approximation. The efficiency and accuracy of the resulting method are demonstrated by applying it to popular benchmark model systems, including three Tully's models and a 24D model of pyrazine. It is shown that the photoabsorption spectrum is successfully reproduced using a few hundred trajectories. We employ the compact finite-difference Hessian update scheme to assess the feasibility of ab initio "on-the-fly" simulations. It is found that this technique allows us to obtain reliable final results using only several Hessian matrix calculations per trajectory.
Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.
Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R
2013-11-01
A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.
NASA Astrophysics Data System (ADS)
Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun
2011-09-01
We developed a high-precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate the light response characteristics for a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data are then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieves high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
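The train-then-decode logic can be illustrated with a toy stand-in: below, each position is modeled by a single diagonal Gaussian (a one-component mixture, whereas the paper fits several mixture components per response), and decoding picks the position that maximizes the log-likelihood of the feature vector. All channel counts, dimensions, and noise models are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_positions, n_train = 8, 4, 500

# Each true interaction position j has a characteristic mean light response
means = rng.uniform(50.0, 150.0, size=(n_positions, n_channels))

def sample(j, n):
    """Simulated N-channel light-response vectors for position j
    (Poisson-like counting noise approximated as Gaussian)."""
    return means[j] + rng.normal(scale=np.sqrt(means[j]), size=(n, n_channels))

# Training step: estimate a diagonal Gaussian per position from simulated events
train = {j: sample(j, n_train) for j in range(n_positions)}
mu = np.array([train[j].mean(axis=0) for j in range(n_positions)])
var = np.array([train[j].var(axis=0) for j in range(n_positions)]) + 1e-9

def decode(x):
    """Return the position whose Gaussian maximizes the log-likelihood of x."""
    ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(var), axis=1)
    return int(np.argmax(ll))
```

A multi-component GMM per position would replace the single Gaussian with a weighted sum of components, but the decoding rule (maximize the evaluated likelihood) is the same.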
Charge fluctuations in nanoscale capacitors.
Limmer, David T; Merlet, Céline; Salanne, Mathieu; Chandler, David; Madden, Paul A; van Roij, René; Rotenberg, Benjamin
2013-09-06
The fluctuations of the charge on an electrode contain information on the microscopic correlations within the adjacent fluid and their effect on the electronic properties of the interface. We investigate these fluctuations using molecular dynamics simulations in a constant-potential ensemble with histogram reweighting techniques. This approach offers, in particular, an efficient, accurate, and physically insightful route to the differential capacitance that is broadly applicable. We demonstrate these methods with three different capacitors: pure water between platinum electrodes and a pure as well as a solvent-based organic electrolyte each between graphite electrodes. The total charge distributions with the pure solvent and solvent-based electrolytes are remarkably Gaussian, while in the pure ionic liquid the total charge distribution displays distinct non-Gaussian features, suggesting significant potential-driven changes in the organization of the interfacial fluid.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.
Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communications costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse grid methods might further reduce the cost significantly in higher dimensions. The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field-line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
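The Gaussian quadrature mentioned above is standard Gauss-Legendre quadrature: n nodes integrate polynomials up to degree 2n-1 exactly on [-1, 1], which is why DG element integrals of polynomial bases are cheap and exact. A minimal check, not tied to Gkeyll itself:

```python
import numpy as np

# Gauss-Legendre quadrature: 3 points integrate any polynomial of
# degree <= 5 exactly on the reference interval [-1, 1].
n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)

f = lambda x: 7*x**5 + 3*x**2 + 1
approx = np.sum(weights * f(nodes))
# Odd term integrates to 0; integral of 3x^2 is 2; integral of 1 is 2.
exact = 4.0
print(approx, exact)
```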
Nicolas, F; Coëtmellec, S; Brunel, M; Allano, D; Lebrun, D; Janssen, A J E M
2005-11-01
The authors have studied the diffraction pattern produced by a particle field illuminated by an elliptic and astigmatic Gaussian beam. They demonstrate that the bidimensional fractional Fourier transformation is a mathematically suitable tool to analyse the diffraction pattern generated not only by a collimated plane wave [J. Opt. Soc. Am A 19, 1537 (2002)], but also by an elliptic and astigmatic Gaussian beam when two different fractional orders are considered. Simulations and experimental results are presented.
Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi
2012-06-01
A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions of Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulations of the nonlocal nonlinear Schrödinger equation.
NASA Astrophysics Data System (ADS)
Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula
2018-01-01
The sub-threshold model formulation of a Gaussian-Doped Double-Gate JunctionLess (GD-DG-JL) FET including the source/drain depletion length is reported in the present work under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of Gaussian straggle, channel length, oxide and channel thickness, and high-k gate dielectric has been studied using extensive TCAD device simulation.
Orthogonal Gaussian process models
Plumlee, Matthew; Joseph, V. Roshan
2017-01-01
Gaussian process models are widely adopted for nonparametric/semiparametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
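For context, a bare-bones Gaussian process predictor with a squared-exponential kernel is sketched below; the paper's contribution is to modify the covariance so the stochastic part is orthogonal to a polynomial mean, which this plain sketch deliberately does not reproduce.

```python
import numpy as np

# Minimal zero-mean GP regression with a squared-exponential kernel.
# Length-scale and jitter are illustrative choices.
def k(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.linspace(0, 1, 8)          # training inputs
y = np.sin(2 * np.pi * X)         # noise-free training targets
Xs = np.array([0.5])              # prediction point

K = k(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
mean = k(Xs, X) @ np.linalg.solve(K, y)
print(mean)
```

By symmetry of the training data about x = 0.5, the posterior mean there is (numerically) zero, matching sin(π) = 0.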
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
A Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance, but also good adaptability; even in special cases such as large grayscale changes, it still performs well.
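The core idea of a per-pixel Gaussian background model can be sketched as follows. This is a simplified single-Gaussian-per-pixel version with an illustrative learning rate; the paper's improvement is to adapt the *number* of Gaussians per pixel, which is not reproduced here.

```python
import numpy as np

# Per-pixel Gaussian background model: keep a running mean/variance per
# pixel and flag pixels far from the background as foreground.
rng = np.random.default_rng(2)
H, W, alpha = 16, 16, 0.05          # frame size and learning rate
mean = np.full((H, W), 100.0)
var = np.full((H, W), 25.0)

for _ in range(50):                 # background frames, gray ~ N(100, 5)
    frame = rng.normal(100, 5, (H, W))
    d = frame - mean
    mean += alpha * d               # exponential moving average of mean
    var += alpha * (d**2 - var)     # exponential moving average of variance

frame = rng.normal(100, 5, (H, W))
frame[4:8, 4:8] = 200.0             # a bright "moving object"
foreground = np.abs(frame - mean) > 2.5 * np.sqrt(var)
print(foreground.sum())
```

The 4x4 object block is flagged while almost all background pixels pass the 2.5-sigma test.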
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^(-6). Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs and used to facilitate prompt multimessenger follow-up.
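A Dirichlet process Gaussian mixture can be approximated with scikit-learn's truncated stick-breaking implementation. The sketch below fits a density to synthetic two-cluster samples standing in for posterior position samples; the cluster locations and truncation level are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Synthetic 2D "posterior samples" drawn from two well-separated blobs.
samples = np.vstack([rng.normal(0, 1, (300, 2)),
                     rng.normal(6, 1, (300, 2))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(samples)

# The effective number of occupied components is inferred from the data,
# rather than fixed in advance as in a plain GMM.
occupied = (dpgmm.weights_ > 0.05).sum()
print("occupied components:", occupied)
```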
Continuous-variable entanglement distillation of non-Gaussian mixed states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Ruifang; Lassen, Mikael; Department of Physics, Technical University of Denmark, Building 309, DK-2800 Lyngby
2010-07-15
Many different quantum-information communication protocols such as teleportation, dense coding, and entanglement-based quantum key distribution are based on the faithful transmission of entanglement between distant locations in an optical network. The distribution of entanglement in such a network is, however, hampered by loss and noise that are inherent in all practical quantum channels. Thus, to enable faithful transmission one must resort to the protocol of entanglement distillation. In this paper we present a detailed theoretical analysis and an experimental realization of continuous-variable entanglement distillation in a channel that is afflicted by different kinds of non-Gaussian noise. The continuous-variable entangled states are generated by exploiting the third-order nonlinearity in optical fibers, and the states are sent through a free-space laboratory channel in which the losses are altered to simulate a free-space atmospheric channel with varying losses. We use linear optical components, homodyne measurements, and classical communication to distill the entanglement, and we find that by using this method the entanglement can be probabilistically increased for some specific non-Gaussian noise channels.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Li, DongXu; Liu, ZhiZhen; Liu, Liang
2013-03-01
The dexterous upper limb serves as the most important tool for astronauts to implement in-orbit experiments and operations. This study developed a simulated weightlessness experiment and invented new measuring equipment to quantitatively evaluate the muscle ability of the upper limb. Isometric maximum voluntary contractions (MVCs) and surface electromyography (sEMG) signals of right-handed pushing at three positions were measured for eleven subjects. In order to enhance the comprehensiveness and accuracy of muscle force assessment, the study focused on signal processing techniques. We applied a combination method consisting of time-, frequency-, and bi-frequency-domain analyses. Time- and frequency-domain analyses estimated the root mean square (RMS) and median frequency (MDF) of the sEMG signals, respectively. Higher-order spectra (HOS) in the bi-frequency domain evaluated the maximum bispectrum amplitude (B_max), Gaussianity level (S_g), and linearity level (S_l) of the sEMG signals. Results showed that B_max, S_l, and RMS values all increased as force increased, while MDF and S_g values declined as force increased. The research demonstrated that the combination method is superior to the conventional time- and frequency-domain analyses: beyond sEMG signal amplitude and power spectrum, it also characterizes phase-coupling information and the non-Gaussianity and non-linearity levels of the sEMG. The findings from the study can help ergonomists estimate astronaut muscle performance, so as to optimize in-orbit operation efficacy and minimize musculoskeletal injuries.
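The two conventional features, RMS and MDF, are straightforward to compute. The sketch below uses a toy 80 Hz sinusoid plus noise in place of real sEMG, with an assumed 1 kHz sampling rate.

```python
import numpy as np

# Time-domain RMS and frequency-domain median frequency (MDF) of a toy
# "sEMG" signal: an 80 Hz tone plus weak Gaussian noise.
fs = 1000.0                                 # Hz, assumed sampling rate
t = np.arange(0, 1, 1/fs)
rng = np.random.default_rng(4)
emg = np.sin(2*np.pi*80*t) + 0.1*rng.normal(size=t.size)

rms = np.sqrt(np.mean(emg**2))

psd = np.abs(np.fft.rfft(emg))**2           # one-sided power spectrum
freqs = np.fft.rfftfreq(emg.size, 1/fs)
cum = np.cumsum(psd)
mdf = freqs[np.searchsorted(cum, cum[-1]/2)]  # frequency splitting power in half

print(f"RMS = {rms:.3f}, MDF = {mdf:.1f} Hz")
```

For this toy signal the MDF lands on the 80 Hz tone, and the RMS is close to 1/sqrt(2), the RMS of a unit sinusoid.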
NASA Astrophysics Data System (ADS)
Ma, Qian; Kang, Dongdong; Zhao, Zengxiu; Dai, Jiayu
2018-01-01
The electrical conductivity of hot dense hydrogen is directly calculated by molecular dynamics simulation with a reduced electron force field method, in which the electrons are represented as Gaussian wave packets with fixed sizes. Here, the temperature is higher than the electron Fermi temperature (T > 300 eV, ρ = 40 g/cc). The present method can avoid the Coulomb catastrophe and give the limit of electrical conductivity based on the Coulomb interaction. We investigate the effect of ion-electron coupled movements, which is lost in static methods such as the density functional theory based Kubo-Greenwood framework. It is found that the ionic dynamics, which contributes to the dynamical electrical microfield and electron-ion collisions, reduces the conductivity significantly compared with fixed-ion-configuration calculations.
NASA Astrophysics Data System (ADS)
Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro
2014-05-01
This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh=1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh<1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equation of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e. the probability density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a comparison between the numerical developments and real observations in field conditions.
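The departure from Gaussianity is tracked through the third and fourth moments of the surface elevation. The sketch below contrasts a linear (Gaussian) sea with a weakly nonlinear one built from an illustrative second-order-style correction; the 0.15 coefficient is an arbitrary assumption, not a fitted steepness.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
eta_gauss = rng.normal(0.0, 1.0, 100_000)            # linear (Gaussian) sea
eta_steep = eta_gauss + 0.15*(eta_gauss**2 - 1)      # toy nonlinear sea

# Skewness and (Pearson) kurtosis of the surface elevation.
for name, eta in [("Gaussian", eta_gauss), ("nonlinear", eta_steep)]:
    print(name, stats.skew(eta), stats.kurtosis(eta, fisher=False))
```

The quadratic correction produces the positive skewness (sharper crests, flatter troughs) that second-order wave theory predicts, while the Gaussian sea stays near zero skewness and kurtosis 3.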
NASA Astrophysics Data System (ADS)
Jones, Andrew P.; Crain, Jason; Sokhan, Vlad P.; Whitfield, Troy W.; Martyna, Glenn J.
2013-04-01
Treating both many-body polarization and dispersion interactions is now recognized as a key element in achieving the level of atomistic modeling required to reveal novel physics in complex systems. The quantum Drude oscillator (QDO), a Gaussian-based, coarse-grained electronic structure model, captures both many-body polarization and dispersion and has linear-scale computational complexity with system size; hence it is a leading candidate for next-generation simulation methods. Here, we investigate the extent to which the QDO treatment reproduces the desired long-range atomic and molecular properties. We present closed-form expressions for leading-order polarizabilities and dispersion coefficients and derive invariant (parameter-free) scaling relationships among multipole polarizability and many-body dispersion coefficients that arise due to the Gaussian nature of the model. We show that these "combining rules" hold to within a few percent for noble gas atoms, alkali metals, and simple (first-row hydride) molecules such as water; this is consistent with the surprising success that models with underlying Gaussian statistics often exhibit in physics. We present a diagrammatic Jastrow-type perturbation theory tailored to the QDO model that serves to illustrate the rich types of responses that the QDO approach engenders. QDO models for neon, argon, krypton, and xenon, designed to reproduce gas phase properties, are constructed and their condensed phase properties explored via linear-scale diffusion Monte Carlo (DMC) and path integral molecular dynamics (PIMD) simulations. Good agreement with experimental data for structure, cohesive energy, and bulk modulus is found, demonstrating a degree of transferability that cannot be achieved using current empirical models or fully ab initio descriptions.
Tracer diffusion in a sea of polymers with binding zones: mobile vs. frozen traps.
Samanta, Nairhita; Chakrabarti, Rajarshi
2016-10-19
We use molecular dynamics simulations to investigate the tracer diffusion in a sea of polymers with specific binding zones for the tracer. These binding zones act as traps. Our simulations show that the tracer can undergo normal yet non-Gaussian diffusion under certain circumstances, e.g., when the polymers with traps are frozen in space and the volume fraction and the binding strength of the traps are moderate. In this case, as the tracer moves, it experiences a heterogeneous environment and exhibits confined continuous time random walk (CTRW) like motion resulting in a non-Gaussian behavior. Also the long time dynamics becomes subdiffusive as the number or the binding strength of the traps increases. However, if the polymers are mobile then the tracer dynamics is Gaussian but could be normal or subdiffusive depending on the number and the binding strength of the traps. In addition, with increasing binding strength and number of polymer traps, the probability of the tracer being trapped increases. On the other hand, removing the binding zones does not result in trapping, even at comparatively high crowding. Our simulations also show that the trapping probability increases with the increasing size of the tracer and for a bigger tracer with the frozen polymer background the dynamics is only weakly non-Gaussian but highly subdiffusive. Our observations are in the same spirit as found in many recent experiments on tracer diffusion in polymeric materials and question the validity of using Gaussian theory to describe diffusion in a crowded environment in general.
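Normal-yet-non-Gaussian diffusion of the kind described above is commonly quantified with the non-Gaussian parameter α₂(t) = 3⟨Δr⁴⟩/(5⟨Δr²⟩²) − 1 (in 3D), which vanishes for Gaussian displacements. The sketch below checks this on synthetic displacements, using a two-population (trapped/free) mixture as a crude stand-in for the heterogeneous environment the tracer samples.

```python
import numpy as np

# Non-Gaussian parameter alpha_2 = 3<dr^4>/(5<dr^2>^2) - 1 for 3D
# displacement vectors; zero for a Gaussian displacement distribution.
def alpha2(displacements):
    r2 = np.sum(displacements**2, axis=1)
    return 3.0*np.mean(r2**2) / (5.0*np.mean(r2)**2) - 1.0

rng = np.random.default_rng(6)
gauss = rng.normal(0, 1, (100_000, 3))
# Heterogeneous environment: half the steps "trapped" (small), half "free".
mixed = np.concatenate([rng.normal(0, 0.3, (50_000, 3)),
                        rng.normal(0, 1.7, (50_000, 3))])

print(alpha2(gauss), alpha2(mixed))
```

The Gaussian sample gives α₂ ≈ 0, while the mixed-mobility sample gives a clearly positive α₂, the signature of non-Gaussian displacement statistics.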
NASA Astrophysics Data System (ADS)
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-07-01
The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the continually evolving, due to blade rotation, inertial properties, as well as the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.
Lin, Zhili; Chen, Xudong; Ding, Panfeng; Qiu, Weibin; Pu, Jixiong
2017-04-03
The ponderomotive interaction of high-power laser beams with collisional plasma is modeled in the nonrelativistic regime and is simulated using the powerful finite-difference time-domain (FDTD) method for the first time in the literature. The nonlinear and dissipative dielectric constant function of the collisional plasma is deduced, taking the ponderomotive effect into account, and is implemented in the discrete framework of FDTD algorithms. A Maclaurin series expansion approach is applied to implement the obtained physical model, and the time average of the square of the light field is extracted by numerically evaluating an integral identity based on the composite trapezoidal rule. Two numerical examples corresponding to two different types of laser beams, a Gaussian beam and a vortex Laguerre-Gaussian beam, propagating in collisional plasma are presented for specified laser and plasma parameters to verify the validity of the proposed FDTD-based approach. Simulation results show the anticipated self-focusing and attenuation of the laser beams and the deformation of the spatial density distributions of the electron plasma along the beam propagation path. Owing to the flexibility of the FDTD method in light-beam excitation and accurate modeling of complex materials, the proposed approach has wide application prospects in the study of complex laser-plasma interactions at small scales.
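The composite-trapezoidal time average mentioned above can be illustrated on a single-frequency field, where the cycle average of E² is exactly E₀²/2. The frequency and amplitude below are illustrative; the paper applies the same rule inside an FDTD update loop.

```python
import numpy as np

# Cycle average of E^2 for E(t) = E0 cos(w t) via the composite
# trapezoidal rule; the exact answer is E0^2 / 2.
E0 = 2.0
w = 2*np.pi*1e14                 # rad/s, illustrative optical frequency
T = 2*np.pi/w                    # one optical cycle
t = np.linspace(0.0, T, 2001)
E = E0*np.cos(w*t)

f = E**2
dt = t[1] - t[0]
integral = dt*(0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])  # composite trapezoid
avg = integral / T
print(avg, E0**2/2)
```

For a smooth periodic integrand sampled over a full period, the trapezoidal rule is extremely accurate, which is part of why it suits this extraction.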
Novel theory for propagation of tilted Gaussian beam through aligned optical system
NASA Astrophysics Data System (ADS)
Xia, Lei; Gao, Yunguo; Han, Xudong
2017-03-01
A novel theory for tilted beam propagation is established in this paper. By setting the propagation direction of the tilted beam as the new optical axis, we establish a virtual optical system that is aligned with the new optical axis. Within the first order approximation of the tilt and off-axis, the propagation of the tilted beam is studied in the virtual system instead of the actual system. To achieve more accurate optical field distributions of tilted Gaussian beams, a complete diffraction integral for a misaligned optical system is derived by using the matrix theory with angular momentums. The theory demonstrates that a tilted TEM00 Gaussian beam passing through an aligned optical element transforms into a decentered Gaussian beam along the propagation direction. The deviations between the peak intensity axis of the decentered Gaussian beam and the new optical axis have linear relationships with the misalignments in the virtual system. ZEMAX simulation of a tilted beam through a thick lens exposed to air shows that the errors between the simulation results and theoretical calculations of the position deviations are less than 2‰ when the misalignments εx, εy, εx', εy' are in the range of [-0.5, 0.5] mm and [-0.5, 0.5]°.
Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D.
2017-01-01
Deep brain stimulation (DBS) has compelling results in the desynchronization of the basal ganglia neuronal activities and thus, is used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase the neuronal activity and reduce energy cost which will prolong the battery life, hence avoiding device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The results of the proposed Gaussian waveform with delay outperformed those of rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and shorter length of delay compared to numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts and was also 60% lower than a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform.
The promising results of GDG waveforms in terms of eliciting action potential, desynchronization of the basal ganglia neurons and reduction of energy consumption can potentially enhance the performance of DBS devices. PMID:28848417
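The GDG stimulus shape, a cathodic Gaussian, an interphase delay, then a charge-balancing anodic Gaussian, can be sketched as follows. Amplitudes, widths, and the delay are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Gaussian-Delay-Gaussian (GDG) waveform sketch: cathodic Gaussian phase,
# interphase delay, then an anodic Gaussian of equal area (charge balance).
fs = 100_000                      # Hz, sampling rate
t = np.arange(0, 0.004, 1/fs)     # 4 ms window

def gauss(t, t0, sigma):
    return np.exp(-0.5*((t - t0)/sigma)**2)

delay = 0.001                     # s, interphase delay (illustrative)
cathodic = -1.0 * gauss(t, 0.0005, 0.0001)
anodic = 1.0 * gauss(t, 0.0005 + delay + 0.001, 0.0001)
gdg = cathodic + anodic

net_charge = np.sum(gdg) / fs     # ~0 for a charge-balanced pulse
print(f"net charge ~ {net_charge:.2e}")
```

Equal-area opposite-sign phases make the net injected charge vanish, which is the safety constraint the paper's waveforms respect.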
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima; this is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand for the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis in the phase plane; then we design a constrained full-waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example on the smoothed Marmousi model.
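The basic PSO loop underlying MLPSO is sketched below on a cheap quadratic misfit; in the paper each objective evaluation would instead be an FGA wave simulation, with different fidelities at different swarm levels. The swarm size, inertia, and acceleration coefficients are standard illustrative choices.

```python
import numpy as np

# Minimal single-level particle swarm optimization (PSO) sketch.
rng = np.random.default_rng(7)

def objective(x):                       # stand-in for a seismic misfit
    return np.sum((x - 1.5)**2, axis=1)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))        # particle positions
v = np.zeros_like(x)                    # particle velocities
pbest, pval = x.copy(), objective(x)    # per-particle bests
gbest = pbest[pval.argmin()].copy()     # swarm best

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    val = objective(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()].copy()

print(gbest)
```

The swarm converges to the global minimum at (1.5, 1.5) without any gradient information, which is the property MLPSO exploits for the non-convex inversion.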
Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering
NASA Astrophysics Data System (ADS)
Samsonov, Alexei A.; Johnson, Chris R.
2004-05-01
MRI data analysis is routinely done on the magnitude part of complex images. While both real and imaginary image channels contain Gaussian noise, magnitude MRI data are characterized by the Rice distribution. However, conventional filtering methods often assume the image noise to be zero-mean and Gaussian distributed. Estimating an underlying image from magnitude data therefore produces a biased result. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). Incorporating the Rice PDF into a noise filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent image phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In our proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data, providing efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin-echo acquisition protocols.
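The Rician bias is easy to reproduce: the magnitude of a signal plus complex Gaussian noise has a mean above the true signal, and the bias grows as SNR drops. A minimal sketch with an assumed unit noise level:

```python
import numpy as np

# Mean of |signal + complex Gaussian noise| for several true signal
# levels; the bias above the true signal is the Rician bias.
def mean_magnitude(true_signal, sigma=1.0, n=200_000, seed=8):
    rng = np.random.default_rng(seed)
    re = true_signal + rng.normal(0, sigma, n)   # real channel
    im = rng.normal(0, sigma, n)                 # imaginary channel
    return np.hypot(re, im).mean()

for s in (0.0, 1.0, 5.0):
    print(s, mean_magnitude(s))
```

At zero signal the mean is σ·sqrt(π/2) ≈ 1.25σ rather than zero, which is exactly the low-SNR bias the paper avoids by filtering the real and imaginary channels separately.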
Generally astigmatic Gaussian beam representation and optimization using skew rays
NASA Astrophysics Data System (ADS)
Colbourne, Paul D.
2014-12-01
Methods are presented of using skew rays to optimize a generally astigmatic optical system to obtain the desired Gaussian beam focus and minimize aberrations, and to calculate the propagating generally astigmatic Gaussian beam parameters at any point. The optimization method requires very little computation beyond that of a conventional ray optimization, and requires no explicit calculation of the properties of the propagating Gaussian beam. Unlike previous methods, the calculation of beam parameters does not require matrix calculations or the introduction of non-physical concepts such as imaginary rays.
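For contrast, the conventional matrix route the skew-ray method sidesteps is the complex-q ABCD propagation law q' = (Aq + B)/(Cq + D), valid for a stigmatic (non-astigmatic) beam. A minimal sketch with an assumed wavelength, waist, and two-element system:

```python
import numpy as np

# Complex-q Gaussian beam propagation through ABCD elements.
lam = 1.55e-6                     # m, assumed wavelength
w0 = 0.5e-3                       # m, assumed waist radius
q = 1j * np.pi * w0**2 / lam      # q-parameter at the waist

# Free space of 0.1 m, then a thin lens of focal length 0.05 m.
for A, B, C, D in [(1, 0.1, 0, 1), (1, 0, -1/0.05, 1)]:
    q = (A*q + B) / (C*q + D)

# Recover the beam radius from Im(1/q) = -lam / (pi w^2).
w = np.sqrt(-lam / (np.pi * np.imag(1/q)))
print(f"beam radius: {w*1e3:.4f} mm")
```

A thin lens changes only the wavefront curvature (the real part of 1/q), so the beam radius just after the lens matches the free-space value at that plane; the generally astigmatic case treated in the paper requires more than this single complex parameter.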
Dynamical heterogeneities of cold 2D Yukawa liquids
NASA Astrophysics Data System (ADS)
Wang, Kang; Huang, Dong; Feng, Yan
2018-06-01
Dynamical heterogeneities of 2D liquid dusty plasmas at different temperatures are investigated systematically using Langevin dynamical simulations. From the simulated trajectories, various heterogeneity measures have been calculated, such as the distance matrix, the averaged squared displacement, the non-Gaussian parameter, and the four-point susceptibility. It is found that, for 2D Yukawa liquids, both spatial and temporal heterogeneities in dynamics are more severe at a lower temperature near the melting point. For various temperatures, the calculated non-Gaussian parameter of 2D Yukawa liquids contains two peaks at different times, indicating the most heterogeneous dynamics, which are attributed to the transition of different motions and the α relaxation time, respectively. In the diffusive motion, the most heterogeneous dynamics for a colder Yukawa liquid happen more slowly, as indicated by both the non-Gaussian parameter and the four-point susceptibility.
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Taruya, Atsushi; Koyama, Kazuya; Sabiu, Cristiano
2010-07-01
We study the halo bispectrum from non-Gaussian initial conditions. Based on a set of large N-body simulations starting from initial density fields with local-type non-Gaussianity, we find that the halo bispectrum exhibits a strong dependence on the shape and scale of Fourier-space triangles near squeezed configurations at large scales. The amplitude of the halo bispectrum roughly scales as f_NL^2. The resultant scaling with the triangular shape is consistent with that predicted by Jeong & Komatsu based on perturbation theory. We systematically investigate this dependence with varying redshifts and halo mass thresholds. It is shown that the f_NL dependence of the halo bispectrum is stronger for more massive haloes at higher redshifts. This feature can be a useful discriminator of inflation scenarios in future deep and wide galaxy redshift surveys.
Yang, Chunyong; Xu, Chuang; Ni, Wenjun; Gan, Yu; Hou, Jin; Chen, Shaoping
2017-10-16
A novel scheme is proposed to mitigate atmospheric turbulence effects in free-space optical (FSO) communication employing orbital angular momentum (OAM) multiplexing. In this scheme, a Gaussian beam propagating along a common path is used as an auxiliary light to obtain the distortion information caused by atmospheric turbulence. After the turbulence, heterodyne coherent detection is employed to realize the turbulence mitigation: carrying the same turbulence distortion, the OAM beams and the Gaussian beam are utilized as the signal light and the local-oscillator light, respectively, so that the turbulence distortion is counteracted to a large extent. Meanwhile, a phase-matching method is proposed to select a specific OAM mode; the discrimination between neighboring OAM modes is markedly improved by detecting the output photocurrent. Moreover, two methods of beam-size adjustment have been analyzed to achieve better turbulence mitigation performance. Numerical results show that the system bit error rate (BER) can reach 10^-5 under strong simulated turbulence.
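The phase-matching idea for OAM mode selection can be sketched as a projection onto conjugate helical phases: after multiplying the field by exp(-i*l*theta), only the matching topological charge survives the azimuthal integration. An idealized, noise-free illustration:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
l_true = 3
field = np.exp(1j * l_true * theta)          # azimuthal phase of an OAM beam

# Phase matching: project onto exp(-i*l*theta); only the matching charge l
# yields a non-vanishing azimuthal average (a bright on-axis component).
overlaps = {l: abs(np.mean(field * np.exp(-1j * l * theta))) for l in range(-5, 6)}
detected = max(overlaps, key=overlaps.get)
print(detected)   # -> 3
```

Real turbulence spreads power into neighboring charges; the paper's contribution is recovering this discrimination after distortion, which this noise-free sketch does not model.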
Quantifying parameter uncertainty in stochastic models using the Box-Cox transformation
NASA Astrophysics Data System (ADS)
Thyer, Mark; Kuczera, George; Wang, Q. J.
2002-08-01
The Box-Cox transformation is widely used to transform hydrological data to make them approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm: the use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies on the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
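The Gaussianizing step itself is simple: the Box-Cox transform is y = (x^lambda - 1)/lambda, reducing to log(x) as lambda approaches 0. A minimal numpy sketch on synthetic skewed data (an invented lognormal "runoff" series, not the paper's AR(1) model or catchment data):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform of positive data; lam = 0 reduces to the log transform."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

rng = np.random.default_rng(2)
flow = rng.lognormal(mean=1.0, sigma=0.8, size=5000)   # strongly right-skewed sample
print(skewness(flow), skewness(boxcox(flow, 0.0)))     # skew collapses toward zero
```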
NASA Astrophysics Data System (ADS)
Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.
1995-05-01
Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an open-path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray-integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was therefore developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that best fit the ray-integral concentration data. This new method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
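ART is a Kaczmarz-type row-action scheme: each ray-integral equation defines a hyperplane, and the image estimate is successively projected toward each hyperplane in turn. A toy sketch on an invented 3-pixel "image" and ray geometry (the relaxation factor is an assumption):

```python
import numpy as np

def art(A, b, iters=200, relax=0.5):
    """Kaczmarz-style ART: cycle through ray equations a_i . x = b_i,
    nudging the estimate toward each hyperplane by a relaxation factor."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy consistent system: 3 "ray sums" over a 3-pixel concentration image
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
x_rec = art(A, A @ x_true)
print(x_rec)
```

For consistent data ART converges to the true solution, which matches the abstract's observation that ART agreed with the ray-integral data even while the full reconstruction was underdetermined by the sparse ray coverage.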
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
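While the GFR method couples multi-fidelity models with reduced bases, its probabilistic core is standard Gaussian-process regression: the posterior mean tracks the training outputs and the posterior variance quantifies prediction uncertainty away from the training inputs. A minimal sketch with an assumed squared-exponential kernel and invented hyperparameters:

```python
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ell=0.5, sf=1.0, noise=1e-4):
    """Posterior mean/variance of a zero-mean GP with a squared-exponential kernel.
    Hyperparameters (ell, sf, noise) are illustrative, not from the paper."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = k(Xte, Xtr), k(Xte, Xte)
    mean = Ks @ np.linalg.solve(K, ytr)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

Xtr = np.linspace(0, 1, 8)
ytr = np.sin(2 * np.pi * Xtr)                      # stand-in "high-fidelity" outputs
mean, var = gp_posterior(Xtr, ytr, np.array([0.5, 2.0]))
print(mean, var)   # small variance inside the data, large variance far away
```

The greedy sampling algorithm in the paper exploits exactly this variance: new training inputs are chosen where the posterior uncertainty is largest.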
High-order space charge effects using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, Michael F.; Bruhwiler, David L. (Computer Accelerator Physics Conference, Williamsburg, Virginia, 1996)
1997-02-01
The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian, with the variables in the argument of the Gaussian normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach.
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with a complex spatial structure that arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable that influence processes such as water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular, the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, geostatistical simulation generally fails, for example, to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple-point geostatistics has been developed to deal with this problem; it proceeds by sampling from a set of training images that can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation, which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at the regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation, called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
Topology of microwave background fluctuations - Theory
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Park, Changbom; Bies, William E.; Bennett, David P.; Juszkiewicz, Roman
1990-01-01
Topological measures are used to characterize the microwave background temperature fluctuations produced by 'standard' scenarios (Gaussian) and by cosmic strings (non-Gaussian). Three topological quantities are studied for simulated Gaussian microwave background anisotropy maps and then compared with those of the non-Gaussian anisotropy pattern produced by cosmic strings: the total area of the excursion regions, and the total length and total curvature (genus) of the isotemperature contours. In general, the temperature gradient field shows the non-Gaussian behavior of the string map more distinctively than the temperature field for all topology measures. The total contour length and the genus are found to be more sensitive to the existence of a stringy pattern than the usual temperature histogram. Situations in which instrumental noise is superposed on the map are considered in order to find the critical signal-to-noise ratio at which strings can be detected.
NASA Astrophysics Data System (ADS)
Mu, Hongqian; Wang, Muguang; Tang, Yu; Zhang, Jing; Jian, Shuisheng
2018-03-01
A novel scheme for the generation of an FCC-compliant UWB pulse is proposed based on a modified Gaussian quadruplet and incoherent wavelength-to-time conversion. The modified Gaussian quadruplet is synthesized as the linear sum of a broad Gaussian pulse and two narrow Gaussian pulses with the same pulse width and amplitude peak. Within a specific parameter range, an FCC-compliant UWB pulse with spectral power efficiency higher than 39.9% can be achieved. In order to realize the designed waveform, a UWB generator based on spectral shaping and incoherent wavelength-to-time mapping is proposed. The spectral shaper is composed of a Gaussian filter and a programmable filter, and single-mode fiber functions as both the dispersion device and the transmission medium. Balanced photodetection is employed to combine the broad Gaussian pulse and the two narrow Gaussian pulses linearly, and at the same time to suppress the pulse pedestals that would otherwise introduce low-frequency components. The proposed UWB generator can be reconfigured for a UWB doublet by operating the programmable filter as a single-band Gaussian filter. The feasibility of the proposed UWB generator is demonstrated experimentally; measured UWB pulses match well with simulation results. An FCC-compliant quadruplet with a 10-dB bandwidth of 6.88 GHz, a fractional bandwidth of 106.8%, and a power efficiency of 51% is achieved.
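The pedestal-suppression argument can be checked directly: if the broad pulse's area equals the combined area of the two subtracted narrow pulses, the balanced waveform has no DC content, which is what removes the low-frequency components. A sketch with invented pulse widths and offsets (not the experimental values):

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 4001)          # time axis, illustrative scale
def g(t0, sigma, amp):
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

sig_b, sig_n, dt = 0.20, 0.05, 0.12       # assumed broad/narrow widths and offset
amp_n = 1.0
amp_b = 2 * amp_n * sig_n / sig_b         # areas cancel -> zero DC component

# Balanced detection: broad pulse minus the two identical narrow pulses
pulse = g(0.0, sig_b, amp_b) - (g(-dt, sig_n, amp_n) + g(+dt, sig_n, amp_n))
dc = pulse.sum() * (t[1] - t[0])          # Riemann approximation of the area
print(dc)   # ~0: the spectrum has a null at DC, as FCC masks require
```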
On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.
2011-01-01
This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings were found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks, was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violation of, the central limit theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the period of the system impulse response was much greater than the average time between peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. A spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than at a higher level of damping.
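The loading distinction at issue is captured by kurtosis: a steady Gaussian loading has kurtosis 3, while a bursty loading of the kind described has much higher kurtosis. A sketch using an invented two-level burst envelope (not the loadings used in the paper):

```python
import numpy as np

def kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean()           # Gaussian signals give ~3

rng = np.random.default_rng(3)
n = 200_000
steady = rng.normal(size=n)                          # Gaussian loading
envelope = np.where(rng.random(n) < 0.1, 1.0, 0.1)   # high-excursion bursts ~10% of the time
bursty = rng.normal(size=n) * envelope               # same carrier, bursty amplitude
print(kurtosis(steady), kurtosis(bursty))            # ~3 versus strongly leptokurtic
```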
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments makes road sign detection a challenging problem in computer vision. We propose a method to detect multiview road signs based on several algorithms: a classical cascaded detector, a self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and to obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model with the normalized red channel, which greatly enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is utilized to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method achieves a higher detection rate on signs viewed from different directions.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
Permutation entropy of fractional Brownian motion and fractional Gaussian noise
NASA Astrophysics Data System (ADS)
Zunino, L.; Pérez, D. G.; Martín, M. T.; Garavaglia, M.; Plastino, A.; Rosso, O. A.
2008-06-01
We have worked out theoretical curves for the permutation entropy of the fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy-gap in the transition between these processes, observed previously via numerical results, has been here theoretically validated. Also, we have analyzed the behaviour of the permutation entropy of the fractional Gaussian noise for different time delays.
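For reference, the Bandt-Pompe estimator underlying these theoretical curves can be written compactly: count ordinal patterns of length D in the series and take the normalized Shannon entropy of their frequencies. A monotonic series yields zero, while white (uncorrelated) noise approaches the maximum of 1:

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Bandt-Pompe permutation entropy, normalized to [0, 1] by log(order!)."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values())) / n
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))

rng = np.random.default_rng(4)
pe_trend = permutation_entropy(np.arange(1000.0))       # single pattern -> 0
pe_noise = permutation_entropy(rng.normal(size=100_000))  # all patterns equiprobable -> ~1
print(pe_trend, pe_noise)
```

Fractional Gaussian noise interpolates between these extremes as the Hurst exponent varies, which is the regime the theoretical curves describe.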
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J.
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D, observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄. When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model. PMID:28989561
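The eigenvalue repulsion described here is easy to reproduce numerically: perturbing a mean tensor with repeated eigenvalues (here D̄ = I, the isotropic case) by symmetric Gaussian noise yields eigenvalue spacings that are depleted near zero. A quick numpy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
m, sigma, trials = 3, 0.1, 4000
Dbar = np.eye(m)                          # repeated eigenvalues (isotropic mean tensor)
spacings = []
for _ in range(trials):
    E = rng.normal(scale=sigma, size=(m, m))
    D = Dbar + (E + E.T) / 2.0            # symmetric Gaussian perturbation
    w = np.linalg.eigvalsh(D)             # eigenvalues in ascending order
    spacings.extend(np.diff(w))
s = np.asarray(spacings) / np.mean(spacings)
print(np.mean(s < 0.1))   # near-degenerate pairs are rare: eigenvalue repulsion
```

If the spacings were those of independent Gaussian variables, roughly 8-10% of the normalized spacings would fall below 0.1; repulsion suppresses this fraction by an order of magnitude.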
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to six dimensions with only small errors.
Analysis of piezoelectric energy harvester under modulated and filtered white Gaussian noise
NASA Astrophysics Data System (ADS)
Quaranta, Giuseppe; Trentadue, Francesco; Maruccio, Claudio; Marano, Giuseppe C.
2018-05-01
This paper proposes a comprehensive method for the electromechanical probabilistic analysis of piezoelectric energy harvesters subjected to modulated and filtered white Gaussian noise (WGN) at the base. Specifically, the dynamic excitation is simulated by means of an amplitude-modulated WGN, which is filtered through the Clough-Penzien filter. The considered piezoelectric harvester is a cantilever bimorph modeled as Euler-Bernoulli beam with a concentrated mass at the free-end, and its global behavior is approximated by the fundamental vibration mode (which is tuned with the dominant frequency of the dynamic input). A resistive electrical load is considered in the circuit. Once the Lyapunov equation of the coupled electromechanical problem has been formulated, an original and efficient semi-analytical procedure is proposed to estimate mean and standard deviation of the electrical energy extracted from the piezoelectric layers.
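The stationary second moments referred to at the end follow from the Lyapunov equation: for a linear system dx = A x dt driven by white noise of intensity Q, the stationary covariance P solves A P + P Aᵀ + Q = 0. A small sketch with an invented single-mode oscillator (not the harvester's coupled electromechanical matrices):

```python
import numpy as np

def stationary_cov(A, Q):
    """Solve A P + P A^T + Q = 0 by Kronecker vectorization (fine for small systems)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

# Illustrative lightly damped mode: state x = (displacement, velocity)
wn, zeta = 2 * np.pi * 50.0, 0.02
A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
Q = np.diag([0.0, 1.0])                   # white-noise base-excitation intensity
P = stationary_cov(A, Q)
print(P)   # P[0,0]: mean-square displacement, P[1,1]: mean-square velocity
```

The semi-analytical procedure in the paper plays the same role for the modulated, filtered (nonstationary) case, where the covariance obeys a differential Lyapunov equation instead.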
Multifractal Properties of Process Control Variables
NASA Astrophysics Data System (ADS)
Domański, Paweł D.
2017-06-01
A control system is an inevitable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain measures, minimum-variance indices, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that the signals originating from industrial installations have multifractal properties, and such an analysis may extend the standard approach to further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, and it helps to discover internal dependencies and human factors that are otherwise hard to detect.
Topology in two dimensions. II - The Abell and ACO cluster catalogues
NASA Astrophysics Data System (ADS)
Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter
1992-09-01
We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.
Andrade, Xavier; Aspuru-Guzik, Alán
2013-10-08
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple-energy-window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo-simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter-corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor, and it resulted in a more accurate quantification of the background compartment activity density than TEW or no scatter correction: the quantification errors relative to a dose-calibrator-derived measurement were <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo-simulated CDR correction than with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
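The TEW correction compared above estimates the scatter under the photopeak from two narrow flanking windows. A sketch of the standard formula with invented counts; the 72.8 keV width corresponds to a 20% window at the 364 keV I-131 photopeak:

```python
# Triple-energy-window (TEW) scatter estimate: counts in narrow windows just
# below and above the photopeak approximate the scatter under the peak as a trapezoid.
def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
    return (c_low / w_low + c_up / w_up) * w_peak / 2.0

# Illustrative numbers only (counts; window widths in keV)
c_peak, c_low, c_up = 10000.0, 60.0, 40.0
w_peak, w_low, w_up = 72.8, 6.0, 6.0
scatter = tew_scatter(c_low, c_up, w_low, w_up, w_peak)
primary = c_peak - scatter
print(scatter, primary)
```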
A staggered-grid convolutional differentiator for elastic wave modelling
NASA Astrophysics Data System (ADS)
Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun
2015-11-01
The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
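The construction the abstract describes can be sketched for the first-derivative operator: the band-limited spectrum ik yields ideal staggered-grid coefficients c_m = (-1)^(m+1) / (pi (m - 1/2)^2), which are then tapered by a window. A Hanning-type taper is used here for simplicity (the paper's point is that an optimized Gaussian window behaves similarly):

```python
import numpy as np

def cd_coeffs(M):
    """Staggered-grid convolutional differentiator: ideal sinc-derived
    coefficients tapered by a Hanning-type window of half-width M."""
    m = np.arange(1, M + 1)
    c = (-1.0) ** (m + 1) / (np.pi * (m - 0.5) ** 2)
    w = 0.5 * (1.0 + np.cos(np.pi * (m - 0.5) / M))   # taper vanishing at the stencil edge
    return c * w

def ddx_staggered(f, h, c):
    """df/dx at half-grid points x_{i+1/2} (interior points only)."""
    M, N = len(c), len(f)
    d = np.zeros(N - 2 * M + 1)
    for m in range(1, M + 1):
        d += c[m - 1] * (f[M + m - 1:N - M + m] - f[M - m:N - M - m + 1])
    return d / h

h = 0.1
x = np.arange(400) * h
d = ddx_staggered(np.sin(x), h, cd_coeffs(8))
xh = x[7:392] + h / 2                    # half-points where d is evaluated
print(np.abs(d - np.cos(xh)).max())      # small: near-spectral accuracy from a short stencil
```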
Combining Mixture Components for Clustering*
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
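A minimal sketch of the hierarchical merging step, assuming the posterior probability matrix of a fitted K-component mixture is already available; the entropy criterion follows the idea in the abstract, while the toy posteriors are invented for illustration:

```python
import numpy as np

def entropy(tau):
    """Total soft-clustering entropy -sum tau*log(tau), with 0*log(0) = 0."""
    t = np.clip(tau, 1e-300, 1.0)
    return -np.sum(tau * np.log(t))

def best_merge(tau):
    """Among all pairs of mixture components, return the pair whose merging
    (summing the posterior columns) yields the lowest entropy, together
    with the merged posterior matrix."""
    K = tau.shape[1]
    best = None
    for j in range(K):
        for k in range(j + 1, K):
            merged = np.delete(tau, k, axis=1)
            merged[:, j] = tau[:, j] + tau[:, k]
            h = entropy(merged)
            if best is None or h < best[0]:
                best = (h, (j, k), merged)
    return best[1], best[2]

# toy posteriors: components 0 and 1 split one true (non-Gaussian) cluster
tau = np.vstack([np.tile([0.5, 0.5, 0.0], (50, 1)),
                 np.tile([0.0, 0.0, 1.0], (50, 1))])
pair, merged = best_merge(tau)
```

Merging the two components that share responsibility for the same points removes the ambiguity, so the entropy criterion selects exactly that pair.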
Investigation on filter method for smoothing spiral phase plate
NASA Astrophysics Data System (ADS)
Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian
2018-03-01
Spiral phase plates (SPPs) for generating vortex hollow beams are highly efficient in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and its discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. Numerical simulations indicate that different filter methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the characteristics of the optical vortex. The experimental results reveal that the spatial Gaussian filter method for smoothing SPPs is well suited to the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
Chimera states in Gaussian coupled map lattices
NASA Astrophysics Data System (ADS)
Li, Xiao-Wen; Bi, Ran; Sun, Yue-Xiang; Zhang, Shuo; Song, Qian-Qian
2018-04-01
We study chimera states in one-dimensional and two-dimensional Gaussian coupled map lattices through simulations and experiments. As in the case of globally coupled oscillators, individual lattice sites can be regarded as being controlled by a common mean field. A space-dependent order parameter is derived from a self-consistency condition in order to represent the collective state.
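A minimal 1-D Gaussian coupled map lattice can be iterated as below; the logistic local map, coupling strength, and kernel width are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def gaussian_cml_step(x, eps, kernel):
    """One iteration of a 1-D coupled map lattice with logistic local
    dynamics and a normalized Gaussian coupling kernel (periodic ring);
    the coupling term is a circular convolution computed via FFT."""
    f = 4.0 * x * (1.0 - x)              # fully chaotic logistic map
    coupled = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel)))
    return (1.0 - eps) * f + eps * coupled

N = 256
d = np.minimum(np.arange(N), N - np.arange(N))   # distances on the ring
kernel = np.exp(-0.5 * (d / 8.0) ** 2)
kernel /= kernel.sum()                           # normalized Gaussian weights
rng = np.random.default_rng(0)
x = rng.random(N)
for _ in range(500):
    x = gaussian_cml_step(x, eps=0.3, kernel=kernel)
mean_field = x.mean()                            # common mean-field proxy
```

Because the kernel weights are non-negative and sum to one, each update is a convex combination and the lattice state stays within the unit interval.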
Gas kinematics in FIRE simulated galaxies compared to spatially unresolved H I observations
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Bradford, Jeremy; Quataert, Eliot; Geha, Marla; Boylan-Kolchin, Michael; Weisz, Daniel R.; Wetzel, Andrew; Hopkins, Philip F.; Chan, T. K.; Fitts, Alex; Kereš, Dušan; Faucher-Giguère, Claude-André
2018-06-01
The shape of a galaxy's spatially unresolved, globally integrated 21-cm emission line depends on its internal gas kinematics: galaxies with rotationally supported gas discs produce double-horned profiles with steep wings, while galaxies with dispersion-supported gas produce Gaussian-like profiles with sloped wings. Using mock observations of simulated galaxies from the FIRE project, we show that one can therefore constrain a galaxy's gas kinematics from its unresolved 21-cm line profile. In particular, we find that the kurtosis of the 21-cm line increases with decreasing V/σ and that this trend is robust across a wide range of masses, signal-to-noise ratios, and inclinations. We then quantify the shapes of 21-cm line profiles from a morphologically unbiased sample of ˜2000 low-redshift, H I-detected galaxies with Mstar = 107-11 M⊙ and compare to the simulated galaxies. At Mstar ≳ 1010 M⊙, both the observed and simulated galaxies produce double-horned profiles with low kurtosis and steep wings, consistent with rotationally supported discs. Both the observed and simulated line profiles become more Gaussian like (higher kurtosis and less-steep wings) at lower masses, indicating increased dispersion support. However, the simulated galaxies transition from rotational to dispersion support more strongly: at Mstar = 108-10 M⊙, most of the simulations produce more Gaussian-like profiles than typical observed galaxies with similar mass, indicating that gas in the low-mass simulated galaxies is, on average, overly dispersion supported. Most of the lower-mass-simulated galaxies also have somewhat lower gas fractions than the median of the observed population. The simulations nevertheless reproduce the observed line-width baryonic Tully-Fisher relation, which is insensitive to rotational versus dispersion support.
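The kurtosis diagnostic for line shapes can be sketched as follows, using idealized Gaussian and double-horned profiles in place of real 21-cm data (a flux-weighted fourth moment; a Gaussian profile gives 3):

```python
import numpy as np

def profile_kurtosis(v, flux):
    """Flux-weighted kurtosis of a line profile over velocity v."""
    w = flux / flux.sum()
    mu = np.sum(w * v)
    var = np.sum(w * (v - mu) ** 2)
    return np.sum(w * (v - mu) ** 4) / var ** 2

v = np.linspace(-300.0, 300.0, 2001)          # velocity axis, km/s
gauss = np.exp(-0.5 * (v / 50.0) ** 2)        # dispersion-supported profile
# crude double horn: flat-topped rotation profile with edge peaks
horn = np.where(np.abs(v) < 150, 1.0 + 0.8 * (np.abs(v) / 150) ** 4, 0.0)
k_gauss = profile_kurtosis(v, gauss)
k_horn = profile_kurtosis(v, horn)
```

The double-horned profile yields a distinctly lower kurtosis than the Gaussian one, consistent with the trend the paper exploits to separate rotational from dispersion support.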
Wang, Haibin; Zha, Daifeng; Li, Peng; Xie, Huicheng; Mao, Lili
2017-01-01
The Stockwell transform (ST) time-frequency representation (ST-TFR) is a time-frequency analysis method that combines the short-time Fourier transform with the wavelet transform, and ST time-frequency filtering (ST-TFF), which takes advantage of time-frequency localized spectra, can separate signals from Gaussian noise. The ST-TFR and ST-TFF methods are used to analyze fault signals, which is reasonable and effective in ordinary Gaussian noise cases. However, this paper shows that mechanical bearing fault signals belong to an Alpha (α) stable distribution process (1 < α < 2), and in some special cases the noise also follows an α stable distribution. The performance of the ST-TFR method degrades in an α stable distribution noise environment, and the ST-TFF method consequently fails. Hence, a new fractional lower-order ST time-frequency representation (FLOST-TFR) method employing fractional lower-order moments and the ST, together with an inverse FLOST (IFLOST), is proposed in this paper. A new FLOST time-frequency filtering (FLOST-TFF) algorithm based on the FLOST-TFR method and IFLOST is also proposed, and a simplified version is presented. The discrete implementation of the FLOST-TFF algorithm is derived, and the relevant steps are summarized. Simulation results demonstrate that the FLOST-TFR algorithm is clearly better than the existing ST-TFR algorithm under α stable distribution noise, still works well in a Gaussian noise environment, and is robust. The FLOST-TFF method can effectively filter out α stable distribution noise and restore the original signal. The performance of the FLOST-TFF algorithm is better than that of the ST-TFF method, with smaller mixed MSEs as α and the generalized signal-to-noise ratio (GSNR) vary. Finally, the FLOST-TFR and FLOST-TFF methods are applied to analyzing an outer-race fault signal and extracting its fault features under α stable distribution noise, where excellent performance is shown. PMID:28406916
Transfer of non-Gaussian quantum states of mechanical oscillator to light
NASA Astrophysics Data System (ADS)
Filip, Radim; Rakhubovsky, Andrey A.
2015-11-01
Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method for the statistical source separation of space-distributed non-Gaussian data is proposed. The method relies on so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated, and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique relies upon projection pursuit, looking for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources; sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The resulting residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent, and quasi-Gaussian, an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large-scale circulation indices are nonlinearly correlated.
The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the cluster centroids of the joint probability density function tend to be located. This favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The model's leading non-Gaussian dyad is associated with a positive correlation between (1) the squared anomaly of the extratropical jet stream and (2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross-cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. Such triadic synergies are quantified by an information-theoretic measure: the interaction information. The model's dominant triad occurs between anomalies of (1) the North Pole pressure, (2) the jet-stream intensity at the eastern North American boundary, and (3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
See Change: Classifying single observation transients from HST using SNCosmo
NASA Astrophysics Data System (ADS)
Sofiatti Nunes, Caroline; Perlmutter, Saul; Nordin, Jakob; Rubin, David; Lidman, Chris; Deustua, Susana E.; Fruchter, Andrew S.; Aldering, Greg Scott; Brodwin, Mark; Cunha, Carlos E.; Eisenhardt, Peter R.; Gonzalez, Anthony H.; Jee, Myungkook J.; Hildebrandt, Hendrik; Hoekstra, Henk; Santos, Joana; Stanford, S. Adam; Stern, Dana R.; Fassbender, Rene; Richard, Johan; Rosati, Piero; Wechsler, Risa H.; Muzzin, Adam; Willis, Jon; Boehringer, Hans; Gladders, Michael; Goobar, Ariel; Amanullah, Rahman; Hook, Isobel; Huterer, Dragan; Huang, Jiasheng; Kim, Alex G.; Kowalski, Marek; Linder, Eric; Pain, Reynald; Saunders, Clare; Suzuki, Nao; Barbary, Kyle H.; Rykoff, Eli S.; Meyers, Joshua; Spadafora, Anthony L.; Hayden, Brian; Wilson, Gillian; Rozo, Eduardo; Hilton, Matt; Dixon, Samantha; Yen, Mike
2016-01-01
The Supernova Cosmology Project (SCP) is executing "See Change", a large HST program to look for possible variation in dark energy using supernovae at z>1. As part of the survey, we often must make time-critical follow-up decisions based on multicolor detection at a single epoch. We demonstrate the use of the SNCosmo software package to obtain simulated fluxes in the HST filters for type Ia and core-collapse supernovae at various redshifts. These simulations allow us to compare photometric data from HST with the distribution of the simulated SNe through methods such as Random Forest, a learning method for classification, and Gaussian Kernel Estimation. The results help us make informed decisions about triggered follow up using HST and ground based observatories to provide time-critical information needed about transients. Examples of this technique applied in the context of See Change are shown.
The backward phase flow and FBI-transform-based Eulerian Gaussian beams for the Schrödinger equation
NASA Astrophysics Data System (ADS)
Leung, Shingyu; Qian, Jianliang
2010-11-01
We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schrödinger equation in the semi-classical regime. The idea of Eulerian Gaussian beams was first proposed in [12]. In this paper we aim at two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing a reinitialization strategy into the Eulerian Gaussian beam framework. Essentially, we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method, which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms.
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
NASA Astrophysics Data System (ADS)
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict the transient dynamics (i.e., the fractal, non-Gaussian property) of groundwater levels well, while fractional Brownian motion is inadequate for describing natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
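The heavy-tail diagnostic can be illustrated with a simple quantile ratio on increments; a Cauchy sample stands in for a Lévy stable law (the α = 1 case) because NumPy has no general stable sampler:

```python
import numpy as np

def tail_ratio(increments):
    """Ratio of the 99th to the 75th percentile of absolute increments;
    it stays near 2.2 for Gaussian data and grows for heavy tails."""
    a = np.abs(increments)
    return np.quantile(a, 0.99) / np.quantile(a, 0.75)

rng = np.random.default_rng(11)
gauss_incr = rng.standard_normal(20000)     # Brownian-type increments
levy_incr = rng.standard_cauchy(20000)      # stable (alpha = 1) stand-in
r_gauss = tail_ratio(gauss_incr)
r_levy = tail_ratio(levy_incr)
```

The order-of-magnitude gap between the two ratios is the kind of signature that motivates a Lévy stable model over a Gaussian one for the observed increments.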
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Yanfei
2018-04-01
We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1997-04-25
A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods, to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution, with determination of the release probabilities and the quantal amplitude, for cases with a small number (<5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting' phenomena and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
Calculating Launch Vehicle Flight Performance Reserve
NASA Technical Reports Server (NTRS)
Hanson, John M.; Pinson, Robin M.; Beard, Bernard B.
2011-01-01
This paper addresses different methods for determining the amount of extra propellant (flight performance reserve or FPR) that is necessary to reach orbit with a high probability of success. One approach involves assuming that the various influential parameters are independent and that the result behaves as a Gaussian. Alternatively, probabilistic models may be used to determine the vehicle and environmental models that will be available (estimated) for a launch day go/no go decision. High-fidelity closed-loop Monte Carlo simulation determines the amount of propellant used with each random combination of parameters that are still unknown at the time of launch. Using the results of the Monte Carlo simulation, several methods were used to calculate the FPR. The final chosen solution involves determining distributions for the pertinent outputs and running a separate Monte Carlo simulation to obtain a best estimate of the required FPR. This result differs from the result obtained using the other methods sufficiently that the higher fidelity is warranted.
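A toy version of the Monte Carlo FPR calculation, with invented dispersion sources and magnitudes (not actual vehicle data), comparing the Gaussian assumption against a direct percentile of the simulated propellant usage:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# hypothetical propellant-usage contributions (kg), illustrative only
usage = (rng.normal(0.0, 250.0, n)      # winds / atmosphere dispersion
         + rng.normal(0.0, 150.0, n)    # engine performance dispersion
         + rng.normal(0.0, 100.0, n))   # mass-property uncertainty
usage += 10_000.0                       # nominal propellant to reach orbit

# Gaussian assumption: FPR = z * sigma above the mean (99.865% success)
sigma = usage.std()
fpr_gauss = 3.0 * sigma
# direct Monte Carlo percentile (no distribution assumption)
fpr_mc = np.percentile(usage, 99.865) - usage.mean()
```

With purely Gaussian inputs the two estimates agree; the paper's point is that for realistic, non-Gaussian closed-loop results the distribution-based Monte Carlo estimate can differ enough to warrant the higher fidelity.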
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
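The spectral-phase idea for generating 2-D random fields can be sketched as FFT filtering of white noise; the Gaussian-shaped model spectrum below is a placeholder, not an atmospheric turbulence spectrum:

```python
import numpy as np

def random_field_2d(n, spectrum, rng):
    """Spectral (FFT) method for generating a 2-D stationary Gaussian
    random field: filter complex white noise by an amplitude spectrum
    given as a function of radial wavenumber."""
    k = np.fft.fftfreq(n) * 2 * np.pi
    KX, KY = np.meshgrid(k, k)
    amp = spectrum(np.hypot(KX, KY))
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * amp).real

rng = np.random.default_rng(1)
# Gaussian-shaped model spectrum (illustrative placeholder)
screen = random_field_2d(256, lambda kr: np.exp(-(kr / 0.5) ** 2), rng)
```

Each call with fresh noise yields an independent realization, which is the basis for generating time-variant phase screens in such simulations.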
NASA Astrophysics Data System (ADS)
Marie-Magdeleine, A.; Fortes-Patella, R.; Lemoine, N.; Marchand, N.
2012-11-01
This study concerns simulation of the implementation of the Kinetic Differential Pressure (KDP) method, which evaluates the unsteady mass flow rate in order to identify the dynamic transfer matrix of a cavitating Venturi. First, the equations of the IZ code used for this simulation are introduced. Next, the methodology for evaluating unsteady pressures and mass flow rates at the inlet and the outlet of the cavitating Venturi, and for identifying the dynamic transfer matrix, is presented. The robustness of the method with respect to measurement uncertainties, implemented as Gaussian white noise, is then studied. The results of the numerical simulations allow us to estimate the system's linearity domain and to perform the Empirical Transfer Function Evaluation (ETFE) on frequency-by-frequency inlet signals and on chirp-signal tests. The pressure data obtained with the KDP method are then used in the identification procedure, by ETFE and by a user-made Auto-Recursive Moving-Average eXogenous algorithm, and the resulting transfer matrix coefficients are compared with those obtained from the simulated input and output data.
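A noise-free sketch of the ETFE step: with a known frequency response applied circularly to a broadband input, the output-to-input spectral ratio recovers it exactly (the first-order low-pass is an invented stand-in for the Venturi dynamics):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1024
u = rng.normal(size=N)                          # broadband excitation signal
f = np.fft.rfftfreq(N)                          # normalized frequency axis
H_true = 1.0 / (1.0 + 1j * f / 0.05)            # invented first-order low-pass
y = np.fft.irfft(np.fft.rfft(u) * H_true, N)    # noise-free circular response

# Empirical Transfer Function Estimate: ratio of output to input spectra
H_etfe = np.fft.rfft(y) / np.fft.rfft(u)
# DC and Nyquist bins excluded: irfft constrains them to be real-valued
err = np.max(np.abs(H_etfe[1:-1] - H_true[1:-1]))
```

In the paper's setting the interest lies precisely in how this estimate degrades once Gaussian white measurement noise is added to u and y.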
NON-GAUSSIANITIES IN THE LOCAL CURVATURE OF THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.
Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ~1% level, testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level, and the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
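The constant-path idea can be demonstrated in a small 1-D unconditional SGS, where the simple-kriging weights are computed once and shared by all realizations (a sketch of the concept, not the authors' parallel multi-grid implementation):

```python
import numpy as np

def sgs_constant_path(n_nodes, n_real, n_neigh=8, corr_len=10.0, seed=0):
    """Unconditional 1-D sequential Gaussian simulation with a single
    (constant) random path shared by all realizations, so the kriging
    weights are computed once and re-used."""
    rng = np.random.default_rng(seed)

    def cov(h):                                   # exponential covariance
        return np.exp(-np.abs(h) / corr_len)

    path = rng.permutation(n_nodes)

    # pass 1: pre-compute, for every step of the path, the neighbour
    # indices, simple-kriging weights and kriging variance
    steps, visited = [], []
    for node in path:
        if visited:
            prev = np.array(visited)
            neigh = prev[np.argsort(np.abs(prev - node))[:n_neigh]]
            C = cov(neigh[:, None] - neigh[None, :])
            c = cov(neigh - node)
            w = np.linalg.solve(C, c)
            var = 1.0 - w @ c
        else:
            neigh, w, var = np.array([], dtype=int), np.array([]), 1.0
        steps.append((node, neigh, w, max(var, 0.0)))
        visited.append(node)

    # pass 2: every realization re-uses the same weights
    sims = np.empty((n_real, n_nodes))
    for r in range(n_real):
        z = np.empty(n_nodes)
        for node, neigh, w, var in steps:
            mean = w @ z[neigh] if len(neigh) else 0.0
            z[node] = mean + np.sqrt(var) * rng.standard_normal()
        sims[r] = z
    return sims

sims = sgs_constant_path(n_nodes=60, n_real=300)
```

Only the random residuals differ between realizations; the kriging systems, the dominant cost, are solved a single time, which is precisely the speed-up the paper evaluates.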
Brownian systems with spatially inhomogeneous activity
NASA Astrophysics Data System (ADS)
Sharma, A.; Brader, J. M.
2017-09-01
We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem that involves minimizing the misfit between simulated and observed data. Many gradient-based and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, and they typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found iteratively, via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain probabilistic estimates of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history-match solutions in approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF, and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
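A minimal GP proxy regression in NumPy (zero-mean prior, RBF kernel, fixed hyperparameters rather than the empirical Bayes training described above; the sine target stands in for an expensive simulator response):

```python
import numpy as np

def rbf(a, b, ls=1.0, amp=1.0):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Gaussian process regression posterior mean and variance
    (zero prior mean, jitter `noise` on the diagonal)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# proxy for an expensive simulator response sampled at 15 design points
x_train = np.linspace(0.0, 6.0, 15)
y_train = np.sin(x_train)
x_test = np.linspace(0.0, 6.0, 61)
mean, var = gp_predict(x_train, y_train, x_test)
```

The posterior variance is what a Bayesian-optimization acquisition function would exploit to choose the next simulation run.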
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This peptide quantification by the MALDI-TOF approach has difficulty quantifying samples containing peptides with ion currents in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures, based on known physical constants, to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. To study the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to the two comparison methods for estimating the quantity of isolated isotopic clusters of single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture method was 6.8%, which is within the acceptable range.
In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification, and it is capable of quantifying overlapping (convolved) peptides within an acceptable margin of error.
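The cluster-modeling idea can be sketched as follows (an illustrative toy, not the authors' implementation): each isotopic cluster is a Gaussian mixture with peaks about 1.003 Da apart, and the native-to-labeled amplitude ratio follows from a least-squares fit of two such templates to the observed, possibly overlapping, spectrum. The m/z values, abundances and peak width below are made-up assumptions.

```python
import math

def gaussian(x, mu, sigma):
    # Unit-area Gaussian peak shape.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def cluster_template(grid, mono_mz, abundances, spacing=1.003, sigma=0.05):
    # Isotopic cluster modeled as a Gaussian mixture: one peak per isotopologue.
    return [sum(a * gaussian(x, mono_mz + k * spacing, sigma)
                for k, a in enumerate(abundances)) for x in grid]

def fit_two_templates(y, t1, t2):
    # Least-squares amplitudes (a, b) for y ~ a*t1 + b*t2, solved
    # in closed form from the 2x2 normal equations.
    s11 = sum(u * u for u in t1)
    s22 = sum(u * u for u in t2)
    s12 = sum(u * v for u, v in zip(t1, t2))
    r1 = sum(u * v for u, v in zip(t1, y))
    r2 = sum(u * v for u, v in zip(t2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det
```

Because the fit uses whole cluster templates rather than individual peak heights, it remains well posed even when the native and labeled clusters overlap.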
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. Using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
Neural pulse frequency modulation of an exponentially correlated Gaussian process
NASA Technical Reports Server (NTRS)
Hutchinson, C. E.; Chon, Y.-T.
1976-01-01
The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian, and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in the correlation between phenotypes and predicted breeding values and in the accuracies of predicted breeding values were found. Our models account for correlation of marker effects and can accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information into the prediction process through its use when constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
Approximate Bayesian computation for forward modeling in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akeret, Joël; Refregier, Alexandre; Amara, Adam
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, however, the likelihood function may be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide-field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
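The ABC idea behind the Gaussian toy test can be sketched in a few lines. This is plain rejection ABC rather than the paper's PMC sampler, and the prior, tolerance, and sample sizes are arbitrary choices for illustration:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws=5000):
    # Plain ABC rejection: keep the parameter draws whose simulated
    # summary statistic lies within eps of the observed one.
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
obs = [random.gauss(2.0, 1.0) for _ in range(100)]   # data drawn with true mu = 2
obs_summary = statistics.fmean(obs)                  # summary statistic: sample mean

post = abc_rejection(
    observed=obs_summary,
    prior_sample=lambda: random.uniform(-5.0, 5.0),  # flat prior on mu
    simulate=lambda mu: statistics.fmean(random.gauss(mu, 1.0) for _ in range(100)),
    distance=lambda a, b: abs(a - b),
    eps=0.1)
```

A PMC variant would reuse the accepted particles as the next proposal distribution while progressively shrinking eps, which is what makes the method practical for expensive forward models.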
NASA Astrophysics Data System (ADS)
Griffiths, K. R.; Hicks, B. J.; Keogh, P. S.; Shires, D.
2016-08-01
In general, vehicle vibration is non-stationary and has a non-Gaussian probability distribution; yet existing testing methods for packaging design employ Gaussian distributions to represent vibration induced by road profiles. This frequently results in over-testing and/or over-design of the packaging to meet a specification, and correspondingly leads to wasteful packaging and product waste, which represent 15bn per year in the USA and €3bn per year in the EU. The purpose of the paper is to enable a measured non-stationary acceleration signal to be replaced by a constructed signal that includes as far as possible any non-stationary characteristics of the original signal. The constructed signal consists of a concatenation of decomposed shorter-duration signals, each having its own kurtosis level. Wavelet analysis is used for the decomposition into inner and outlier signal components. The constructed signal has a power spectral density (PSD) similar to that of the original signal, without incurring excessive acceleration levels. This allows an improved and more representative simulated input signal to be generated that can be used on the current generation of shaker tables. The wavelet decomposition method is also demonstrated experimentally through two correlation studies. It is shown that significant improvements over current international standards for packaging testing are achievable; hence more efficient packaging system design is possible.
NASA Astrophysics Data System (ADS)
Spinlove, K. E.; Vacher, M.; Bearpark, M.; Robb, M. A.; Worth, G. A.
2017-01-01
Recent work, particularly by Cederbaum and co-workers, has identified the phenomenon of charge migration, whereby charge flow occurs over a static molecular framework after the creation of an electronic wavepacket. In a real molecule, this charge migration competes with charge transfer, whereby the nuclear motion also results in the re-distribution of charge. To study this competition, quantum dynamics simulations need to be performed. To break the exponential scaling of standard grid-based algorithms, approximate methods need to be developed that are efficient yet able to follow the coupled electronic-nuclear motion of these systems. Using a simple model Hamiltonian based on the ionisation of the allene molecule, the performance of different methods based on Gaussian Wavepackets is demonstrated.
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a two-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the two methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the two methods were significantly different (P < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
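The core of the described algorithm reduces to a Gaussian tail probability: with nodule mean mu and standard deviation sigma in HU, the estimated fraction of negative-attenuation pixels is Phi((0 - mu)/sigma). A minimal sketch follows; note the 20% decision threshold is a made-up placeholder, not the paper's validated criterion.

```python
import math

def negative_pixel_fraction(mean_hu, sd_hu):
    # Under a Gaussian model of pixel attenuation, the fraction of pixels
    # below 0 HU is the normal CDF evaluated at z = (0 - mean) / sd.
    z = (0.0 - mean_hu) / sd_hu
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def classify_adenoma(mean_hu, sd_hu, threshold=0.20):
    # Hypothetical decision rule (threshold is an assumption): call "adenoma"
    # when the estimated share of negative-attenuation pixels is large enough.
    return negative_pixel_fraction(mean_hu, sd_hu) >= threshold
```

Because only the mean and standard deviation are needed, no per-pixel histogram postprocessing is required, which is the workflow advantage the abstract highlights.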
NASA Astrophysics Data System (ADS)
Dayanga, Waduthanthree Thilina
Albert Einstein's general theory of relativity predicts the existence of gravitational waves (GWs). Direct detection of GWs will provide an enormous amount of new information about physics, astronomy and cosmology. Scientists around the world are currently working towards the first direct detection of GWs. The global network of ground-based GW detectors is currently preparing for its first advanced-detector science runs. In this thesis we focus on the detection of GWs from compact binary coalescence (CBC) systems. The ability to accurately model CBC GW waveforms makes them the most promising source for the first direct detection of GWs. In this thesis we address several challenges associated with detecting CBC signals buried in ground-based GW detector data, for past and future searches. The data analysis techniques we employ to detect GW signals assume that detector noise is Gaussian and stationary. However, in reality, detector data is neither Gaussian nor stationary. To estimate the performance loss due to these features, we compare the efficiencies of detecting CBC signals in simulated Gaussian and real data. Additionally, we demonstrate the effectiveness of multi-detector signal-based consistency tests such as the null stream. Despite the non-Gaussian and non-stationary features of real detector data, with effective data quality studies and signal-based vetoes we can approach the performance achieved on Gaussian and stationary data. As we move towards the advanced detector era, it is important to be prepared for future CBC searches. In this thesis we investigate the performance of non-spinning binary black hole (BBH) searches in simulated Gaussian noise using advanced detector noise curves predicted for 2015-2016. In the same study, we analyze the GW detection probabilities of the latest pN-NR hybrid waveforms submitted to the second version of the Numerical Injection Analysis (NINJA-2) project.
The main motivation for this study is to understand the ability of currently available template waveforms in the LIGO Algorithm Library (LAL), such as the EOBNR waveform family, to detect realistic BBH signals. The results of the analysis demonstrate that, although the detection efficiency is only weakly affected, parameter estimation can be challenging in future searches. Many authors have suggested, and demonstrated, that coherent searches are the most sensitive way to detect GW signals using a network of multiple detectors. Owing to their computational expense, coherent search methods were not employed in recent science data searches of LIGO and Virgo. In this thesis we demonstrate how to employ coherent searches for current CBC searches in a computationally feasible way. To this end, we thoroughly investigate many aspects of coherent searches using an all-sky blind hierarchical coherent pipeline. Most importantly, we present some powerful insights extracted by running the hierarchical coherent pipeline on LIGO and Virgo data. This also includes the challenges we need to address before moving to all-sky, all-time, fully coherent searches. Estimating the GW background plays a critical role in data analysis. We are still exploring the best way to estimate the background of a CBC GW search when one or more signals are present in the data. In this thesis we address this to a certain extent through the NINJA-2 mock data challenge. However, due to limitations of the methods and of computing power, for triple-coincident GW candidates we consider only the loudest two interferometers for background estimation purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagne, MC; Archambault, L; CHU de Quebec, Quebec, Quebec
2014-06-15
Purpose: Intensity-modulated radiation therapy always requires compromises between PTV coverage and organ-at-risk (OAR) sparing. We previously developed metrics that correlate doses to OARs with specific patients' morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head-and-neck IMRT plans were used to establish a metric predicting the mean dose to the parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 organ samples. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
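A minimal sketch of the Gaussianization step (the one-parameter Box-Cox family with a grid search over lambda, rather than the paper's full formalism; the data below are synthetic): the optimal transform maximizes the profile log-likelihood, which combines the variance of the transformed sample with the Jacobian of the transformation.

```python
import math
import random
import statistics

def box_cox(x, lam):
    # One-parameter Box-Cox transform (x must be positive).
    return math.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

def profile_loglik(data, lam):
    # Profile log-likelihood of lambda under the Gaussianity assumption:
    # -n/2 * log(sigma_hat^2) + (lam - 1) * sum(log x)  (Jacobian term).
    y = [box_cox(v, lam) for v in data]
    n = len(y)
    var = statistics.pvariance(y)
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in data)

def best_lambda(data, grid=None):
    # Maximum-likelihood lambda found by a coarse grid search over [-2, 2].
    grid = grid or [i / 20.0 - 2.0 for i in range(81)]
    return max(grid, key=lambda lam: profile_loglik(data, lam))
```

For lognormal data the search should land near lambda = 0, i.e. the log transform, which is the classic sanity check for Box-Cox Gaussianization.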
Sunyit Visiting Faculty Research
2012-01-01
…deblurring with Gaussian and impulse noise. Improvements in both PSNR and visual quality of IFASDA over a typical existing method are demonstrated… blurring Images Corrupted by Mixed Impulse plus Gaussian Noise / Department of Mathematics, Syracuse University. This work studies a problem of image… restoration in which observed images are contaminated by Gaussian and impulse noise. Existing methods in the literature are based on minimizing an objective…
NASA Astrophysics Data System (ADS)
Liu, Huilong; Lü, Yanfei; Zhang, Jing; Xia, Jing; Pu, Xiaoyun; Dong, Yuan; Li, Shutao; Fu, Xihong; Zhang, Angfeng; Wang, Changjia; Tan, Yong; Zhang, Xihe
2015-01-01
This paper studies the propagation properties of controllable hollow flat-topped beams (CHFBs) in a turbulent atmosphere based on the ABCD matrix, sets up a propagation model and obtains an analytical expression for the propagation. With the help of numerical simulation, the propagation properties of CHFBs for different parameters are studied. Results indicate that in a turbulent atmosphere, with increasing propagation distance, the dark regions of CHFBs are gradually annihilated, and the beams eventually evolve into Gaussian beams. Compared with the propagation properties in free space, the turbulent atmosphere enhances the diffraction effect of CHFBs and reduces the propagation distance over which CHFBs evolve into Gaussian beams. In propagation through strong atmospheric turbulence, the Airy disk phenomenon disappears. Studying the propagation properties of CHFBs in a turbulent atmosphere using the ABCD matrix is simple and convenient, and this method can also be applied to study the propagation properties of other hollow laser beams in a turbulent atmosphere.
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method which makes it possible to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. On a simple example of a classically chaotic system with two degrees of freedom, we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte-Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λL < 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.
Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu
2014-01-01
This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive-noise-free. As shown in our previous work, these scenarios, which are frequently encountered in radar and/or sonar, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish the asymptotic closed forms of the expectation and variance of PRVCC by means of the well-known delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC through an example of estimating the time delay in this particular impulsive noise environment.
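One common reading of PRVCC, assumed here for illustration (consult the paper for the exact definition), is a Pearson correlation in which only the impulsive-noise channel is replaced by its ranks. Ranking caps the influence of impulses, which is the robustness property analyzed in the abstract:

```python
def ranks(x):
    # Average ranks (1-based), with ties assigned the midrank.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    # Ordinary product-moment correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

def prvcc(x, y):
    # Rank only the impulsive-noise channel x; keep the clean variate y raw.
    return pearson(ranks(x), y)
```

A single large impulse moves one value to the top rank but leaves every other rank intact, so the rank-variate correlation degrades far more gracefully than the raw Pearson coefficient.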
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information in blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean-square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
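Under the stated model, with AT_i equal to an unknown offset plus a planar-wavefront delay s·x_i and iid Gaussian AT errors, the ML estimator reduces to least squares in the slowness vector s, and the conduction speed is the reciprocal of its norm. A hedged numpy sketch with made-up electrode geometry (equal AT variances are assumed; with unequal known variances one would weight the rows accordingly):

```python
import numpy as np

def fit_planar_wavefront(xy, at):
    # Least-squares fit AT_i ~ t0 + s . x_i, which is the ML solution
    # under iid Gaussian AT errors. Returns the slowness vector s and
    # the conduction speed 1 / |s|.
    A = np.column_stack([np.ones(len(at)), xy])
    coef, *_ = np.linalg.lstsq(A, at, rcond=None)
    s = coef[1:]
    return s, 1.0 / np.linalg.norm(s)
```

The intercept t0 absorbs the unknown synchronization time of each recording, so only relative ATs across electrodes inform the velocity estimate.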
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, which is a numerical dynamo model using a spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution with a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with spherical harmonic truncation L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63), using the same control parameters as the resolved DNS. We will discuss the verification results through comparison among these simulations, and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Hui
2018-05-01
The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in subsurface regions with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strongly inhomogeneous media. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to handle the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
HARMONIC IN-PAINTING OF COSMIC MICROWAVE BACKGROUND SKY BY CONSTRAINED GAUSSIAN REALIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jaiseung; Naselsky, Pavel; Mandolesi, Nazzareno, E-mail: jkim@nbi.dk
The presence of astrophysical emissions between the last scattering surface and our vantage point requires us to apply a foreground mask to cosmic microwave background (CMB) sky maps, leading to large cuts around the Galactic equator and numerous holes. Since many CMB analyses, in particular on the largest angular scales, may be performed on a whole-sky map in a more straightforward and reliable manner, it is of utmost importance to develop an efficient method to fill in the masked pixels in a way consistent with the expected statistical properties and the unmasked pixels. In this Letter, we consider the Monte Carlo simulation of a constrained Gaussian field and derive it for the CMB anisotropy in harmonic space, where a feasible implementation is possible with good approximation. We applied our method to simulated data, which shows that our method produces a plausible whole-sky map, given the unmasked pixels and a theoretical expectation. Subsequently, we applied our method to the Wilkinson Microwave Anisotropy Probe foreground-reduced maps and investigated the anomalous alignment between the quadrupole and octupole components. From our investigation, we find that the alignment in the foreground-reduced maps is even higher than in the Internal Linear Combination map. We also find that the V-band map has higher alignment than the other bands, despite the expectation that the V-band map has less foreground contamination than the other bands. Therefore, we find it hard to attribute the alignment to residual foregrounds. Our method will be complementary to other efforts on in-painting or reconstructing the masked CMB data, and of great use to the Planck surveyor and future missions.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms for restoring imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: lucky look imaging, bispectral speckle imaging, and a block matching method with a restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
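The Poisson-Gaussian mixture used to degrade the simulated imagery can be sketched as signal-dependent shot noise plus additive Gaussian read noise (the parameter values below are placeholders, not those used in the paper):

```python
import numpy as np

def poisson_gaussian_noise(image, gain=1.0, read_sigma=2.0, rng=None):
    # Poisson-Gaussian mixture: signal-dependent Poisson (photon) noise
    # plus additive Gaussian read noise,
    #   y = Poisson(gain * x) / gain + N(0, read_sigma^2).
    rng = rng or np.random.default_rng()
    shot = rng.poisson(gain * np.clip(image, 0, None)) / gain
    return shot + rng.normal(0.0, read_sigma, image.shape)
```

The Poisson term makes the noise variance grow with scene intensity, which is what distinguishes this model from the plain additive-Gaussian assumption.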
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method requires only on the order of O(100) ensemble members to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distributions of the variables that result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous work [1] demonstrated that Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
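The spherical-coordinates change of variables mentioned above can be sketched as follows: the squared components of a unit vector are positive and sum to one by construction, so n-1 unconstrained angles parameterize n sum-to-one weights. The angle values are arbitrary illustrations.

```python
import numpy as np

def angles_to_simplex(phi):
    """Map n-1 unconstrained angles to n grazing-preference weights via
    spherical coordinates: the squared components of a unit vector are
    positive and sum to one by construction."""
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    p = np.empty(phi.size + 1)
    sin_prod = 1.0
    for i, a in enumerate(phi):
        p[i] = sin_prod * np.cos(a) ** 2
        sin_prod *= np.sin(a) ** 2
    p[-1] = sin_prod
    return p

w = angles_to_simplex([0.7, 1.1, 0.3])    # e.g. a hypothetical 4-component diet
```

A filter can then update the unconstrained angles freely while the implied weights always remain a valid diet composition.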
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Hsi, W; Zhao, J
2016-06-15
Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field-size dependence of dose and the lateral beam profiles of scanning proton and carbon-ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double Gaussian model for proton and the single Gaussian model for carbon with the error functions agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air was at most 0.74% for proton with a double Gaussian, and 0.57% for carbon with a single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for proton, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon-ion beams cannot be directly used for irregularly shaped patient fields, but can provide reference values for clinical use and quality assurance.
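Fitting a weighted double Gaussian (narrow core plus broad halo) to a measured lateral profile can be sketched as below; the profile is synthetic and 1D, and the widths and weight are invented for illustration, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(r, w, s1, s2):
    """Two-component lateral profile: weight w on a narrow core of width s1
    plus (1 - w) on a broad halo of width s2 (1D, normalized to unit area)."""
    g = lambda s: np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return w * g(s1) + (1.0 - w) * g(s2)

# Synthetic "measured" profile with known parameters plus small noise.
rng = np.random.default_rng(2)
r = np.linspace(-30.0, 30.0, 301)
y = double_gauss(r, 0.9, 4.0, 12.0) + rng.normal(0.0, 1e-4, r.size)

popt, _ = curve_fit(double_gauss, r, y, p0=[0.8, 3.0, 10.0])
```

With a reasonable initial guess, the fit recovers the core weight and both sigmas; dropping the halo term reduces this to the single Gaussian model found sufficient for carbon.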
2014-01-01
Background: Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results: For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions: Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
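The GPR workflow described above (train on simulated signals, predict a physical coefficient) can be sketched with scikit-learn; the single synthetic descriptor and smooth target below are stand-ins for the paper's voltammogram features and diffusion coefficients.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Stand-in training set: one synthetic descriptor against a smooth target,
# in place of the paper's voltammogram features and diffusion coefficients.
X = rng.uniform(0.0, 1.0, (40, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + rng.normal(0.0, 0.05, 40)

gpr = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01),
                               normalize_y=True).fit(X, y)
pred, std = gpr.predict(np.array([[0.25]]), return_std=True)
```

Beyond the point prediction, `return_std=True` exposes the predictive uncertainty that distinguishes GPR from plain SVR.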
Sassani, Farrokh
2014-01-01
The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
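The simplification the method relies on is the Gaussian closure property: convolving two Gaussians yields a Gaussian whose variance is the sum of the two variances. A quick numerical check (1D, with arbitrary widths):

```python
import numpy as np

# Closure property behind the GRBF formulation: the convolution of two
# Gaussians is again a Gaussian whose variance is the sum of the variances.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
g = lambda s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

s1, s2 = 1.5, 2.0
conv = np.convolve(g(s1), g(s2), mode="same") * dx   # numerical convolution
expected = g(np.hypot(s1, s2))                       # analytic result
```

Because both the image model and the PSF are Gaussian, the forward blur stays in closed form and deconvolution reduces to solving for the control-point weights.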
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low-dimension and exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of the principal component analysis that is optimal for Gaussian mixed signals and of the independent component analysis (ICA), optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is the independent subspace analysis (ISA) that looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D), not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars of easier post-processing. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model, run for the Northern Hemispheric winter.
We find that the enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables in separating the model's four centroid regimes (the positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model, but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
Gaussian processes: a method for automatic QSAR modeling of ADME properties.
Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D
2007-01-01
In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. On the basis of a Bayesian probabilistic approach, the method is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with and often exceeds that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation, one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.
NASA Astrophysics Data System (ADS)
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
Method for inserting noise in digital mammography to simulate reduction in radiation dose
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Vieira, Marcelo A. C.
2015-03-01
The quality of clinical x-ray images is closely related to the radiation dose used in the imaging study. The general principle for selecting the radiation dose is ALARA ("as low as reasonably achievable"). The practical optimization, however, remains challenging. It is well known that reducing the radiation dose increases the quantum noise, which could compromise the image quality. In order to conduct studies about dose reduction in mammography, it would be necessary to acquire repeated clinical images, from the same patient, with different dose levels. However, such practice would be unethical due to radiation-related risks. One solution is to simulate the effects of dose reduction in clinical images. This work proposes a new method, based on the Anscombe transformation, which simulates dose reduction in digital mammography by inserting quantum noise into clinical mammograms acquired with the standard radiation dose. Thus, it is possible to simulate different levels of radiation dose without exposing the patient to new levels of radiation. Results showed that the quality of simulated images generated with our method is the same as when using other methods found in the literature, with the novelty of using the Anscombe transformation for converting signal-independent Gaussian noise into signal-dependent quantum noise.
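The core idea, Gaussian noise injected in the Anscombe domain becoming signal-dependent quantum-like noise after inversion, can be sketched as below. This is a simplified illustration, not the paper's full pipeline: the count level, injected noise level, and the plain algebraic inverse are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> ~unit-variance Gaussian.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(y):
    # Plain algebraic inverse (adequate for this illustration).
    return (y / 2.0) ** 2 - 3.0 / 8.0

# "Full-dose" quantum-limited image: Poisson counts around 200 per pixel.
full = rng.poisson(200.0, (256, 256)).astype(float)

# Inject extra signal-independent Gaussian noise in the Anscombe domain;
# after inversion it behaves as signal-dependent quantum-like noise,
# mimicking a lower-dose acquisition. The noise level is illustrative.
extra_sigma = 1.0
low = inv_anscombe(anscombe(full) + rng.normal(0.0, extra_sigma, full.shape))
```

In the Anscombe domain the quantum noise has near-unit variance regardless of signal level, which is exactly what lets a signal-independent Gaussian injection stand in for additional quantum noise.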
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
Three modified outlier identification methods, the three-sigma rule (3s), interquartile range (IQR), and median absolute deviation (MAD), which take advantage of the ensemble regression method, are proposed. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in that it falsely identifies excessive outliers, which may be remedied by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection by model training based on currently available data.
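The three detectors in their plain (unmodified) textbook forms can be sketched as follows; the paper's versions are built on ensemble regression residuals, which this illustration omits, and the thresholds are the conventional choices.

```python
import numpy as np

def outliers_3s(x):
    """Three-sigma rule: flag points beyond 3 standard deviations of the mean."""
    return np.abs(x - x.mean()) > 3.0 * x.std()

def outliers_iqr(x, k=1.5):
    """Tukey fences built from the interquartile range."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def outliers_mad(x, k=3.5):
    """Median absolute deviation, scaled to be consistent with a Gaussian sigma."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > k * mad

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0.0, 1.0, 200), [15.0, -12.0]])
```

On this contaminated sample all three flag the two gross outliers; the median-based MAD rule is the most robust to the contamination itself.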
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in probability density function (pdf) turbulence modeling. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
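The parent-Gaussian strategy described above can be sketched in its simplest form: generate a correlated Gaussian process, then apply the marginal back-transformation through the Gaussian CDF and the inverse CDF of the target distribution. The AR(1) parent, the Gamma marginal, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# "Parent" Gaussian process: a standard AR(1) with lag-1 correlation rho.
rho, n = 0.7, 20000
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()

# Marginal back-transformation: push the Gaussian through its CDF and the
# inverse CDF of the target marginal (a Gamma, as a stand-in for a skewed
# hydroclimatic variable such as streamflow).
target = stats.gamma(a=2.0, scale=1.5)
xsim = target.ppf(stats.norm.cdf(z))
```

The simulated series has exactly the Gamma marginal while inheriting a (slightly deformed) version of the parent's correlation structure; the paper's correlation transformation functions quantify that deformation so the parent can be chosen to hit a target correlation.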
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form, in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer for PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and if the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
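The puzzling value of 0.88 discussed above comes from the standard generalized least-squares average when the fully correlated common error enters the covariance multiplicatively. A minimal sketch with the canonical PPP numbers (10% independent errors, 20% common error on measurements 1.5 and 1.0):

```python
import numpy as np

# Generalized least-squares average of the two canonical PPP measurements,
# y = [1.5, 1.0], each with a 10% independent error plus a fully correlated
# 20% common error entering the covariance multiplicatively. This standard
# treatment yields the "puzzling" ~0.88, below both measurements.
y = np.array([1.5, 1.0])
stat = 0.10 * y                                   # independent errors
V = np.diag(stat**2) + 0.20**2 * np.outer(y, y)   # + correlated common error

w = np.linalg.solve(V, np.ones(2))
mean = w @ y / w.sum()        # GLS average (~0.88)
var = 1.0 / w.sum()           # variance of the average
```

The average falling below both measurements is the puzzle itself; the abstract's Monte Carlo analysis clarifies under which error model that answer is, in fact, correct.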
Linear programming phase unwrapping for dual-wavelength digital holography.
Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand
2017-01-20
A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg
2017-01-21
An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.
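The building block underlying all such two-center schemes is the Gaussian product theorem, which gives the overlap of two Gaussians in closed form. A 1D s-type check against numerical quadrature (the paper itself treats contracted solid harmonic Gaussians and higher angular momenta):

```python
import numpy as np
from scipy.integrate import quad

def overlap_1d(alpha, beta, A, B):
    """Closed-form overlap of two unnormalized 1D s-type Gaussians,
    exp(-alpha (x-A)^2) and exp(-beta (x-B)^2), via the Gaussian
    product theorem."""
    p = alpha + beta
    return np.sqrt(np.pi / p) * np.exp(-alpha * beta / p * (A - B) ** 2)

# Arbitrary exponents and centers for the check.
alpha, beta, A, B = 0.8, 1.3, 0.0, 1.2
numeric, _ = quad(lambda x: np.exp(-alpha * (x - A) ** 2)
                  * np.exp(-beta * (x - B) ** 2), -np.inf, np.inf)
```

The product of the two Gaussians is itself a Gaussian centered between A and B, which is why two-center integrals over Gaussian basis functions admit analytic evaluation at all.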
The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.
Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny
2018-04-16
We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility for three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression (VAR) analysis, also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of the stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
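The GGM's edge weights are partial correlations, obtainable directly from the precision (inverse covariance) matrix. The abstract's implementations are in R; the Python sketch below uses invented chain-structured data to show the defining property, that conditioning removes indirect associations.

```python
import numpy as np

def partial_correlations(data):
    """GGM edge weights: partial correlations from the precision matrix,
    rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Chain structure A -> B -> C: A and C correlate marginally, but their
# partial correlation given B should be near zero (no direct edge).
rng = np.random.default_rng(7)
a = rng.normal(size=5000)
b = a + rng.normal(size=5000)
c = b + rng.normal(size=5000)
P = partial_correlations(np.column_stack([a, b, c]))
```

The A-C entry of the partial correlation matrix vanishes even though A and C are marginally correlated; the sparse estimators the abstract refers to add regularization on top of this construction.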
Ergodicity of the Stochastic Nosé-Hoover Heat Bath
NASA Astrophysics Data System (ADS)
Lo, Wei Chung; Li, Baowen
2010-07-01
We numerically study the ergodicity of the stochastic Nosé-Hoover heat bath whose formalism is based on the Markovian approximation for the Nosé-Hoover equation [J. Phys. Soc. Jpn. 77 (2008) 103001]. The approximation leads to a Langevin-like equation driven by a fluctuating dissipative force and multiplicative Gaussian white noise. The steady state solution of the associated Fokker-Planck equation is the canonical distribution. We investigate the dynamics of this method for the case of (i) a free particle, (ii) nonlinear oscillators and (iii) lattice chains. We derive the Fokker-Planck equation for the free particle and present an approximate analytical solution for the stationary distribution in the context of the Markovian approximation. Numerical simulation results for nonlinear oscillators show that this method results in a Gaussian distribution for the particles' velocity. We also employ the method as heat baths to study nonequilibrium heat flow in one-dimensional Fermi-Pasta-Ulam (FPU-β) and Frenkel-Kontorova (FK) lattices. The establishment of well-defined temperature profiles is observed only when the lattice size is large. Our results provide numerical justification for such a Markovian approximation for classical single- and many-body systems.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
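The kernel-based estimator described above can be sketched in a few lines: products of centralized observations are averaged with Gaussian weights on the mismatch between the observation-time separation and the estimation lag. This is an illustrative sketch, not the authors' code; the bandwidth parameter `h` and the normalization below are assumptions.

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Kernel-based Pearson correlation for irregularly sampled series.

    Products of centralized (standardized) observations are averaged with
    Gaussian weights on the difference between each pair's time separation
    and the desired estimation lag.
    """
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]            # all pairwise time separations
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)  # Gaussian kernel weights
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)
```

For regularly sampled, identical series and a bandwidth much smaller than the sampling interval, only same-time pairs carry weight and the estimate reduces to the ordinary Pearson correlation.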
NASA Astrophysics Data System (ADS)
Lylova, A. N.; Sheldakova, Yu. V.; Kudryashov, A. V.; Samarkin, V. V.
2018-01-01
We consider the methods for modelling doughnut and super-Gaussian intensity distributions in the far field by means of deformable bimorph mirrors. A method for the rapid formation of a specified intensity distribution using a Shack-Hartmann sensor is proposed, and the results of the modelling of doughnut and super-Gaussian intensity distributions are presented.
Limits of quantitation - Yet another suggestion
NASA Astrophysics Data System (ADS)
Carlson, Jill; Wysoczanski, Artur; Voigtman, Edward
2014-06-01
The work presented herein suggests that the limit of quantitation concept may be rendered substantially less ambiguous and ultimately more useful as a figure of merit by basing it upon the significant figure and relative measurement error ideas due to Coleman, Auses and Gram, coupled with the correct instantiation of Currie's detection limit methodology. Simple theoretical results are presented for a linear, univariate chemical measurement system with homoscedastic Gaussian noise, and these are tested against both Monte Carlo computer simulations and laser-excited molecular fluorescence experimental results. Good agreement among experiment, theory and simulation is obtained and an easy extension to linearly heteroscedastic Gaussian noise is also outlined.
Theoretical study of sum-frequency vibrational spectroscopy on limonene surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ren-Hui, E-mail: zrh@iccas.ac.cn; Liu, Hao; Jing, Yuan-Yuan
2014-03-14
By combining molecular dynamics (MD) simulation and quantum chemistry computation, we calculate the surface sum-frequency vibrational spectroscopy (SFVS) of R-limonene molecules at the gas-liquid interface for SSP, PPP, and SPS polarization combinations. The distributions of the Euler angles are obtained using MD simulation; the ψ-distribution lies between isotropic and Gaussian. Instead of the MD distributions, different analytical distributions such as the δ-function, Gaussian and isotropic distributions are applied to simulate the surface SFVS. We find that the choice of distribution significantly affects the absolute SFVS intensity and also influences the relative SFVS intensity, and that the δ-function distribution should be used with caution when the orientation distribution is broad. Furthermore, the reason that the SPS signal is weak in the reflected arrangement is discussed.
Nonlinear estimation theory applied to the interplanetary orbit determination problem.
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1972-01-01
Martingale theory and appropriate smoothing properties of Loeve (1953) have been used to develop a modified Gaussian second-order filter. The performance of the filter is evaluated through numerical simulation of a Jupiter flyby mission. The observations used in the simulation are on-board measurements of the angle between Jupiter and a fixed star taken at discrete time intervals. In the numerical study, the influence of each of the second-order terms is evaluated. Five filter algorithms are used in the simulations. Four of the filters are the modified Gaussian second-order filter and three approximations derived by neglecting one or more of the second-order terms in the equations. The fifth filter is the extended Kalman-Bucy filter which is obtained by neglecting all of the second-order terms.
Ramasesha, Krupa; De Marco, Luigi; Horning, Andrew D; Mandal, Aritra; Tokmakoff, Andrei
2012-04-07
We present an approach for calculating nonlinear spectroscopic observables, which overcomes the approximations inherent to current phenomenological models without requiring the computational cost of performing molecular dynamics simulations. The trajectory mapping method uses the semi-classical approximation to linear and nonlinear response functions, and calculates spectra from trajectories of the system's transition frequencies and transition dipole moments. It rests on identifying dynamical variables important to the problem, treating the dynamics of these variables stochastically, and then generating correlated trajectories of spectroscopic quantities by mapping from the dynamical variables. This approach allows one to describe non-Gaussian dynamics, correlated dynamics between variables of the system, and nonlinear relationships between spectroscopic variables of the system and the bath such as non-Condon effects. We illustrate the approach by applying it to three examples that are often not adequately treated by existing analytical models--the non-Condon effect in the nonlinear infrared spectra of water, non-Gaussian dynamics inherent to strongly hydrogen bonded systems, and chemical exchange processes in barrier crossing reactions. The methods described are generally applicable to nonlinear spectroscopy throughout the optical, infrared and terahertz regions.
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
High-order space charge effects using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, M.F.; Bruhwiler, D.L.
1997-02-01
The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.
Radiation detector spectrum simulator
Wolf, Michael A.; Crowell, John M.
1987-01-01
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
Radiation detector spectrum simulator
Wolf, M.A.; Crowell, J.M.
1985-04-09
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
ISIM3D: AN ANSI-C THREE-DIMENSIONAL MULTIPLE INDICATOR CONDITIONAL SIMULATION PROGRAM
The indicator conditional simulation technique provides stochastic simulations of a variable that (i) honor the initial data and (ii) can feature a richer family of spatial structures not limited by Gaussianity. The data are encoded into a series of indicators which then are used ...
Non-Gaussian noise-weakened stability in a foraging colony system with time delay
NASA Astrophysics Data System (ADS)
Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong
2018-02-01
In this paper, the dynamical properties of a foraging colony system with time delay and non-Gaussian noise are investigated. Using the delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and the normalized correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce a transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves; numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity, and this maximum is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it, namely, the time delay and the departure from Gaussian noise play opposite roles in the ART or NCF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, M. L.; Liu, B.; Hu, R. H.
In the case of a thin plasma slab accelerated by the radiation pressure of an ultra-intense laser pulse, the development of Rayleigh-Taylor instability (RTI) will destroy the acceleration structure and terminate the acceleration process much sooner than the theoretical limit. In this paper, a new scheme using multiple Gaussian pulses for ion acceleration in a radiation pressure acceleration regime is investigated with particle-in-cell simulation. We found that with multiple Gaussian pulses, the instability can be efficiently suppressed and the divergence of the ion bunch is greatly reduced, resulting in a longer acceleration time and a much more collimated ion bunch with higher energy than using a single Gaussian pulse. An analytical model is developed to describe the suppression of RTI at the laser-plasma interface. The model shows that the suppression of RTI is due to the introduction of the long-wavelength-mode RTI by the multiple Gaussian pulses.
Ince Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Deng, Dongmei; Guo, Qi
2008-07-01
Based on the Snyder-Mitchell model that describes beam propagation in strongly nonlocal nonlinear media, the closed forms of Ince-Gaussian (IG) beams have been found. The transverse structures of the IG beams are described by the product of the Ince polynomials and the Gaussian function. Depending on the input power of the beams, the IG beams can be either a soliton state or a breather state. The IG beams constitute the exact and continuous transition modes between Hermite-Gaussian beams and Laguerre-Gaussian beams. The IG vortex beams can be constructed by a linear combination of the even and odd IG beams. The transverse intensity pattern of IG vortex beams consists of elliptic rings, whose number and ellipticity can be controlled, and a phase displaying a number of in-line vortices, each with a unitary topological charge. The analytical solutions of the IG beams are confirmed by numerical simulations of the nonlocal nonlinear Schrödinger equation.
Leading non-Gaussian corrections for diffusion orientation distribution function.
Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali
2014-02-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. © 2013 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Matteo A. C., E-mail: matteo.rossi@unimi.it; Paris, Matteo G. A., E-mail: matteo.paris@fisica.unimi.it; CNISM, Unità Milano Statale, I-20133 Milano
2016-01-14
We address the interaction of single- and two-qubit systems with an external transverse fluctuating field and analyze in detail the dynamical decoherence induced by Gaussian noise and random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e., for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state dependent) regions of the parameter space where the two noises lead to very similar dynamics. We show that the effects of RTN and of Gaussian noise are different, i.e., the spectrum alone is not enough to summarize the noise effects, but the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other one.
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared them with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the power spectral density distinguished between the muscular factors. The simulated condition of a 40% loss of motor units with a halved number of fast fibers best correlated with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
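The polynomial postnonlinear mixing model named above can be illustrated with a forward-model sketch: a linear mixture of endmember spectra is distorted by an elementwise polynomial and corrupted by additive white Gaussian noise. The quadratic form `z + b*z**2` and the parameter names are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import numpy as np

def ppnm_pixel(endmembers, abundances, b, noise_sd=0.0, rng=None):
    """Forward model for one pixel: linear mixture -> elementwise
    second-order polynomial nonlinearity -> additive white Gaussian noise."""
    z = endmembers @ abundances   # linear mixture, one value per spectral band
    y = z + b * z ** 2            # post-nonlinear distortion
    if noise_sd > 0.0:
        rng = rng or np.random.default_rng()
        y = y + rng.normal(0.0, noise_sd, size=y.shape)
    return y
```

With `b = 0` and no noise the model reduces to the ordinary linear mixing model, which is what makes the polynomial term a convenient handle for the Bayesian estimation of the nonlinearity.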
NASA Astrophysics Data System (ADS)
Nishimura, Tomoaki
2016-03-01
A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of 16O(4He, 4He)16O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.
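The exponentially modified Gaussian offered as a spectral-shape option (feature 3) is the convolution of a Gaussian with a one-sided exponential decay, and it has a standard closed form in terms of the complementary error function. A sketch under the usual parameterization (Gaussian mean `mu`, width `sigma`, exponential constant `tau`; the names are assumptions):

```python
import math

def emg(x, mu, sigma, tau):
    """Exponentially modified Gaussian density: a Gaussian convolved with a
    one-sided exponential of time constant tau (standard closed form)."""
    lam = 1.0 / tau
    arg = 0.5 * lam * (2.0 * mu + lam * sigma ** 2 - 2.0 * x)
    z = (mu + lam * sigma ** 2 - x) / (math.sqrt(2.0) * sigma)
    return 0.5 * lam * math.exp(arg) * math.erfc(z)
```

The density integrates to one and has mean `mu + tau`; the exponential tail is the usual reason such a shape is preferred over a plain Gaussian for asymmetric spectral peaks.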
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, W; Farr, J
2015-06-15
Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with the MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory, (2) the non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape, parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis for the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. A comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
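The Gaussian rainstorm conceptualization above (depth, duration, peak intensity) can be sketched as follows. Deriving the Gaussian width from depth and peak intensity so that the hyetograph integrates to the rainstorm depth, and centring it at mid-duration, are illustrative assumptions rather than the paper's exact parameterization.

```python
import math

def gaussian_hyetograph(depth, duration, peak, n=200):
    """Synthetic Gaussian-shaped hyetograph for one rainstorm.

    depth: total rainfall (e.g. mm); duration: event length (e.g. h);
    peak: peak intensity (e.g. mm/h). The standard deviation is chosen
    so the Gaussian integrates to the given depth.
    """
    s = depth / (peak * math.sqrt(2.0 * math.pi))  # width from area constraint
    t0 = 0.5 * duration                            # centre at mid-duration
    dt = duration / n
    ts = [(k + 0.5) * dt for k in range(n)]
    return ts, [peak * math.exp(-0.5 * ((t - t0) / s) ** 2) for t in ts]
```

Calibrating probability distributions to the three parameters then lets a reliability method such as FORM search the parameter space instead of replaying the full rainfall record.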
Non-Gaussian behavior in jamming / unjamming transition in dense granular materials
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.
2013-06-01
Experiments on the penetration of a cylindrical intruder inside a two-dimensional dense and disordered granular medium were reported recently, showing the jamming / unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming / unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we fit the PDFs with a q-Gaussian (Tsallis) function. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using the stretched exponential, and sketch some conclusions concerning the nature of the correlations along a granular confined flow.
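The q-Gaussian (Tsallis) shape used above to fit the velocity PDFs can be written down directly: q → 1 recovers the ordinary Gaussian, q > 1 gives heavy tails, and q < 1 gives compact support. A minimal sketch, with the normalization constant omitted:

```python
import math

def q_gaussian(v, q, beta):
    """Unnormalized q-Gaussian (Tsallis) shape:
    [1 - (1-q)*beta*v^2]_+ ** (1/(1-q)), with the Gaussian limit at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * v * v)      # ordinary Gaussian limit
    base = 1.0 - (1.0 - q) * beta * v * v
    if base <= 0.0:
        return 0.0                          # compact support when q < 1
    return base ** (1.0 / (1.0 - q))
```

Fitting the exponent q then quantifies the deviation from Gaussianity, with larger q (heavier tails) consistent with longer-range correlations in the flow.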
Current-induced instability of domain walls in cylindrical nanowires
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Zhang, Zhaoyang; Pepper, Ryan A.; Mu, Congpu; Zhou, Yan; Fangohr, Hans
2018-01-01
We study current-driven domain wall (DW) motion in cylindrical nanowires using micromagnetic simulations, by implementing the Landau-Lifshitz-Gilbert equation with nonlocal spin-transfer torque in a finite difference micromagnetic package. We find that in the presence of a DW, Gaussian wave packets (spin waves) are generated when the charge current is suddenly applied to the system. This effect is excluded when using the local spin-transfer torque. The existence of spin wave emission indicates that transverse domain walls cannot move arbitrarily fast in cylindrical nanowires, although they are free from the Walker limit. We establish an upper velocity limit for DW motion by analyzing the stability of Gaussian wave packets using the local spin-transfer torque. Micromagnetic simulations show that the stable region obtained by using nonlocal spin-transfer torque is smaller than that obtained by using its local counterpart. This limitation is essential for multiple DWs, since the instability of Gaussian wave packets will break the structure of multiple DWs.
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
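A Bactrian kernel of the kind described above can be sketched as a two-humped mixture of normals centred on either side of the current value, so that proposals very close to the current value are avoided while the kernel remains symmetric (and hence needs no Hastings correction). This is an illustrative sketch following the abstract's description; the default `m = 0.95` and the step scale are assumptions.

```python
import math
import random

def bactrian_step(x, s, m, rng):
    """Bactrian proposal: with probability 1/2 each, draw from a normal
    centred at x - m*s or x + m*s with sd s*sqrt(1 - m^2), giving an
    overall proposal sd of s but little mass near x itself."""
    mu = x + (m * s if rng.random() < 0.5 else -m * s)
    return rng.gauss(mu, s * math.sqrt(1.0 - m * m))

def metropolis(logp, x0, n, s=2.0, m=0.95, rng=random):
    """Plain Metropolis sampler; the Bactrian kernel is symmetric, so the
    usual acceptance ratio applies unchanged."""
    xs, x, lp = [], x0, logp(x0)
    for _ in range(n):
        y = bactrian_step(x, s, m, rng)
        ly = logp(y)
        if math.log(rng.random()) < ly - lp:
            x, lp = y, ly
        xs.append(x)
    return xs
```

Run on a standard normal target, the chain recovers the target's mean and variance; the efficiency comparison in the paper is about the autocorrelation of such chains, not their stationary distribution.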
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
Hermite-Gaussian beams with self-forming spiral phase distribution
NASA Astrophysics Data System (ADS)
Zinchik, Alexander A.; Muzychenko, Yana B.
2014-05-01
Spiral laser beams are a family of laser beams that preserve their structural stability up to scale and rotation during propagation. The properties of spiral beams are of practical interest for laser technology, medicine and biotechnology; researchers use spiral beams for the movement and manipulation of microparticles. Spiral beams have a complicated phase distribution in cross section. This paper describes the results of analytical and computer simulation of Hermite-Gaussian beams with a self-forming spiral phase distribution. The simulation used a laser beam consisting of the sum of the two Hermite-Gaussian modes TEMnm and TEMn1m1, with the coefficients n, m, n1, m1 varied, and an additional phase depending on these coefficients imposed on the resulting beam. As a result, a Hermite-Gaussian beam is formed whose phase distribution takes the form of a spiral as the beam propagates. VirtualLab 5.0 (LightTrans GmbH) was used for the modeling.
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, scaling as a⁻¹, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
Robust Tracking of Small Displacements with a Bayesian Estimator
Dumont, Douglas M.; Byram, Brett C.
2016-01-01
Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior’s width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator. PMID:26529761
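For context, the baseline NCC estimator that the Bayesian method is compared against can be sketched in a few lines: correlate at integer lags, then refine the peak with a three-point parabola. This is a minimal stand-in for that baseline, not the GGMRF Bayesian estimator itself:

```python
import numpy as np

def ncc_displacement(ref, trk, max_lag=10):
    """Displacement between two 1-D signals via normalized cross-correlation
    (NCC) with parabolic sub-sample peak refinement."""
    lags = np.arange(-max_lag, max_lag + 1)
    rho = np.array([np.corrcoef(np.roll(ref, d), trk)[0, 1] for d in lags])
    k = int(np.argmax(rho))
    frac = 0.0
    if 0 < k < len(lags) - 1:                  # three-point parabolic fit
        y0, y1, y2 = rho[k - 1], rho[k], rho[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return lags[k] + frac

n = np.arange(256)
ref = np.sin(2 * np.pi * 5 * n / 256)
trk = np.roll(ref, 3)                          # known 3-sample displacement
est = ncc_displacement(ref, trk)
```

The Bayesian estimator replaces the "pick the correlation peak" step with a posterior that combines the correlation likelihood with a spatial prior over neighboring displacement estimates.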
Diabat Interpolation for Polymorph Free-Energy Differences.
Kamat, Kartik; Peters, Baron
2017-02-02
Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
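The two-simulation spirit of the method can be sketched with synthetic data: sample the lattice-switch energy gap in each polymorph and, under the parabolic (Gaussian) diabat assumption with equal curvatures, the free-energy difference reduces to the average of the two mean gaps. The numbers below are synthetic, not from the paper:

```python
import numpy as np

# Synthetic energy-gap samples (in units of kT) from unbiased runs in each
# polymorph; with parabolic diabats of equal curvature,
#   dF ~= 0.5 * (<dE>_A + <dE>_B).
rng = np.random.default_rng(1)
gap_A = rng.normal(2.0, 1.0, 50000)   # gap dE sampled while in polymorph A
gap_B = rng.normal(1.0, 1.0, 50000)   # gap dE sampled while in polymorph B
dF = 0.5 * (gap_A.mean() + gap_B.mean())
```

Bennett's interpolation generalizes this by fitting the two diabats to the sampled gap distributions and matching them, so unequal variances (curvatures) are handled as well.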
Parameter estimates in binary black hole collisions using neural networks
NASA Astrophysics Data System (ADS)
Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.
2016-10-01
We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained with a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated with GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present results for noise-free signals and signals contaminated with Gaussian noise, in order to assess the dependence of the method's accuracy on the signal-to-noise ratio.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms the existing filtering and smoothing methods.
Liu, Chengyu; Zhao, Lina; Liu, Changchun
2014-01-01
An early return of the reflected component in the arterial pulse has been recognized as an important indicator of cardiovascular risk. This study aimed to determine the effects of blood pressure and sex on the change of wave reflection using a Gaussian fitting method. One hundred and ninety subjects were enrolled. They were classified into four blood pressure categories based on their systolic blood pressures (i.e., ≤ 110, 111-120, 121-130 and ≥ 131 mmHg). Each blood pressure category was also stratified by sex. Electrocardiogram (ECG) and radial artery pressure waveform (RAPW) signals were recorded for each subject. Ten consecutive pulse episodes from the RAPW signal were extracted and normalized. Each normalized pulse episode was fitted by three Gaussian functions. Both the peak position and peak height of the first and second Gaussian functions, as well as the peak position interval and peak height ratio, were used as the evaluation indices of wave reflection. Two-way ANOVA results showed that with increased blood pressure, the peak position of the second Gaussian significantly shortened (P < 0.01), the peak height of the first Gaussian significantly decreased (P < 0.01) and the peak height of the second Gaussian significantly increased (P < 0.01), inducing a significantly decreased peak position interval and a significantly increased peak height ratio (both P < 0.01). Sex had no significant effect on any evaluation index (all P > 0.05). Moreover, the interaction between sex and blood pressure also had no significant effect on any evaluation index (all P > 0.05). These results showed that blood pressure has a significant effect on the change of wave reflection when using the recently developed Gaussian fitting method, whereas sex does not. The results also suggested that the Gaussian fitting method could be used as a new approach for assessing arterial wave reflection.
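The three-Gaussian decomposition step can be sketched with scipy on a synthetic noiseless pulse; the Gaussian parameters below are illustrative, not clinical values:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, t1, s1, a2, t2, s2, a3, t3, s3):
    """Sum of three Gaussians used to decompose a normalized pulse episode."""
    g = lambda a, mu, s: a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return g(a1, t1, s1) + g(a2, t2, s2) + g(a3, t3, s3)

t = np.linspace(0, 1, 200)
true = (0.9, 0.15, 0.05, 0.5, 0.35, 0.08, 0.3, 0.6, 0.10)  # synthetic pulse
pulse = three_gaussians(t, *true)
p0 = (1.0, 0.1, 0.05, 0.5, 0.3, 0.1, 0.3, 0.6, 0.1)        # initial guess
popt, _ = curve_fit(three_gaussians, t, pulse, p0=p0)

peak_interval = popt[4] - popt[1]   # second minus first peak position
height_ratio = popt[3] / popt[0]    # second-to-first peak height ratio
```

The fitted `peak_interval` and `height_ratio` correspond to the wave-reflection indices the study compares across blood pressure categories.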
A self-reference PRF-shift MR thermometry method utilizing the phase gradient
NASA Astrophysics Data System (ADS)
Langley, Jason; Potter, William; Phipps, Corey; Huang, Feng; Zhao, Qun
2011-12-01
In magnetic resonance (MR) imaging, the most widely used and accurate method for measuring temperature is based on the shift in the proton resonance frequency (PRF). However, inter-scan motion and bulk magnetic field shifts can lead to inaccurate temperature measurements in the PRF-shift MR thermometry method. The self-reference PRF-shift MR thermometry method was introduced to overcome such problems by deriving a reference image from the heated or treated image and approximating the reference phase map with low-order polynomial functions. In this note, a new approach is presented to calculate the baseline phase map in self-reference PRF-shift MR thermometry. The proposed method utilizes the phase gradient to remove the phase unwrapping step inherent to other self-reference PRF-shift MR thermometry methods. The performance of the proposed method was evaluated using numerical simulations with temperature distributions following a two-dimensional Gaussian function, as well as phantom and in vivo experimental data sets. The results from both the numerical simulations and experimental data show that the proposed method is a promising technique for measuring temperature.
Analysis of non-Gaussian laser mode guidance and evolution in leaky plasma channels
NASA Astrophysics Data System (ADS)
Djordjevic, Blagoje; Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim
2016-10-01
The evolution and propagation of a non-Gaussian laser pulse under varying circumstances, including a typical matched parabolic channel as well as leaky channels, are investigated. It has previously been shown that matched guiding of a Gaussian pulse can be achieved using parabolic plasma channels. In the low-power regime, it can be shown directly that multi-mode pulses exhibit significant transverse beating, and the interaction between different modes can adversely affect the laser pulse as it propagates through the primary channel. Given this adverse behavior of non-Gaussian pulses in traditional guiding designs, we examine the use of leaky channels to filter out higher-order modes as a means of optimizing laser conditions. Realistic plasma channel profiles are considered. Higher-order mode content is lost through the leaky channel, while the fundamental mode remains well guided. This is demonstrated using both numerical simulations and a source-dependent Laguerre-Gaussian modal expansion. In conclusion, an idealized plasma lens based on leaky channels is found to filter out the higher-order modes and leave a near-Gaussian profile before the pulse enters the primary channel.
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at k_max = 0.5 h Mpc^-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the `amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C
2014-08-01
The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naïve Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study that systematically characterizes interactions between analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimating large-time macrodispersivities from cloud second-moment data, and for approximating the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
Precise Determination of the Absorption Maximum in Wide Bands
ERIC Educational Resources Information Center
Eriksson, Karl-Hugo; And Others
1977-01-01
A precise method of determining absorption maxima where Gaussian functions occur is described. The method is based on a logarithmic transformation of the Gaussian equation and is suited for a mini-computer. (MR)
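Since the logarithm of a Gaussian A·exp(-(x-x0)²/(2s²)) is a parabola, the absorption maximum x0 can be located by a least-squares quadratic fit to ln A(x); a small numpy sketch on a synthetic band (all numbers illustrative):

```python
import numpy as np

# Synthetic Gaussian absorption band centred at 447 nm.
x = np.linspace(400, 500, 21)                       # wavelength samples (nm)
absorbance = 1.2 * np.exp(-((x - 447.0) ** 2) / (2 * 15.0 ** 2))

# ln A(x) is exactly quadratic, so a degree-2 polynomial fit recovers it,
# and the band maximum is the vertex of the fitted parabola.
c2, c1, c0 = np.polyfit(x, np.log(absorbance), 2)
x_max = -c1 / (2 * c2)
```

In practice only points well above the noise floor should enter the fit, since the logarithm amplifies noise in the band wings.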
Richings, Gareth W; Habershon, Scott
2017-09-12
We describe a method for performing nuclear quantum dynamics calculations using standard, grid-based algorithms, including the multiconfiguration time-dependent Hartree (MCTDH) method, where the potential energy surface (PES) is calculated "on-the-fly". The method of Gaussian process regression (GPR) is used to construct a global representation of the PES using values of the energy at points distributed in molecular configuration space during the course of the wavepacket propagation. We demonstrate this direct dynamics approach for both an analytical PES function describing 3-dimensional proton transfer dynamics in malonaldehyde and for 2- and 6-dimensional quantum dynamics simulations of proton transfer in salicylaldimine. In the case of salicylaldimine we also perform calculations in which the PES is constructed using Hartree-Fock calculations through an interface to an ab initio electronic structure code. In all cases, the results of the quantum dynamics simulations are in excellent agreement with previous simulations of both systems yet do not require prior fitting of a PES at any stage. Our approach (implemented in a development version of the Quantics package) opens a route to performing accurate quantum dynamics simulations via wave function propagation of many-dimensional molecular systems in a direct and efficient manner.
Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin
2016-10-01
An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
A new method for generating a hollow Gaussian beam
NASA Astrophysics Data System (ADS)
Wei, Cun; Lu, Xingyuan; Wu, Gaofeng; Wang, Fei; Cai, Yangjian
2014-04-01
Hollow Gaussian beam (HGB) was introduced 10 years ago (Cai et al. in Opt Lett 28:1084, 2003). In this paper, we introduce a new method for generating a HGB through transforming a Laguerre-Gaussian beam with radial index 0 and azimuthal index l into a HGB with mode n = l/2. Furthermore, we report experimental generation of a HGB based on the proposed method, and we carry out experimental study of the focusing properties of the generated HGB. Our experimental results agree well with the theoretical predictions.
Instabilities and Turbulence Generation by Pick-Up Ion Distributions in the Outer Heliosheath
NASA Astrophysics Data System (ADS)
Weichman, K.; Roytershteyn, V.; Delzanno, G. L.; Pogorelov, N.
2017-12-01
Pick-up ions (PUIs) play a significant role in the dynamics of the heliosphere. One problem that has attracted significant attention is the stability of ring-like distributions of PUIs and the electromagnetic fluctuations that could be generated by PUI distributions. For example, PUI stability is relevant to theories attempting to identify the origins of the IBEX ribbon. PUIs have previously been investigated by linear stability analysis of model (e.g. Gaussian) rings and corresponding computer simulations. The majority of these simulations utilized particle-in-cell methods which suffer from accuracy limitations imposed by the statistical noise associated with representing the plasma by a relatively small number of computational particles. In this work, we utilize highly accurate spectral Vlasov simulations conducted using the fully kinetic implicit code SPS (Spectral Plasma Solver) to investigate the PUI distributions inferred from a global heliospheric model (Heerikhuisen et al., 2016). Results are compared with those obtained by hybrid and fully kinetic particle-in-cell methods.
An Equivalent Fracture Modeling Method
NASA Astrophysics Data System (ADS)
Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun
2017-12-01
A 3D fracture network model is built based on discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height and so on. The area of interest in the Wumishan Formation of the Renqiu buried-hill reservoir is about 57 square kilometers, and the thickness of the target strata is more than 2000 meters. Combined with the great fracture density, the fracture simulation and upscaling of the discrete fracture network model of the Wumishan Formation are computationally intensive. To solve this problem, a method of equivalent fracture modeling is proposed. First, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, a reservoir level model is established; then, under the constraint of the reservoir level model, and taking a fault-distance analysis model as the second variable, a fracture density model is established by the Sequential Gaussian Simulation method. The width, height and length of the fractures are increased while their density is decreased, so as to preserve similar porosity and permeability after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
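The analytic-convolution property the abstract relies on is just covariance addition: the convolution of two Gaussians is a Gaussian whose mean and covariance are the sums of the inputs'. A sketch of that identity (independent of the actual NGMIX API):

```python
import numpy as np

def convolve_gaussians(mu1, cov1, mu2, cov2):
    """Convolution of two multivariate Gaussians: means add, covariances add.
    For mixtures, apply this to every (galaxy, PSF) component pair."""
    return mu1 + mu2, cov1 + cov2

gal_cov = np.array([[2.0, 0.3], [0.3, 1.0]])   # toy galaxy component
psf_cov = np.array([[0.5, 0.0], [0.0, 0.5]])   # toy round PSF component
mu, cov = convolve_gaussians(np.zeros(2), gal_cov, np.zeros(2), psf_cov)
```

This is why a mixture-of-Gaussians galaxy model convolved with a mixture-of-Gaussians PSF stays a Gaussian mixture, avoiding any FFT step during model evaluation.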
Dynamics of dark hollow Gaussian laser pulses in relativistic plasma.
Sharma, A; Misra, S; Mishra, S K; Kourakis, I
2013-06-01
Optical beams with null central intensity have potential applications in the field of atom optics. The spatial and temporal evolution of a central-shadow dark hollow Gaussian (DHG) relativistic laser pulse propagating in a plasma is studied in this article from first principles. A nonlinear Schrödinger-type equation is obtained for the beam spot profile and then solved numerically to investigate the pulse propagation characteristics. A series of numerical simulations is employed to trace the profile of the focused and compressed DHG laser pulse as it propagates through the plasma. The theoretical and simulation results predict that higher-order DHG pulses show smaller divergence as they propagate and, thus, lead to enhanced energy transport.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
A new method for the identification of non-Gaussian line profiles in elliptical galaxies
NASA Technical Reports Server (NTRS)
Van Der Marel, Roeland P.; Franx, Marijn
1993-01-01
A new parameterization for the line profiles of elliptical galaxies, the Gauss-Hermite series, is proposed. This approach expands the line profile as a sum of orthogonal functions, which minimizes the correlations between the errors in the parameters of the fit. The method also makes use of the fact that Gaussians provide good low-order fits to observed line profiles. It yields measurements of the line strength, mean radial velocity, and velocity dispersion, as well as two extra parameters, h3 and h4, that measure asymmetric and symmetric deviations of the line profiles from a Gaussian, respectively. The new method was used to derive profiles for three elliptical galaxies, all of which have asymmetric line profiles on the major axis with symmetric deviations from a Gaussian. The results confirm that elliptical galaxies have complex structures due to their complex formation history.
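The truncated Gauss-Hermite series can be written down directly: a Gaussian in the scaled velocity w = (v − V)/σ, multiplied by a series in the normalized Hermite polynomials H3 and H4. A sketch with arbitrary velocity values:

```python
import numpy as np

def gauss_hermite_profile(v, V, sigma, h3, h4):
    """Gauss-Hermite line profile truncated at h3 (asymmetric deviation)
    and h4 (symmetric deviation); h3 = h4 = 0 recovers a pure Gaussian."""
    w = (v - V) / sigma
    H3 = (2 * np.sqrt(2) * w**3 - 3 * np.sqrt(2) * w) / np.sqrt(6)
    H4 = (4 * w**4 - 12 * w**2 + 3) / np.sqrt(24)
    gauss = np.exp(-0.5 * w**2) / (sigma * np.sqrt(2 * np.pi))
    return gauss * (1 + h3 * H3 + h4 * H4)

v = np.linspace(-1000, 1000, 401)                        # km/s grid
profile = gauss_hermite_profile(v, V=100.0, sigma=200.0, h3=-0.05, h4=0.03)
```

Fitting (V, sigma, h3, h4) to an observed profile then gives the mean velocity, dispersion, and the two deviation parameters in one step.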
Novel transform for image description and compression with implementation by neural architectures
NASA Astrophysics Data System (ADS)
Ben-Arie, Jezekiel; Rao, Raghunath K.
1991-10-01
A general method for signal representation using nonorthogonal basis functions composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
NASA Astrophysics Data System (ADS)
Watkinson, Catherine A.; Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh
2017-12-01
In this paper, we establish the accuracy and robustness of a fast estimator for the bispectrum - the 'FFT-bispectrum estimator'. The implementation of the estimator presented here offers speed and simplicity benefits over a direct-measurement approach. We also generalize the derivation so it may easily be applied to any order of polyspectra, such as the trispectrum, at the cost of only a handful of Fast Fourier Transforms (FFTs). All lower-order statistics can also be calculated simultaneously for little extra cost. To test the estimator, we make use of a non-linear density field, and for a more strongly non-Gaussian test case, we use a toy model of reionization in which ionized bubbles at a given redshift are all of equal size and are randomly distributed. Our tests find that the FFT-estimator remains accurate over a wide range of k, and so should be extremely useful for analysis of 21-cm observations. The speed of the FFT-bispectrum estimator makes it suitable for sampling applications, such as Bayesian inference. The algorithm we describe should prove valuable in the analysis of simulations and observations, and while we apply it within the field of cosmology, this estimator is useful in any field that deals with non-Gaussian data.
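The core trick can be shown in one dimension: restrict the Fourier modes to a shell, inverse-transform each shell back to real space, and average the product of the shell-filtered fields. This is a minimal 1-D analogue of the estimator, not the paper's full 3-D shell implementation:

```python
import numpy as np

def fft_bispectrum(delta, k1, k2):
    """FFT-based estimate of B(k1, k2, k1+k2) for a 1-D periodic field:
    I_k(x) = IFFT[delta_k restricted to modes +-k], then average the
    real-space product over the closed triangle (k1, k2, k1+k2)."""
    N = len(delta)
    dk = np.fft.fft(delta)
    freqs = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    def shell(k):
        mask = np.abs(freqs) == k
        return np.fft.ifft(np.where(mask, dk, 0.0))
    prod = shell(k1) * shell(k2) * shell(k1 + k2)
    return prod.real.mean()

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# Phase-coupled modes at k = 3, 5, 8 form a closed triangle, so the
# bispectrum B(3, 5, 8) is nonzero (analytically 1/4 for unit cosines).
delta = np.cos(3 * x) + np.cos(5 * x) + np.cos(8 * x)
B = fft_bispectrum(delta, 3, 5)
```

The cost is one FFT per shell, and the same shell-filtered fields can be reused across triangle configurations, which is where the speed advantage over direct mode-counting comes from.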
Quadriphase DS-CDMA wireless communication systems employing the generalized detector
NASA Astrophysics Data System (ADS)
Tuzlukov, Vyacheslav
2012-05-01
The bit-error probability (Pe) performance of asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless communication systems employing the generalized detector (GD), constructed based on the generalized approach to signal processing in noise, is analyzed. The effects of pulse shaping, quadriphase or direct-sequence quadriphase shift keying (DS-QPSK) spreading, and aperiodic spreading sequences are considered in DS-CDMA based on the GD and compared with the coherent Neyman-Pearson receiver. An exact Pe expression and several approximations are derived: one using the characteristic function method, a simplified expression for the improved Gaussian approximation (IGA), and the simplified improved Gaussian approximation. Under conditions typically satisfied in practice, and even with a small number of interferers, the standard Gaussian approximation (SGA) for the multiple-access interference component of the GD statistic is shown to give accurate Pe performance. Moreover, the IGA is shown to reduce to the SGA for pulses with zero excess bandwidth. Second, the GD Pe performance of quadriphase DS-CDMA is shown to be superior to that of bi-phase DS-CDMA. Numerical examples by Monte Carlo simulation are presented to illustrate the GD Pe performance for square-root raised-cosine pulses and spreading factors of moderate to large values. A superiority of GD employment in CDMA systems over the Neyman-Pearson receiver is also demonstrated.
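For reference, the textbook SGA for a conventional correlation receiver treats the multiple-access interference of K−1 asynchronous users with spreading factor N as extra Gaussian noise of variance (K−1)/(3N). This is the generic form of the approximation, not the paper's GD-specific expressions:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def sga_ber(K, N, eb_n0_db):
    """Standard Gaussian approximation for asynchronous DS-CDMA BER:
    Pe ~= Q( [ (K-1)/(3N) + N0/(2Eb) ]^(-1/2) )  (rectangular chips)."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return q_func(((K - 1) / (3 * N) + 1 / (2 * eb_n0)) ** -0.5)
```

With K = 1 this collapses to the single-user AWGN result Q(sqrt(2·Eb/N0)), and the BER floor as K grows shows why the IGA refinements matter for small numbers of interferers.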
Hou, Bowen; He, Zhangming; Li, Dong; Zhou, Haiyin; Wang, Jiongqi
2018-05-27
Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high-precision navigation technique for ballistic missiles. The traditional navigation method suffers from divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. Nonlinearity, as well as large outliers and Gaussian mixture noises, often exists during the actual navigation process, leading to divergence of the navigation filter. A new nonlinear Kalman filter based on maximum correntropy theory and the unscented transformation, named the maximum correntropy unscented Kalman filter, is derived, and its computational complexity is analyzed. The unscented transformation is used to handle the nonlinearity of the system equation, and maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced by the proposed filter.
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline
NASA Technical Reports Server (NTRS)
Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore
2017-01-01
We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1 deg to 30.3 deg. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and a 95% match between the reconstructed and actual waveforms at S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
Werhli, Adriano V; Grzegorczyk, Marco; Husmeier, Dirk
2006-10-15
An important problem in systems biology is the inference of biochemical pathways and regulatory networks from postgenomic data. Various reverse engineering methods have been proposed in the literature, and it is important to understand their relative merits and shortcomings. In the present paper, we compare the accuracy of reconstructing gene regulatory networks with three different modelling and inference paradigms: (1) Relevance networks (RNs): pairwise association scores independent of the remaining network; (2) graphical Gaussian models (GGMs): undirected graphical models with constraint-based inference, and (3) Bayesian networks (BNs): directed graphical models with score-based inference. The evaluation is carried out on the Raf pathway, a cellular signalling network describing the interaction of 11 phosphorylated proteins and phospholipids in human immune system cells. We use both laboratory data from cytometry experiments as well as data simulated from the gold-standard network. We also compare passive observations with active interventions. On Gaussian observational data, BNs and GGMs were found to outperform RNs. The difference in performance was not significant for the non-linear simulated data and the cytoflow data, though. Also, we did not observe a significant difference between BNs and GGMs on observational data in general. However, for interventional data, BNs outperform GGMs and RNs, especially when taking the edge directions rather than just the skeletons of the graphs into account. This suggests that the higher computational costs of inference with BNs over GGMs and RNs are not justified when using only passive observations, but that active interventions in the form of gene knockouts and over-expressions are required to exploit the full potential of BNs. Data, software and supplementary material are available from http://www.bioss.sari.ac.uk/staff/adriano/research.html
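Of the three paradigms compared, relevance networks are the simplest to make concrete. A minimal sketch (hypothetical gene names, data, and threshold; not the authors' code) scores each gene pair by absolute Pearson correlation, independently of the rest of the network:

```python
import math

def pearson(x, y):
    # Sample Pearson correlation coefficient of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx)**2 for a in x)
    vy = sum((b - my)**2 for b in y)
    return cov / math.sqrt(vx * vy)

def relevance_network(data, threshold=0.8):
    # data: dict gene -> list of expression values across experiments.
    # Returns the undirected edges whose |correlation| meets the
    # threshold; each pairwise score ignores the remaining network,
    # which is exactly the limitation of RNs noted in the abstract.
    genes = sorted(data)
    edges = set()
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(data[g], data[h])) >= threshold:
                edges.add((g, h))
    return edges
```

GGMs refine this by using partial correlations (conditioning on all other genes), and BNs add edge directions via score-based search.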
Casas, F J; Pascual, J P; de la Fuente, M L; Artal, E; Portilla, J
2010-07-01
This paper describes a comparative nonlinear analysis of low-noise amplifiers (LNAs) under different stimuli for use in astronomical applications. Wide-band Gaussian-noise input signals, together with the high values of gain required, mean that figures of merit, such as the 1 dB compression (1 dBc) point of amplifiers, become crucial in the design process of radiometric receivers in order to guarantee linearity in their nominal operation. The typical method to obtain the 1 dBc point is to use single-tone excitation signals to get the nonlinear amplitude-to-amplitude (AM-AM) characteristic but, as will be shown in the paper, in radiometers the wide-band Gaussian-noise excitation signals cause the amplifiers to exhibit stronger nonlinearity than single-tone excitation signals do. Therefore, in order to analyze the suitability of the LNA's nominal operation, the 1 dBc point has to be obtained using realistic excitation signals. In this work, an analytical study of compression effects in amplifiers due to excitation signals composed of several tones is reported. Moreover, LNA nonlinear characteristics, such as AM-AM, total distortion, and power-to-distortion ratio, have been obtained by simulation and measurement with wide-band Gaussian-noise excitation signals. This kind of signal can be considered as a limit case of a multitone signal in which the number of tones is very high. The work is illustrated by means of the extraction of realistic nonlinear characteristics, through simulation and measurement, of a 31 GHz back-end module LNA used in the radiometer of the QUIJOTE (Q U I JOint TEnerife) CMB experiment.
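The single-tone 1 dBc point mentioned above can be illustrated with a toy AM-AM characteristic. The sketch below uses a generic tanh soft-saturation model with illustrative parameters, not the measured QUIJOTE LNA, and bisects for the input amplitude at which the gain has compressed by 1 dB:

```python
import math

def gain_db(vin, g=10.0, vsat=1.0):
    # Toy AM-AM characteristic: small-signal voltage gain g with
    # tanh soft saturation at output amplitude vsat.
    vout = vsat * math.tanh(g * vin / vsat)
    return 20.0 * math.log10(vout / vin)

def one_db_compression_input(g=10.0, vsat=1.0):
    # Bisect for the input amplitude where the gain has dropped
    # 1 dB below its small-signal value; gain is monotone
    # decreasing in vin for this model.
    target = 20.0 * math.log10(g) - 1.0
    lo, hi = 1e-6, vsat
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gain_db(mid, g, vsat) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With noise-like multitone excitation, compression sets in at a lower average power than this single-tone figure suggests, which is the paper's central point.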
Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.
Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian
2017-01-05
Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet the clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and normalized metal artifact reduction algorithm. The experimental datasets used included both simulated and clinical datasets. By evaluating the results subjectively, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional metal artifact reduction algorithms, which lead to severe secondary artifacts resulting from impertinent interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method has produced the image with the best quality. 
For both the simulated and the clinical datasets, the proposed algorithm markedly reduces the metal artifacts.
NASA Astrophysics Data System (ADS)
Yu, Haitao; Sun, Hui; Shen, Jianqi; Tropea, Cameron
2018-03-01
The primary rainbow observed when light is scattered by a spherical drop has been exploited in the past to measure drop size and relative refractive index. However, if higher spatial resolution is required in denser drop ensembles/sprays, and the simultaneous appearance of multiple drops in the measurement volume is to be avoided, a highly focused beam is desirable, inevitably with a Gaussian intensity profile. The present study examines the primary rainbow pattern resulting when a Gaussian beam is scattered by a spherical drop and estimates the attainable accuracy when extracting size and refractive index. The scattering is computed using generalized Lorenz-Mie theory (GLMT) and Debye series decomposition of the Gaussian beam scattering. The results of these simulations show that the measurement accuracy is dependent on both the beam waist radius and the position of the drop in the beam waist.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated on simulated data and real images, where they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) the RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
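For embedding dimension 1, the recurrence rate reduces to a simple pairwise count. A minimal sketch (illustrative threshold and AR(1) parameters; not the paper's analytical derivation) is:

```python
import random

def recurrence_rate(x, eps):
    # REC for embedding dimension 1: the fraction of point pairs
    # (i, j) whose values lie within eps of each other.
    n = len(x)
    hits = sum(1 for i in range(n) for j in range(n)
               if abs(x[i] - x[j]) <= eps)
    return hits / n**2

def ar1(n, phi=0.5, seed=1):
    # First-order autoregressive Gaussian process:
    # x_t = phi * x_{t-1} + unit Gaussian noise.
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

Comparing such empirical REC values against the closed-form probabilities is exactly the kind of check the abstract describes.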
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Behnke, Marlana N.; Przekop, Adam
2010-01-01
High-cycle fatigue of an elastic-plastic beam structure under the combined action of thermal and high-intensity non-Gaussian acoustic loadings is considered. Such loadings can be highly damaging when snap-through motion occurs between thermally post-buckled equilibria. The simulated non-Gaussian loadings investigated have a range of skewness and kurtosis typical of turbulent boundary layer pressure fluctuations in the vicinity of forward facing steps. Further, the duration and steadiness of high excursion peaks is comparable to that found in such turbulent boundary layer data. Response and fatigue life estimates are found to be insensitive to the loading distribution, with the minor exception of cases involving plastic deformation. In contrast, the fatigue life estimate was found to be highly affected by a different type of non-Gaussian loading having bursts of high excursion peaks.
NASA Astrophysics Data System (ADS)
Mitchell, Noah; Koning, Vinzenz; Vitelli, Vincenzo; Irvine, William T. M.
2014-03-01
When an elastic film conforms to a surface with Gaussian curvature, stresses arise in the film. As a result, cracks--typically studied in flat materials--interact with curvature when propagating through the system. Using silicone elastomer sheets that conform to the surface of a Gaussian bump, we find experimental evidence for the deflection of a crack propagating through the material. We interpret our experiments with reference to analytical modeling and simulations of a simplified model system.
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators (providing a physical motivation for and interpretation of this choice), but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
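The covariance structure the method exploits can be written down directly. The sketch below uses illustrative coefficients and builds the dense O(N²) matrix for reference, whereas the paper's contribution is solving with this kernel in O(N) by exploiting its semiseparable structure:

```python
import math

def damped_oscillator_kernel(tau, a, b, c, d):
    # One term of the mixture-of-exponentials covariance:
    # k(tau) = exp(-c*|tau|) * (a*cos(d*tau) + b*sin(d*tau)),
    # the covariance of a stochastically driven damped harmonic
    # oscillator. Positive definiteness needs a*c >= b*d.
    tau = abs(tau)
    return math.exp(-c * tau) * (a * math.cos(d * tau) + b * math.sin(d * tau))

def covariance_matrix(times, a=1.0, b=0.1, c=0.5, d=2.0):
    # Dense reference covariance; note the observation times need
    # not be evenly spaced.
    return [[damped_oscillator_kernel(ti - tj, a, b, c, d) for tj in times]
            for ti in times]
```

A full GP fit would add per-point noise variances on the diagonal and solve the resulting linear system for the likelihood.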
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Real-time simulator for designing electron dual scattering foil systems.
Carver, Robert L; Hogstrom, Kenneth R; Price, Michael J; LeBlanc, Justin D; Pitcher, Garrett M
2014-11-08
The purpose of this work was to develop a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator allows for a relatively quick, initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator also is a powerful educational tool. The simulator consists of an analytical algorithm for calculating electron fluence and X-ray dose and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with the reduced Gaussian formalism for scattering powers. The simulator also estimates central-axis and off-axis X-ray dose arising from the dual foil system. Once the geometry of the accelerator is specified, the simulator allows the user to continuously vary primary scattering foil material and thickness, secondary scattering foil material and Gaussian shape (thickness and sigma), and beam energy. The off-axis electron relative fluence or total dose profile and central-axis X-ray dose contamination are computed and displayed in real time. The simulator was validated by comparison of off-axis electron relative fluence and X-ray percent dose profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV, using present foils on an Elekta radiotherapy accelerator, the simulator was able to reproduce MC profiles to within 2% out to 20 cm from the central axis. The central-axis X-ray percent dose predictions matched measured data to within 0.5%. The calculation time was approximately 100 ms using a single Intel 2.93 GHz processor, which allows for real-time variation of foil geometrical parameters using slider bars.
This work demonstrates how the user-friendly GUI and real-time nature of the simulator make it an effective educational tool for gaining a better understanding of the effects that various system parameters have on a relative dose profile. This work also demonstrates a method for using the simulator as a design tool for creating custom dual scattering foil systems in the clinical range of beam energies (6-20 MeV).
NASA Astrophysics Data System (ADS)
Troncossi, M.; Di Sante, R.; Rivola, A.
2016-10-01
In the field of vibration qualification testing, random excitations are typically imposed on the tested system in terms of a power spectral density (PSD) profile. This is one of the most popular ways to control the shaker or slip table for durability tests. However, these excitations (and the corresponding system responses) exhibit a Gaussian probability distribution, whereas not all real-life excitations are Gaussian, causing the response to be also non-Gaussian. In order to introduce non-Gaussian peaks, a further parameter, i.e., kurtosis, has to be controlled in addition to the PSD. However, depending on the specimen behaviour and input signal characteristics, the use of non-Gaussian excitations with high kurtosis and a given PSD does not automatically imply a non-Gaussian stress response. For an experimental investigation of these coupled features, suitable measurement methods need to be developed in order to estimate the stress amplitude response at critical failure locations and consequently evaluate the input signals most representative of real-life, non-Gaussian excitations. In this paper, a simple test rig with a notched cantilevered specimen was developed to measure the response and examine the kurtosis values in the case of stationary Gaussian, stationary non-Gaussian, and burst non-Gaussian excitation signals. The laser Doppler vibrometry technique was used in this type of test for the first time, in order to estimate the specimen stress amplitude response as proportional to the differential displacement measured at the notch section ends. A method based on the use of measurements using accelerometers to correct for the occasional signal dropouts occurring during the experiment is described. The results demonstrate the ability of the test procedure to evaluate the output signal features and therefore to select the most appropriate input signal for the fatigue test.
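The kurtosis parameter that distinguishes these excitation signals is easy to compute. A minimal sketch (with hypothetical signal parameters, not the rig's actual excitations) contrasts a stationary Gaussian signal with a bursty non-Gaussian one:

```python
import random

def kurtosis(x):
    # Standardized fourth moment; equals 3 for a Gaussian signal,
    # larger when the signal contains high-excursion peaks.
    n = len(x)
    m = sum(x) / n
    var = sum((v - m)**2 for v in x) / n
    m4 = sum((v - m)**4 for v in x) / n
    return m4 / var**2

rng = random.Random(0)
# Stationary Gaussian excitation.
gaussian_signal = [rng.gauss(0.0, 1.0) for _ in range(20000)]
# Burst-like non-Gaussian excitation: mostly quiet, with rare
# high-amplitude samples (a crude stand-in for burst loading).
burst_signal = [rng.gauss(0.0, 3.0) if rng.random() < 0.05 else rng.gauss(0.0, 0.5)
                for _ in range(20000)]
```

Controlling this statistic alongside the PSD is what the abstract means by kurtosis control; whether the *stress response* inherits the elevated kurtosis is the experimental question the paper addresses.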
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat ΛCDM (Λ cold dark matter). Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
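The Box-Cox family mentioned above can be demonstrated on a skewed one-dimensional sample. In the sketch below the sample is synthetic log-normal data (not Planck chains), and the transform parameter λ is fixed at 0 by construction rather than fitted by maximum likelihood as in the paper:

```python
import math
import random

def box_cox(x, lam):
    # Box-Cox transform; requires x > 0. The lam = 0 limit is the
    # log transform.
    if lam == 0.0:
        return math.log(x)
    return (x**lam - 1.0) / lam

def skewness(xs):
    # Standardized third moment; approximately 0 for Gaussian data.
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m)**2 for v in xs) / n
    m3 = sum((v - m)**3 for v in xs) / n
    return m3 / var**1.5

rng = random.Random(42)
# A strongly right-skewed (log-normal) "posterior" sample.
sample = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(5000)]
gaussianized = [box_cox(x, 0.0) for x in sample]
```

In the paper's multivariate setting, λ (and shift parameters) are optimized jointly across dimensions so that the transformed sample is as close to a multivariate Gaussian as possible.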
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Kai; Song, Linze; Shi, Qiang, E-mail: qshi@iccas.ac.cn
Based on the path integral approach, we derive a new realization of the exact non-Markovian stochastic Schrödinger equation (SSE). The main difference from the previous non-Markovian quantum state diffusion (NMQSD) method is that the complex Gaussian stochastic process used for the forward propagation of the wave function is correlated, which may be used to reduce the amplitude of the non-Markovian memory term at high temperatures. The new SSE is then written into the recently developed hierarchy of pure states scheme, in a form that is more closely related to the hierarchical equation of motion approach. Numerical simulations are then performed to demonstrate the efficiency of the new method.
On the stability analysis of hyperbolic secant-shaped Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Sabari, S.; Murali, R.
2018-05-01
We analyze the stability of the hyperbolic secant-shaped attractive Bose-Einstein condensate in the absence of an external trapping potential. The appropriate theoretical model for the system is described by the nonlinear mean-field Gross-Pitaevskii equation with time-varying two-body interaction effects. Using the variational method, the stability of the system is analyzed under the influence of time-varying two-body interactions. Further, we confirm that the stability of the attractive condensate increases by considering the hyperbolic secant-shaped profile instead of the Gaussian shape. The analytical results are compared with numerical simulation employing the split-step Crank-Nicolson method.
Self tuning system for industrial surveillance
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2000-01-01
A method and system for automatically establishing operational parameters of a statistical surveillance system. The method and system perform a frequency-domain transformation on time-dependent data; a first Fourier composite is formed; serial correlation is removed; a series of Gaussian whiteness tests is performed along with an autocorrelation test; Fourier coefficients are stored; and a second Fourier composite is formed. Pseudorandom noise is added, and a Monte Carlo simulation is performed to establish SPRT missed-alarm probabilities, which are tested with a synthesized signal. A false-alarm test is then empirically evaluated and, if less than a desired target value, the SPRT probabilities are used for performing surveillance.
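The SPRT at the core of the surveillance step can be sketched for the simple case of a Gaussian mean shift. The thresholds come from Wald's standard approximations; the parameters are illustrative, and this is a textbook sketch rather than the patented system:

```python
import math

def sprt_gaussian_mean(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                       alpha=0.01, beta=0.01):
    # Wald's sequential probability ratio test for H0: mean = mu0
    # versus H1: mean = mu1, with known noise sigma. alpha/beta are
    # the target false-alarm and missed-alarm probabilities.
    upper = math.log((1.0 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood-ratio increment for one Gaussian sample.
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

The Monte Carlo step in the abstract serves to verify that the empirically realized missed-alarm probability of such a test matches its target before it is deployed.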
Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions
NASA Astrophysics Data System (ADS)
Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.
2015-12-01
Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of the deep learning method based on Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows application of the continuous data is used. In this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. Comparative analysis of the accuracy of the proposed method with Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, deep belief network type deep learning methods on DoS attack detection is provided. Detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy from the proposed multilayer deep Gaussian-Bernoulli type RBM is obtained.
Truncated Gaussians as tolerance sets
NASA Technical Reports Server (NTRS)
Cozman, Fabio; Krotkov, Eric
1994-01-01
This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
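The mean calculation has a standard closed form for a Gaussian truncated to an interval [a, b]. A sketch using the usual textbook formulas (not the authors' characteristic-function algorithms):

```python
import math

def phi(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    # Standard normal cumulative distribution function via erf.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_gaussian_mean(mu, sigma, a, b):
    # Mean of N(mu, sigma^2) truncated to [a, b]:
    # mu + sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha)).
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    z = Phi(beta) - Phi(alpha)
    return mu + sigma * (phi(alpha) - phi(beta)) / z
```

Symmetric truncation leaves the mean unchanged, while one-sided truncation pulls it into the allowed interval, which is the behavior a bounded-sensor model needs.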
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
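Cronbach's alpha itself is a short computation. A minimal sketch (toy score matrices, not the paper's exact or F-approximated distributions) from the usual variance formula:

```python
def cronbach_alpha(items):
    # items: list of k lists, each holding one test item's scores for
    # the same n subjects.
    # alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Unbiased sample variance.
        m = sum(xs) / len(xs)
        return sum((x - m)**2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))
```

The paper's contribution is the sampling distribution of this statistic (and of the intraclass correlation) under general Gaussian covariance, which is what turns a point estimate like this into a confidence interval.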
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-Tossas, L. A.; Churchfield, M. J.; Meneveau, C.
The actuator line model (ALM) is a commonly used method to represent lifting surfaces such as wind turbine blades within large-eddy simulations (LES). In the ALM, the lift and drag forces are replaced by an imposed body force that is typically smoothed over several grid points using a Gaussian kernel with some prescribed smoothing width ε. To date, the choice of ε has most often been based on numerical considerations related to the grid spacing used in LES. However, especially for finely resolved LES with grid spacings on the order of or smaller than the chord length of the blade, the best choice of ε is not known. In this work, a theoretical approach is followed to determine the most suitable value of ε, based on an analytical solution to the linearized inviscid flow response to a Gaussian force. We find that the optimal smoothing width ε_opt is on the order of 14%-25% of the chord length of the blade, and the center of force is located at about 13%-26% downstream of the leading edge of the blade for the cases considered. These optimal values do not depend on angle of attack and depend only weakly on the type of lifting surface. It is then shown that an even more realistic velocity field can be induced by a 2-D elliptical Gaussian lift-force kernel. Some results are also provided regarding drag force representation.
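The smoothing kernel discussed above can be sketched as follows, assuming the standard isotropic 3-D Gaussian used in ALM implementations (the chord and ε values are illustrative):

```python
import numpy as np

# Standard isotropic 3-D Gaussian force-projection kernel,
#   eta(r) = exp(-(r/eps)**2) / (eps**3 * pi**1.5),
# which integrates to 1, so the total applied blade force is conserved.
def gaussian_kernel(r, eps):
    return np.exp(-((r / eps) ** 2)) / (eps**3 * np.pi**1.5)

chord = 1.0
eps = 0.2 * chord  # inside the 14%-25% chord range found to be optimal

# Numerically verify normalization: integral of eta(r) * 4*pi*r^2 dr = 1.
r = np.linspace(0.0, 10 * eps, 20001)
dr = r[1] - r[0]
integral = np.sum(gaussian_kernel(r, eps) * 4 * np.pi * r**2) * dr
assert abs(integral - 1.0) < 1e-4
```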
Richardson, Magnus J E; Gerstner, Wulfram
2005-04-01
The subthreshold membrane voltage of a neuron in active cortical tissue is a fluctuating quantity with a distribution that reflects the firing statistics of the presynaptic population. It was recently found that conductance-based synaptic drive can lead to distributions with a significant skew. Here it is demonstrated that the underlying shot noise caused by Poissonian spike arrival also skews the membrane distribution, but in the opposite sense. Using a perturbative method, we analyze the effects of shot noise on the distribution of synaptic conductances and calculate the consequent voltage distribution. To first order in the perturbation theory, the voltage distribution is a gaussian modulated by a prefactor that captures the skew. The gaussian component is identical to distributions derived using current-based models with an effective membrane time constant. The well-known effective-time-constant approximation can therefore be identified as the leading-order solution to the full conductance-based model. The higher-order modulatory prefactor containing the skew comprises terms due to both shot noise and conductance fluctuations. The diffusion approximation misses these shot-noise effects implying that analytical approaches such as the Fokker-Planck equation or simulation with filtered white noise cannot be used to improve on the gaussian approximation. It is further demonstrated that quantities used for fitting theory to experiment, such as the voltage mean and variance, are robust against these non-Gaussian effects. The effective-time-constant approximation is therefore relevant to experiment and provides a simple analytic base on which other pertinent biological details may be added.
Spatial interpolation of forest conditions using co-conditional geostatistical simulation
H. Todd Mowrer
2000-01-01
In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...
Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.
Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-01-04
Cassava (Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments were conducted at the International Institute of Tropical Agriculture in cassava to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy of estimated breeding values. We took an exploratory approach, using the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. The predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios when spatial correlation was significant. Copyright © 2018 Elias et al.
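A Gaussian spatial kernel of the kind described can be sketched as a similarity matrix over field rows (the row positions and bandwidth below are invented):

```python
import numpy as np

# Hypothetical row positions in a field trial and an assumed bandwidth.
rows = np.arange(1.0, 11.0)                  # 10 rows
theta = 2.0                                  # kernel bandwidth (assumed)

d = np.abs(rows[:, None] - rows[None, :])    # pairwise row distances
K = np.exp(-((d / theta) ** 2))              # Gaussian spatial kernel

# K is a valid covariance-like kernel: symmetric with unit diagonal,
# and similarity decays smoothly with row distance.
assert np.allclose(K, K.T)
assert np.allclose(np.diag(K), 1.0)
```

A kernel like this would enter a mixed model alongside the genomic kernel, letting the spatial variance component absorb row-wise field heterogeneity.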
Zhou, Ji; He, Zhihong; Ma, Yu; Dong, Shikui
2014-09-20
This paper discusses Gaussian laser transmission in a double-refraction crystal whose absorption band contains the incident light wavelength. Two scenarios for coupled radiation and heat conduction are considered: one with an applied external electric field, the other without. A circular heat source with a Gaussian energy distribution is introduced to represent the crystal's light-absorption process. The electromagnetic field frequency-domain analysis equation and the energy equation are solved to simulate the phenomenon using the finite element method. The study focuses on the influence of parameters such as wavelength, incident light intensity, heat transfer coefficient, ambient temperature, crystal thickness, and applied electric field strength. The results show that the refractive index of the polarized light increases as the crystal temperature increases, and decreases as the strength of the applied electric field increases if the field is positive. The mechanism of electrical modulation of the thermo-optical effect is used to keep the polarized light's refractive index constant in our simulation, and the quantitative relation between the thermal boundary condition and the strength of the applied electric field during electrical modulation is determined. Numerical results indicate a possible approach to removing adverse thermal effects such as depolarization and wavefront distortion, which are caused by thermal deposition during linear laser absorption.
Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro
2008-03-01
We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6 billion cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5 σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study, Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual-subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
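A toy sketch of the underlying idea (not the authors' pipeline): approximate structured noise in a residual matrix by its leading principal component and remove it, which reduces the temporal autocorrelation of the residuals. The data and artifact below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy residual matrix (time x voxels): white noise plus one shared,
# temporally smooth artifact (a sinusoid with random voxel weights).
T, V = 200, 500
artifact = np.outer(np.sin(np.linspace(0, 8 * np.pi, T)), rng.normal(size=V))
resid = artifact + rng.normal(size=(T, V))

# Estimate the structured part via the leading principal component of the
# residuals (SVD of the column-demeaned matrix) and subtract it.
u, s, vt = np.linalg.svd(resid - resid.mean(axis=0), full_matrices=False)
n_pc = 1
cleaned = resid - (u[:, :n_pc] * s[:n_pc]) @ vt[:n_pc]

def mean_lag1(r):
    """Lag-1 temporal autocorrelation, averaged over voxels."""
    r = r - r.mean(axis=0)
    return np.mean(np.sum(r[1:] * r[:-1], axis=0) / np.sum(r * r, axis=0))

# Removing the component leaves residuals closer to temporally white.
assert abs(mean_lag1(cleaned)) < abs(mean_lag1(resid))
```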
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with sample degeneracy of the particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods. PMID:27187405
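Heavy-tailed noise of the kind assumed above can be drawn via the standard multivariate Student's t construction (a generic sketch, not the authors' nGDPS; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard construction: t = mu + z / sqrt(g) with z ~ N(0, Sigma) and
# g ~ chi2(nu) / nu gives a multivariate Student's t with nu degrees of
# freedom and covariance Sigma * nu / (nu - 2) for nu > 2.
def multivariate_t(mean, cov, df, size, rng):
    z = rng.multivariate_normal(np.zeros(len(mean)), cov, size=size)
    g = rng.chisquare(df, size=size) / df
    return mean + z / np.sqrt(g)[:, None]

mean = np.zeros(2)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
samples = multivariate_t(mean, cov, df=6, size=50000, rng=rng)

# With nu = 6 the sample covariance approaches Sigma * 6/4 = 1.5 * Sigma.
emp_var = np.cov(samples.T)[0, 0]
assert abs(emp_var - 1.5) < 0.15
```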
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing identified highly interferential features. Nevertheless, some can be managed by allowing paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied to a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. The sparsity of the proposed solution is also discussed.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
Intra prediction in the H.264 video coding standard is used to code the first (intra) frame of a video and achieves better coding efficiency than earlier standards in the series. Intra-frame coding reduces spatial pixel redundancy within the current frame, lowers computational complexity, and provides better rate-distortion performance. The standard codes intra frames using rate-distortion optimization (RDO), but this method increases computational complexity and bit rate and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra-frame coding. Previous fast mode decision algorithms for intra-frame coding in H.264, based on a variety of techniques, suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters; most achieved only a reduction in computational complexity or encoding time, at the cost of higher bit rate and lower picture quality. To avoid these losses, this paper develops a better approach: a Gaussian pulse for intra-frame coding with the diagonal down-left intra prediction mode, aimed at higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 integer-transformed coefficient block at the macroblock level scales the information of the coefficients in a reversible manner; the frequency samples are altered in a known and controllable way without intermixing of coefficients, which prevents the picture from degrading badly at higher quantization parameter values. The proposed work was implemented using MATLAB and the JM 18.6 reference software, measuring PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution under different quantization parameter values, with the Gaussian value applied for the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al.; the proposed algorithm reduces bit rate by 30.98% on average while maintaining consistent picture quality for QCIF sequences.
PET image reconstruction using multi-parametric anato-functional priors
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for the reconstruction of low-count PET data is also introduced and demonstrated, compared with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio prior and non-local Tikhonov priors with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For the simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET-unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET-unique features and also an improved bias-variance performance.
In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher and the joint Burg entropy priors were the best performing priors. However, for the FDG dataset with simulated tumours, the TV and proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to comparable image quality to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
Extremality of Gaussian quantum states.
Wolf, Michael M; Giedke, Geza; Cirac, J Ignacio
2006-03-03
We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.
Modified Gaussian influence function of deformable mirror actuators.
Huang, Linhai; Rao, Changhui; Jiang, Wenhan
2008-01-07
A new deformable mirror influence function based on a Gaussian function is introduced to analyze the fitting capability of a deformable mirror. The modified expressions for both azimuthal and radial directions are presented based on the analysis of the residual error between a measured influence function and a Gaussian influence function. With a simplex search method, we further compare the fitting capability of our proposed influence function to fit the data produced by a Zygo interferometer with that of a Gaussian influence function. The result indicates that the modified Gaussian influence function provides much better performance in data fitting.
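As an illustration of the fitting step (using a classical amplitude-width Gaussian rather than the authors' modified expressions, and synthetic rather than interferometer data), a measured influence function can be fitted by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical Gaussian influence function with amplitude a and width w.
def gaussian_influence(r, a, w):
    return a * np.exp(-((r / w) ** 2))

# Synthetic "measured" data: a Gaussian plus a small deterministic
# residual standing in for the deviations the authors model.
a_true, w_true = 1.0, 0.66          # hypothetical stroke and width
r = np.linspace(0.0, 2.0, 50)
measured = gaussian_influence(r, a_true, w_true) + 0.005 * np.sin(8 * r)

popt, _ = curve_fit(gaussian_influence, r, measured, p0=[0.8, 0.5])
a_fit, w_fit = popt
assert abs(a_fit - a_true) < 0.05
assert abs(w_fit - w_true) < 0.05
```

A modified influence function of the kind the paper proposes would replace `gaussian_influence` with a form carrying extra azimuthal and radial correction terms, fitted the same way.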
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. 
We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
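The one-point Gaussianization step can be sketched with SciPy's maximum-likelihood Box-Cox transform (a lognormal sample stands in for an offset convergence field; purely illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A positively skewed, strictly positive field, standing in for an
# offset weak-lensing convergence map flattened to one dimension.
kappa = rng.lognormal(mean=0.0, sigma=0.6, size=20000)

# Maximum-likelihood Box-Cox transformation toward Gaussianity;
# boxcox returns the transformed data and the fitted lambda.
transformed, lam = stats.boxcox(kappa)

# The optimal transformation strongly reduces the skewness.
assert abs(stats.skew(transformed)) < abs(stats.skew(kappa))
```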
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for determining the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying matrix Young's modulus serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their probability density functions (PDFs).
NASA Astrophysics Data System (ADS)
Shukri, Seyfan Kelil
2017-01-01
We have performed Kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is considered to be Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with charge carrier density for both DOSs. In contrast, the mobility of the charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a significant, clearly visible effect on the charge transport properties as a function of density. On the other hand, for the same distribution width and at low carrier density, the change in conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.
A comparison of heuristic and model-based clustering methods for dietary pattern analysis.
Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia
2016-02-01
Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible than the commonly applied k-means and Ward's methods. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance when the true cluster membership of observations is known. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72% to 100% of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64%-100% of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means method seems to be a good alternative, being easier to use while giving similar results when applied to real data.
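A minimal sketch of the model-based clustering idea: a two-component univariate Gaussian mixture fitted by EM on invented data (a full analysis would use a package such as scikit-learn's `GaussianMixture` or mclust):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two invented "dietary pattern" clusters in one dimension.
x = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(3.0, 1.0, 300)])

# EM for a two-component univariate Gaussian mixture.
mu = np.array([x.min(), x.max()])   # crude initialization
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    # (the 1/sqrt(2*pi) constant cancels in the normalization).
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations.
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

# The fitted component means recover the two simulated clusters.
assert abs(mu.min() - 0.0) < 0.3
assert abs(mu.max() - 3.0) < 0.3
```

Unlike k-means, the fitted standard deviations differ between components, which is exactly the extra flexibility (cluster volume and shape) the abstract credits to the GMM.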
NASA Astrophysics Data System (ADS)
Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne
2017-03-01
The development and use of dispersion models that simulate traffic-related air pollution in urban areas has risen significantly in support of air pollution exposure research. In order to accurately estimate population exposure, it is important to generate concentration surfaces that take into account near-road concentrations as well as the transport of pollutants throughout an urban region. In this paper, an integrated modelling chain was developed to simulate ambient nitrogen dioxide (NO2) in a dense urban neighbourhood while taking into account traffic emissions, the regional background, and the transport of pollutants within the urban canopy. For this purpose, we developed a hybrid configuration including 1) a street canyon model, which simulates pollutant transfer along streets and intersections, taking into account the geometry of buildings and other obstacles, and 2) a Gaussian puff model, which resolves the transport of contaminants at the top of the urban canopy and accounts for regional meteorology. Each dispersion model was validated against measured concentrations and compared against the hybrid configuration. Our results demonstrate that the hybrid approach significantly improves on the output of each model on its own. Both the Gaussian model and the street-canyon model clearly underestimate the observed concentrations: the Gaussian model ignores the effect of buildings, while the canyon model underestimates the contribution of other roads. The hybrid approach reduced the RMSE (of observed vs. predicted concentrations) by 16%-25% compared to each model on its own, and increased FAC2 (fraction of predictions within a factor of two of the observations) by 10%-34%.
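The Gaussian component of such a modelling chain rests on the textbook Gaussian plume/puff form; a minimal steady-state ground-level sketch (all parameter values are hypothetical, not taken from the study):

```python
import numpy as np

# Ground-level concentration of a ground-level source with full surface
# reflection:  C(y) = Q / (pi * u * sy * sz) * exp(-y**2 / (2 * sy**2)).
def plume_ground(Q, u, y, sy, sz):
    return Q / (np.pi * u * sy * sz) * np.exp(-(y**2) / (2 * sy**2))

Q = 10.0             # emission rate, g/s (hypothetical)
u = 3.0              # wind speed, m/s (hypothetical)
sy, sz = 20.0, 10.0  # dispersion widths at some downwind distance, m

c_axis = plume_ground(Q, u, 0.0, sy, sz)   # on the plume centreline
c_off = plume_ground(Q, u, 40.0, sy, sz)   # 2*sigma_y off-axis

# Two sigma_y off-axis, the concentration drops by a factor exp(-2).
assert abs(c_off / c_axis - np.exp(-2.0)) < 1e-12
```

A hybrid scheme in the spirit of the paper would use a form like this above the canopy and hand concentrations to a canyon model at street level.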
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
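The transformation step described above can be sketched with inverse-transform sampling, which turns uniform pseudo-random numbers into draws from a prescribed non-Gaussian distribution (exponential and Weibull shown; the distribution parameters are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(100_000)  # uniform(0, 1) pseudo-random numbers

# Inverse-transform sampling: X = F^{-1}(U) follows the distribution F.
exponential = -np.log(1.0 - u)               # Exp(1):  F(x) = 1 - e^{-x}
weibull = (-np.log(1.0 - u)) ** (1 / 2.0)    # Weibull(k=2, lambda=1)

mean_exp = exponential.mean()    # theoretical mean: 1
mean_wei = weibull.mean()        # theoretical mean: Gamma(1.5) ~= 0.8862
```

The peak statistics of a load history built from such draws can then be compared against measured maneuver-load distributions, as the abstract describes.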
A theoretical study on 2-amino-5-nitropyridinium trifluoroacetate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arioğlu, Çağla, E-mail: caglaarioglu@gmail.com; Tamer, Ömer, E-mail: omertamer@sakarya.edu.tr; Başoğlu, Adil, E-mail: abasoglu@sakarya.edu.tr
The geometry optimization of the 2-amino-5-nitropyridinium trifluoroacetate molecule was carried out using Becke's three-parameter exchange functional in conjunction with the Lee-Yang-Parr correlation functional (B3LYP) level of density functional theory (DFT) and the 6-311++G(d,p) basis set, as implemented in the GAUSSIAN 09 program. The vibrational spectrum of the title compound was simulated to predict the presence of functional groups and their vibrational modes. The highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies were calculated at the same level, and the small energy gap obtained shows that charge transfer occurs in the title compound. The molecular dipole moment, polarizability and hyperpolarizability parameters were determined to evaluate the nonlinear optical efficiency of the title compound. Finally, the ¹³C and ¹H nuclear magnetic resonance (NMR) chemical shift values were calculated by applying the gauge-independent atomic orbital (GIAO) method. All of the calculations were carried out using the GAUSSIAN 09 program.
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. On both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter estimates for the regression models, and robustness to overfitting and overly complex models.
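The robustness argument above (the Student-t's heavier tails down-weighting outliers relative to a Gaussian) can be illustrated on a simple one-dimensional location-scale fit with SciPy; this is a toy sketch, not the paper's variational mixture-of-experts model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Clean unit-variance data plus a few gross outliers.
clean = rng.normal(0.0, 1.0, 500)
data = np.concatenate([clean, [25.0, -30.0, 40.0]])

# The Gaussian maximum-likelihood scale estimate is inflated by the
# outliers ...
mu_g, sigma_g = data.mean(), data.std()

# ... while the heavy-tailed Student-t fit down-weights them, recovering
# a location near 0 and a scale near the clean value of 1.
df_t, loc_t, scale_t = stats.t.fit(data)
```

The same effect operates inside the robust mixture of experts: each expert's Student-t likelihood prevents a handful of outliers from biasing its regression parameters.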
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least-squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
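The quantities being extracted — two mean velocity components, their standard deviations, and the correlation coefficient of an assumed 2D Gaussian flow PDF — can be sketched with simple moment estimates on simulated samples. The moment formulas below stand in for the paper's centroid and ellipse-fitting procedure, and the flow statistics are made-up test values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated velocity-component samples from a correlated 2D Gaussian
# flow PDF (illustrative mean and covariance).
mean = [10.0, 2.0]
cov = [[4.0, 1.2], [1.2, 1.0]]   # correlation = 1.2 / (2 * 1) = 0.6
v = rng.multivariate_normal(mean, cov, size=50_000)

u_mean, w_mean = v.mean(axis=0)   # ensemble centroid -> mean velocities
u_std, w_std = v.std(axis=0)      # turbulence intensities
rho = np.corrcoef(v[:, 0], v[:, 1])[0, 1]
```

In the actual LTA reduction these moments are recovered after first transforming the raw time-and-angle measurements into velocity space.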
Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho
2007-08-20
In this paper, a simple confocal laser scanning microscopy (CLSM) image mapping technique based on finite-difference time-domain (FDTD) calculation is proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation method adopts a focused Gaussian beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through well-matched simulation and experimental results for the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied to 3D near-field microscopic imaging, optical trapping and evanescent-wave phenomena, as well as to state-of-the-art bio- and nanophotonics.
Rubber elasticity for percolation network consisting of Gaussian chains.
Nishi, Kengo; Noguchi, Hiroshi; Sakai, Takamasa; Shibayama, Mitsuhiro
2015-11-14
A theory describing the elastic modulus for percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed, and its validity is examined with simulation and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus G at cross-linking conversion p to the modulus G0 at p = 1 must equal G/G0 = (p - 2/f)/(1 - 2/f) if the positions of the sites can be determined so as to meet the force balance, where f is the lattice functionality. However, the EMA prediction is not applicable near the percolation threshold because EMA is a mean-field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.
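The quoted EMA result is easy to evaluate directly; a short sketch, taking f = 4 (a diamond-like lattice) as an illustrative choice:

```python
def ema_modulus_ratio(p, f):
    """EMA prediction G/G0 = (p - 2/f)/(1 - 2/f) for an f-functional
    lattice at cross-linking conversion p (a mean-field result, so not
    reliable near the percolation threshold)."""
    return (p - 2.0 / f) / (1.0 - 2.0 / f)

# For f = 4: a fully reacted network recovers G/G0 = 1, and the EMA
# modulus vanishes linearly at p = 2/f = 0.5.
r_full = ema_modulus_ratio(1.0, 4)
r_thresh = ema_modulus_ratio(0.5, 4)
```

The REMA correction in the abstract exists precisely because this linear vanishing at p = 2/f is a mean-field artifact near the true percolation threshold.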
Electric Field Fluctuations in Water
NASA Astrophysics Data System (ADS)
Thorpe, Dayton; Limmer, David; Chandler, David
2013-03-01
Charge transfer in solution, such as autoionization and ion pair dissociation in water, is governed by rare electric field fluctuations of the solvent. Knowing the statistics of such fluctuations can help explain the dynamics of these rare events. Trajectories short enough to be tractable by computer simulation are virtually certain not to sample the large fluctuations that promote rare events. Here, we employ importance sampling techniques with classical molecular dynamics simulations of liquid water to study the statistics of electric field fluctuations far from their means. We find that the distributions of electric fields located on individual water molecules are not in general Gaussian. Near the mean this non-Gaussianity is due to the internal charge distribution of the water molecule. Further from the mean, however, there is a previously unreported Bjerrum-like defect that stabilizes certain large fluctuations out of equilibrium. As expected, differences in electric fields acting between molecules are Gaussian to a remarkable degree. By studying these differences, though, we are able to determine what configurations result not only in large electric fields, but also in electric fields with the long spatial correlations that may be needed to promote charge separation.
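The role of importance sampling here — making far-from-mean fluctuations observable that direct simulation would essentially never visit — can be sketched on a toy one-dimensional Gaussian tail probability; the 5-sigma threshold and the shifted proposal are illustrative, not anything from the molecular simulations.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Probability that a standard Gaussian fluctuation exceeds 5 sigma
# (true value ~2.87e-7): direct sampling essentially never sees it.
direct = np.mean(rng.normal(0.0, 1.0, n) > 5.0)

# Importance sampling: draw from a proposal shifted into the tail and
# reweight each sample by the density ratio target/proposal.
shift = 5.0
x = rng.normal(shift, 1.0, n)
weights = np.exp(-0.5 * x ** 2) / np.exp(-0.5 * (x - shift) ** 2)
p_is = np.mean((x > 5.0) * weights)
```

The same logic, with umbrella-style biases on the electric field rather than a shifted Gaussian, is what lets the study resolve field distributions many standard deviations from the mean.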
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or when the assumed correlation is close to the truth, even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
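The cost of wrongly assuming conditional independence can be sketched by scoring correlated bivariate data under a full versus a diagonal (independence) covariance; the correlation value is an illustrative assumption, and a single Gaussian stands in for one mixture component.

```python
import numpy as np

rng = np.random.default_rng(4)

# Correlated repeated measurements: the true covariance is NOT diagonal.
cov_true = np.array([[1.0, 0.8], [0.8, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov_true, size=5000)

def gauss_loglik(x, cov):
    """Mean-zero multivariate Gaussian log-likelihood per observation."""
    k = x.shape[1]
    inv = np.linalg.inv(cov)
    sign, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ni,ij,nj->n', x, inv, x)
    return np.mean(-0.5 * (k * np.log(2 * np.pi) + logdet + quad))

ll_full = gauss_loglik(x, np.cov(x.T))                        # correct
ll_diag = gauss_loglik(x, np.diag(np.cov(x.T).diagonal()))    # misspecified
```

The likelihood gap between the two fits is the information the misspecified model throws away; in a mixture, that loss propagates into biased component assignments unless, as the paper finds, the components are well separated.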
Spatiotemporal Airy Ince-Gaussian wave packets in strongly nonlocal nonlinear media.
Peng, Xi; Zhuang, Jingli; Peng, Yulian; Li, DongDong; Zhang, Liping; Chen, Xingyu; Zhao, Fang; Deng, Dongmei
2018-03-08
The self-accelerating Airy Ince-Gaussian (AiIG) and Airy helical Ince-Gaussian (AihIG) wave packets in strongly nonlocal nonlinear media (SNNM) are obtained by solving the strongly nonlocal nonlinear Schrödinger equation. For the first time, the propagation properties of three-dimensional localized AiIG and AihIG breathers and solitons in the SNNM are demonstrated; these spatiotemporal wave packets maintain their self-accelerating and approximately non-dispersive properties in the temporal dimension while oscillating periodically (breather state) or remaining steady (soliton state) in the spatial dimension. In particular, numerical experiments on the spatial intensity distribution, numerical simulations of the spatiotemporal distribution, and the transverse energy flow and angular momentum in the SNNM are presented. Typical examples of the obtained solutions are characterized by the ratio between the input power and the critical power, the ellipticity, and the strong-nonlocality parameter. Comparisons of the analytical solutions with numerical simulations and numerical experiments for the AiIG and AihIG optical solitons show that the numerical results agree well with the analytical solutions in the case of strong nonlocality.
Effect of asymmetric concentration profile on thermal conductivity in Ge/SiGe superlattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Konstanze R., E-mail: konstanze.hahn@dsf.unica.it; Cecchi, Stefano; Colombo, Luciano
2016-05-16
The effect of the chemical composition of Si/Ge-based superlattices on their thermal conductivity has been investigated using molecular dynamics simulations. Simulation cells of Ge/SiGe superlattices were generated with different concentration profiles such that the Si concentration follows a step-like, a saw-tooth, a Gaussian, or a gamma-type function in the direction of the heat flux. The step-like and saw-tooth profiles mimic ideally sharp interfaces, whereas the Gaussian and gamma-type profiles are smooth functions imitating atomic diffusion at the interface as observed experimentally. Symmetry effects were investigated by comparing the symmetric profiles of the step-like and Gaussian functions to the asymmetric profiles of the saw-tooth and gamma-type functions. At longer sample lengths and a similar degree of interdiffusion, the thermal conductivity is found to be lower for asymmetric profiles. Furthermore, the thermal conductivity is found to be higher for smooth concentration profiles, where atomic diffusion at the interface takes place, than for systems with atomically sharp concentration profiles.
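The four concentration-profile shapes can be sketched over one superlattice period; the widths and shape parameters below are illustrative choices, not the values used in the simulations.

```python
import numpy as np

z = np.linspace(0.0, 1.0, 200)   # position across one superlattice period

# Idealized sharp interfaces: symmetric step and asymmetric saw-tooth.
step = np.where(z < 0.5, 1.0, 0.0)
sawtooth = 1.0 - z

# Smooth profiles mimicking interdiffusion: symmetric Gaussian and
# asymmetric gamma-type (width and shape/scale chosen for illustration).
gaussian = np.exp(-(z - 0.5) ** 2 / (2 * 0.1 ** 2))
gamma_like = z ** 2 * np.exp(-z / 0.15)
gamma_like /= gamma_like.max()   # normalize peak concentration to 1
```

Comparing the symmetric pair (step, Gaussian) to the asymmetric pair (saw-tooth, gamma-type) is exactly the symmetry contrast the abstract exploits.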
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models which cannot be dealt with by a single GP, although how to manage bifurcation boundaries that are not parallel to the coordinate axes remains an open problem.
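A minimal numerical sketch of the treed idea: partition the input space at a discontinuity and fit an independent GP in each region. The decision-tree partitioning of the actual method is replaced here by a hand-fixed split at x = 1, and the bifurcating response is a toy step-shifted sine.

```python
import numpy as np

def gp_predict(Xtr, ytr, Xte, ell=0.5, noise=1e-4):
    """Zero-mean GP regression posterior mean with an RBF kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return k(Xte, Xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(5)
X = np.sort(rng.random(200)) * 2.0
# Discontinuous ("bifurcating") response: a single smooth GP struggles here.
y = np.where(X < 1.0, np.sin(4 * X), 3.0 + np.sin(4 * X))

Xte = np.array([0.5, 1.5])
# Treed approach: one GP per region, split at the (known) boundary x = 1.
pred_treed = np.array([
    gp_predict(X[X < 1.0], y[X < 1.0], Xte[:1])[0],
    gp_predict(X[X >= 1.0], y[X >= 1.0], Xte[1:])[0],
])
```

Each regional GP interpolates its smooth branch accurately, whereas a single GP would smear the jump across the boundary; the open problem noted in the abstract is that axis-aligned splits like this one cannot follow oblique bifurcation boundaries.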
A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems
NASA Astrophysics Data System (ADS)
Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain
2016-08-01
In this article, we propose a spectral-domain approach to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illuminating the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model the curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the Gaussian beams that depend on either a paraxial or a far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case in which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.
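The spectral-domain machinery underlying such beam methods can be illustrated with a plain angular-spectrum propagation of a 1D Gaussian beam, checked against the analytic beam-expansion law; the grid and beam parameters are arbitrary illustrative choices.

```python
import numpy as np

# Beam and grid parameters (illustrative): wavelength, waist radius,
# propagation distance, grid size and window, all in metres.
wl, w0, dist = 1.0e-6, 5.0e-6, 50.0e-6
n, L = 1024, 200.0e-6
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
field = np.exp(-x ** 2 / w0 ** 2)        # Gaussian beam at its waist

# Angular spectrum of plane waves: FFT, multiply by the free-space
# propagator exp(i kz dist), inverse FFT (evanescent components clipped).
fx = np.fft.fftfreq(n, d=L / n)
kz = 2 * np.pi * np.sqrt(np.maximum(1 / wl ** 2 - fx ** 2, 0.0))
out = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dist))

# Compare with the analytic expansion law w(z) = w0 sqrt(1 + (z/zR)^2).
zR = np.pi * w0 ** 2 / wl
w_expected = w0 * np.sqrt(1 + (dist / zR) ** 2)
I = np.abs(out) ** 2
w_meas = 2.0 * np.sqrt(np.sum(I * x ** 2) / np.sum(I))   # 1/e^2 radius
```

The plane-wave decomposition computed by the FFT here is exactly the spectral representation in which the article's second-order surface-response matching is carried out.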
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed up convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
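The inversion structure described above — the adjoint (migration) acting as the gradient of an L2 misfit, with an L1 regularizer — can be sketched with a generic iterative soft-thresholding (ISTA) loop, where a random matrix stands in for the linearized Born modeling operator.

```python
import numpy as np

def ista(A, d, lam=0.1, iters=500):
    """Iterative soft-thresholding for min 0.5*||A m - d||^2 + lam*||m||_1.
    A stands in for the linearized (Born) modeling operator; A.T plays
    the role of migration (the adjoint), applied to the data residual."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    m = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ m - d)          # adjoint ("migration") of residual
        m = m - step * g               # gradient step on the L2 misfit
        m = np.sign(m) * np.maximum(np.abs(m) - step * lam, 0.0)  # L1 shrink
    return m

rng = np.random.default_rng(6)
A = rng.normal(size=(80, 120))         # under-determined linear operator
m_true = np.zeros(120)
m_true[[7, 40, 90]] = [1.0, -2.0, 1.5] # sparse reflectivity-like model
d = A @ m_true                         # noiseless synthetic data
m_rec = ista(A, d, lam=0.05, iters=3000)
```

A single application of `A.T @ d` corresponds to conventional migration; iterating with the threshold is what turns the adjoint image into an approximate inverse, which is the point the abstract makes about least-squares migration.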