Sample records for inverse Gaussian function

  1. Stable Lévy motion with inverse Gaussian subordinator

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Gajda, J.

    2017-09-01

    In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well-known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with an inverse Gaussian process. The NIG process has found many interesting applications, especially in the description of financial data. We discuss here the main features of the introduced subordinated process, such as its distributional properties, the existence of fractional-order moments, and asymptotic tail behavior. We show the connection of the process with continuous-time random walks. Further, the governing fractional partial differential equation for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of the sample mean square displacement, the main tool in the detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by the inverse Gaussian subordinator, we propose a step-by-step parameter estimation procedure. At the end, we show how the examined process can be used to model financial time series.
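    As a rough illustration (not the authors' code), the time change described above can be sketched with SciPy: accumulate i.i.d. inverse Gaussian increments into a random clock, then use the self-similarity of stable motion to draw the subordinated increments. All parameter values here are arbitrary.

```python
import numpy as np
from scipy.stats import invgauss, levy_stable

rng = np.random.default_rng(0)
n, dt, alpha = 500, 1.0, 1.7  # illustrative step count, time step, stability index

# Inverse Gaussian subordinator: i.i.d. IG increments accumulate into a
# non-decreasing random clock T(t).
ig_inc = invgauss.rvs(mu=1.0, scale=dt, size=n, random_state=rng)
clock = np.cumsum(ig_inc)

# Stable Levy motion evaluated at the random clock: by self-similarity, each
# increment is alpha-stable with scale ~ (clock increment)^(1/alpha).
steps = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
path = np.cumsum(steps * ig_inc ** (1.0 / alpha))
```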

  2. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solids and fluids, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analyzing NMR relaxometry results, is applied to data containing Gaussian decays, it can produce physically unrealistic responses such as signal or porosity overcall and relaxation times too short to be determined with the applied instrument settings. We apply a new simultaneous Gaussian-exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical, and food sciences.
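    The core idea of fitting a mixed Gaussian-plus-exponential decay can be sketched with a simple least-squares fit (a toy stand-in for the SGE inversion, which solves for full relaxation-time distributions; component amplitudes and time constants below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def sge_decay(t, a_exp, t2, a_gauss, tg):
    """Sum of one exponential and one Gaussian decay component."""
    return a_exp * np.exp(-t / t2) + a_gauss * np.exp(-(t / tg) ** 2)

# Synthetic relaxation data with known components (illustrative values only).
t = np.linspace(0.01, 5.0, 200)
rng = np.random.default_rng(1)
data = sge_decay(t, 1.0, 1.5, 0.8, 0.3) + rng.normal(0.0, 0.01, t.size)

# Non-negative amplitudes and time constants, as physically expected.
popt, _ = curve_fit(sge_decay, t, data, p0=[0.5, 1.0, 0.5, 0.5],
                    bounds=(0.0, np.inf))
```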

  3. A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering

    NASA Astrophysics Data System (ADS)

    Milder, A. L.; Froula, D. H.

    2017-10-01

    A. B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Z v_osc^2 / v_th^2 > 1, the inverse bremsstrahlung heating rate is sufficiently fast to compete with electron-electron collisions. This process preferentially heats the subthermal electrons, leading to super-Gaussian distribution functions. A method to identify the super-Gaussian order of the distribution functions in these plasmas using collective Thomson scattering will be proposed. By measuring the collective Thomson spectra over a range of angles, the density, temperature, and super-Gaussian order can be determined. This is accomplished by fitting non-Maxwellian distribution data with a super-Gaussian model; in order to match the density and electron temperature to within 10%, the super-Gaussian order must be varied. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
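    A normalized 1D super-Gaussian of order m, f(v) = C exp(-(|v|/v0)^m), can be sketched as follows (an illustrative construction, not the authors' fitting code; m = 2 recovers the Gaussian case, and larger m depletes the superthermal tail):

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.integrate import trapezoid

def super_gaussian_pdf(v, v0, m):
    """Normalized 1D super-Gaussian f(v) = C exp(-(|v|/v0)^m)."""
    c = m / (2.0 * v0 * gamma_fn(1.0 / m))  # from int exp(-(|v|/v0)^m) dv
    return c * np.exp(-(np.abs(v) / v0) ** m)

v = np.linspace(-6.0, 6.0, 4001)
orders = (2.0, 3.0, 5.0)
# Each profile integrates to ~1; higher order means a flatter core and
# fewer superthermal electrons.
norms = [trapezoid(super_gaussian_pdf(v, 1.0, m), v) for m in orders]
```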

  4. Non-Gaussianity in a quasiclassical electronic circuit

    NASA Astrophysics Data System (ADS)

    Suzuki, Takafumi J.; Hayakawa, Hisao

    2017-05-01

    We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuations of the dissipative LC circuit with a stochastic method and evaluate the quantum correction to the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable for correctly estimating the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.

  5. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
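    The modulation (doubly stochastic) structure of such a model can be sketched by multiplying independent unit-mean gamma and inverse Gaussian samples (a generic illustration; the shape parameters a and lam below are arbitrary, not the paper's atmospheric parameters):

```python
import numpy as np
from scipy.stats import gamma, invgauss

rng = np.random.default_rng(2)
n = 200_000
a, lam = 4.0, 2.5  # hypothetical small-scale (gamma) and large-scale (IG) shapes

# Both factors are scaled to unit mean so the product also has unit mean.
small = gamma.rvs(a, scale=1.0 / a, size=n, random_state=rng)
large = invgauss.rvs(mu=1.0 / lam, scale=lam, size=n, random_state=rng)

# Modulated irradiance: small-scale fluctuations riding on large-scale ones.
irradiance = small * large
```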

  6. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.

  7. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for the simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white-noise sample set of random Gaussian-distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
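    The two-step recipe (spectral coloring, then a memoryless marginal transform) can be sketched as follows; the low-pass filter and gamma target marginal are assumptions for illustration, not the paper's examples. Note that the pointwise transform slightly distorts the imposed spectrum, which is why the abstract calls this an engineering approach.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(3)
n = 256
white = rng.standard_normal((n, n))

# Step 1: color the white noise to a target power spectrum (an assumed
# isotropic low-pass filter; the method admits any spectrum).
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
filt = 1.0 / (1.0 + (k / 0.05) ** 2)
colored = np.fft.ifft2(np.fft.fft2(white) * filt).real
colored /= colored.std()

# Step 2: memoryless transform to the target amplitude PDF via the Gaussian
# CDF (giving uniform marginals) followed by the desired inverse CDF.
u = norm.cdf(colored)
field = gamma.ppf(u, a=2.0)  # correlated field with gamma marginals
```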

  8. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested on a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and extracting observations from the analyses. We conducted 20 identical-twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or of an extremely large observation network, there is little advantage to carrying out assimilation first. At intermediate observation densities, however, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.

  9. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
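    Gaussian beam modeling itself is beyond a short sketch, but the inversion machinery the abstract describes (L2 misfit, L1 regularization, approximate diagonal-Hessian preconditioning) can be illustrated on a toy linear operator with proximal-gradient iterations (ISTA); the operator, reflectivity model, and step sizes are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy linear 'migration' operator L, sparse true reflectivity m, data d.
L = rng.standard_normal((120, 80))
m_true = np.zeros(80)
m_true[[10, 40, 66]] = [1.0, -0.5, 0.8]
d = L @ m_true

# Minimize 0.5||Lm - d||^2 + lam||m||_1, preconditioned by the approximate
# diagonal Hessian diag(L^T L).
lam, step = 0.01, 0.25
H = np.sum(L * L, axis=0)                # diagonal of L^T L
m = np.zeros(80)
for _ in range(3000):
    g = L.T @ (L @ m - d)                # gradient of the L2 misfit
    z = m - step * g / H                 # preconditioned gradient step
    m = np.sign(z) * np.maximum(np.abs(z) - lam * step / H, 0.0)  # soft threshold
```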

  10. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-07-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  11. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  12. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA; with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method based on a local fast Fourier transform is proposed, which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  13. Bayesian Inference in Satellite Gravity Inversion

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.

    2005-01-01

    To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. Here the inverse problem is formulated as Bayesian inference, and Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian basin. The interpretation model is a right vertical cylinder. The parameters of the model are obtained from the minimization problem, solved by the Simplex method.
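    The structure of such a fit (Gaussian likelihood minimized with the derivative-free Simplex, i.e. Nelder-Mead, method) can be sketched on a toy anomaly profile; the bell-shaped forward model below is a hypothetical stand-in for the vertical-cylinder gravity response, with invented parameter values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x = np.linspace(-50.0, 50.0, 101)  # profile distance (arbitrary units)

def forward(p):
    """Toy anomaly model: amplitude and half-width of a bell-shaped response."""
    amp, width = p
    return amp * np.exp(-(x / width) ** 2)

sigma = 0.5
obs = forward([12.0, 15.0]) + rng.normal(0.0, sigma, x.size)

def neg_log_posterior(p):
    # Gaussian likelihood, flat prior: Bayes reduces to weighted least squares.
    r = obs - forward(p)
    return 0.5 * np.sum((r / sigma) ** 2)

res = minimize(neg_log_posterior, x0=[5.0, 10.0], method='Nelder-Mead')
```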

  14. Efficient evaluation of Coulomb integrals in a mixed Gaussian and plane-wave basis using the density fitting and Cholesky decomposition.

    PubMed

    Čársky, Petr; Čurík, Roman; Varga, Štefan

    2012-03-21

    The objective of this paper is to show that density fitting (the resolution-of-the-identity approximation) can also be applied to Coulomb integrals of the type (k(1)(1)k(2)(1)|g(1)(2)g(2)(2)), where the k and g symbols refer to plane-wave functions and Gaussians, respectively. We show how to achieve the accuracy of these integrals needed in wave-function MO and density-functional-theory-type calculations using mixed Gaussian and plane-wave basis sets. The crucial issues for achieving such high accuracy are the application of constraints conserving the number of electrons and the components of the dipole moment, optimization of the auxiliary basis set, and elimination of round-off errors in the matrix inversion. © 2012 American Institute of Physics.

  15. Anomalous and non-Gaussian diffusion in Hertzian spheres

    NASA Astrophysics Data System (ADS)

    Ouyang, Wenze; Sun, Bin; Sun, Zhiwei; Xu, Shenghua

    2018-09-01

    By means of molecular dynamics simulations, we study non-Gaussian diffusion in a fluid of Hertzian spheres. The time-dependent non-Gaussian parameter, an indicator of dynamic heterogeneity, increases with increasing temperature. When the temperature is high enough, the dynamic heterogeneity becomes very significant, and it seems counterintuitive that the maximum of the non-Gaussian parameter and the position of its peak decrease monotonically with increasing density. By fitting the curves of the self-intermediate scattering function, we find that the characteristic relaxation time τα is, surprisingly, not coupled with the time τmax at which the non-Gaussian parameter reaches its maximum. The intriguing features of non-Gaussian diffusion at high enough temperatures can be associated with the weakly correlated mean-field behavior of Hertzian spheres. In particular, the time τmax is nearly inversely proportional to the density at extremely high temperatures.
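    The non-Gaussian parameter used here is the standard α₂(t) = 3⟨Δr⁴⟩ / (5⟨Δr²⟩²) − 1 for 3D displacements, which vanishes for Gaussian diffusion. A minimal sketch of computing it from displacement vectors (synthetic Gaussian steps, not simulation data):

```python
import numpy as np

def non_gaussian_parameter(displacements):
    """alpha_2 = 3<r^4> / (5<r^2>^2) - 1 for an (N, 3) array of displacements."""
    r2 = np.sum(displacements ** 2, axis=-1)
    return 3.0 * np.mean(r2 ** 2) / (5.0 * np.mean(r2) ** 2) - 1.0

rng = np.random.default_rng(5)
gauss_steps = rng.standard_normal((100_000, 3))
alpha2 = non_gaussian_parameter(gauss_steps)  # near zero for Gaussian steps
```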

  16. Addendum to foundations of multidimensional wave field signal theory: Gaussian source function

    NASA Astrophysics Data System (ADS)

    Baddour, Natalie

    2018-02-01

    Many important physical phenomena are described by wave or diffusion-wave type equations. Recent work has shown that a transform domain signal description from linear system theory can give meaningful insight to multi-dimensional wave fields. In N. Baddour [AIP Adv. 1, 022120 (2011)], certain results were derived that are mathematically useful for the inversion of multi-dimensional Fourier transforms, but more importantly provide useful insight into how source functions are related to the resulting wave field. In this short addendum to that work, it is shown that these results can be applied with a Gaussian source function, which is often useful for modelling various physical phenomena.

  17. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Z; Terry, N; Hubbard, S S

    2013-02-12

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar (GPR) travel-time data through Bayesian inversion, integrated with entropy memory function and pilot point concepts as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in inverting GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  18. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.

    2013-02-22

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar (GPR) travel-time data through Bayesian inversion, integrated with entropy memory function and pilot point concepts as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in inverting GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  19. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    An implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately... probability distribution is given by the inverse of the Hessian of the negative log-likelihood function. For Gaussian data noise and model error, this...

  20. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

    I present a parametric, bijective transformation to generate heavy-tailed versions of arbitrary random variables. The tail behavior of this heavy-tailed Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X Gaussian, it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
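    For the Gaussian special case, the forward map is Tukey's h transform y = x·exp(δx²/2), and solving for x gives x = sign(y)·√(W(δy²)/δ) via the principal branch of the Lambert W function. A minimal round-trip sketch (with an arbitrary δ, not fitted parameters):

```python
import numpy as np
from scipy.special import lambertw

def tukey_h(x, delta):
    """Tukey's h transform: inflates the tails of x for delta > 0."""
    return x * np.exp(0.5 * delta * x ** 2)

def gaussianize(y, delta):
    """Inverse of Tukey's h via the principal branch of Lambert W."""
    w = lambertw(delta * y ** 2).real  # argument >= 0, so W is real here
    return np.sign(y) * np.sqrt(w / delta)

x = np.linspace(-3.0, 3.0, 101)
y = tukey_h(x, delta=0.2)        # heavy-tailed version of x
x_back = gaussianize(y, delta=0.2)  # recovers x up to float precision
```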

  1. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the nonlinear mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology characterized by a tight coupling between the gradient-based Langevin Markov chain Monte Carlo (LMCMC) method and kernel principal component analysis (KPCA). This approach addresses the 'curse of dimensionality' via KPCA, which identifies a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
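    The Langevin sampling ingredient can be sketched on a low-dimensional toy: the unadjusted Langevin update θ ← θ + (ε/2)∇log p(θ) + √ε·ξ applied to a simple non-Gaussian "banana" target (an illustrative target and step size, unrelated to the paper's elasticity model or its KPCA feature space):

```python
import numpy as np

def grad_log_p(theta):
    """Gradient of log p(x, y) = -x^2/2 - (y - x^2)^2/2 (unnormalized)."""
    x, y = theta
    dx = -x + 2.0 * x * (y - x ** 2)
    dy = -(y - x ** 2)
    return np.array([dx, dy])

rng = np.random.default_rng(9)
eps, n_steps = 0.05, 20_000
theta = np.zeros(2)
samples = np.empty((n_steps, 2))
for i in range(n_steps):
    # Unadjusted Langevin step: gradient drift plus Gaussian diffusion.
    theta = theta + 0.5 * eps * grad_log_p(theta) + np.sqrt(eps) * rng.standard_normal(2)
    samples[i] = theta
```

For this target, the x-marginal is standard normal and E[y] = E[x²] = 1, which the chain approximately reproduces after burn-in.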

  2. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF, and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits, with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters (by maximum likelihood; a computational problem) for JPY/CHF, but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
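    The abstract's workflow uses R, but the same fit-by-maximum-likelihood step can be sketched in Python with SciPy's NIG implementation on synthetic data (the parameters below are invented, and real price series can make this optimization fail, as the abstract reports for JPY/CHF):

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(6)
# Synthetic sample from a known NIG law (illustrative, not real FX data).
sample = norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0,
                          size=2000, random_state=rng)

# Maximum-likelihood fit of the four NIG parameters; this numerical step is
# the one that can break down for awkward series.
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(sample)
```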

  3. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with a single truncation to impose positivity of slip, or a double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be derived efficiently. I show that the maximum a posteriori (MAP) estimate can be obtained using a non-negative least-squares algorithm for the single truncated case or the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. 
The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the MCMC-based Bayesian approach. First, the required computing power is greatly reduced. Second, unlike the MCMC-based approach, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
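The single-truncation MAP step described above can be sketched in a few lines. This is a toy setup, not the paper's data: a made-up Green's function matrix `G`, three fault patches, and a broad positivity-truncated Gaussian prior, under which the MAP estimate reduces to non-negative least squares on whitened data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical toy problem: G maps slip on 3 fault patches to 5 surface
# displacements, d holds noisy observations. With a Gaussian likelihood of
# standard deviation sigma and a broad Gaussian prior truncated at zero,
# the MAP reduces to non-negative least squares on whitened data.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 3))
true_slip = np.array([1.0, 0.0, 2.0])          # non-negative by construction
sigma = 0.01
d = G @ true_slip + sigma * rng.standard_normal(5)

slip_map, _ = nnls(G / sigma, d / sigma)       # positivity enforced by NNLS
print(np.round(slip_map, 2))                   # close to true_slip, all >= 0
```

The double-truncation (bounded) case would swap `nnls` for a bounded-variable least-squares solver, as the abstract notes.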

  4. Super-Gaussian laser intensity output formation by means of adaptive optics

    NASA Astrophysics Data System (ADS)

    Cherezova, T. Y.; Chesnokov, S. S.; Kaptsov, L. N.; Kudryashov, A. V.

    1998-10-01

An optical resonator using an intracavity adaptive mirror with three concentric rings of control electrodes, which produces low-loss, large-beamwidth super-Gaussian output of orders 4, 6 and 8, is analyzed. An inverse propagation method is used to determine the appropriate shape of the adaptive mirror. The mirror reproduces this shape with minimal RMS error through a weighted combination of experimentally measured response functions of the mirror sample. The voltages applied to each mirror electrode are calculated. Practical design parameters such as the construction of the adaptive mirror, Fresnel numbers, and geometric factor are discussed.
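For reference, a super-Gaussian intensity profile of order n has the form I(r) = exp(-2 (r/w)^n); order 2 recovers the ordinary Gaussian, and the orders 4, 6 and 8 discussed above flatten progressively toward a top-hat. A small numeric sketch:

```python
import numpy as np

# Super-Gaussian of order n: flat-topped for large n, ordinary Gaussian for n = 2.
def super_gaussian(r, w=1.0, n=2):
    return np.exp(-2.0 * (np.abs(r) / w) ** n)

for n in (2, 4, 6, 8):
    print(n, round(float(super_gaussian(0.5, w=1.0, n=n)), 4))
# Higher orders stay closer to 1 inside the beam (flatter top) and fall off
# more steeply near r = w.
```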

  5. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information, allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm, a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). 
We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
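The spectral representation underlying such parametrizations can be illustrated with a standard FFT-based sketch (this is the generic method, not the authors' improved formulation): white noise is filtered by the square root of a Gaussian power spectrum to produce a stationary 2-D multi-Gaussian field.

```python
import numpy as np

# Minimal spectral simulation of a 2-D multi-Gaussian field with Gaussian
# covariance; corr_len is the correlation length in grid cells.
def gaussian_random_field(n=64, corr_len=8.0, seed=1):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * n                 # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Gaussian covariance in space corresponds to a Gaussian spectral density.
    spectrum = np.exp(-0.5 * (corr_len * 2 * np.pi / n) ** 2 * (kx**2 + ky**2))
    white = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.fft.ifft2(np.sqrt(spectrum) * white).real
    return (field - field.mean()) / field.std()   # zero mean, unit variance

f = gaussian_random_field()
print(f.shape)
```

In the paper's setting, the spectral coefficients (here drawn at random) become the low-dimensional unknowns sampled by MCMC, alongside the geostatistical parameters.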

  6. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
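The simplest dimension-robust sampler of the kind generalized here is the preconditioned Crank-Nicolson (pCN) proposal. A toy sketch (a made-up Gaussian likelihood and unit-covariance prior, not one of the paper's inverse problems): the proposal preserves the Gaussian prior, so the acceptance ratio involves only the likelihood and does not degenerate as the discretization is refined.

```python
import numpy as np

def pcn_step(u, log_like, beta, rng):
    # Prior-preserving proposal: u' = sqrt(1 - beta^2) u + beta xi, xi ~ N(0, I).
    prop = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(u.shape)
    # Acceptance ratio involves only the likelihood, not the prior density.
    if np.log(rng.uniform()) < log_like(prop) - log_like(u):
        return prop, True
    return u, False

rng = np.random.default_rng(0)
log_like = lambda u: -0.5 * np.sum((u - 1.0) ** 2)   # toy Gaussian likelihood
u = np.zeros(50)                                     # 50-dimensional unknown
accepts = 0
for _ in range(2000):
    u, accepted = pcn_step(u, log_like, beta=0.2, rng=rng)
    accepts += accepted
print(accepts / 2000)
```

The DILI samplers go further by weighting the proposal with operators built from local Hessian (likelihood) information rather than using a single scalar step size `beta`.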

  7. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and requires expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimate of uncertainty and estimates of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the capabilities of our emulators in toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and in the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
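The acquire-where-uncertain idea behind such emulators can be sketched with a bare-bones GP. This is a toy 1-D stand-in, not the AGAPE code: a hand-rolled RBF kernel, a cheap function playing the role of the expensive RTM, and an acquisition rule that simply samples where the predictive variance is largest.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(X, y, Xs, jitter=1e-6):
    # GP interpolation: posterior mean and variance at query points Xs.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

expensive_model = lambda x: np.sin(6 * x)     # stand-in for a costly RTM run
X = np.array([0.1, 0.5, 0.9])                 # initial design
Xs = np.linspace(0.0, 1.0, 101)
for _ in range(4):                            # four acquisition steps
    _, var = gp_predict(X, expensive_model(X), Xs)
    X = np.append(X, Xs[np.argmax(var)])      # sample where most uncertain
mean, var = gp_predict(X, expensive_model(X), Xs)
print(len(X), round(float(np.max(var)), 3))
```

AGAPE's actual acquisition function also rewards flatness of the interpolant; here only the variance term is kept for brevity.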

  8. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
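Method (1), grid coarsening, can be illustrated with a tiny aggregation operator on made-up numbers: the operator maps a coarse state to the native grid, and the part of the native state the coarse basis cannot represent is the aggregation error.

```python
import numpy as np

native = np.array([1.0, 1.2, 4.0, 4.1, 0.5, 0.4, 2.0, 2.2])  # native-resolution state
Gamma = np.kron(np.eye(4), np.ones((2, 1)))   # each coarse element spans 2 native cells
coarse = np.linalg.pinv(Gamma) @ native       # aggregation (pairwise averaging here)
print(coarse)                                 # [1.1, 4.05, 0.45, 2.1]

# Aggregation error: the structure imposed (rather than optimized) by the
# reduction, i.e. what the coarse basis misses.
agg_error = native - Gamma @ coarse
print(float(np.abs(agg_error).max()))         # 0.1 at worst for these values
```

The PCA and GMM/RBF reductions replace `Gamma` with smoother, data-informed basis vectors, which is why they incur less aggregation error for the same dimension.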

  9. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  10. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  11. Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.

    PubMed

    Minin, Serge; Kamalabadi, Farzad

    2009-12-20

    We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to Fisher information matrix) of the least-squares error, chi(2), in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramer-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
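The setting above can be reproduced numerically with a nonlinear least-squares fit whose covariance is the inverse of the chi-squared curvature matrix, i.e. the Cramer-Rao-style bounds the paper derives analytically. Parameter values below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian emission line plus constant continuum background.
def model(x, amp, center, width, background):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + background

rng = np.random.default_rng(42)
x = np.linspace(-5.0, 5.0, 200)
truth = (10.0, 0.3, 1.0, 2.0)                 # amp, center, width, background
y = model(x, *truth) + 0.2 * rng.standard_normal(x.size)  # additive white noise

# pcov approximates the inverse curvature (Fisher information) matrix;
# its diagonal gives the squared 1-sigma parameter uncertainties.
popt, pcov = curve_fit(model, x, y, p0=(8.0, 0.0, 1.5, 1.0),
                       sigma=0.2 * np.ones_like(x))
sigmas = np.sqrt(np.diag(pcov))
print(np.round(popt, 2))                      # close to truth
print(np.round(sigmas, 3))                    # 1-sigma uncertainties
```

Repeating this over an ensemble of noise realizations is exactly the numerical check the abstract describes for validating the analytical formulas.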

  12. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution calculated. Plots are produced of error versus filter length; and from these plots the most accurate length filters determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
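The inverse-filter step described above can be sketched for noise-free data. With noise, the division amplifies high-frequency components where the Gaussian response transform is tiny, which is precisely why noise removal such as Morrison's smoothing is applied first. Sizes and widths below are illustrative.

```python
import numpy as np

n = 128
x = np.arange(n)
response = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
response /= response.sum()                    # unit-area narrow Gaussian response
signal = np.zeros(n)
signal[40] = 1.0                              # peak-type input
data = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(response)).real

# Inverse filter: reciprocal of the response transform. Safe only for clean
# data; noisy data must be smoothed first or the division blows up.
recovered = np.fft.ifft(np.fft.fft(data) / np.fft.fft(response)).real
print(float(np.abs(recovered - signal).max()))   # tiny for noise-free data
```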

  13. On the validity of the dispersion model of hepatic drug elimination when intravascular transit time densities are long-tailed.

    PubMed

    Weiss, M; Stedtler, C; Roberts, M S

    1997-09-01

The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
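The single inverse Gaussian density and a two-component mixture can be compared directly. Parameters and weights below are illustrative, not fitted to hepatic outflow data; the point is that the mixture carries more mass in the tail.

```python
import numpy as np

# Inverse Gaussian (Wald) transit-time density with mean mu and shape lam.
def inverse_gaussian_pdf(t, mu, lam):
    return np.sqrt(lam / (2 * np.pi * t**3)) * np.exp(-lam * (t - mu) ** 2 / (2 * mu**2 * t))

t = np.linspace(0.01, 20.0, 4000)
dt = t[1] - t[0]
single = inverse_gaussian_pdf(t, mu=2.0, lam=4.0)
mixture = 0.8 * inverse_gaussian_pdf(t, 2.0, 4.0) + 0.2 * inverse_gaussian_pdf(t, 6.0, 3.0)

print(round(float(single.sum() * dt), 2))        # ~1: a valid density
# The mixture places more mass in the tail (t > 8), the region a single
# inverse Gaussian describes poorly for hepatic outflow curves.
print(float(mixture[t > 8].sum() * dt) > float(single[t > 8].sum() * dt))
```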

  14. Establishment and application of an estimation model for pollutant concentration in agricultural drainage

    NASA Astrophysics Data System (ADS)

    Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji

    2018-02-01

    The estimation of pollutant concentration in agricultural drainage is a key problem in quantitative research on agricultural non-point source pollution load. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, and the pollutant concentration in the drainage is treated as the response to that impulse. The migration and transformation of pollutants in soil are represented by an inverse Gaussian probability density function, and their behavior at different crop growth stages is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated with this model. The results show that the simulated values agree well with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
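The goodness-of-fit score quoted above is the Nash-Sutcliffe efficiency, which compares model error against the variance of the observations. A minimal implementation on made-up observed/simulated values:

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
# better than predicting the mean of the observations.
def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([3.1, 2.8, 2.2, 1.9, 1.5, 1.2])   # made-up concentrations
sim = np.array([3.0, 2.9, 2.1, 2.0, 1.4, 1.3])
print(round(nash_sutcliffe(obs, sim), 3))        # 0.978, close to 1
```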

  15. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with a single truncation to impose positivity of slip, or a double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be derived efficiently. I show that the maximum a posteriori (MAP) estimate can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. 
I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the MCMC-based Bayesian approach. First, the required computing power is greatly reduced. Second, unlike the MCMC-based approach, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.

  16. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that weights the contributions of the data-misfit term and the constraining term in the objective function. If this parameter is too large, the wave equation effectively acts as a hard constraint in the inversion. If it is too small, the solution is poorly constrained, as the optimisation essentially fits the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function even when the trade-off parameter of the latter is appropriately scaled and adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
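The contrast between the two formulations can be seen on a scalar toy problem. The quadratic misfit and constraint terms below are hypothetical stand-ins, not the wavefield operators: the additive cost's minimizer moves as the trade-off parameter changes, whereas the multiplicative cost has no such parameter to tune.

```python
import numpy as np

m = np.linspace(0.0, 5.0, 5001)
data_misfit = (m - 2.0) ** 2 + 0.1        # favors m = 2
constraint = (m - 3.0) ** 2 + 0.1         # favors m = 3 (wave-equation stand-in)

# Additive cost: the recovered model drifts with the trade-off parameter.
for lam in (0.1, 1.0, 10.0):
    m_add = m[np.argmin(data_misfit + lam * constraint)]
    print(lam, round(float(m_add), 2))    # drifts from 2 toward 3 as lam grows

# Multiplicative cost: no trade-off parameter to tune.
m_mult = m[np.argmin(data_misfit * constraint)]
print(round(float(m_mult), 2))
```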

  17. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  18. Offline handwritten word recognition using MQDF-HMMs

    NASA Astrophysics Data System (ADS)

    Ramachandrula, Sitaram; Hambarde, Mangesh; Patial, Ajay; Sahoo, Dushyant; Kochar, Shaivi

    2015-01-01

We propose an improved HMM formulation for offline handwriting recognition (HWR). The main contribution of this work is using the modified quadratic discriminant function (MQDF) [1] within the HMM framework. In an MQDF-HMM, the state observation likelihood is calculated by a weighted combination of MQDF likelihoods of the individual Gaussians of a GMM (Gaussian Mixture Model). The quadratic discriminant function (QDF) of a multivariate Gaussian can be rewritten to avoid explicit inversion of the covariance matrix by using its eigenvalues and eigenvectors. The MQDF is derived from the QDF by substituting a few of the poorly estimated smallest eigenvalues with an appropriate constant. The estimation errors of the non-dominant eigenvectors and eigenvalues of the covariance matrix, for which the training data are insufficient, can be controlled by this approach. MQDF has been shown to improve character recognition performance [1]. Using MQDF within an HMM improves the computation, storage and modeling power of the HMM when training data are limited. We obtained encouraging results on offline handwritten character (NIST database) and word recognition in English using MQDF-HMMs.
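The eigendecomposition trick behind MQDF can be sketched as follows (synthetic covariance, illustrative truncation): the quadratic form is evaluated through eigenvalues and eigenvectors rather than an explicit matrix inverse, and the small eigenvalues are clamped to a constant. With k equal to the dimension and no clamping, it reduces to the ordinary QDF.

```python
import numpy as np

def mqdf(x, mean, cov, k, delta):
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # dominant first
    # Keep the k dominant eigenvalues; clamp the poorly estimated rest to delta.
    lam = np.where(np.arange(len(eigvals)) < k, eigvals, delta)
    proj = eigvecs.T @ (x - mean)                       # no explicit matrix inverse
    return float(np.sum(proj**2 / lam) + np.sum(np.log(lam)))

rng = np.random.default_rng(3)
d = 5
A = rng.standard_normal((d, d))
cov = A @ A.T + 0.01 * np.eye(d)                        # synthetic covariance
x = rng.standard_normal(d)

# Sanity check: with k = d (no clamping), MQDF equals the ordinary QDF.
qdf = float(x @ np.linalg.solve(cov, x) + np.log(np.linalg.det(cov)))
print(np.isclose(mqdf(x, np.zeros(d), cov, k=d, delta=1.0), qdf))
```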

  19. Bayesian modelling of the emission spectrum of the Joint European Torus Lithium Beam Emission Spectroscopy system.

    PubMed

    Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C

    2016-02-01

    A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.

  20. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.

  1. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    PubMed

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.

  2. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  3. Properties of a center/surround retinex. Part 2: Surround design

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Woodell, Glenn A.

    1995-01-01

    The last version of Edwin Land's retinex model for human vision's lightness and color constancy has been implemented. Previous research has established the mathematical foundations of Land's retinex but has not examined specific design issues and their effects on the properties of the retinex operation. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. Here we describe issues involved in designing the surround function. We find that there is a trade-off between rendition and dynamic range compression that is governed by the surround space constant. Various functional forms for the retinex surround are evaluated and a Gaussian form is found to perform better than the inverse square suggested by Land. Preliminary testing led to the design of a Gaussian surround with a space constant of 80 pixels as a reasonable compromise between dynamic range compression and rendition.
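
    A minimal single-scale retinex of this form can be sketched in a few lines, assuming periodic image boundaries so the Gaussian surround can be applied in the frequency domain; the 80-pixel space constant from the paper appears as the default, and all other details are simplifications.

```python
import numpy as np

def single_scale_retinex(image, sigma=80.0):
    """log(image) minus log(image convolved with a unit-area Gaussian surround).

    The convolution uses the FFT, which implies periodic boundary handling --
    a simplification relative to a careful edge treatment.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Frequency response of a unit-area Gaussian with space constant sigma (pixels)
    g_hat = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    surround = np.real(np.fft.ifft2(np.fft.fft2(image) * g_hat))
    return np.log1p(image) - np.log1p(np.maximum(surround, 0.0))
```

    On a constant image the surround equals the image and the output is zero, reflecting the retinex property that uniform fields carry no lightness information.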

  4. Detailed noise statistics for an optically preamplified direct detection receiver

    NASA Astrophysics Data System (ADS)

    Danielsen, Soeren Lykke; Mikkelsen, Benny; Durhuus, Terji; Joergensen, Carsten; Stubkjaer, Kristian E.

    We describe the exact statistics of an optically preamplified direct detection receiver by means of the moment generating function. The theory allows an arbitrarily shaped electrical filter in the receiver circuit. The moment generating function (MGF) allows for a precise calculation of the error rate by using the inverse fast Fourier transform (FFT). The exact results are compared with the usual Gaussian approximation (GA), the saddlepoint approximation (SAP) and the modified Chernoff bound (MCB). This comparison shows that the noise is not Gaussian distributed for all values of the optical amplifier gain. In the region of 20-30 dB gain, calculations show that the GA underestimates the receiver sensitivity while the SAP is very close to the results of our exact model. Using the MGF derived in the article we then find the optimal bandwidth of the electrical filter in the receiver circuit and calculate the sensitivity degradation due to intersymbol interference (ISI).
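
    The inversion step can be illustrated with a toy calculation in which a known characteristic function (standard normal, standing in for the receiver's far more complicated statistics) is turned back into a density with a single FFT; the grid sizes are arbitrary choices, and the symmetry of this particular characteristic function is exploited.

```python
import numpy as np

N, L = 4096, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx                 # amplitude grid, centered at 0
k = np.rint(np.fft.fftfreq(N) * N).astype(int)   # integer FFT frequency indices
w = 2.0 * np.pi * k / (N * dx)                   # angular-frequency grid
phi = np.exp(-0.5 * w ** 2)                      # characteristic function of N(0, 1)

# pdf(x_n) = (1/2pi) * sum_k phi(w_k) exp(-i w_k x_n) dw, rearranged onto the
# FFT; the alternating sign shifts the origin to the middle of the x grid
# (valid as written for a real, even phi such as this one)
sign = np.where(k % 2 == 0, 1.0, -1.0)
pdf = np.real(np.fft.ifft(phi * sign)) / dx
```

    The recovered density matches the standard normal to near machine precision on this grid, which is the same numerical route used to turn a receiver MGF into error probabilities.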

  5. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
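
    The essence of such an inverse transform can be sketched with a plain Tikhonov-regularized inverse Laplace step on synthetic bimodal decay data. CONTIN additionally enforces non-negativity and smoothness, which are omitted here for brevity; all grids and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.logspace(-3, 2, 200)                      # delay times
gammas = np.logspace(-2, 2, 60)                  # candidate decay rates
K = np.exp(-np.outer(t, gammas))                 # discretized Laplace kernel

# Synthetic bimodal decay: two rates with weights 0.6 and 0.4, mild noise
y = 0.6 * np.exp(-0.5 * t) + 0.4 * np.exp(-20.0 * t)
y = y + 1e-3 * rng.standard_normal(t.size)

# Tikhonov-regularized inversion for the distribution g over decay rates
alpha = 1e-2
g = np.linalg.solve(K.T @ K + alpha * np.eye(gammas.size), K.T @ y)
```

    The recovered `g` concentrates weight near the two true rates while reproducing the measured decay; in a CONTIN-style analysis the regularization weight is chosen automatically.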

  6. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  7. Laguerre-Gaussian, Hermite-Gaussian, Bessel-Gaussian, and Finite-Energy Airy Beams Carrying Orbital Angular Momentum in Strongly Nonlocal Nonlinear Media

    NASA Astrophysics Data System (ADS)

    Wu, Zhenkun; Gu, Yuzong

    2016-12-01

    The propagation of two-dimensional beams is analytically and numerically investigated in strongly nonlocal nonlinear media (SNNM) based on the ABCD matrix. The two-dimensional beams reported in this paper are described by the product of the superposition of generalized Laguerre-Gaussian (LG), Hermite-Gaussian (HG), Bessel-Gaussian (BG), and circular Airy (CA) beams, carrying an orbital angular momentum (OAM). Owing to OAM and the modulation of SNNM, we find that the propagation of these two-dimensional beams exhibits complete rotation and periodic inversion: the spatial intensity profile first extends and then diminishes, and during the propagation the process repeats to form a breath-like phenomenon.

  8. Relativistic diffusive motion in random electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Haba, Z.

    2011-08-01

    We show that the relativistic dynamics in a Gaussian random electromagnetic field can be approximated by the relativistic diffusion of Schay and Dudley. Lorentz invariant dynamics in the proper time leads to the diffusion in the proper time. The dynamics in the laboratory time gives the diffusive transport equation corresponding to the Jüttner equilibrium at the inverse temperature β⁻¹ = mc². The diffusion constant is expressed by the field strength correlation function (Kubo's formula).

  9. Fractional Brownian motion time-changed by gamma and inverse gamma process

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.

    2017-02-01

    Many real time series exhibit behavior consistent with long-range dependence. Moreover, these time series often have constant time periods and characteristics similar to Gaussian processes although they are not Gaussian. There is therefore a need for new classes of systems to model these kinds of empirical behavior. Motivated by this fact, in this paper we analyze two processes which exhibit the long-range dependence property and have additional interesting characteristics which may be observed in real phenomena. Both of them are constructed as the superposition of fractional Brownian motion (FBM) and another process. In the first case the internal process, which plays the role of time, is the gamma process, while in the second case the internal process is its inverse. We present their main properties in detail, paying particular attention to the long-range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose a novel method based on the rescaled modified cumulative distribution function for estimation of the parameters of the second considered process. This method is very useful in the description of rounded data, like waiting times of subordinated processes delayed by inverse subordinators. Using the Monte Carlo method we show the effectiveness of the proposed estimation procedures. Finally, we present applications of the proposed models to real time series.

  10. Inversion of Magnetic Measurements of the CHAMP Satellite Over the Pannonian Basin

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, P. T.; Wittmann, G.; Toronyi, B.; Puszta, S.

    2011-01-01

    The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed in a spherical shell, some 107,927 data points recorded between January 1 and December 31 of 2008. They covered the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a spherical grid of 0.5° x 0.5° at an elevation of 324 km using a Gaussian weight function. The vertical gradient of these total magnetic anomalies was also computed and mapped to the surface of a sphere at 324 km elevation. The spherical anomaly data at 425 km altitude were downward continued to 324 km. To interpret these data at the elevation of 324 km we used an inversion method with a polygonal prism forward model. The minimum problem was solved numerically by the Simplex and Simulated Annealing methods; an L2 norm was used in the case of Gaussian distributed parameters and an L1 norm in the case of Laplace distributed parameters. We interpret the magnetic anomaly as produced by several sources together with the effect of the stable magnetization of exsolved hemo-ilmenite minerals in the upper crustal metamorphic rocks.

  11. Pharmacokinetics of plasma enfuvirtide after subcutaneous administration to patients with human immunodeficiency virus: Inverse Gaussian density absorption and 2-compartment disposition.

    PubMed

    Zhang, Xiaoping; Nieforth, Keith; Lang, Jean-Marie; Rouzier-Panis, Regine; Reynes, Jacques; Dorr, Albert; Kolis, Stanley; Stiles, Mark R; Kinchelow, Tosca; Patel, Indravadan H

    2002-07-01

    Enfuvirtide (T-20) is the first of a novel class of human immunodeficiency virus (HIV) drugs that block gp41-mediated viral fusion to host cells. The objectives of this study were to develop a structural pharmacokinetic model that would adequately characterize the absorption and disposition of enfuvirtide pharmacokinetics after both intravenous and subcutaneous administration and to evaluate the dose proportionality of enfuvirtide pharmacokinetic parameters at a subcutaneous dose higher than that currently used in phase III studies. Twelve patients with HIV infection received 4 single doses of enfuvirtide separated by a 1-week washout period in an open-label, randomized, 4-way crossover fashion. The doses studied were 90 mg (intravenous) and 45 mg, 90 mg, and 180 mg (subcutaneous). Serial blood samples were collected up to 48 hours after each dose. Plasma enfuvirtide concentrations were measured with use of a validated liquid chromatography-tandem mass spectrometry method. Enfuvirtide plasma concentration-time data after subcutaneous administration were well described by an inverse Gaussian density function-input model linked to a 2-compartment open distribution model with first-order elimination from the central compartment. The model-derived mean pharmacokinetic parameters (+/-SD) were volume of distribution of the central compartment (3.8 +/- 0.8 L), volume of distribution of the peripheral compartment (1.7 +/- 0.6 L), total clearance (1.44 +/- 0.30 L/h), intercompartmental distribution (2.3 +/- 1.1 L/h), bioavailability (89% +/- 11%), and mean absorption time (7.26 hours, 8.65 hours, and 9.79 hours for the 45-mg, 90-mg, and 180-mg dose groups, respectively). The terminal half-life increased from 3.46 to 4.35 hours for the subcutaneous dose range from 45 to 180 mg. 
An inverse Gaussian density function-input model linked to a 2-compartment open distribution model with first-order elimination from the central compartment was appropriate to describe the complex absorption and disposition kinetics of enfuvirtide plasma concentration-time data after subcutaneous administration to patients with HIV infection. Enfuvirtide was nearly completely absorbed from the subcutaneous depot, and pharmacokinetic parameters were linear up to a dose of 180 mg in this study.
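
    The model structure described here, an inverse Gaussian density input convolved with a disposition function, can be sketched numerically. The parameter values below are illustrative placeholders, not the study's estimates, and a biexponential stands in for the fitted two-compartment disposition.

```python
import numpy as np

def ig_density(t, mu, lam):
    """Inverse Gaussian density with mean mu and shape lam (zero for t <= 0)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    p = t > 0
    out[p] = np.sqrt(lam / (2.0 * np.pi * t[p] ** 3)) * np.exp(
        -lam * (t[p] - mu) ** 2 / (2.0 * mu ** 2 * t[p]))
    return out

# Hypothetical values (not the study's estimates): mean absorption time 8 h,
# and a biexponential standing in for the two-compartment disposition
dt = 0.01
t = np.arange(0.0, 48.0, dt)                 # time, h
inp = ig_density(t, mu=8.0, lam=24.0)        # absorption-rate density (1/h)
disp = 0.8 * np.exp(-0.5 * t) + 0.2 * np.exp(-0.05 * t)

# Concentration profile (arbitrary units) = input convolved with disposition
conc = np.convolve(inp, disp)[: t.size] * dt
```

    The appeal of the inverse Gaussian input is visible here: a single smooth density with an interpretable mean absorption time replaces a chain of first-order absorption compartments.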

  12. Population pharmacokinetic modelling of tramadol using inverse Gaussian function for the assessment of drug absorption from prolonged and immediate release formulations.

    PubMed

    Brvar, Nina; Mateović-Rojnik, Tatjana; Grabnar, Iztok

    2014-10-01

    This study aimed to develop a population pharmacokinetic model for tramadol that combines different input rates with disposition characteristics. Data used for the analysis were pooled from two phase I bioavailability studies with immediate (IR) and prolonged release (PR) formulations in healthy volunteers. Tramadol plasma concentration-time data were described by an inverse Gaussian function to model the complete input process linked to a two-compartment disposition model with first-order elimination. Although polymorphic CYP2D6 appears to be a major enzyme involved in the metabolism of tramadol, application of a mixture model to test the assumption of two and three subpopulations did not reveal any improvement of the model. The final model estimated parameters with reasonable precision and was able to estimate the interindividual variability of all parameters except for the relative bioavailability of PR vs. IR formulation. Validity of the model was further tested using the nonparametric bootstrap approach. Finally, the model was applied to assess absorption kinetics of tramadol and predict steady-state pharmacokinetics following administration of both types of formulations. For both formulations, the final model yielded a stable estimate of the absorption time profiles. Steady-state simulation supports switching of patients from IR to PR formulation. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Photoelectron Energy Loss in Al(002) Revisited: Retrieval of the Single Plasmon Loss Energy Distribution by a Fourier Transform Method

    NASA Astrophysics Data System (ADS)

    Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian

    2018-06-01

    A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from convolution of four spectral distributions: photoemission line shape, multiple plasmon loss probability, X-ray source line structure and Gaussian broadening of the photoelectron analyzer. Because the FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, is a mathematical function of the respective FTs of the X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELF for intrinsic and extrinsic plasmon excitations. This method is applied to the aluminum single crystal Al(002), where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function of Al(002) differs from the optical value Im[-1/ε(E, q = 0)] and is well described by the Lindhard-Mermin dispersion relation. A quality criterion of the inversion algorithm is given by the capability of observing weak interband transitions close to the zero-loss peak, namely at 0.65 and 1.65 eV in ε(E, q) as found in optical spectra and ab initio calculations of aluminum.
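
    The core FT manipulation, dividing the spectrum's transform by the broadening's transform to undo a convolution, can be sketched on synthetic data. The Gaussians below stand in for the instrumental broadening and a single plasmon loss, and the small Wiener-style floor `eps` is a stabilizing assumption, not part of the paper's algorithm.

```python
import numpy as np

n, de = 2048, 0.05
e = np.arange(n) * de                          # energy axis, eV

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Toy spectrum: instrumental broadening convolved with a single loss feature
broaden = gauss(e, 10.0, 0.4)                  # stand-in broadening line
loss = 0.05 * gauss(e, 15.0, 1.0)              # stand-in single plasmon loss
measured = np.fft.irfft(np.fft.rfft(broaden) * np.fft.rfft(loss), n) * de

# FT retrieval: divide out the broadening in Fourier space, with a small
# floor eps to stabilize frequencies where the broadening FT goes to zero
B, M = np.fft.rfft(broaden), np.fft.rfft(measured)
eps = 1e-6
loss_hat = np.fft.irfft(M * np.conj(B) / (np.abs(B) ** 2 + eps), n) / de
```

    The retrieved `loss_hat` reproduces the hidden loss feature at its original position; the paper applies the same principle with measured line shapes and multiple plasmon orders.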

  14. Photoelectron Energy Loss in Al(002) Revisited: Retrieval of the Single Plasmon Loss Energy Distribution by a Fourier Transform Method

    NASA Astrophysics Data System (ADS)

    Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian

    2018-04-01

    A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from convolution of four spectral distributions: photoemission line shape, multiple plasmon loss probability, X-ray source line structure and Gaussian broadening of the photoelectron analyzer. Because the FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, is a mathematical function of the respective FTs of the X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELF for intrinsic and extrinsic plasmon excitations. This method is applied to the aluminum single crystal Al(002), where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function of Al(002) differs from the optical value Im[-1/ε(E, q = 0)] and is well described by the Lindhard-Mermin dispersion relation. A quality criterion of the inversion algorithm is given by the capability of observing weak interband transitions close to the zero-loss peak, namely at 0.65 and 1.65 eV in ε(E, q) as found in optical spectra and ab initio calculations of aluminum.

  15. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
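
    The inverse-variance weighting that makes least squares coincide with the MLE can be sketched on synthetic slope-method data. By the delta method, additive Gaussian noise of standard deviation sd on a signal S gives ln S a variance of roughly (sd/S)^2, so the weights below are S^2/sd^2; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-wavelength lidar return: S(r) = S0 exp(-2 sigma r) + noise
r = np.linspace(0.1, 5.0, 120)                   # range, km
s0_true, sigma_true, noise_sd = 100.0, 0.3, 1.0
s = s0_true * np.exp(-2.0 * sigma_true * r) + noise_sd * rng.standard_normal(r.size)

# Slope method: straight-line fit to ln S, weighted by the inverse variance
# of ln S (delta method: Var[ln S] ~ (noise_sd / S)^2)
keep = s > 0
y = np.log(s[keep])
w = (s[keep] / noise_sd) ** 2
X = np.column_stack([np.ones(keep.sum()), r[keep]])
WX = X * w[:, None]
coef = np.linalg.solve(X.T @ WX, WX.T @ y)
s0_hat, sigma_hat = np.exp(coef[0]), -coef[1] / 2.0
```

    In practice the true noise level is unknown and the weights must themselves be estimated from the data, which is the coarse-weighting situation the abstract discusses.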

  16. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
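
    A Gaussian process surrogate of this kind reduces, in its simplest form, to kernel regression with a squared-exponential covariance. The toy 1-D response below stands in for the chirplet parameters, and the length scale and jitter are illustrative choices.

```python
import numpy as np

def rbf(a, b, ell=0.2, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(4)
x_train = np.linspace(0.0, 1.0, 12)          # expensive simulator inputs
y_train = np.sin(2.0 * np.pi * x_train)      # stand-in simulator outputs
x_test = np.linspace(0.0, 1.0, 101)

noise = 1e-6                                 # jitter / observation noise variance
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
Ks = rbf(x_test, x_train)
alpha = np.linalg.solve(K, y_train)
mean = Ks @ alpha                            # GP posterior mean prediction
```

    Once trained, each surrogate evaluation is a cheap matrix-vector product, which is the source of the runtime reduction reported above.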

  17. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
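
    The core idea, couplings read off from an inverse covariance, can be sketched on a toy continuous "alignment" in which one pair of sites is truly coupled; the dimensions and coupling strength are arbitrary illustrations, not the method's actual amino-acid encoding.

```python
import numpy as np

rng = np.random.default_rng(8)
n_var, n_seq = 10, 20000

# Ground truth: a sparse precision (inverse covariance) matrix in which
# sites 0 and 1 are directly coupled
prec = np.eye(n_var)
prec[0, 1] = prec[1, 0] = 0.6
cov_true = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(n_var), cov_true, size=n_seq)

# Gaussian-DCA analogue: score pairs by off-diagonal entries of the
# inverse sample covariance
prec_hat = np.linalg.inv(np.cov(X.T))
scores = np.abs(prec_hat - np.diag(np.diag(prec_hat)))
i, j = np.unravel_index(np.argmax(scores), scores.shape)
```

    The top-scoring pair recovers the direct coupling while ignoring the indirect correlation it induces, which is exactly the distinction direct-coupling analysis exploits for contact prediction.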

  18. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  19. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  20. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectrum exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant to the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
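
    The two-step recipe, spectral coloring followed by a single memoryless inverse transform, can be sketched for an exponential target marginal with an illustrative low-pass spectrum; the paper treats general distributions and target spectra.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
n = 2 ** 15

# Step 1: color white Gaussian noise to a target spectrum (an illustrative
# low-pass filter shape)
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (f / 0.05) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored = (colored - colored.mean()) / colored.std()

# Step 2: memoryless inverse transform: the Gaussian CDF maps each sample
# into (0, 1), then the target inverse CDF (here Exp(1)) maps it onward
u = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in colored]))
samples = -np.log1p(-u)
```

    The pointwise transform changes the marginal distribution exactly while only mildly distorting the spectrum, which is the trade-off the paper's sufficiency condition quantifies.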

  1. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabiliser model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. Because the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data values, the inverse problem is strongly under-determined. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) iterative conjugate gradient (CG) method is shown to be an appropriate algorithm to solve this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin, off the shore of Louisiana. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
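
    The SSOR-preconditioned CG iteration itself is compact. A dense-matrix sketch for a generic symmetric positive definite system (the regularized gravity normal equations would take the place of A) might look like this, with omega an assumed relaxation parameter:

```python
import numpy as np

def ssor_pcg(A, b, omega=1.2, tol=1e-10, maxiter=200):
    """Conjugate gradient with an SSOR preconditioner (sketch for SPD A).

    M = (D + omega*L) D^{-1} (D + omega*L)^T / (omega * (2 - omega)),
    where D is the diagonal and L the strictly lower triangle of A.
    """
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    lower = D + omega * L

    def apply_minv(r):
        # Two triangular solves per preconditioner application
        t = np.linalg.solve(lower, omega * (2.0 - omega) * r)
        return np.linalg.solve(lower.T, D @ t)

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    Each preconditioner application costs only two triangular solves, which is what makes SSOR attractive compared with forming any explicit inverse; in a production inversion the triangular solves would exploit sparsity rather than dense `np.linalg.solve`.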

  2. Multifractal analysis with the probability density function at the three-dimensional anderson transition.

    PubMed

    Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A

    2009-03-13

    The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
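
    For reference, the quantity underlying the standard approach is easy to state in code: the inverse participation ratio of a normalized wave function. The ergodic Gaussian state below is only a stand-in, since critical states are multifractal rather than Gaussian.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 20 ** 3                            # sites of an L = 20 cubic lattice

# Delocalized (ergodic) stand-in for an eigenstate: i.i.d. Gaussian
# amplitudes, normalized to unit total probability
psi = rng.standard_normal(N)
psi /= np.linalg.norm(psi)

ipr = np.sum(psi ** 4)                 # inverse participation ratio P_2
# For an ergodic Gaussian state P_2 is close to 3/N; at criticality the
# scaling of such moments with N defines the multifractal spectrum f(alpha)
```

    The PDF-based analysis of the paper works instead with the full distribution of |psi|^2, not just these moments.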

  3. Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio

    2018-04-01

    We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space from a noisy observation vector y of its image through a known, possibly non-linear map 𝒢. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.

  4. Estimation of Phytoplankton Accessory Pigments From Hyperspectral Reflectance Spectra: Toward a Global Algorithm

    NASA Astrophysics Data System (ADS)

    Chase, A. P.; Boss, E.; Cetinić, I.; Slade, W.

    2017-12-01

    Phytoplankton community composition in the ocean is complex and highly variable over a wide range of space and time scales. Remote-sensing reflectance spectra, which can be measured both by satellite and by in situ radiometers, are able to cover these scales. The spectral shape of reflectance in the open ocean is influenced by the particles in the water, mainly phytoplankton and covarying nonalgal particles. We investigate the utility of in situ hyperspectral remote-sensing reflectance measurements to detect phytoplankton pigments by using an inversion algorithm that defines phytoplankton pigment absorption as a sum of Gaussian functions. The inverted amplitudes of the Gaussian functions representing pigment absorption are compared to coincident High Performance Liquid Chromatography measurements of pigment concentration. We found strong predictive capability for chlorophylls a, b, c1+c2, and the photoprotective carotenoids. We also tested the estimation of pigment concentrations from reflectance-derived chlorophyll a using global relationships of covariation between chlorophyll a and the accessory pigments. We found similar errors in pigment estimation based on the relationships of covariation versus the inversion algorithm. An investigation of spectral residuals in reflectance data after removal of chlorophyll-based average absorption spectra showed no strong relationship between spectral residuals and pigments. Ultimately, we are able to estimate concentrations of three chlorophylls and the photoprotective carotenoid pigments, noting that further work is necessary to address the challenge of extracting information from hyperspectral reflectance beyond the information that can be determined from chlorophyll a and its covariation with other pigments.
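
    The core of such an inversion, representing an absorption spectrum as a sum of Gaussian functions and solving for their amplitudes, can be sketched as follows; the band centers, widths, and noise level are hypothetical placeholders, not the values used by the authors:

```python
import numpy as np

# Illustrative sketch only: model a spectrum as a sum of Gaussian basis
# functions with fixed (assumed) centers and widths, then invert for the
# amplitudes by linear least squares.
rng = np.random.default_rng(1)
wl = np.arange(400.0, 701.0, 1.0)                 # wavelength grid (nm)
centers = np.array([435.0, 470.0, 585.0, 675.0])  # assumed band centers (nm)
widths = np.array([25.0, 20.0, 30.0, 15.0])       # assumed band widths (nm)

# Design matrix: one Gaussian basis function per absorption band.
G = np.exp(-0.5 * ((wl[:, None] - centers) / widths) ** 2)

true_amps = np.array([0.8, 0.3, 0.2, 0.5])
spectrum = G @ true_amps + 0.005 * rng.normal(size=wl.size)  # noisy data

# Invert for the Gaussian amplitudes by linear least squares.
amps, *_ = np.linalg.lstsq(G, spectrum, rcond=None)
print(amps)  # close to true_amps
```

    The recovered amplitudes play the role of the pigment-absorption magnitudes that are then regressed against the HPLC concentrations.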

  5. Gaussian statistics for palaeomagnetic vectors

    USGS Publications Warehouse

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities.
    With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico is consistent with the widely held suspicion that directional data are more accurate than intensity data.
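
    One of the averaging biases quantified above is easy to check numerically: for an isotropic Gaussian distribution of vectors about a non-zero mean, the arithmetic average of the sample intensities overestimates the intensity of the mean vector. The mean field and dispersion below are arbitrary illustrative values, not data from the paper:

```python
import numpy as np

# Numerical sketch of the intensity-averaging bias: averaging |v_i|
# overshoots |<v>| because the norm is a convex function of the vector.
rng = np.random.default_rng(2)
mean = np.array([0.0, 0.0, 40.0])   # hypothetical mean field (arbitrary units)
sigma = 15.0                        # isotropic per-component dispersion

v = mean + sigma * rng.normal(size=(100_000, 3))
avg_of_intensities = np.mean(np.linalg.norm(v, axis=1))
intensity_of_avg = np.linalg.norm(v.mean(axis=0))

print(avg_of_intensities)  # biased high
print(intensity_of_avg)    # ~ 40
```

    The paper treats this bias analytically (the sample intensities are Rayleigh-Rician distributed) and removes it via maximum likelihood.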

  6. Gaussian statistics for palaeomagnetic vectors

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. 
With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  7. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

    Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonability. Synthetic tests confirm that the method can make a stable estimation of fracture compliances in the case of seismic data containing a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.

  8. Log-amplitude statistics for Beck-Cohen superstatistics

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Konno, Hidetoshi

    2013-05-01

    As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrated that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
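
    One of the quoted results can be verified by simulation: in inverse-χ2 superstatistics the local variance itself is χ2-distributed, and with 2 degrees of freedom the variance is exponentially distributed, so the mixture of zero-mean Gaussians should be the bilateral exponential (Laplace) distribution, whose excess kurtosis is 3:

```python
import numpy as np

# Sanity-check sketch: draw a chi^2(2) (i.e. exponential) local variance
# for each sample, then draw a zero-mean Gaussian with that variance.
# The resulting mixture is the bilateral exponential (Laplace) law.
rng = np.random.default_rng(3)
n = 1_000_000

local_var = rng.chisquare(df=2, size=n)       # exponential local variance
x = np.sqrt(local_var) * rng.normal(size=n)   # superstatistical mixture

m2 = np.mean(x ** 2)
m4 = np.mean(x ** 4)
excess_kurtosis = m4 / m2 ** 2 - 3.0
print(excess_kurtosis)  # ~ 3 for a Laplace distribution
```

    The log-amplitude cumulants introduced in the paper characterize exactly this kind of variance heterogeneity.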

  9. Fast method to compute scattering by a buried object under a randomly rough surface: PILE combined with FB-SA.

    PubMed

    Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas

    2008-04-01

    A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by block (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method to an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.

  10. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    PubMed

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical, inverse-covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.

  11. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
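
    The shrink-the-ML-update idea can be illustrated with the classic soft-threshold shrinkage that a plain (non-generalized) Laplacian prior produces; the paper's inverse-quadratic and inverse-cubic shrinkage functions for generalized priors are not reproduced here:

```python
import numpy as np

# Illustration only: for a pure Laplacian prior, the MAP update is
# obtained by soft-thresholding the unregularized ML update, pulling
# small coefficients to zero and large ones toward zero by a constant.
def soft_shrink(x, threshold):
    """MAP shrinkage for a Laplacian prior (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

ml_update = np.array([-2.0, -0.3, 0.1, 0.8, 3.5])  # unregularized update
map_update = soft_shrink(ml_update, threshold=0.5)
print(map_update)  # [-1.5  0.   0.   0.3  3. ]
```

    In the paper's algorithm an analogous shrinkage function, derived from the chosen prior, is applied at every iteration to the ML update.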

  12. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used for the inversion of gravity and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data optimization inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of Gaussian mapping between the objective function value and the quantity of pheromone. It can analyze the search results in real time and promote the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.

  13. Time since maximum of Brownian motion and asymmetric Lévy processes

    NASA Astrophysics Data System (ADS)

    Martin, R. J.; Kearney, M. J.

    2018-07-01

    Motivated by recent studies of record statistics in relation to strongly correlated time series, we consider explicitly the drawdown time of a Lévy process, which is defined as the time since it last achieved its running maximum when observed over a fixed time period [0, T]. We show that the density function of this drawdown time, in the case of a completely asymmetric jump process, may be factored as a function of t multiplied by a function of T − t. This extends a known result for the case of pure Brownian motion. We state the factors explicitly for the cases of exponential down-jumps with drift, and for the downward inverse Gaussian Lévy process with drift.
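
    The pure-Brownian special case is easy to check by simulation: the time of the maximum of driftless Brownian motion on [0, 1] follows the arcsine law, whose density 1/(π√(t(1 − t))) indeed factors as a function of t times a function of 1 − t:

```python
import numpy as np

# Simulation sketch: approximate Brownian paths by random walks, record
# the time of the running maximum, and compare the drawdown time 1 - t_max
# against the arcsine law.
rng = np.random.default_rng(4)
n_paths, n_steps = 10_000, 500

steps = rng.normal(size=(n_paths, n_steps))
paths = np.hstack([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)])
t_max = np.argmax(paths, axis=1) / n_steps   # time of running maximum
drawdown = 1.0 - t_max                       # time since maximum, T = 1

# Arcsine law: P(drawdown < t) = (2/pi) * arcsin(sqrt(t)).
frac = np.mean(drawdown < 0.1)
print(frac)  # ~ (2/pi) * arcsin(sqrt(0.1)) ≈ 0.205
```

    The mass piling up near 0 and 1 (rather than near 1/2) is the hallmark of the arcsine law that the paper generalizes to asymmetric Lévy processes.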

  14. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531

  15. Activation rates for nonlinear stochastic flows driven by non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    van den Broeck, C.; Hänggi, P.

    1984-11-01

    Activation rates are calculated for stochastic bistable flows driven by asymmetric dichotomic Markov noise (a two-state Markov process). This noise contains as limits both a particular type of non-Gaussian white shot noise and white Gaussian noise. Apart from investigating the role of colored noise on the escape rates, one can thus also study the influence of the non-Gaussian nature of the noise on these rates. The rate for white shot noise differs in leading order (Arrhenius factor) from the corresponding rate for white Gaussian noise of equal strength. In evaluating the rates we demonstrate the advantage of using transport theory over a mean first-passage time approach for cases with generally non-white and non-Gaussian noise sources. For white shot noise with exponentially distributed weights we succeed in evaluating the mean first-passage time of the corresponding integro-differential master-equation dynamics. The rate is shown to coincide in the weak noise limit with the inverse mean first-passage time.

  16. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information and to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model.
When the initial velocity model deviates far from the real model or the data are contaminated by noise, the objective function values of the Direct and Interp are oscillating at the beginning of the inversion, whereas that of the Search decreases consistently.
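
    As a concrete reference point for the update parameters compared above, here is the PRP formula inside a bare-bones nonlinear CG loop, applied to a convex quadratic with an exact line search; this is a toy stand-in for FWI (on a quadratic, PRP-CG reduces to linear CG and converges in at most n steps):

```python
import numpy as np

# Sketch of nonlinear CG with the Polak-Ribiere-Polyak (PRP) update
# parameter beta, minimizing f(x) = 0.5 x'Qx - b'x for an SPD matrix Q.
rng = np.random.default_rng(9)
n = 20
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)            # SPD Hessian
b = rng.normal(size=n)

x = np.zeros(n)
g = Q @ x - b                          # gradient at the current iterate
d = -g                                 # initial search direction
for _ in range(n):
    alpha = -(g @ d) / (d @ (Q @ d))   # exact line search on the quadratic
    x = x + alpha * d
    g_new = Q @ x - b
    beta = g_new @ (g_new - g) / (g @ g)   # PRP update parameter
    d = -g_new + beta * d
    g = g_new

print(np.linalg.norm(Q @ x - b))  # ~ 0 at the minimizer
```

    In FWI the gradient comes from an adjoint simulation and the line search is inexact, but the role of beta in combining the new gradient with the previous direction is the same.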

  17. Magnetization in the South Pole-Aitken basin: Implications for the lunar dynamo and true polar wander

    DTIC Science & Technology

    2016-10-14

    We introduce new Monte Carlo methods to quantify errors in our inversions arising from Gaussian time-dependent changes in the external field and the ... all study areas; Appendix A shows details of magnetic inversions for all these areas (see Sections 2.3 and 2.4). Supplementary Appendix B shows maps ... of the total field for all available days that were considered, but not used. ... Inversion algorithm 1: defined dipoles, constant magnetization (DD)

  18. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty by three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on phase plane; Then we design a constrained full waveform inversion problem to prevent the optimization search getting into regions of velocity where FGA is not accurate; Last, we solve the constrained optimization problem by MLPSO that employs FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.

  19. Modified Gaussian influence function of deformable mirror actuators.

    PubMed

    Huang, Linhai; Rao, Changhui; Jiang, Wenhan

    2008-01-07

    A new deformable mirror influence function based on a Gaussian function is introduced to analyze the fitting capability of a deformable mirror. The modified expressions for both azimuthal and radial directions are presented based on the analysis of the residual error between a measured influence function and a Gaussian influence function. With a simplex search method, we further compare the fitting capability of our proposed influence function to fit the data produced by a Zygo interferometer with that of a Gaussian influence function. The result indicates that the modified Gaussian influence function provides much better performance in data fitting.
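
    How an influence-function model is used in surface fitting can be sketched with a plain Gaussian influence function (the paper's point is precisely that a modified Gaussian fits measured data better); the actuator grid, width, and target surface below are assumed values:

```python
import numpy as np

# Sketch: the mirror surface is modelled as a sum of per-actuator
# influence functions scaled by actuator strokes; the strokes that best
# fit a target shape are found by linear least squares.
rng = np.random.default_rng(5)
xs = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(xs, xs)

act = np.linspace(-0.8, 0.8, 5)          # assumed 5 x 5 actuator grid
AX, AY = np.meshgrid(act, act)
centers = np.column_stack([AX.ravel(), AY.ravel()])
w = 0.35                                 # assumed influence-function width

# Influence matrix: column j = surface response to unit stroke of actuator j.
r2 = ((X.ravel()[:, None] - centers[:, 0]) ** 2
      + (Y.ravel()[:, None] - centers[:, 1]) ** 2)
H = np.exp(-r2 / (2.0 * w ** 2))

target = (X ** 2 + Y ** 2).ravel()       # defocus-like target shape
strokes, *_ = np.linalg.lstsq(H, target, rcond=None)
rms_residual = np.sqrt(np.mean((H @ strokes - target) ** 2))
print(rms_residual)  # fitting error of the Gaussian model
```

    Replacing the Gaussian columns of H with a modified influence function, as the paper proposes, is what reduces this residual for measured data.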

  20. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
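
    The quasi-optimality principle itself is simple to state in code: over a geometric grid of regularization parameters, pick the one minimizing the distance between consecutive regularized solutions. The toy diagonal problem below is illustrative only and applies the rule to full Tikhonov solutions rather than to linear functionals:

```python
import numpy as np

# Sketch of the heuristic quasi-optimality rule on a diagonal ill-posed
# problem y_i = s_i * x_i + noise, solved by Tikhonov regularization.
rng = np.random.default_rng(6)
n = 100
s = 1.0 / np.arange(1, n + 1) ** 2            # decaying singular values
x_true = 1.0 / np.arange(1, n + 1)            # smooth true solution
y = s * x_true + 1e-3 * rng.normal(size=n)    # noisy data

alphas = np.geomspace(1e-8, 1.0, 60)
xs = [s * y / (s ** 2 + a) for a in alphas]   # Tikhonov solutions

# Quasi-optimality: minimize the change between consecutive solutions.
diffs = [np.linalg.norm(xs[j + 1] - xs[j]) for j in range(len(xs) - 1)]
k_star = int(np.argmin(diffs))
x_qo = xs[k_star]
rel_err = np.linalg.norm(x_qo - x_true) / np.linalg.norm(x_true)
print(alphas[k_star], rel_err)
```

    The rule is heuristic (it uses no knowledge of the noise level), which is why the paper's convergence analysis requires a structural condition on the noise.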

  1. The point-spread function of fiber-coupled area detectors

    PubMed Central

    Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.

    2012-01-01

    The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
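
    The solid-angle model described above can be written down directly; it is a Moffat profile with β = 3/2 and an inverse-cube tail:

```python
import numpy as np

# The solid angle subtended by a pixel at radius r from the beam impact
# point, for an effective source a height h above the pixel plane, is
# proportional to h / (h^2 + r^2)^(3/2): a Moffat function with
# beta = 3/2, falling off as 1/r^3 far from the impact point.
h = 27.0  # microns, the effective source height quoted above

def psf(r, h=h):
    """Un-normalized solid-angle PSF (Moffat profile, beta = 3/2)."""
    return h / (h ** 2 + r ** 2) ** 1.5

# Far tail: doubling the radius reduces the PSF by ~ 2^3 = 8.
print(psf(2000.0) / psf(4000.0))  # ≈ 8
```

    Fitting the measured tails with this one-parameter form is what distinguishes it cleanly from Gaussian and Lorentzian models.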

  2. A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence

    PubMed Central

    Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan

    2017-01-01

    The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
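
    For context, the classical multiplicative updates for NMF under the one-sided Kullback-Leibler divergence (Lee and Seung) look as follows; the paper's hybrid algorithm for the symmetric intrinsic-information divergence generalizes this scheme and is not reproduced here:

```python
import numpy as np

# Classical KL-divergence NMF: V ≈ W H with non-negative factors,
# updated multiplicatively so that the divergence decreases monotonically.
rng = np.random.default_rng(7)
m, n, k = 30, 20, 4
V = rng.random((m, k)) @ rng.random((k, n))   # strictly positive data

W = rng.random((m, k)) + 0.1
H = rng.random((k, n)) + 0.1

def kl_div(V, WH):
    """Generalized (one-sided) KL divergence D(V || WH)."""
    return float(np.sum(V * np.log(V / WH) - V + WH))

d0 = kl_div(V, W @ H)
for _ in range(200):
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]   # update H
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]   # update W

d_final = kl_div(V, W @ H)
print(d0, d_final)  # the divergence decreases monotonically
```

    The multiplicative form automatically preserves non-negativity, a property the hybrid algorithm retains for the other exponential-family models listed above.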

  3. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
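
    The randomized trace estimation step can be sketched in its simplest (Hutchinson) form, tr(A) ≈ (1/m) Σ_k z_kᵀ A z_k with Rademacher probe vectors z_k; in the OED setting A would be the implicitly applied posterior covariance, for which a small dense matrix stands in here:

```python
import numpy as np

# Hutchinson trace estimator: only matrix-vector products with A are
# needed, which is why it suits operators that are too large to form.
rng = np.random.default_rng(8)
n, m = 200, 500

B = rng.normal(size=(n, n))
A = B @ B.T / n                            # symmetric positive semi-definite

Z = rng.choice([-1.0, 1.0], size=(n, m))   # Rademacher probe vectors
trace_est = np.mean(np.einsum('im,im->m', Z, A @ Z))
print(trace_est, np.trace(A))
```

    Each probe costs one application of A, so the number of PDE solves per trace estimate is fixed by m, independent of the discretization dimension, which is the dimension-invariance property reported above.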

  4. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis.

    PubMed

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment) and AD subjects. Sparse computation fits perfectly here as brain regions usually only interact with a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to improve the classification accuracy.
The results obtained in this work show the usefulness of Sparse Gaussian Graphical Models for revealing functional and structural connectivity patterns. The information provided by the sparse inverse covariance matrices is not only used in an exploratory way; we also propose a method to use it in a discriminative way: regression coefficients are used to compute reconstruction errors for the different classes, which are then fed into an SVM for classification. Classification experiments performed using 68 Control, 70 AD, and 111 MCI images, assessed by cross-validation, show the effectiveness of the proposed method.
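    A minimal sketch (our illustration, not the paper's code) of the core idea above: off-diagonal zeros of the precision (inverse covariance) matrix encode conditional independence between regions. The 3×3 covariance below is a hypothetical toy example in which regions 0 and 2 interact only through region 1.

```python
import math

def inv3(m):
    """Invert a 3x3 matrix via the adjugate formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

# Toy covariance: regions 0 and 2 are correlated only via region 1
# (corr(0,2) = corr(0,1) * corr(1,2), a Markov-chain structure).
cov = [[1.0, 0.5, 0.25],
       [0.5, 1.0, 0.5],
       [0.25, 0.5, 1.0]]
prec = inv3(cov)

# Partial correlation of i and j given the rest:
# -prec[i][j] / sqrt(prec[i][i] * prec[j][j])
pc_02 = -prec[0][2] / math.sqrt(prec[0][0] * prec[2][2])
print(abs(pc_02))  # 0.0: regions 0 and 2 are conditionally independent
```

    In practice the precision matrix is estimated with a sparsity penalty (e.g. the graphical lasso) rather than by direct inversion, which is exactly what makes SICE usable when regions outnumber samples.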

  5. Potentials of radial partially coherent beams in free-space optical communication: a numerical investigation.

    PubMed

    Wang, Minghao; Yuan, Xiuhua; Ma, Donglin

    2017-04-01

    Nonuniformly correlated partially coherent beams (PCBs) have extraordinary propagation properties, making it possible to further improve the performance of free-space optical communications. In this paper, a series of PCBs with varying degrees of coherence in the radial direction, termed radial partially coherent beams (RPCBs), is considered. RPCBs with arbitrary coherence distributions can be created by adjusting the amplitude profile of a spatial modulation function imposed on a uniformly correlated phase screen. Since RPCBs cannot be well characterized by the coherence length, a modulation depth factor is introduced as an indicator of the overall distribution of coherence. By wave-optics simulation, free-space and atmospheric propagation properties of RPCBs with (inverse) Gaussian and super-Gaussian coherence distributions are examined in comparison with conventional Gaussian Schell-model beams. Furthermore, the impacts of varying central coherent areas are studied. Simulation results reveal that, under comparable overall coherence, beams with a highly coherent core and a less coherent margin exhibit a smaller beam spread and greater on-axis intensity, mainly due to the self-focusing phenomenon right after the beam exits the transmitter. In particular, RPCBs with super-Gaussian coherence distributions repeatedly focus during propagation, resulting in even greater intensities. RPCBs also have a considerable ability to reduce scintillation. It is further demonstrated that these properties make RPCBs very effective in improving the mean signal-to-noise ratio of small optical receivers, especially in relatively short, weakly fluctuating links.

  6. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
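    A small sketch of the two model families compared above (centers, widths, and weights here are arbitrary illustrative values): a Gaussian kernel network fixes one width for all units, while a Gaussian RBF network lets each unit carry its own width as well as its own center.

```python
import math

def gaussian_rbf_net(x, centers, widths, weights):
    """f(x) = sum_k w_k * exp(-((x - c_k) / s_k)^2) for scalar input x."""
    return sum(w * math.exp(-((x - c) / s) ** 2)
               for c, s, w in zip(centers, widths, weights))

# Fixed-width Gaussian kernel network: every unit shares width 0.5.
f_kernel = lambda x: gaussian_rbf_net(x, [0.0, 1.0], [0.5, 0.5], [1.0, -1.0])

# Variable-width Gaussian RBF network: per-unit widths.
f_rbf = lambda x: gaussian_rbf_net(x, [0.0, 1.0], [0.3, 0.8], [1.0, -1.0])

# Same centers and weights, different widths: different input-output maps.
print(f_kernel(0.0), f_rbf(0.0))
```

    The paper's uniqueness result says the converse as well: two Gaussian RBF networks computing the same function must agree unit-by-unit in centers and widths.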

  7. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

    The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.

  8. Resonant activation in piecewise linear asymmetric potentials.

    PubMed

    Fiasconaro, Alessandro; Spagnolo, Bernardo

    2011-04-01

    This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both Gaussian white noise and dichotomous noise, on the resonant activation phenomenon. The features of the asymmetry of the potential barrier are revealed by investigating the stochastic transitions beyond the potential maximum, from the initial well to the bottom of the adjacent potential well. Because of the asymmetry of the potential profile, together with the spatially uniform random external force, we find, for the different asymmetries: (1) an inversion of the curves of the mean first-passage time in the resonant region of the correlation time τ of the dichotomous noise, for low thermal noise intensities; (2) a maximum of the mean velocity of the Brownian particle as a function of τ; and (3) an inversion of the curves of the mean velocity and a very weak current reversal in the miniratchet system obtained with the asymmetrical potential profiles investigated. An inversion of the mean first-passage time curves is also observed by varying the amplitude of the dichotomous noise, a behavior confirmed by recent experiments. ©2011 American Physical Society
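    A bare-bones Langevin sketch of the kind of simulation involved (the linear potential, noise amplitudes, and barrier height below are our illustrative choices, not the paper's): a particle escapes over a linear ramp under Gaussian white noise plus a dichotomous (telegraph) force with correlation time tau, and the mean first-passage time (MFPT) is averaged over trajectories. Scanning tau and locating the MFPT minimum exhibits resonant activation.

```python
import random

def mfpt(tau, D=0.2, A=2.0, n_traj=100, dt=1e-3, seed=1):
    """Mean first-passage time over a linear barrier U(x) = x on [0, 0.5]."""
    rng = random.Random(seed)
    slope, x_top = 1.0, 0.5
    total = 0.0
    for _ in range(n_traj):
        x, t = 0.0, 0.0
        eta = A if rng.random() < 0.5 else -A   # dichotomous force state
        while x < x_top:
            if rng.random() < dt / tau:          # telegraph-noise flip
                eta = -eta
            # Overdamped Langevin step: drift + Gaussian white noise.
            x += (eta - slope) * dt + (2 * D * dt) ** 0.5 * rng.gauss(0, 1)
            x = max(x, 0.0)                      # reflecting wall at the well
            t += dt
        total += t
    return total / n_traj

print(round(mfpt(0.5), 2))
```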

  9. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed of the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
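    A numerical sketch of the construction described above (our toy version with hypothetical radii, not the paper's derivation): a shell-like radial density built from two Fermi-Dirac terms, and the spherically symmetric form factor evaluated by quadrature as F(q) = 4π ∫ n(r) · sin(qr)/(qr) · r² dr.

```python
import math

def density(r, r_in, r_out, d):
    """Shell profile: difference of two Fermi-Dirac steps of width d."""
    fd = lambda r0: 1.0 / (1.0 + math.exp((r - r0) / d))
    return fd(r_out) - fd(r_in)

def form_factor(q, r_in=0.5, r_out=1.0, d=0.02, n=2000, r_max=2.0):
    """Spherically symmetric form factor by the trapezoid rule."""
    h = r_max / n
    total = 0.0
    for k in range(1, n):             # integrand vanishes at both endpoints
        r = k * h
        sinc = math.sin(q * r) / (q * r)
        total += density(r, r_in, r_out, d) * sinc * r * r
    return 4.0 * math.pi * h * total

# As q -> 0, F(q) approaches the (smoothed) shell volume; larger d
# continuously blurs the step-like interfaces.
print(round(form_factor(0.1), 3))
```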

  10. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
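    A tiny sketch of the diagnostic itself (our toy matrices, not tomographic weight matrices): for a matrix with two columns, the singular values are the square roots of the eigenvalues of the 2×2 Gram matrix AᵀA, and counting values above a relative threshold measures how well-conditioned the linear system is.

```python
import math

def singular_values_2col(A):
    """Singular values of an m x 2 matrix via its 2x2 Gram matrix."""
    g11 = sum(row[0] * row[0] for row in A)
    g12 = sum(row[0] * row[1] for row in A)
    g22 = sum(row[1] * row[1] for row in A)
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eig = [(tr + disc) / 2, (tr - disc) / 2]
    return [math.sqrt(max(e, 0.0)) for e in eig]

def n_significant(svals, rel_tol=1e-6):
    """Number of singular values above rel_tol times the largest one."""
    return sum(1 for s in svals if s > rel_tol * svals[0])

A_good = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # independent "rays"
A_bad = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]    # redundant "rays"
print(n_significant(singular_values_2col(A_good)),
      n_significant(singular_values_2col(A_bad)))  # 2 1
```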

  11. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfactory. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties, and the definition of an appropriate statistical model of the residual errors.
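    One common way a GMRF prior acts as a regularization operator is by penalizing squared second differences of the profile; the sketch below (our simplification with a made-up precision weight, not the authors' exact prior) shows how a rough 70-node moisture profile receives a much lower unnormalized log-prior than a smooth one.

```python
def gmrf_log_prior(theta, precision=50.0):
    """Unnormalized GMRF log-density penalizing second differences."""
    rough = sum((theta[i + 1] - 2 * theta[i] + theta[i - 1]) ** 2
                for i in range(1, len(theta) - 1))
    return -0.5 * precision * rough

# 70 nodes at 1-cm spacing, as in the abstract's probe discretization.
smooth = [0.10 + 0.002 * i for i in range(70)]               # linear trend
rough = [0.10 + (0.02 if i % 2 else -0.02) for i in range(70)]  # zig-zag
print(gmrf_log_prior(smooth) > gmrf_log_prior(rough))  # True
```

    Inside an MCMC sampler, this log-prior is simply added to the waveform log-likelihood, so jagged moisture profiles are accepted only when the data strongly demand them.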

  12. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. 
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  13. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than for the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
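    The attributes (binomial) plans mentioned above follow standard textbook formulas; this sketch (with a hypothetical plan, n and c chosen only for illustration) computes the operating-characteristic curve of a single-sampling plan and reads off the Type I (producer's) and Type II (consumer's) risks.

```python
from math import comb

def accept_prob(p, n, c):
    """P(accept lot): at most c defectives in a sample of n, given
    true fraction defective p (binomial single-sampling plan)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 125, 3                          # hypothetical plan
alpha = 1 - accept_prob(0.01, n, c)    # producer's risk at AQL = 1%
beta = accept_prob(0.05, n, c)         # consumer's risk at LTPD = 5%
print(round(alpha, 3), round(beta, 3))
```

    Variables plans replace the binomial count with a statistic on the measured characteristic, which is why they usually need fewer samples for the same (alpha, beta) pair.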

  14. On the asymptotic evolution of finite energy Airy wave functions.

    PubMed

    Chamorro-Posada, P; Sánchez-Curto, J; Aceves, A B; McDonald, G S

    2015-06-15

    In general, there is an inverse relation between the degree of localization of a wave function of a certain class and its transform representation dictated by the scaling property of the Fourier transform. We report that in the case of finite energy Airy wave packets a simultaneous increase in their localization in the direct and transform domains can be obtained as the apodization parameter is varied. One consequence of this is that the far-field diffraction rate of a finite energy Airy beam decreases as the beam localization at the launch plane increases. We analyze the asymptotic properties of finite energy Airy wave functions using the stationary phase method. We obtain one dominant contribution to the long-term evolution that admits a Gaussian-like approximation, which displays the expected reduction of its broadening rate as the input localization is increased.

  15. A Cr4+:YAG passively Q-switched Nd:YVO4 microchip laser for controllable high-order Hermite-Gaussian modes

    NASA Astrophysics Data System (ADS)

    Dong, Jun; He, Yu; Bai, Sheng-Chuang; Ueda, Ken-ichi; Kaminskii, Alexander A.

    2016-09-01

    A nanosecond, high peak power, passively Q-switched laser for controllable Hermite-Gaussian (HG) modes has been achieved by manipulating the saturated inversion population inside the gain medium. The stable HG modes are generated in a Cr4+:YAG passively Q-switched Nd:YVO4 microchip laser by applying a tilted pump beam. The asymmetrical saturated inversion population distribution inside the Nd:YVO4 crystal for desirable HG modes is manipulated by choosing the proper pump beam diameter and varying pump power. A HG9,8 mode passively Q-switched Nd:YVO4 microchip laser with average output power of 265 mW has been obtained. Laser pulses with a pulse width of 7.3 ns and peak power of over 1.7 kW working at 21 kHz have been generated in the passively Q-switched Nd:YVO4 microchip laser.

  16. Probabilistic inversion with graph cuts: Application to the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Pirot, Guillaume; Linde, Niklas; Mariethoz, Grégoire; Bradford, John H.

    2017-02-01

    Inversion methods that build on multiple-point statistics tools offer the possibility to obtain model realizations that are not only in agreement with field data, but also with conceptual geological models that are represented by training images. A recent inversion approach based on patch-based geostatistical resimulation using graph cuts outperforms state-of-the-art multiple-point statistics methods when applied to synthetic inversion examples featuring continuous and discontinuous property fields. Applications of multiple-point statistics tools to field data are challenging due to inevitable discrepancies between actual subsurface structure and the assumptions made in deriving the training image. We introduce several amendments to the original graph cut inversion algorithm and present a first-ever field application by addressing porosity estimation at the Boise Hydrogeophysical Research Site, Boise, Idaho. We consider both a classical multi-Gaussian and an outcrop-based prior model (training image) that are in agreement with available porosity data. When conditioning to available crosshole ground-penetrating radar data using Markov chain Monte Carlo, we find that the posterior realizations honor overall both the characteristics of the prior models and the geophysical data. The porosity field is inverted jointly with the measurement error and the petrophysical parameters that link dielectric permittivity to porosity. Even though the multi-Gaussian prior model leads to posterior realizations with higher likelihoods, the outcrop-based prior model shows better convergence. In addition, it offers geologically more realistic posterior realizations and it better preserves the full porosity range of the prior.

  17. Aberration analysis and calculation in system of Gaussian beam illuminates lenslet array

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Zhou, Ping; Su, Tianquan; Feng, Yun; Zhao, Yuejin

    2014-09-01

    A low-order aberration was found when a focused Gaussian beam was imaged on a Kodak KAI-16000 image detector, which is integrated with a lenslet array. The effect of the focused Gaussian beam and a numerical simulation of the aberration are presented in this paper. First, we set up a model of the optical imaging system based on a previous experiment: a focused Gaussian beam passed through a pinhole and was received by the Kodak KAI-16000 image detector, whose lenslet-array microlenses were exactly focused on the sensor surface. Then, we illustrated the characteristics of the focused Gaussian beam and the effect of the relative position of the Gaussian beam waist and the front spherical surface of the microlenses on the aberration. Finally, we analyzed the main component of the low-order aberration and calculated the spherical aberration caused by the lenslet array according to the results of the above two steps. Our theoretical calculations showed that the numerical simulation agreed well with the experimental result. Our results proved that spherical aberration was the main component, making up about 93.44% of the 48 nm error demonstrated in the previous experiment. The spherical aberration is inversely proportional to the divergence distance between the microlens and the waist, and directly proportional to the Gaussian beam waist radius.

  18. Using the ARTMO toolbox for automated retrieval of biophysical parameters through radiative transfer model inversion: Optimizing LUT-based inversion

    NASA Astrophysics Data System (ADS)

    Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.

    2012-04-01

    Radiative transfer (RT) modeling plays a key role in earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. The inversion of an RT model is considered a successful approach for the retrieval of biophysical parameters because it is physically based and generally applicable. However, to the broader community this approach is considered laborious because of its many processing steps, and expert knowledge is required to realize precise model parameterization. We have recently developed a radiative transfer toolbox, ARTMO (Automated Radiative Transfer Models Operator), with the purpose of providing in a graphical user interface (GUI) the essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or to define custom sensor settings, iii) to simulate a massive number of spectra based on a look-up table (LUT) approach and store them in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to the Hadamard conditions, mathematical models of physical phenomena are invertible if the solution of the inverse problem exists, is unique, and depends continuously on the data. This assumption is not always met because of the large number of unknowns, and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were analyzed here to optimize inversion performance. 
Data came from the SPARC-2003 dataset, which was acquired on the agricultural test site of Barrax, Spain. LUTs were created using the 4SAIL and FLIGHT models and were inverted against CHRIS data in order to retrieve maps of chlorophyll content (Chl) and leaf area index (LAI). The following inversion steps have been optimized: 1. Cost function. The performances of about 50 different cost functions (i.e. minimum distance functions) were compared. Remarkably, in none of the studied cases did the widely used root mean square error (RMSE) lead to the most accurate results. Depending on the retrieved parameter, more successful functions were: 'Sharma and Mittal', 'Shannon's entropy', 'Hellinger distance', 'Pearson's chi-square'. 2. Gaussian noise. Earth observation data typically encompass a certain degree of noise due to errors related to radiometric and geometric processing. In all cases, adding 5% Gaussian noise to the simulated spectra led to more accurate retrievals than adding no noise. 3. Average of multiple best solutions. Because multiple parameter combinations may lead to the same spectra, a way to overcome this problem is to search not for the single best match but for a percentage of best matches. Optimized retrievals were encountered when averaging the 7% (Chl) to 10% (LAI) top best matches. 4. Integration of estimates. The option is provided to integrate estimates of biochemical contents at the canopy level (e.g., total chlorophyll: Chl × LAI, or water: Cw × LAI), which can lead to increased robustness and accuracy. 5. Class-based inversion. This option is probably ARTMO's most powerful feature, as it allows model parameterization depending on the image's land cover classes (e.g. different soil or vegetation types). Class-based inversion can lead to considerably improved accuracies compared to one generic class. Results suggest that 4SAIL and FLIGHT performed alike for Chl but not for LAI. 
While both models rely on the leaf model PROSPECT for Chl retrieval, their different natures (e.g. numerical vs. ray tracing) may cause retrievals of structural parameters such as LAI to differ. Finally, it should be noted that the whole analysis can be performed intuitively within the toolbox. ARTMO is freely available to the EO community for further development. Expressions of interest are welcome and should be directed to the corresponding author.
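    The LUT-inversion workflow described above (build a table of simulated spectra, rank by a cost function, average a percentage of best matches) can be sketched in a few lines. Everything below is a toy: the two-band linear "forward model" and the 7% top fraction merely mimic steps 1 and 3, not ARTMO's actual models.

```python
import math

def rmse(a, b):
    """Root mean square error between two spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def lut_invert(measured, lut, top_fraction=0.07):
    """lut: list of (parameter_value, simulated_spectrum) pairs.
    Returns the mean parameter of the best-matching fraction."""
    ranked = sorted(lut, key=lambda entry: rmse(measured, entry[1]))
    k = max(1, round(top_fraction * len(lut)))
    return sum(p for p, _ in ranked[:k]) / k

# Hypothetical forward model: two "bands" responding linearly to Chl.
forward = lambda chl: [0.5 - 0.004 * chl, 0.2 + 0.003 * chl]
lut = [(chl, forward(chl)) for chl in range(0, 101)]
print(lut_invert(forward(42.0), lut))  # close to 42
```

    Swapping `rmse` for another distance (e.g. a chi-square-style cost) is a one-line change, which is essentially what the cost-function comparison in step 1 automates.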

  19. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
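    The Box-Cox power law transform named above has a standard definition; this sketch (with an illustrative lambda, not a value fitted as in the paper) shows how it maps skewed positive data toward symmetry before a Gaussian-based detector is applied.

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform of a positive sample x."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

# Log-normal-like (geometrically spaced) data become evenly spaced,
# i.e. symmetric, under lambda = 0 (the log transform).
data = [0.5, 1.0, 2.0, 4.0, 8.0]
transformed = [box_cox(x, 0) for x in data]
print([round(t, 3) for t in transformed])  # evenly spaced values
```

    In the paper's setting, the Gaussian statistics are estimated jointly under this transform, so the downstream multivariate Gaussian detector and its performance model remain applicable.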

  20. Joint inversion of geophysical data using petrophysical clustering and facies deformation wth the level set technique

    NASA Astrophysics Data System (ADS)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem on each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case, for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and invert their shapes as well. We use the level set approach to perform such deformation while preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. 
The method is applied to a second synthetic case, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of gravity and galvanometric resistivity data. For this 2D synthetic example, we note that the positions of the facies are well recovered except far from the ground surface, where the sensitivity is too low. The figure shows the evolution of the shape of the facies during the inversion, iteration by iteration.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagesh, S Setlur; Rana, R; Russ, M

    Purpose: CMOS-based aSe detectors, compared to CsI-TFT-based flat panels, have the advantages of higher spatial sampling due to smaller pixel size and decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone-beam computed tomography, clinical fluoroscopy, and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected by focal-spot blur, first a set of high-resolution images of a stent (FRED from Microvention, Inc.) was acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. The averaged image was then blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring, using a threshold-based inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selecting too wide a function caused severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pinhole with a high-resolution detector. This spread function can be used to deblur input images acquired at corresponding magnifications to correct for the focal-spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions and then corrected for focal-spot blurring to improve resolution. 
Similarly, if the object magnification can be determined, such correction may be applied in fluoroscopy and angiography.
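    A 1-D sketch of the threshold-based inverse-filtering idea (our toy implementation with a naive DFT and made-up sizes, not the authors' code): blur a step-edge "stent" signal with a known Gaussian kernel by circular convolution, then deconvolve by spectral division, zeroing any frequency where the kernel response falls below a threshold so noise cannot blow up.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def deconvolve(blurred, kernel, threshold=1e-3):
    """Inverse filtering with a spectral-magnitude threshold."""
    B, K = dft(blurred), dft(kernel)
    X = [b / k if abs(k) > threshold else 0.0 for b, k in zip(B, K)]
    return [v.real for v in dft(X, inverse=True)]

n, sigma = 32, 1.0
kernel = [math.exp(-min(i, n - i) ** 2 / (2 * sigma ** 2)) for i in range(n)]
kernel = [k / sum(kernel) for k in kernel]            # normalized Gaussian PSF
signal = [1.0 if 12 <= i < 20 else 0.0 for i in range(n)]  # sharp "stent" edge

# Blur = circular convolution via the convolution theorem.
S, K = dft(signal), dft(kernel)
blurred = [v.real for v in dft([s * k for s, k in zip(S, K)], inverse=True)]
restored = deconvolve(blurred, kernel)
print(max(abs(r - s) for r, s in zip(restored, signal)))  # tiny residual
```

    With noise present, the threshold trades residual blur against amplified noise, which is why the abstract reports artifacts when too wide a deblurring function is selected.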

  2. Husbandry Emissions Estimation: Fusion of Mobile Surface and Airborne Remote Sensing and Mobile Surface In Situ Measurements

    NASA Astrophysics Data System (ADS)

    Leifer, I.; Hall, J. L.; Melton, C.; Tratt, D. M.; Chang, C. S.; Buckland, K. N.; Frash, J.; Leen, J. B.; Van Damme, M.; Clarisse, L.

    2017-12-01

    Emissions of methane and ammonia from intensive animal husbandry are important drivers of climate and photochemical and aerosol pollution. Husbandry emission estimates are somewhat uncertain because of their dependence on practices, temperature, micro-climate, and other factors, leading to variations in emission factors up to an order-of-magnitude. Mobile in situ measurements are increasingly being applied to derive trace gas emissions by Gaussian plume inversion; however, inversion with incomplete information can lead to erroneous emissions and incorrect source location. Mobile in situ concentration and wind data and mobile remote sensing column data from the Chino Dairy Complex in the Los Angeles Basin were collected near simultaneously (within 1-10 s, depending on speed) while transecting plumes, approximately orthogonal to winds. This analysis included airborne remote sensing trace gas information. MISTIR collected vertical column FTIR data simultaneously with in situ concentration data acquired by the AMOG-Surveyor while both vehicles traveled in convoy. The column measurements are insensitive to the turbulence characterization needed in Gaussian plume inversion of concentration data and thus provide a flux reference for evaluating in situ data inversions. Four different approaches were used on inversions for a single dairy, and also for the aggregate dairy complex plume. Approaches were based on differing levels of "knowledge" used in the inversion from solely the in situ platform and a single gas to a combination of information from all platforms and multiple gases. Derived dairy complex fluxes differed significantly from those estimated by other studies of the Chino complex. Analysis of long term satellite data showed that this most likely results from seasonality effects, highlighting the pitfalls of applying annualized extensions of flux measurements to a single campaign instantiation.
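    The Gaussian plume inversion referred to above can be sketched with the textbook plume formula; all numbers below (source height, dispersion widths, wind speed, the 5 g/s source) are hypothetical and unrelated to the Chino campaign. Given crosswind concentration enhancements, the emission rate Q is the least-squares scale factor between data and a unit-emission plume model.

```python
import math

def plume_conc(Q, y, u, sigma_y, sigma_z, z=2.0, H=3.0):
    """Gaussian plume concentration at height z, crosswind offset y,
    with ground reflection; Q in g/s, u in m/s, distances in m."""
    return (Q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * (math.exp(-(z - H)**2 / (2 * sigma_z**2))
               + math.exp(-(z + H)**2 / (2 * sigma_z**2))))

def invert_flux(ys, concs, u, sigma_y, sigma_z):
    """Least-squares Q: project data onto the unit-emission plume."""
    m1 = [plume_conc(1.0, y, u, sigma_y, sigma_z) for y in ys]
    return sum(c * m for c, m in zip(concs, m1)) / sum(m * m for m in m1)

ys = [-60 + 4 * i for i in range(31)]        # crosswind transect, m
true_Q, u, sy, sz = 5.0, 3.0, 20.0, 10.0     # hypothetical values
concs = [plume_conc(true_Q, y, u, sy, sz) for y in ys]
print(round(invert_flux(ys, concs, u, sy, sz), 6))  # recovers 5.0
```

    The sensitivity of sigma_y and sigma_z to turbulence characterization is exactly the weakness the column (vertically integrated) measurements sidestep, which is why they serve as the flux reference here.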

  3. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noise, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, in which the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.
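The heavy tails that motivate these non-Gaussian methods can be illustrated numerically; a minimal sketch (illustrative parameters, not from the paper) compares tail probabilities of a Student's t-distribution with a standard Gaussian:

```python
# Sketch: compare tail probabilities of Student's t and a standard Gaussian
# to illustrate why Gaussian-based matched filters misbehave under
# heavy-tailed noise. The 4 degrees of freedom are an arbitrary example.
from scipy.stats import norm, t

for x in (3.0, 5.0):
    p_gauss = norm.sf(x)        # P(X > x) for N(0, 1)
    p_t = t.sf(x, df=4)         # P(X > x) for Student's t with 4 d.o.f.
    print(f"x={x}: Gaussian tail {p_gauss:.2e}, t(4) tail {p_t:.2e}")
```

At five standard deviations the t(4) tail probability exceeds the Gaussian one by roughly four orders of magnitude, so a detection statistic calibrated on Gaussian noise grossly underestimates the false-alarm rate.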

  4. Eigenvectors phase correction in inverse modal problem

    NASA Astrophysics Data System (ADS)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems depends heavily on the quality of the modal parameters obtained from experiments. Because experimental and environmental noise always exists during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were used as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive definiteness or positive semi-definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with added Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior to a known method in the literature.

  5. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permits a simple parameterized form for its density. We extend this result to the Raney distribution, which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the squared singular values of the matrix product formed from inverse standard Gaussian matrices and standard Gaussian matrices is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed and is shown to have some universal features.
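The moment sequences in question are easy to generate; a short sketch computes the Raney numbers R_{p,r}(n) = r/(pn+r) * C(pn+r, n), which reduce to the Fuss-Catalan numbers for r=1 and to the Catalan numbers for p=2, r=1 (the moments of the squared-singular-value law of a square Gaussian matrix):

```python
# Sketch: Raney numbers generalize Fuss-Catalan, which generalize Catalan.
from math import comb

def raney(p, r, n):
    # r/(pn+r) * C(pn+r, n); always an integer for these parameters
    return r * comb(p * n + r, n) // (p * n + r)

catalan = [raney(2, 1, n) for n in range(6)]
print(catalan)  # [1, 1, 2, 5, 14, 42]
fuss_catalan_p3 = [raney(3, 1, n) for n in range(5)]
print(fuss_catalan_p3)  # [1, 1, 3, 12, 55]
```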

  6. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

We studied the problem of testing a hypothesized distribution in survival regression models when the data is right censored and survival times are influenced by covariates. A modified chi-squared type test, known as Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for hypertabastic survival model and four other unimodal hazard rate functions. The results of simulation study showed that the hypertabastic distribution can be used as an alternative to log-logistic and log-normal distribution. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.
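The unimodal hazard shape shared by these competitor distributions can be seen directly for the inverse Gaussian; a minimal sketch (arbitrary shape parameter, not taken from the study) evaluates its hazard rate h(t) = f(t)/S(t):

```python
# Sketch: the inverse Gaussian hazard rate is unimodal -- it rises from zero,
# peaks, then declines toward a positive asymptote -- which is why it competes
# with log-logistic and log-normal models. mu=0.5 is an illustrative choice.
import numpy as np
from scipy.stats import invgauss

t = np.linspace(0.05, 3.0, 200)
h = invgauss.pdf(t, mu=0.5) / invgauss.sf(t, mu=0.5)   # hazard = pdf / survival
peak = t[np.argmax(h)]
print(f"hazard peaks at t ~ {peak:.2f}, then declines")
```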

  7. Blind channel estimation and deconvolution in colored noise using higher-order cumulants

    NASA Astrophysics Data System (ADS)

    Tugnait, Jitendra K.; Gummadavelli, Uma

    1994-10-01

Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially on the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the higher-order cumulant function of the noise vanishes (e.g., Gaussian noise, as is typical in digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.

  8. A high repetition rate passively Q-switched microchip laser for controllable transverse laser modes

    NASA Astrophysics Data System (ADS)

    Dong, Jun; Bai, Sheng-Chuang; Liu, Sheng-Hui; Ueda, Ken-Ichi; Kaminskii, Alexander A.

    2016-05-01

A Cr4+:YAG passively Q-switched Nd:YVO4 microchip laser with versatile, controllable transverse laser modes has been demonstrated by adjusting the position of the Nd:YVO4 crystal along the tilted pump beam direction. The pump-beam-diameter-dependent asymmetric saturated population inversion inside the Nd:YVO4 crystal governs the oscillation of various Laguerre-Gaussian, Ince-Gaussian and Hermite-Gaussian modes. Controllable transverse laser modes with repetition rates from over 25 kHz up to 183 kHz, depending on the position of the Nd:YVO4 crystal, have been achieved. The controllable transverse laser beams, with nanosecond pulse widths and peak powers over hundreds of watts, have been obtained for potential applications in optical trapping and quantum computation.

  9. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noise, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, in which the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student’s t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case. PMID:25504231

  10. Multiple scattering and the density distribution of a Cs MOT.

    PubMed

    Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J

    2005-11-28

    Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.

  11. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve the large linear systems of normal equations that arise in geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
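The partitioned-inversion idea behind such algorithms rests on the 2x2 block (Schur complement) identity for a symmetric positive definite matrix; a small sketch (not the SOLVE program's code) verifies it against a direct inverse:

```python
# Sketch of partitioned inversion: invert an SPD matrix block by block via
# the Schur complement, so only one block need be handled at a time.
import numpy as np

def partitioned_inverse(M, k):
    """Invert SPD matrix M using a 2x2 block partition at index k."""
    A, B = M[:k, :k], M[:k, k:]
    C = M[k:, k:]
    Ainv = np.linalg.inv(A)              # recurse here for deeper partitions
    S = C - B.T @ Ainv @ B               # Schur complement of A
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
    top_right = -Ainv @ B @ Sinv
    return np.block([[top_left, top_right],
                     [top_right.T, Sinv]])

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
M = X @ X.T + 6 * np.eye(6)              # SPD test matrix
err = np.abs(partitioned_inverse(M, 3) - np.linalg.inv(M)).max()
print(f"max deviation from direct inverse: {err:.2e}")
```

Applied recursively to the leading block, this recovers the small-core behavior the abstract describes: each level touches only sub-blocks of the full matrix.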

  12. Measuring Greenland Ice Mass Variation With Gravity Recovery and the Climate Experiment Gravity and GPS

    NASA Technical Reports Server (NTRS)

    Wu, Xiao-Ping

    1999-01-01

    The response of the Greenland ice sheet to climate change could significantly alter sea level. The ice sheet was much thicker at the last glacial maximum. To gain insight into the global change process and the future trend, it is important to evaluate the ice mass variation as a function of time and space. The Gravity Recovery and Climate Experiment (GRACE) mission to fly in 2001 for 5 years will measure gravity changes associated with the current ice variation and the solid earth's response to past variations. Our objective is to assess the separability of different change sources, accuracy and resolution in the mass variation determination by the new gravity data and possible Global Positioning System (GPS) bedrock uplift measurements. We use a reference parameter state that follows a dynamic ice model for current mass variation and a variant of the Tushingham and Peltier ICE-3G deglaciation model for historical deglaciation. The current linear trend is also assumed to have started 5 kyr ago. The Earth model is fixed as preliminary reference Earth model (PREM) with four viscoelastic layers. A discrete Bayesian inverse algorithm is developed employing an isotropic Gaussian a priori covariance function over the ice sheet and time. We use data noise predicted by the University of Texas and JPL for major GRACE error sources. A 2 mm/yr uplift uncertainty is assumed for GPS occupation time of 5 years. We then carry out covariance analysis and inverse simulation using GRACE geoid coefficients up to degree 180 in conjunction with a number of GPS uplift rates. Present-day ice mass variation and historical deglaciation are solved simultaneously over 146 grids of roughly 110 km x 110 km and with 6 time increments of 3 kyr each, along with a common starting epoch of the current trend. 
For present-day ice thickness change, the covariance analysis using GRACE geoid data alone results in a root mean square (RMS) posterior root variance of 2.6 cm/yr, with fairly large a priori uncertainties in the parameters and a Gaussian correlation length of 350 km. The simulated inversion successfully recovers most features of the reference present-day change. The RMS difference between them over the grids is 2.8 cm/yr; it becomes 1.1 cm/yr when both are averaged with a half-Gaussian wavelength of 150 km. With a fixed Earth model, GRACE alone can separate the geoid signals due to past and current loads fairly well. The reference geoid signatures of the direct and elastic effects of the current trend, the viscoelastic effect of the same trend starting from 5 kyr ago, the Post Glacial Rebound (PGR), and the predicted GRACE geoid error are shown, as is the difference between the reference and inverse-modeled total viscoelastic signatures. Although past and current ice mass variations are allowed the same spatial scale, their geoid signals have different spatial patterns. GPS data can contribute to the ice mass determination as well.

  13. Determining the Diversity and Species Abundance Patterns in Arctic Soils using Rational Methods for Exploring Microbial Diversity

    NASA Astrophysics Data System (ADS)

    Ovreas, L.; Quince, C.; Sloan, W.; Lanzen, A.; Davenport, R.; Green, J.; Coulson, S.; Curtis, T.

    2012-12-01

Arctic microbial soil communities are intrinsically interesting and poorly characterised. We have inferred the diversity and species abundance distributions of six Arctic soils: new and mature soil at the foot of a receding glacier, Arctic semi-desert, the foot of bird cliffs, and soil underlying Arctic tundra heath, all near Ny-Ålesund, Spitsbergen. Diversity, distribution and sample sizes were estimated using the rational method of Quince et al. (ISME Journal 2, 2008:997-1006) to determine the most plausible underlying species abundance distribution. A log-normal species abundance curve was found to give a slightly better fit than an inverse Gaussian curve if, and only if, sequencing error was removed. The median estimates of diversity of operational taxonomic units (at the 3% level) were 3600-5600 (lognormal assumed) and 2825-4100 (inverse Gaussian assumed). The nature and origins of species abundance distributions are poorly understood but may yet be grasped by observing and analysing such distributions in the microbial world. The sample size required to observe the distribution (by sequencing 90% of the taxa) varied between ~10^6 and ~10^5 for the lognormal and inverse Gaussian respectively. We infer that between 5 and 50 GB of sequencing would be required to capture 90% of the metagenome. Though a principal components analysis clearly divided the sites into three groups, there was a high (20-45%) degree of overlap between locations irrespective of geographical proximity. Interestingly, the nearest relatives of the most abundant taxa at most sites were of alpine or polar origin. (Figure: samples plotted on the first two principal components, together with discriminatory OTUs.)

  14. The Gaussian-Lorentzian Sum, Product, and Convolution (Voigt) functions in the context of peak fitting X-ray photoelectron spectroscopy (XPS) narrow scans

    NASA Astrophysics Data System (ADS)

    Jain, Varun; Biesinger, Mark C.; Linford, Matthew R.

    2018-07-01

    X-ray photoelectron spectroscopy (XPS) is arguably the most important vacuum technique for surface chemical analysis, and peak fitting is an indispensable part of XPS data analysis. Functions that have been widely explored and used in XPS peak fitting include the Gaussian, Lorentzian, Gaussian-Lorentzian sum (GLS), Gaussian-Lorentzian product (GLP), and Voigt functions, where the Voigt function is a convolution of a Gaussian and a Lorentzian function. In this article we discuss these functions from a graphical perspective. Arguments based on convolution and the Central Limit Theorem are made to justify the use of functions that are intermediate between pure Gaussians and pure Lorentzians in XPS peak fitting. Mathematical forms for the GLS and GLP functions are presented with a mixing parameter m. Plots are shown for GLS and GLP functions with mixing parameters ranging from 0 to 1. There are fundamental differences between the GLS and GLP functions. The GLS function better follows the 'wings' of the Lorentzian, while these 'wings' are suppressed in the GLP. That is, these two functions are not interchangeable. The GLS and GLP functions are compared to the Voigt function, where the GLS is shown to be a decent approximation of it. Practically, both the GLS and the GLP functions can be useful for XPS peak fitting. Examples of the uses of these functions are provided herein.
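The contrast between the two line shapes is easy to reproduce; a minimal sketch uses one common parameterization from the XPS fitting literature (width parameter F, mixing parameter m; exact normalization conventions vary between packages):

```python
# Sketch of GLS (sum) and GLP (product) line shapes with mixing parameter m:
# m=0 gives a pure Gaussian, m=1 a pure Lorentzian, for both forms.
import numpy as np

LN2 = np.log(2.0)

def gls(x, F=1.0, m=0.5):
    """Gaussian-Lorentzian sum: weighted average of the two shapes."""
    g = np.exp(-4.0 * LN2 * x**2 / F**2)
    l = 1.0 / (1.0 + 4.0 * x**2 / F**2)
    return (1.0 - m) * g + m * l

def glp(x, F=1.0, m=0.5):
    """Gaussian-Lorentzian product: the Lorentzian 'wings' are suppressed."""
    return np.exp(-4.0 * LN2 * (1.0 - m) * x**2 / F**2) / (1.0 + 4.0 * m * x**2 / F**2)

x = 3.0  # three half-widths out in the wings
print(f"GLS({x}) = {gls(x):.4f}, GLP({x}) = {glp(x):.2e}")
```

Evaluating both at a few half-widths from the peak shows the point made above: the GLS retains a slowly decaying Lorentzian wing while the GLP decays nearly as fast as a Gaussian, so the two functions are not interchangeable in a fit.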

  15. Thermal characteristics of second harmonic generation by phase matched calorimetry.

    PubMed

    Lim, Hwan Hong; Kurimura, Sunao; Noguchi, Keisuke; Shoji, Ichiro

    2014-07-28

    We analyze a solution of the heat equation for second harmonic generation (SHG) with a focused Gaussian beam and simulate the temperature rise in SHG materials as a function of the second harmonic power and the focusing conditions. We also propose a quantitative value of the heat removal performance of SHG devices, referred to as the effective heat capacity Cα in phase matched calorimetry. We demonstrate the inverse relation between Cα and the focusing parameter ξ, and propose the universal quantity of the product of Cα and ξ for characterizing the thermal property of SHG devices. Finally, we discuss the strategy to manage thermal dephasing in SHG using the results from simulations.

  16. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome ill-posedness. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.

  17. Molecular surface mesh generation by filtering electron density map.

    PubMed

    Giard, Joachim; Macq, Benoît

    2010-01-01

Bioinformatics methods applied to macromolecules are now widespread and continuously expanding. In this context, representing external molecular surfaces such as the Van der Waals Surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast, parameterizable algorithm producing good visual quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is computed as the pointwise maximum of Gaussian functions placed around atom centers, and is filtered by an ideal low-pass filter applied to the Fourier transform of the map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
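The map-filtering step can be sketched in one dimension; the atom positions and cutoff below are hypothetical illustrations, not values from the paper:

```python
# Sketch (1D analogue): build a density profile as the pointwise maximum of
# Gaussians placed at synthetic atom centers, then apply an ideal low-pass
# filter in Fourier space before the isosurfacing step would run.
import numpy as np

x = np.linspace(0.0, 10.0, 512)
centers, width = [3.0, 5.2, 7.1], 0.5        # hypothetical atom centers
density = np.max([np.exp(-((x - c) / width) ** 2) for c in centers], axis=0)

spectrum = np.fft.rfft(density)
cutoff = 20                                  # keep only the lowest frequencies
spectrum[cutoff:] = 0.0                      # ideal low-pass filter
smoothed = np.fft.irfft(spectrum, n=x.size)
print(f"max |density - smoothed| = {np.abs(density - smoothed).max():.3f}")
```

In the 3D case the same zeroing is applied to the 3D FFT of the voxel map, and marching cubes is run on the inverse transform.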

  18. Gaussian fitting for carotid and radial artery pressure waveforms: comparison between normal subjects and heart failure patients.

    PubMed

    Liu, Chengyu; Zheng, Dingchang; Zhao, Lina; Liu, Changchun

    2014-01-01

    It has been reported that Gaussian functions could accurately and reliably model both carotid and radial artery pressure waveforms (CAPW and RAPW). However, the physiological relevance of the characteristic features from the modeled Gaussian functions has been little investigated. This study thus aimed to determine characteristic features from the Gaussian functions and to make comparisons of them between normal subjects and heart failure patients. Fifty-six normal subjects and 51 patients with heart failure were studied with the CAPW and RAPW signals recorded simultaneously. The two signals were normalized first and then modeled by three positive Gaussian functions, with their peak amplitude, peak time, and half-width determined. Comparisons of these features were finally made between the two groups. Results indicated that the peak amplitude of the first Gaussian curve was significantly decreased in heart failure patients compared with normal subjects (P<0.001). Significantly increased peak amplitude of the second Gaussian curves (P<0.001) and significantly shortened peak times of the second and third Gaussian curves (both P<0.001) were also presented in heart failure patients. These results were true for both CAPW and RAPW signals, indicating the clinical significance of the Gaussian modeling, which should provide essential tools for further understanding the underlying physiological mechanisms of the artery pressure waveform.
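The three-Gaussian modeling step can be sketched with standard least-squares fitting; the synthetic waveform, parameter values, and starting guesses below are illustrative assumptions, not data or settings from the study:

```python
# Sketch (not the study's pipeline): model a normalized pressure waveform as
# the sum of three positive Gaussians and recover peak amplitude, peak time,
# and width with scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, t1, w1, a2, t2, w2, a3, t3, w3):
    return (a1 * np.exp(-((t - t1) / w1) ** 2)
            + a2 * np.exp(-((t - t2) / w2) ** 2)
            + a3 * np.exp(-((t - t3) / w3) ** 2))

t = np.linspace(0.0, 1.0, 400)
true = (1.0, 0.15, 0.06, 0.45, 0.40, 0.10, 0.25, 0.70, 0.12)
y = three_gaussians(t, *true) + 0.005 * np.random.default_rng(1).standard_normal(t.size)

p0 = (0.9, 0.1, 0.05, 0.5, 0.35, 0.1, 0.2, 0.65, 0.1)  # rough initial guess
popt, _ = curve_fit(three_gaussians, t, y, p0=p0, bounds=(0, np.inf))
print("recovered peak times:", np.round(popt[1::3], 3))
```

The lower bound of zero keeps all three components positive, matching the positivity constraint described above; the recovered peak amplitudes, times, and widths are the features compared between groups.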

  19. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  20. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States.

    PubMed

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-03

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  1. MAMAP - a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: retrieval algorithm and first inversions for point source emission rates

    NASA Astrophysics Data System (ADS)

    Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.

    2011-04-01

    MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: One in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions and another one in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data exhibiting global coverage but with a rather coarse resolution on the one hand and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007 test flights were performed over two coal-fired powerplants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions as stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of delivering reliable estimates for strong point source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
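The Gaussian integral method reduces, in its simplest mass-balance form, to multiplying the mean advection speed by the crosswind integral of the column enhancement; the wind speed, plume width, and peak value below are synthetic assumptions, not MAMAP measurements:

```python
# Sketch of the 'Gaussian integral' mass-balance idea: for a crosswind
# transect of column enhancements (mass per unit area), the source flux is
# approximately the mean wind speed times the crosswind integral.
import numpy as np

u = 3.0                                  # mean wind speed, m/s (assumed)
y = np.linspace(-2000.0, 2000.0, 401)    # crosswind coordinate, m
sigma_y, peak = 400.0, 5.0e-4            # plume width (m), peak column (kg/m^2)
dV = peak * np.exp(-0.5 * (y / sigma_y) ** 2)   # synthetic Gaussian plume

flux = u * dV.sum() * (y[1] - y[0])      # crosswind-integrated flux, kg/s
analytic = u * peak * sigma_y * np.sqrt(2.0 * np.pi)
print(f"numerical {flux:.3f} kg/s vs analytic {analytic:.3f} kg/s")
```

Because the integral is taken over the measured columns directly, this estimate needs no assumption about vertical mixing, which is the advantage of column data noted in the abstract.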

  2. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.

  3. Hybrid modeling of spatial continuity for application to numerical inverse problems

    USGS Publications Warehouse

    Friedel, Michael J.; Iwashita, Fabio

    2013-01-01

    A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.

  4. Yes, the GIGP Really Does Work--And Is Workable!

    ERIC Educational Resources Information Center

    Burrell, Quentin L.; Fenton, Michael R.

    1993-01-01

    Discusses the generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling. Negative binomial distribution is discussed, construction of the GIGP process is explained, zero-truncated GIGP is considered, and applications of the process with journals, library circulation statistics, and database index terms are described. (50…

  5. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

A new Gaussian mixture model (GMM) has been developed for better representation of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters, accepting a set of weighted 3D points corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and could therefore produce a GMM with a smaller spread than the input. Second, it had a singularity problem: it sometimes halted the iterative procedure owing to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time to convert to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which treats the input atoms or voxels as a set of Gaussian functions; the standard EM algorithm of the GMM was extended to optimize the new model. The new GMM has a radius of gyration identical to that of the input, and does not halt because of the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) that merge neighboring voxels into an anisotropic Gaussian function, providing a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
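The first problem above has a simple variance-decomposition explanation, sketched here in one dimension with hypothetical atom positions (not the paper's data):

```python
# Sketch of why treating inputs as Gaussians fixes the 'smaller spread'
# problem: a mixture of Gaussians of width w centered on the points has total
# variance (spread of centers) + w^2, matching the true spread of the input,
# whereas a point-input fit can reproduce only the spread of the centers.
import numpy as np

centers = np.array([-2.0, 0.0, 2.0])      # hypothetical atom positions
w = 0.8                                   # atomic radius / voxel width
point_input_var = centers.var()           # spread a point-based fit reproduces
gaussian_input_var = centers.var() + w**2 # variance of the Gaussian mixture
print(point_input_var, gaussian_input_var)
```

Since the radius of gyration is determined by this total variance, carrying the input widths through the EM updates is what makes the fitted GMM match the input's radius of gyration.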

  6. Finite‐fault Bayesian inversion of teleseismic body waves

    USGS Publications Warehouse

    Clayton, Brandon; Hartzell, Stephen; Moschetti, Morgan P.; Minson, Sarah E.

    2017-01-01

    Inverting geophysical data has provided fundamental information about the behavior of earthquake rupture. However, inferring kinematic source model parameters for finite‐fault ruptures is an intrinsically underdetermined problem (the problem of nonuniqueness), because we are restricted to finite noisy observations. Although many studies use least‐squares techniques to make the finite‐fault problem tractable, these methods generally lack the ability to apply non‐Gaussian error analysis and the imposition of nonlinear constraints. However, the Bayesian approach can be employed to find a Gaussian or non‐Gaussian distribution of all probable model parameters, while utilizing nonlinear constraints. We present case studies to quantify the resolving power and associated uncertainties using only teleseismic body waves in a Bayesian framework to infer the slip history for a synthetic case and two earthquakes: the 2011 Mw 7.1 Van, east Turkey, earthquake and the 2010 Mw 7.2 El Mayor–Cucapah, Baja California, earthquake. In implementing the Bayesian method, we further present two distinct solutions to investigate the uncertainties by performing the inversion with and without velocity structure perturbations. We find that the posterior ensemble becomes broader when including velocity structure variability and introduces a spatial smearing of slip. Using the Bayesian framework solely on teleseismic body waves, we find rake is poorly constrained by the observations and rise time is poorly resolved when slip amplitude is low.

  7. Testing for the Gaussian nature of cosmological density perturbations through the three-point temperature correlation function

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1993-01-01

    One of the crucial aspects of density perturbations produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.

  8. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  9. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
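    The stagewise matched-filter selection at the heart of StOMP, with the added non-negativity, can be sketched as below (illustrative only: the thresholding rule and least-squares refit are standard StOMP ingredients, while the clipping is our stand-in for the authors' constraint handling, and no prior information or wavelet basis is included):

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=10, t=2.0):
    """Stagewise selection in the spirit of StOMP with a crude
    non-negativity step (clipping); not the authors' code."""
    m = A.shape[0]
    x = np.zeros(A.shape[1])
    support = np.zeros(A.shape[1], dtype=bool)
    for _ in range(n_stages):
        r = y - A @ x
        if np.linalg.norm(r) < 1e-8 * np.linalg.norm(y):
            break
        c = A.T @ r                              # matched-filter correlations
        sigma = np.linalg.norm(r) / np.sqrt(m)   # formal noise level of the residual
        support |= c > t * sigma                 # one-sided: favors non-negative atoms
        if not support.any():
            break
        # least squares on the current support, clipped to enforce x >= 0
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = np.clip(xs, 0.0, None)
    return x
```

    Selecting all coordinates above a residual-scaled threshold per stage, rather than one at a time, is what makes the stagewise approach fast for high-dimensional fields.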

  10. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, J.; Lee, J.; Yadav, V.

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  11. Curvature and bottlenecks control molecular transport in inverse bicontinuous cubic phases

    NASA Astrophysics Data System (ADS)

    Assenza, Salvatore; Mezzenga, Raffaele

    2018-02-01

    We perform a simulation study of the diffusion of small solutes in the confined domains imposed by inverse bicontinuous cubic phases for the primitive, diamond, and gyroid symmetries common to many lipid/water mesophase systems employed in experiments. For large diffusing domains, the long-time diffusion coefficient shows universal features when the size of the confining domain is renormalized by the Gaussian curvature of the triply periodic minimal surface. When bottlenecks are widely present, they become the most relevant factor for transport, regardless of the connectivity of the cubic phase.

  12. Broadband optical frequency comb generator based on driving N-cascaded modulators by Gaussian-shaped waveform

    NASA Astrophysics Data System (ADS)

    Hmood, Jassim K.; Harun, Sulaiman W.

    2018-05-01

    A new approach for realizing a wideband optical frequency comb (OFC) generator based on driving cascaded modulators with a Gaussian-shaped waveform is proposed and numerically demonstrated. The setup includes N cascaded MZMs, a single Gaussian-shaped waveform generator, and N−1 electrical time delayers. The first MZM is driven directly by the Gaussian-shaped waveform, while delayed replicas of the waveform drive the other MZMs. An analytical model of the proposed OFC generator is provided to study the effect of the number and chirp factor of the cascaded MZMs, as well as the pulse width, on the output spectrum. Optical frequency combs with a frequency spacing of 1 GHz are generated by applying Gaussian-shaped waveforms with pulse widths ranging from 200 to 400 ps. Our results reveal that the number of comb lines is inversely proportional to the pulse width and directly proportional to both the number and the chirp factor of the cascaded MZMs. At a pulse width of 200 ps and a chirp factor of 4, 67 frequency lines can be measured in the output spectrum of the two-cascaded-MZM setup, whereas increasing the number of cascaded stages to 3, 4, and 5 yields 89, 109, and 123 frequency lines, respectively. When the delay time is optimized, 61 comb lines can be achieved with power fluctuations of less than 1 dB for the five-cascaded-MZM setup.

  13. 3D joint inversion modeling of the lithospheric density structure based on gravity, geoid and topography data — Application to the Alborz Mountains (Iran) and South Caspian Basin region

    NASA Astrophysics Data System (ADS)

    Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid

    2013-02-01

    We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.

  14. Model for non-Gaussian intraday stock returns

    NASA Astrophysics Data System (ADS)

    Gerig, Austin; Vicente, Javier; Fuentes, Miguel A.

    2009-12-01

    Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
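    The volatility mechanism described in the abstract is easy to reproduce numerically: if the inverse squared volatility is gamma distributed, the mixed returns are exactly Student-t (ν = 6 below is an illustrative choice, not a value fitted by the authors):

```python
import numpy as np

rng = np.random.default_rng(1)
nu, n = 6.0, 200_000
# inverse squared volatility ~ Gamma(nu/2, rate nu/2), as the model assumes
inv_var = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
# constant-volatility Gaussian returns, mixed over the volatility draw
returns = rng.standard_normal(n) / np.sqrt(inv_var)
# the mixture is Student-t with nu degrees of freedom: heavier tails than Gaussian
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0
```

    For ν = 6 the theoretical excess kurtosis is 6/(ν − 4) = 3, versus 0 for a Gaussian, which is the kind of fat-tailed intraday behavior the model is built to explain.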

  15. Tuning Fractures With Dynamic Data

    NASA Astrophysics Data System (ADS)

    Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao

    2018-02-01

    Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus providing desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized on a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (through updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.

  16. A fractional Fourier transform analysis of the scattering of ultrasonic waves.

    PubMed

    Tant, Katherine M M; Mulholland, Anthony J; Langer, Matthias; Gachagan, Anthony

    2015-03-08

    Many safety critical structures, such as those found in nuclear plants, oil pipelines and in the aerospace industry, rely on key components that are constructed from heterogeneous materials. Ultrasonic non-destructive testing (NDT) uses high-frequency mechanical waves to inspect these parts, ensuring they operate reliably without compromising their integrity. It is possible to employ mathematical models to develop a deeper understanding of the acquired ultrasonic data and enhance defect imaging algorithms. In this paper, a model for the scattering of ultrasonic waves by a crack is derived in the time-frequency domain. The fractional Fourier transform (FrFT) is applied to an inhomogeneous wave equation where the forcing function is prescribed as a linear chirp, modulated by a Gaussian envelope. The homogeneous solution is found via the Born approximation which encapsulates information regarding the flaw geometry. The inhomogeneous solution is obtained via the inverse Fourier transform of a Gaussian-windowed linear chirp excitation. It is observed that, although the scattering profile of the flaw does not change, it is amplified. Thus, the theory demonstrates the enhanced signal-to-noise ratio permitted by the use of coded excitation, as well as establishing a time-frequency domain framework to assist in flaw identification and classification.

  17. Random medium model for cusping of plane waves.

    PubMed

    Li, Jia; Korotkova, Olga

    2017-09-01

    We introduce a model for a three-dimensional (3D) Schell-type stationary medium whose degree of the potential's correlation satisfies the Fractional Multi-Gaussian (FMG) function. Compared with the scattered profile produced by the Gaussian Schell-model (GSM) medium, the Fractional Multi-Gaussian Schell-model (FMGSM) medium gives rise to a sharp concave intensity apex in the scattered field. This implies that the FMGSM medium also accounts for a larger than Gaussian power in the bucket (PIB) in the forward scattering direction, hence being a better candidate than the GSM medium for generating highly focused (cusp-like) scattered profiles in the far zone. Compared with other mathematical models for the medium's correlation function that can produce similarly cusped scattered profiles, the FMG function offers unprecedented tractability, being a weighted superposition of Gaussian functions. Our results provide useful applications to energy counter problems and particle manipulation by weakly scattered fields.

  18. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    NASA Astrophysics Data System (ADS)

    Song, Jong-Won; Hirao, Kimihiko

    2015-07-01

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the computation time for periodic systems while improving the reproducibility of semiconductor bandgaps. Here we present a distance-based screening scheme tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme reduces the time cost of the HF exchange integration by efficiently decreasing the number of integrals in, specifically, the near-field region without incurring substantial changes in total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme required 1.56 times the computation time of a pure functional, compared with 1.84 times for the previous Gau-PBE and 3.34 times for HSE06.

  19. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    2015-07-14

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew–Burke–Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the computation time for periodic systems while improving the reproducibility of semiconductor bandgaps. Here we present a distance-based screening scheme tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme reduces the time cost of the HF exchange integration by efficiently decreasing the number of integrals in, specifically, the near-field region without incurring substantial changes in total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme required 1.56 times the computation time of a pure functional, compared with 1.84 times for the previous Gau-PBE and 3.34 times for HSE06.

  20. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond.

    PubMed

    Perdikaris, Paris; Karniadakis, George Em

    2016-05-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space and the efficient pursuit of global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration-versus-exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
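    The role of the predictive posterior in the sampling loop can be illustrated with a bare-bones single-fidelity GP regression (RBF kernel; the hyperparameters `ell` and `sigma_n` are arbitrary illustrative values, and the multi-fidelity auto-regressive coupling is omitted):

```python
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ell=0.5, sigma_n=1e-3):
    """GP posterior mean and variance with an RBF kernel; the posterior
    variance is what drives exploration in Bayesian optimization."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(Xtr, Xtr) + sigma_n * np.eye(len(Xtr))   # train covariance + noise
    Ks = k(Xte, Xtr)                                # test/train cross-covariance
    mean = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

    Near observed points the variance collapses, while far from the data it reverts to the prior; an acquisition rule that prefers high variance (exploration) or promising mean (exploitation) closes the loop.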

  1. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond

    PubMed Central

    Perdikaris, Paris; Karniadakis, George Em

    2016-01-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space and the efficient pursuit of global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration-versus-exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. PMID:27194481

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero-mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero-mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified ASD is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the cumulative distribution function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform into a realization of the desired waveform. Since the transformation preserves the zero crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD; the method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
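    The core of the procedure, generating a Gaussian realization with the target ASD and then pushing it through the monotonic CDF-to-inverse-CDF map, can be sketched as follows (the flat band-limited ASD and the Laplace target pdf are arbitrary illustrative choices, not from the report):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n = 1 << 14
# 1) Gaussian time history with a prescribed (flat, band-limited) ASD:
#    fixed spectral magnitudes, random phases, inverse FFT
spec = np.zeros(n // 2 + 1, dtype=complex)
spec[10:200] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 190))
g = np.fft.irfft(spec)
g /= g.std()
# 2) monotonic zero-memory map: Gaussian CDF, then target inverse CDF
u = 0.5 * (1.0 + np.vectorize(erf)(g / np.sqrt(2.0)))
x = np.where(u < 0.5, np.log(2.0 * u), -np.log(2.0 * (1.0 - u)))  # Laplace ppf
```

    Because the map is monotonic and odd, the zero crossings and peak locations of the Gaussian record carry over to `x`, which is why the ASD is not substantially changed.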

  3. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.

  4. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
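    A stripped-down version of the optimization, plain proximal gradient on the penalized Poisson negative log-likelihood with the l1 penalty and nonnegativity, in place of SPIRAL's separable quadratic machinery, might look like this (illustrative only; step size and penalty weight are arbitrary):

```python
import numpy as np

def poisson_nll(A, y, x, eps=1e-12):
    """Negative Poisson log-likelihood (up to the x-independent term)."""
    Ax = A @ x + eps
    return Ax.sum() - (y * np.log(Ax)).sum()

def spiral_l1_sketch(A, y, tau=1e-3, step=1e-3, n_iter=2000):
    """Toy proximal-gradient minimization of Poisson NLL + tau*||x||_1, x >= 0.
    (A simplification; SPIRAL uses separable quadratic surrogates per iteration.)"""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        Ax = A @ x + 1e-12
        grad = A.T @ (1.0 - y / Ax)                          # gradient of Poisson NLL
        x = np.clip(x - step * grad - step * tau, 0.0, None)  # shrink + project onto x >= 0
    return x
```

    The clip step is exactly the nonnegativity constraint of the paper; the l1 shrinkage is the sparsity penalty acting on the coefficient vector.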

  5. Interpretation of the Total Magnetic Field Anomalies Measured by the CHAMP Satellite Over a Part of Europe and the Pannonian Basin

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Toronyi, B.; Puszta, S.

    2012-01-01

    In this study we interpret the magnetic anomalies at satellite altitude over a part of Europe and the Pannonian Basin. These anomalies are derived from the total magnetic field measurements of the CHAMP satellite, reduced to an altitude of 324 km. An inversion method is used to interpret the total magnetic anomalies over the Pannonian Basin, with a three-dimensional triangular model used in the inversion. Two parameter distributions, Laplacian and Gaussian, are investigated. The regularized inversion is computed numerically with the Simplex and Simulated Annealing methods, and the anomalous source is located in the upper crust. A probable source of the magnetization is the exsolution of hematite-ilmenite minerals.

  6. Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.

    PubMed

    Sun, Dong; Zhao, Daomu

    2005-08-01

    By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.

  7. Stochastic resonance in a piecewise nonlinear model driven by multiplicative non-Gaussian noise and additive white noise

    NASA Astrophysics Data System (ADS)

    Guo, Yongfeng; Shen, Yajun; Tan, Jianguo

    2016-09-01

    The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.

  8. The Harmonic Oscillator with a Gaussian Perturbation: Evaluation of the Integrals and Example Applications

    ERIC Educational Resources Information Center

    Earl, Boyd L.

    2008-01-01

    A general result for the integrals of the Gaussian function over the harmonic oscillator wavefunctions is derived using generating functions. Using this result, an example problem of a harmonic oscillator with various Gaussian perturbations is explored in order to compare the results of precise numerical solution, the variational method, and…
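    For the ground state, the generating-function result reduces to a closed form that is easy to verify numerically (dimensionless oscillator units with psi_0(x) = pi^(-1/4) exp(-x^2/2); the perturbation strength c is an arbitrary illustrative value):

```python
import numpy as np

c = 0.7
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
psi0_sq = np.exp(-x ** 2) / np.sqrt(np.pi)            # |psi_0|^2, unit normalized
numeric = (psi0_sq * np.exp(-c * x ** 2)).sum() * dx  # <0| exp(-c x^2) |0>
exact = 1.0 / np.sqrt(1.0 + c)                        # Gaussian-times-Gaussian integral
```

    The same Gaussian-integral structure underlies the matrix elements between excited states; the generating-function approach of the paper organizes those into a single closed expression.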

  9. Geometric MCMC for infinite-dimensional inverse problems

    NASA Astrophysics Data System (ADS)

    Beskos, Alexandros; Girolami, Mark; Lan, Shiwei; Farrell, Patrick E.; Stuart, Andrew M.

    2017-04-01

    Bayesian inverse problems often involve sampling posterior distributions on infinite-dimensional function spaces. Traditional Markov chain Monte Carlo (MCMC) algorithms are characterized by deteriorating mixing times upon mesh-refinement, when the finite-dimensional approximations become more accurate. Such methods are typically forced to reduce step-sizes as the discretization gets finer, and thus are expensive as a function of dimension. Recently, a new class of MCMC methods with mesh-independent convergence times has emerged. However, few of them take into account the geometry of the posterior informed by the data. At the same time, recently developed geometric MCMC algorithms have been found to be powerful in exploring complicated distributions that deviate significantly from elliptic Gaussian laws, but are in general computationally intractable for models defined in infinite dimensions. In this work, we combine geometric methods on a finite-dimensional subspace with mesh-independent infinite-dimensional approaches. Our objective is to speed up MCMC mixing times, without significantly increasing the computational cost per step (for instance, in comparison with the vanilla preconditioned Crank-Nicolson (pCN) method). This is achieved by using ideas from geometric MCMC to probe the complex structure of an intrinsic finite-dimensional subspace where most data information concentrates, while retaining robust mixing times as the dimension grows by using pCN-like methods in the complementary subspace. The resulting algorithms are demonstrated in the context of three challenging inverse problems arising in subsurface flow, heat conduction and incompressible flow control. The algorithms exhibit up to two orders of magnitude improvement in sampling efficiency when compared with the pCN method.
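    The dimension-robust baseline the paper builds on, the pCN proposal, takes only a few lines (a sketch for a unit Gaussian prior; `log_like` is any user-supplied log-likelihood):

```python
import numpy as np

def pcn_step(u, log_like, beta, rng):
    """One preconditioned Crank-Nicolson step for posterior = N(0, I) prior x likelihood.
    The proposal sqrt(1 - beta^2) u + beta xi leaves the Gaussian prior invariant,
    so the accept ratio involves only the likelihood (hence mesh independence)."""
    xi = rng.standard_normal(u.shape)
    v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
    if np.log(rng.uniform()) < log_like(v) - log_like(u):
        return v, True
    return u, False
```

    For example, with a Gaussian likelihood centred at 1 in 50 dimensions, the chain samples the posterior N(0.5, 0.5 I); the geometric methods of the paper replace the isotropic proposal on the data-informed subspace while keeping pCN-like moves on the complement.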

  10. Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams

    NASA Technical Reports Server (NTRS)

    Steely, Sidney L.

    1993-01-01

    The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite Gaussian laser beams; Sturm's theorem provides a direct proof.
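
    The correspondence can be checked numerically. The sketch below (dimensionless oscillator units, numpy/scipy in place of the paper's Mathematica) integrates |psi_n|^2 between the classical turning points x = ±sqrt(2n+1):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import trapezoid
from math import factorial, pi, sqrt

def fraction_inside_classical_limits(n, num=20001):
    """Fraction of the n-th harmonic-oscillator (Hermite-Gaussian) mode's
    probability lying inside the classical turning points +/- sqrt(2n+1)."""
    x_max = np.sqrt(2 * n + 1)
    x = np.linspace(-x_max, x_max, num)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    psi = hermval(x, coeffs) * np.exp(-x**2 / 2)   # H_n(x) exp(-x^2/2)
    psi /= sqrt(2.0**n * factorial(n) * sqrt(pi))  # normalization
    return trapezoid(psi**2, x)

fracs = [fraction_inside_classical_limits(n) for n in (0, 5, 20)]
```

    For n = 0 this is erf(1) ≈ 0.843, and the fraction creeps toward unity with increasing mode order, in line with the Correspondence Principle argument of the abstract.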

  11. The adaptive parallel UKF inversion method for the shape of space objects based on the ground-based photometric data

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Liu, Hao

    2018-04-01

    A space object in a highly elliptical orbit appears only as a point image on ground-based imaging equipment, so it is difficult to resolve and identify its shape and attitude directly. In this paper a novel algorithm is presented for the estimation of spacecraft shape. An apparent-magnitude model suitable for the inversion of object information such as shape and attitude is established based on an analysis of photometric characteristics. A parallel adaptive shape-inversion algorithm based on the UKF is designed after deriving the dynamic equations of the nonlinear Gaussian system, including the influence of various drag forces. The results of a simulation study demonstrate the viability and robustness of the new filter and its fast convergence rate. It realizes the inversion of combined shapes with high accuracy, especially for cube and cylinder buses, and maintains a high success rate of inversion even with sparse photometric data.
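
    As an illustration of the UKF's core ingredient, here is a minimal unscented transform (generic Merwe-style sigma points and default-like parameter values, not the paper's filter design):

```python
import numpy as np

def unscented_transform(mu, P, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance P through a nonlinear function f
    using the standard sigma-point (unscented) transform."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([mu, mu + S.T, mu - S.T])     # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    mean = wm @ y
    diff = y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov

# Sanity check on a linear map, where the transform is exact: y = A x.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
mu0, P0 = np.array([1.0, -1.0]), np.eye(2)
m, C = unscented_transform(mu0, P0, lambda x: A @ x)
```

    A UKF applies this same machinery twice per step, once through the dynamics and once through the (here, photometric) measurement model, which is what lets it handle the nonlinearity without linearization.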

  12. The effect of noise and lipid signals on determination of Gaussian and non-Gaussian diffusion parameters in skeletal muscle.

    PubMed

    Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G

    2017-07-01

    This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. 
It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
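
    For orientation, the signal models compared in the paper can be written and fitted in a few lines. The b-values, parameter values and noise level below are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

# Diffusion signal models (S0 normalized to 1).
def mono(b, D):          return np.exp(-b * D)
def kurt_model(b, D, K): return np.exp(-b * D + (b * D) ** 2 * K / 6.0)
def stretched(b, D, a):  return np.exp(-((b * D) ** a))

b = np.linspace(0, 2000.0, 12)               # b-values, s/mm^2
rng = np.random.default_rng(1)
truth = kurt_model(b, 1.5e-3, 0.3)           # muscle-like D, mild kurtosis
data = truth + rng.normal(0, 0.005, b.size)  # noise only, no fat baseline

(D_k, K_k), _ = curve_fit(kurt_model, b, data, p0=(1e-3, 0.0))
(D_s, a_s), _ = curve_fit(stretched, b, data, p0=(1e-3, 1.0),
                          bounds=([0.0, 0.1], [1e-2, 2.0]))
```

    In vivo, an unsuppressed fat signal adds a baseline that violates all of these models, which is the paper's point: without adequate suppression or a baseline term in the fit, the recovered parameters are biased.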

  13. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
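
    The construction can be sketched in a few lines for the simpler collocated (non-staggered) grid: inverse-FFT the band-limited i·k spectrum, truncate, and taper with a window. The Gaussian window width below is an arbitrary illustrative choice, not the paper's optimized value:

```python
import numpy as np

N = 256
h = 2 * np.pi / N
x = np.arange(N) * h
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

# Analytical band-limited differentiator: inverse FFT of the i*k spectrum.
kernel = np.real(np.fft.ifft(1j * k))        # length-N circular stencil

M = 8                                        # truncation half-width
m = np.arange(-M, M + 1)
taper = np.exp(-((2.5 * m / M) ** 2))        # Gaussian window (sketch value)
short = kernel[m % N] * taper                # truncated, tapered CD operator

f = np.sin(3 * x)
df = sum(short[j] * np.roll(f, m[j]) for j in range(2 * M + 1))
err = np.max(np.abs(df - 3 * np.cos(3 * x)))
```

    The same recipe applied to a spectrum with a half-node phase shift yields the staggered-grid operator used for the first-order velocity-stress system.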

  14. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
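
    For N = 2 the Meijer G-function reduces to a modified Bessel function, p(z) = K0(|z|)/π, which makes a convenient numerical check of the closed form against Monte Carlo:

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# Density of the product of two independent standard normals:
# p(z) = K_0(|z|) / pi  (the N = 2 case of the Meijer G representation).
pdf = lambda z: k0(abs(z)) / np.pi

norm = 2 * quad(pdf, 0, np.inf)[0]    # symmetric; integrable log singularity at 0
p_unit = 2 * quad(pdf, 0, 1.0)[0]     # P(|Z| < 1)

rng = np.random.default_rng(0)
z = rng.standard_normal(200000) * rng.standard_normal(200000)
mc = np.mean(np.abs(z) < 1.0)
```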

  15. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
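
    Parameterization (2) is easy to illustrate: a smooth saturation-like field is captured by a small corner of its DCT coefficients. Grid size, bump shape and the number of retained coefficients below are arbitrary choices for the sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

n = 64
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
field = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)  # smooth gas "plume"

coeffs = dctn(field, norm="ortho")
keep = 8                                  # parsimonious parameterization:
trunc = np.zeros_like(coeffs)             # 64 coefficients instead of 4096
trunc[:keep, :keep] = coeffs[:keep, :keep]
recon = idctn(trunc, norm="ortho")

rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
```

    Inverting for 64 coefficients instead of 4096 cell values is what keeps MCMC sampling of the field tractable; parameterization (3) goes further by hard-coding the expected structure.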

  16. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
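
    Setting the probabilistic machinery aside, the pointwise quantity itself is elementary: for an implicit surface f = const the Gaussian curvature follows from the gradient and Hessian of f via the standard adjugate formula. A sketch, checked on a sphere:

```python
import numpy as np

def gaussian_curvature(grad, hess):
    """Gaussian curvature of the level surface f = const at a point, from
    the gradient and Hessian of f:  K = grad^T adj(H) grad / |grad|^4."""
    # Adjugate via inv * det (valid here since H is invertible).
    adj = np.linalg.inv(hess) * np.linalg.det(hess)
    g = np.asarray(grad)
    return g @ adj @ g / np.dot(g, g) ** 2

# Check on a sphere of radius R: f = x^2 + y^2 + z^2 - R^2 gives K = 1/R^2.
R = 2.0
p = np.array([0.0, 0.0, R])     # a point on the surface
grad = 2 * p
hess = 2 * np.eye(3)
K = gaussian_curvature(grad, hess)
```

    The paper's calculation is the distribution of exactly this quantity when grad and hess are the (correlated) Gaussian derivatives of a random f at a point of its nodal surface.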

  17. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
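
    The core averaging step is easy to demonstrate: for a Gaussian optical depth the ensemble-averaged transmissivity has a closed form, and discarding negative (unphysical) depths mimics the modified Gaussian idea. The mean and fluctuation values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 3.0, 0.8                  # mean optical depth and fluctuation
tau = rng.normal(mu, sigma, 500000)

# Ensemble-averaged transmissivity over Gaussian optical-depth fluctuations:
# <exp(-tau)> = exp(-mu + sigma^2/2), larger than exp(-mu) by Jensen.
analytic = np.exp(-mu + sigma**2 / 2)
mc = np.exp(-tau).mean()

# "Modified Gaussian": discard unphysical negative optical depths.
tau_pos = tau[tau > 0]
mc_pos = np.exp(-tau_pos).mean()
```

    Jensen's inequality makes the fluctuating medium more transparent on average than a uniform medium with the same mean optical depth, which is why the fluctuation statistics matter for the averaged reflectivity and transmissivity.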

  18. Empirical models for fitting of oral concentration time curves with and without an intravenous reference.

    PubMed

    Weiss, Michael

    2017-06-01

    Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.

  19. Ince-Gaussian series representation of the two-dimensional fractional Fourier transform.

    PubMed

    Bandres, Miguel A; Gutiérrez-Vega, Julio C

    2005-03-01

    We introduce the Ince-Gaussian series representation of the two-dimensional fractional Fourier transform in elliptical coordinates. A physical interpretation is provided in terms of field propagation in quadratic graded-index media whose eigenmodes in elliptical coordinates are derived for the first time to our knowledge. The kernel of the new series representation is expressed in terms of Ince-Gaussian functions. The equivalence among the Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian series representations is verified by establishing the relation among the three definitions.

  20. An empirical model for dissolution profile and its application to floating dosage forms.

    PubMed

    Weiss, Michael; Kriangkrai, Worawut; Sungthongjeen, Srisagul

    2014-06-02

    A sum of two inverse Gaussian functions is proposed as a highly flexible empirical model for fitting of in vitro dissolution profiles. The model was applied to quantitatively describe theophylline release from effervescent multi-layer coated floating tablets containing different amounts of the anti-tacking agents talc or glyceryl monostearate. Model parameters were estimated by nonlinear regression (mixed-effects modeling). The estimated parameters were used to determine the mean dissolution time, as well as to reconstruct the time course of release rate for each formulation, whereby the fractional release rate can serve as a diagnostic tool for classification of dissolution processes. The approach allows quantification of dissolution behavior and could provide additional insights into the underlying processes. Copyright © 2014 Elsevier B.V. All rights reserved.
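
    A sum-of-two-inverse-Gaussians rate model can be sketched directly with scipy's `invgauss`; the usual (μ, λ) parameterization maps to scipy's shape and scale as μ/λ and λ. The parameter values below are illustrative, not the paper's estimates:

```python
import numpy as np
from scipy.stats import invgauss
from scipy.integrate import quad

def release_rate(t, w, mu1, lam1, mu2, lam2):
    """Sum-of-two-inverse-Gaussians dissolution-rate model (sketch).
    mu_i are component means, lam_i shape parameters, w the first weight."""
    ig1 = invgauss.pdf(t, mu1 / lam1, scale=lam1)  # IG(mu, lam) in scipy form
    ig2 = invgauss.pdf(t, mu2 / lam2, scale=lam2)
    return w * ig1 + (1 - w) * ig2

params = dict(w=0.6, mu1=1.0, lam1=4.0, mu2=5.0, lam2=10.0)
total, _ = quad(lambda t: release_rate(t, **params), 0, np.inf)

# Mean dissolution time of the mixture: weighted mean of component means.
mdt = params["w"] * params["mu1"] + (1 - params["w"]) * params["mu2"]
```

    Integrating the fitted rate gives the cumulative dissolution profile, and the weighted component means give the mean dissolution time directly, as used in the paper.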

  1. Image contrast enhancement based on a local standard deviation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-12-31

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high-frequency components of an image. In the literature, the gain is usually inversely proportional to the local standard deviation (LSD) or is a constant. But these choices cause two problems in practical applications, i.e., noise overenhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
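
    The structure of the algorithm is simple to sketch. Below, a plain inverse-LSD gain is clipped at a maximum as a crude stand-in for the paper's nonlinear gain function; window size and clip level are arbitrary:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ace(img, win=7, max_gain=4.0):
    """Adaptive contrast enhancement sketch: amplify the high-frequency
    part (x - local mean) with a gain depending on the local standard
    deviation (LSD).  Clipping the inverse-LSD gain is a simple guard
    against the noise over-enhancement the paper addresses; the nonlinear
    gain proposed there is more refined."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    var = uniform_filter(img**2, win) - mean**2
    lsd = np.sqrt(np.clip(var, 0, None))
    gain = np.minimum(max_gain, np.mean(lsd) / (lsd + 1e-6))
    return mean + gain * (img - mean)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
out = ace(img)
```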

  2. Elegant Ince-Gaussian beams in a quadratic-index medium

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2011-09-01

    Elegant Ince-Gaussian beams, which are the exact solutions of the paraxial wave equation in a quadratic-index medium, are derived in elliptical coordinates. These kinds of beams are an alternative form of the standard Ince-Gaussian beams and they display better symmetry between the Ince polynomials and the Gaussian function in mathematics. The transverse intensity distribution and the phase of the elegant Ince-Gaussian beams are discussed.

  3. Kalman-filtered compressive sensing for high resolution estimation of anthropogenic greenhouse gas emissions from sparse measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia

    2013-09-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study if the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero-memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
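
    The ZMNL step is the classical probability integral transform. A sketch for a single channel, with an exponential target marginal as an arbitrary example:

```python
import numpy as np
from scipy import stats

def zmnl(x, target_ppf):
    """Zero-memory nonlinearity: map a standard-normal time history to one
    with the specified marginal via the probability integral transform."""
    return target_ppf(stats.norm.cdf(x))

rng = np.random.default_rng(3)
x = rng.standard_normal(100000)
# Example target marginal: an exponential distribution with mean 2.
y = zmnl(x, stats.expon(scale=2.0).ppf)
```

    Because the map is monotone and memoryless, the ordering of the samples is preserved; as the abstract notes, the auto spectral density is only approximately preserved, which is why the Gaussian vector is generated first with the specified cross-spectra and transformed afterwards.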

  5. Novel transform for image description and compression with implementation by neural architectures

    NASA Astrophysics Data System (ADS)

    Ben-Arie, Jezekiel; Rao, Raghunath K.

    1991-10-01

    A general method for signal representation using nonorthogonal basis functions composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
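
    Because the Gaussian basis is nonorthogonal, coefficients cannot be obtained by simple inner products; a direct least-squares solve makes the point (the paper's schemes replace this with heuristic search and adaptive lattices). Centers, width, and test signal below are arbitrary:

```python
import numpy as np

# Design matrix of 15 Gaussian basis functions on [0, 1].
t = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 15)
width = 0.05
A = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# A smooth test signal and its least-squares Gaussian-basis representation.
signal = np.sin(2 * np.pi * 3 * t) * np.exp(-t)
coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
recon = A @ coeffs
rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```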

  6. Development and evaluation of an automatically adjusting coarse-grained force field for a β-O-4 type lignin from atomistic simulations

    NASA Astrophysics Data System (ADS)

    Li, Wenzhuo; Zhao, Yingying; Huang, Shuaiyu; Zhang, Song; Zhang, Lin

    2017-01-01

    The goal of this work was to develop a coarse-grained (CG) model of a β-O-4 type lignin polymer, because of the time-consuming process required to achieve equilibrium for its atomistic model. The automatic adjustment method was used to develop the lignin CG model, which enables easy discrimination between chemically-varied polymers. In the process of building the lignin CG model, a sum of n Gaussian functions was obtained by an approximation of the corresponding atomistic potentials derived from a simple Boltzmann inversion of the distributions of the structural parameters. This allowed the establishment of the potential functions of the CG bond stretching and angular bending. To obtain the potential function of the CG dihedral angle, an algorithm similar to a Fourier progression form was employed together with a nonlinear curve-fitting method. The numerical potentials of the nonbonded portion of the lignin CG model were obtained using a potential inversion iterative method derived from the corresponding atomistic nonbonded distributions. The study results showed that the proposed CG model of lignin agreed well with its atomistic model in terms of the distributions of bond lengths, bending angles, dihedral angles and nonbonded distances between the CG beads. The lignin CG model also reproduced the static and dynamic properties of the atomistic model. The results of the comparative evaluation of the two models suggested that the designed lignin CG model was efficient and reliable.
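
    The simple Boltzmann inversion step can be sketched for a single harmonic "bond"; spring constant, temperature and histogram settings below are illustrative:

```python
import numpy as np

kB_T = 2.494                      # kJ/mol at 300 K
b0, k_spring = 1.5, 800.0         # bond length (nm) and stiffness (kJ/mol/nm^2)

# Sample the bond length from its Boltzmann distribution...
rng = np.random.default_rng(4)
samples = rng.normal(b0, np.sqrt(kB_T / k_spring), 200000)

# ...then invert the histogram: U(x) = -kT ln P(x) + const.
hist, edges = np.histogram(samples, bins=60, range=(1.33, 1.67), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
U = -kB_T * np.log(hist[mask])
U -= U.min()

# A quadratic fit recovers the stiffness (U = 0.5 k (x - b0)^2 + const).
k_recovered = 2.0 * np.polyfit(centers[mask] - b0, U, 2)[0]
```

    In the paper the inverted potentials are then approximated by sums of Gaussians (bonds, angles) rather than a single parametric form, and the nonbonded potentials are refined iteratively.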

  7. Non-Gaussian noise-weakened stability in a foraging colony system with time delay

    NASA Astrophysics Data System (ADS)

    Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong

    2018-02-01

    In this paper, the dynamical properties in a foraging colony system with time delay and non-Gaussian noise were investigated. Using delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and normalization correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves. Numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity, this maximum for ART is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum for ART is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it, namely, the time delay and departure from Gaussian noise play opposite roles in ART or NCF.

  8. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
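
    The mechanism is easy to demonstrate with a toy superstatistical sample: drawing the variance of a Gaussian from an exponential distribution yields a Laplace (heavy-tailed) marginal, Gaussian in each "environment" but not overall:

```python
import numpy as np

# Superstatistics in one line: a Gaussian whose variance is itself random.
# Exponential mixing of the variance gives a Laplace marginal (a Gaussian
# scale mixture), non-Gaussian even though each environment is Gaussian.
rng = np.random.default_rng(5)
n = 400000
var = rng.exponential(1.0, n)
v = rng.standard_normal(n) * np.sqrt(var)

# This mixture has unit variance and excess kurtosis 3 (Laplace value).
kurt = np.mean(v**4) / np.mean(v**2) ** 2 - 3.0
```

    In the paper the random parameter enters the memory kernel of the generalised Langevin equation rather than a bare variance, but the origin of the non-Gaussian one-point statistics is the same random parametrisation.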

  9. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function, where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of the bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
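
    The local transformation itself is one line. The sketch below applies it pointwise to a unit-variance Gaussian sample and checks the leading-order skewness; the fNL value is arbitrary and far larger than CMB bounds, purely for signal-to-noise in the toy check:

```python
import numpy as np

# Local-type non-Gaussianity: phi_NG = phi + fNL (phi^2 - <phi^2>),
# applied pointwise to a Gaussian field.
rng = np.random.default_rng(6)
phi = rng.standard_normal(500000)
fnl = 0.1
phi_ng = phi + fnl * (phi**2 - 1.0)

# To leading order in fNL the skewness is 6 fNL (unit-variance phi).
skew = np.mean(phi_ng**3) / np.mean(phi_ng**2) ** 1.5
```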

  10. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals.

    PubMed

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-21

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  12. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  13. Increasing the efficiency and accuracy of time-resolved electronic spectra calculations with on-the-fly ab initio quantum dynamics methods

    NASA Astrophysics Data System (ADS)

    Vanicek, Jiri

    2014-03-01

    Rigorous quantum-mechanical calculations of coherent ultrafast electronic spectra remain difficult. I will present several approaches developed in our group that increase the efficiency and accuracy of such calculations: First, we justified the feasibility of evaluating time-resolved spectra of large systems by proving that the number of trajectories needed for convergence of the semiclassical dephasing representation/phase averaging is independent of dimensionality. Recently, we further accelerated this approximation with a cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. The accuracy of potential energy surfaces was increased by combining the dephasing representation with accurate on-the-fly ab initio electronic structure calculations, including nonadiabatic and spin-orbit couplings. Finally, the inherent semiclassical approximation was removed in the exact quantum Gaussian dephasing representation, in which semiclassical trajectories are replaced by communicating frozen Gaussian basis functions evolving classically with an average Hamiltonian. Among other examples I will present an on-the-fly ab initio semiclassical dynamics calculation of the dispersed time-resolved stimulated emission spectrum of the 54-dimensional azulene. This research was supported by EPFL and by the Swiss National Science Foundation NCCR MUST (Molecular Ultrafast Science and Technology) and Grant No. 200021124936/1.

  14. A fractional Fourier transform analysis of the scattering of ultrasonic waves

    PubMed Central

    Tant, Katherine M.M.; Mulholland, Anthony J.; Langer, Matthias; Gachagan, Anthony

    2015-01-01

    Many safety critical structures, such as those found in nuclear plants, oil pipelines and in the aerospace industry, rely on key components that are constructed from heterogeneous materials. Ultrasonic non-destructive testing (NDT) uses high-frequency mechanical waves to inspect these parts, ensuring they operate reliably without compromising their integrity. It is possible to employ mathematical models to develop a deeper understanding of the acquired ultrasonic data and enhance defect imaging algorithms. In this paper, a model for the scattering of ultrasonic waves by a crack is derived in the time–frequency domain. The fractional Fourier transform (FrFT) is applied to an inhomogeneous wave equation where the forcing function is prescribed as a linear chirp, modulated by a Gaussian envelope. The homogeneous solution is found via the Born approximation which encapsulates information regarding the flaw geometry. The inhomogeneous solution is obtained via the inverse Fourier transform of a Gaussian-windowed linear chirp excitation. It is observed that, although the scattering profile of the flaw does not change, it is amplified. Thus, the theory demonstrates the enhanced signal-to-noise ratio permitted by the use of coded excitation, as well as establishing a time–frequency domain framework to assist in flaw identification and classification. PMID:25792967
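
    The forcing function described above, a linear chirp modulated by a Gaussian envelope, can be written down directly; the frequency sweep, duration, and window width below are invented illustrative values:

```python
import numpy as np

# Hypothetical pulse parameters: chirp swept from f0 to f1 over duration T,
# Gaussian window centered at T/2 with width sigma.
f0, f1, T, sigma = 1.0e6, 5.0e6, 1.0e-5, 2.0e-6
t = np.linspace(0.0, T, 4096)
k = (f1 - f0) / T                                 # chirp rate
chirp = np.cos(2.0*np.pi*(f0*t + 0.5*k*t**2))     # linear chirp
envelope = np.exp(-0.5*((t - T/2.0)/sigma)**2)    # Gaussian window
signal = envelope * chirp                         # coded excitation
```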

  15. Rightfulness of Summation Cut-Offs in the Albedo Problem with Gaussian Fluctuations of the Density of Scatterers

    NASA Astrophysics Data System (ADS)

    Selim, M. M.; Bezák, V.

    2003-06-01

    The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function ε(x). Then the optical length X = ∫_0^L ε(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T^2(X), R^2(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As shown in a former paper by the authors (Acta Physica Slovaca (2003)), a rectification can be managed when a `modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the root-mean-square deviation of X is sufficiently small in comparison with the mean value of X, the standard Gaussian averaging works well provided that the summation in the series representing the variable T^(m-j)(X) R^j(X) (m = 1, 2, …; j = 1, …, m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.

  16. From plane waves to local Gaussians for the simulation of correlated periodic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de

    2016-08-28

    We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.

  17. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  18. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    NASA Astrophysics Data System (ADS)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-01

    We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but designed in such a way as to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  19. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but designed in such a way as to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  20. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of our proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., showing small variability in the results. PMID:27780261
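
    The Gaussianization idea that MaxEnt generalizes can be sketched with a generic rank-based transform: push the observation through its empirical CDF and then through the standard normal inverse CDF. This is an illustration of the baseline idea, not the MaxEnt algorithm itself; the lognormal distortion and sample size are arbitrary:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(x):
    """Map samples to an approximately standard normal variable via the
    empirical CDF followed by the standard normal inverse CDF."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1   # ranks 1..n
    u = (ranks - 0.5) / n                   # empirical CDF values in (0,1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])

rng = np.random.default_rng(0)
y = np.exp(rng.normal(size=2000))   # a monotonically (exp) distorted Gaussian
z = gaussianize(y)                  # approximately N(0, 1)
```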

  1. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of our proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., showing small variability in the results.

  2. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE PAGES

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-27

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but designed in such a way as to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  3. Surface defects evaluation system based on electromagnetic model simulation and inverse-recognition calibration method

    NASA Astrophysics Data System (ADS)

    Yang, Yongying; Chai, Huiting; Li, Chen; Zhang, Yihui; Wu, Fan; Bai, Jian; Shen, Yibing

    2017-05-01

    Digitized evaluation of micro sparse defects on large fine optical surfaces is one of the challenges in the field of optical manufacturing and inspection. The surface defects evaluation system (SDES) for large fine optical surfaces is developed based on our previously reported work. In this paper, an electromagnetic simulation model based on the Finite-Difference Time-Domain (FDTD) method for vector diffraction theory is first established to study the law of microscopic scattering dark-field imaging. Given the aberration in actual optical systems, a point spread function (PSF) approximated by a Gaussian function is introduced in the extrapolation from the near field to the far field, and the scatter intensity distribution in the image plane is deduced. Analysis shows that both diffraction-broadening imaging and geometrical imaging should be considered in precise size evaluation of defects. Thus, a novel inverse-recognition calibration method is put forward to avoid confusion caused by the diffraction-broadening effect. The evaluation method is applied to quantitative evaluation of defect information. Evaluation results for samples of various materials obtained by SDES are compared with those from an OLYMPUS microscope to verify the micron-scale resolution and precision. The established system has been applied to inspect defects on large fine optical surfaces and can achieve defect inspection of surfaces as large as 850 mm×500 mm with a resolution of 0.5 μm.

  4. Correlations between jets and charged particles in PbPb and pp collisions at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV

    DOE PAGES

    Khachatryan, Vardan

    2016-02-23

    The quark-gluon plasma is studied via medium-induced changes to correlations between jets and charged particles in PbPb collisions compared to pp reference data. This analysis uses data sets from PbPb and pp collisions with integrated luminosities of 166 inverse microbarns and 5.3 inverse picobarns, respectively, collected at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV. The angular distributions of charged particles are studied as a function of relative pseudorapidity (Δη) and relative azimuthal angle (ΔΦ) with respect to reconstructed jet directions. Charged particles are correlated with all jets with transverse momentum (p_T) above 120 GeV, and with the leading and subleading jets (the highest and second-highest in p_T, respectively) in a selection of back-to-back dijet events. Modifications in PbPb data relative to pp reference data are characterized as a function of PbPb collision centrality and charged particle p_T. A centrality-dependent excess of low-p_T particles is present for all jets studied, and is most pronounced in the most central events. This excess of low-p_T particles follows a Gaussian-like distribution around the jet axis, and extends to large relative angles of Δη ≈ 1 and ΔΦ ≈ 1.

  5. Generation of ultra-long pure magnetization needle and multiple spots by phase modulated doughnut Gaussian beam

    NASA Astrophysics Data System (ADS)

    Udhayakumar, M.; Prabakaran, K.; Rajesh, K. B.; Jaroszewicz, Z.; Belafhal, Abdelmajid; Velauthapillai, Dhayalan

    2018-06-01

    Based on vector diffraction theory and the inverse Faraday effect (IFE), the light-induced magnetization distribution of a tightly focused azimuthally polarized doughnut Gaussian beam, superimposed with a helical phase and modulated by an optimized multi-belt complex phase filter (MBCPF), is analysed numerically. It is noted that by adjusting the radii of the different rings of the complex phase filter, one can achieve novel magnetization focal distributions, such as a sub-wavelength (0.29λ) and super-long (52.2λ) longitudinal magnetic probe suitable for all-optical magnetic recording, as well as multiple magnetization chains with four, six, and eight sub-wavelength spherical magnetization spots suitable for multiple trapping of magnetic particles.

  6. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, either graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
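
    The property the GRBF model relies on, that the convolution of two Gaussians is again a Gaussian whose variances add, can be verified numerically (the grid and widths below are arbitrary illustrative choices):

```python
import numpy as np

dx = 0.01
x = np.linspace(-10.0, 10.0, 2001)
s1, s2 = 0.8, 0.6
g1 = np.exp(-x**2 / (2*s1**2)) / (s1*np.sqrt(2*np.pi))
g2 = np.exp(-x**2 / (2*s2**2)) / (s2*np.sqrt(2*np.pi))

conv = np.convolve(g1, g2, mode="same") * dx   # discrete convolution of g1 and g2
s3 = np.hypot(s1, s2)                          # sqrt(s1**2 + s2**2): widths add in quadrature
g3 = np.exp(-x**2 / (2*s3**2)) / (s3*np.sqrt(2*np.pi))
max_err = float(np.max(np.abs(conv - g3)))     # should be tiny on this grid
```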

  7. Theoretical investigation of gas-surface interactions

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.

    1990-01-01

    A Dirac-Hartree-Fock code was developed for polyatomic molecules. The program uses integrals over symmetry-adapted real spherical harmonic Gaussian basis functions generated by a modification of the MOLECULE integrals program. A single Gaussian function is used for the nuclear charge distribution, to ensure proper boundary conditions at the nuclei. The Gaussian primitive functions are chosen to satisfy the kinetic balance condition. However, contracted functions which do not necessarily satisfy this condition may be used. The Fock matrix is constructed in the scalar basis and transformed to a jj-coupled 2-spinor basis before diagonalization. The program was tested against numerical results for atoms with a Gaussian nucleus and diatomic molecules with point nuclei. The energies converge on the numerical values as the basis set size is increased. Full use of molecular symmetry (restricted to D_2h and its subgroups) is yet to be implemented.

  8. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals by using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by particles and the spectral collimated transmittance. Sensitivity and objective function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e. the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signals; and the multi-wavelength multi-angle light scattering and spectral transmittance signals). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.

  9. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
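
    A minimal Monte Carlo sketch in the spirit of the technique described above: sample the three Cartesian components from zero-mean normals with unequal standard deviations (the values below are hypothetical) and estimate the mean and standard deviation of the resulting Delta v magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmas = np.array([1.0, 2.0, 3.0])       # per-axis standard deviations (hypothetical, m/s)
samples = rng.normal(0.0, sigmas, size=(100_000, 3))  # zero-mean Gaussian components
mag = np.linalg.norm(samples, axis=1)    # Delta-v magnitudes
mean_dv, std_dv = float(mag.mean()), float(mag.std())
```

    By Jensen's inequality the estimated mean magnitude always lies below sqrt(sigma_x**2 + sigma_y**2 + sigma_z**2), the root of the second moment.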

  10. Bayesian inference in geomagnetism

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  11. Data from fitting Gaussian process models to various data sets using eight Gaussian process software packages.

    PubMed

    Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M

    2018-06-01

    This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.

  12. Consistency relations for sharp inflationary non-Gaussian features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris

    If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three-point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.

  13. Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2014-07-17

    Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
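
    Since the lineshape functions above assume Ornstein-Uhlenbeck dynamics for the bath modes, whose stationary autocorrelation is exp(-t/tau), that property can be checked with a short simulation (the time step, correlation time, and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
tau, dt, n = 1.0, 0.02, 300_000            # correlation time, step, sample count
rho = np.exp(-dt / tau)                    # one-step decay factor
eps = rng.normal(0.0, np.sqrt(1.0 - rho**2), size=n)
x = np.empty(n)
x[0] = rng.normal()                        # start in the stationary N(0,1) state
for i in range(1, n):                      # exact discretization of the OU process
    x[i] = rho * x[i-1] + eps[i]

lag = 50                                   # lag*dt == tau, so expect acf near exp(-1)
acf = float(np.mean(x[:-lag] * x[lag:]))   # stationary, unit-variance estimate
```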

  14. Feasibility study on the least square method for fitting non-Gaussian noise data

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

    This study investigates the feasibility of the least-squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial, and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
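
    A small numerical sketch of such a comparison, using a Cauchy draw as a crude stand-in for heavy-tailed Lévy noise (the fitted line, noise level, and sample size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
y_true = 2.0 * x + 1.0                     # invented "exact" linear function

gauss = y_true + 0.05 * rng.normal(size=x.size)           # Gaussian noise
heavy = y_true + 0.05 * rng.standard_cauchy(size=x.size)  # heavy-tailed stand-in

A = np.vstack([x, np.ones_like(x)]).T      # design matrix for slope, intercept
coef_g, *_ = np.linalg.lstsq(A, gauss, rcond=None)
coef_h, *_ = np.linalg.lstsq(A, heavy, rcond=None)
err_g = float(np.max(np.abs(coef_g - [2.0, 1.0])))
err_h = float(np.max(np.abs(coef_h - [2.0, 1.0])))
```

    The Gaussian-noise fit recovers the coefficients closely; the heavy-tailed fit is at the mercy of whichever outliers the draw produced, since the squared loss gives them unbounded influence.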

  15. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.

  16. GAUSSIAN 76: An ab initio Molecular Orbital Program

    DOE R&D Accomplishments Database

    Binkley, J. S.; Whiteside, R.; Hariharan, P. C.; Seeger, R.; Hehre, W. J.; Lathan, W. A.; Newton, M. D.; Ditchfield, R.; Pople, J. A.

    1978-01-01

    Gaussian 76 is a general-purpose computer program for ab initio Hartree-Fock molecular orbital calculations. It can handle basis sets involving s, p and d-type Gaussian functions. Certain standard sets (STO-3G, 4-31G, 6-31G*, etc.) are stored internally for easy use. Closed shell (RHF) or unrestricted open shell (UHF) wave functions can be obtained. Facilities are provided for geometry optimization to potential minima and for limited potential surface scans.

  17. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
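    A bare-bones sketch of the cdf-inverse-cdf construction described above, assuming an AR(1) Gaussian core process and an exponential target marginal (the paper's marginal comes from a nonparametric Bayesian prior):

```python
# Gaussian copula transform: AR(1) dynamics with a non-Gaussian marginal.
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(1)
phi = 0.5
n = 5000
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):                        # Gaussian AR(1) core process
    z[t] = phi * z[t - 1] + rng.normal()

sd = 1.0 / np.sqrt(1.0 - phi**2)             # stationary standard deviation
u = norm.cdf(z / sd)                         # Gaussian cdf -> uniforms
x = expon.ppf(u)                             # inverse cdf -> exponential marginal
```

    The transformed series keeps the serial dependence of the Gaussian core while its marginal is exponential.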

  18. Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.

    DTIC Science & Technology

    1987-09-18

    breakdown point and influence function . The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded... influence function , at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.

  19. Explicitly-correlated Gaussian geminals in electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Szalewicz, Krzysztof; Jeziorski, Bogumił

    2010-11-01

    Explicitly correlated functions have been used since 1929, but initially only for two-electron systems. In 1960, Boys and Singer showed that if the correlating factor is of Gaussian form, many-electron integrals can be computed for general molecules. The capability of explicitly correlated Gaussian (ECG) functions to accurately describe many-electron atoms and molecules was demonstrated only in the early 1980s when Monkhorst, Zabolitzky and the present authors cast the many-body perturbation theory (MBPT) and coupled cluster (CC) equations as a system of integro-differential equations and developed techniques of solving these equations with two-electron ECG functions (Gaussian-type geminals, GTG). This work brought a new accuracy standard to MBPT/CC calculations. In 1985, Kutzelnigg suggested that the linear r12 correlating factor can also be employed if n-electron integrals, n > 2, are factorised with the resolution of identity. Later, this factor was replaced by more general functions f(r12), most often by the Slater-type factor exp(-γr12), usually represented as linear combinations of Gaussian functions, which makes the resulting approach (called F12) a special case of the original GTG expansion. The current state of the art is that, for few-electron molecules, ECGs provide more accurate results than any other available basis, but for larger systems the F12 approach is the method of choice, giving significant improvements over orbital calculations.
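    The representation of a Slater-type correlating factor as a linear combination of Gaussians can be sketched with a simple least-squares fit; the grid and exponents below are ad hoc assumptions, not an optimized STO-nG or F12 parameterization.

```python
# Fit exp(-r) (a Slater factor, gamma = 1) by a sum of Gaussians.
import numpy as np

r = np.linspace(0.0, 5.0, 200)
target = np.exp(-r)

alphas = np.array([0.15, 0.7, 3.0, 15.0])    # assumed geometric-like exponents
A = np.exp(-np.outer(r**2, alphas))          # Gaussian basis on the grid

coef, *_ = np.linalg.lstsq(A, target, rcond=None)
fit = A @ coef
rms = np.sqrt(np.mean((fit - target) ** 2))
```

    The fit is good away from r = 0; the residual cusp error at the origin is the well-known limitation of smooth Gaussians.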

  20. exocartographer: Constraining surface maps and orbital parameters of exoplanets

    NASA Astrophysics Data System (ADS)

    Farr, Ben; Farr, Will M.; Cowan, Nicolas B.; Haggard, Hal M.; Robinson, Tyler

    2018-05-01

    exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian Processes use the data to fit for the characteristic length scale of the map and enforce smooth maps.

  1. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  2. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
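    The complexity contrast described above can be sketched on the tridiagonal system from a 1D Poisson finite-difference discretization, assuming scipy's banded solver as the structured alternative to dense elimination (the paper treats a 2D spreading-resistance field):

```python
# Dense O(n^3) elimination vs. an O(n) banded solve for -u'' = 1, u(0)=u(1)=0.
import numpy as np
from scipy.linalg import solve_banded

n = 9
h = 1.0 / (n + 1)
main = np.full(n, 2.0) / h**2
off = np.full(n - 1, -1.0) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
f = np.ones(n)

u_dense = np.linalg.solve(A, f)              # general dense elimination

ab = np.zeros((3, n))                        # banded storage: (1 sub, 1 super)
ab[0, 1:] = off                              # superdiagonal
ab[1, :] = main                              # diagonal
ab[2, :-1] = off                             # subdiagonal
u_banded = solve_banded((1, 1), ab, f)       # Thomas-type banded solve
```

    Both solvers return the nodal values of the exact solution u = x(1-x)/2, since central differences are exact for quadratics.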

  3. Coherent superposition of propagation-invariant laser beams

    NASA Astrophysics Data System (ADS)

    Soskind, R.; Soskind, M.; Soskind, Y. G.

    2012-10-01

    The coherent superposition of propagation-invariant laser beams represents an important beam-shaping technique, and results in new beam shapes which retain the unique property of propagation invariance. Propagation-invariant laser beam shapes depend on the order of the propagating beam, and include Hermite-Gaussian and Laguerre-Gaussian beams, as well as the recently introduced Ince-Gaussian beams which additionally depend on the beam ellipticity parameter. While the superposition of Hermite-Gaussian and Laguerre-Gaussian beams has been discussed in the past, the coherent superposition of Ince-Gaussian laser beams has not received significant attention in literature. In this paper, we present the formation of propagation-invariant laser beams based on the coherent superposition of Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian beams of different orders. We also show the resulting field distributions of the superimposed Ince-Gaussian laser beams as a function of the ellipticity parameter. By changing the beam ellipticity parameter, we compare the various shapes of the superimposed propagation-invariant laser beams transitioning from Laguerre-Gaussian beams at one ellipticity extreme to Hermite-Gaussian beams at the other extreme.
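    A sketch of mode superposition at the beam waist, assuming 1D scalar Hermite-Gaussian profiles (propagation and the Ince-Gaussian case are omitted):

```python
# Normalized 1D Hermite-Gaussian modes and an equal-weight coherent superposition.
import numpy as np
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

def hg_mode(n, x):
    """Normalized 1D Hermite-Gaussian mode at the beam waist."""
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2.0)

x = np.linspace(-10.0, 10.0, 4001)
u0, u1 = hg_mode(0, x), hg_mode(1, x)
field = (u0 + u1) / sqrt(2.0)                # coherent superposition
intensity = field**2
```

    The modes are orthonormal, so the superposed field remains normalized while its intensity profile differs from either constituent mode.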

  4. Transient Calibration of a Variably-Saturated Groundwater Flow Model By Iterative Ensemble Smoothering: Synthetic Case and Application to the Flow Induced During Shaft Excavation and Operation of the Bure Underground Research Laboratory

    NASA Astrophysics Data System (ADS)

    Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.

    2017-12-01

    The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, could be an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows us to address, to some extent, the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. By assimilating the available data in one single step, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction, computed from approximations of covariance matrices, is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution.
    Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to determine how best to leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian or multipoint simulations that is as conceptually consistent as possible. The performance of the algorithm with additional steps that help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis or resampling of facies from the training image using updated local probability constraints, will be assessed.
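    A single Kalman-type ensemble smoother update for a toy linear forward model illustrates the covariance-based correction discussed above; the operator, dimensions, and noise level are assumptions, and the real application is nonlinear and iterated.

```python
# One ensemble smoother update with perturbed observations (toy linear model).
import numpy as np

rng = np.random.default_rng(2)
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # assumed forward operator
m_true = np.array([1.0, -0.5])
sigma_d = 0.1
d_obs = G @ m_true + rng.normal(0.0, sigma_d, 3)

ne = 500
M = rng.normal(0.0, 1.0, (2, ne))            # prior multi-Gaussian ensemble
D = G @ M                                    # predicted data per member

Mc = M - M.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)
C_md = Mc @ Dc.T / (ne - 1)                  # parameter/data cross-covariance
C_dd = Dc @ Dc.T / (ne - 1)                  # data covariance
K = C_md @ np.linalg.inv(C_dd + sigma_d**2 * np.eye(3))

Dpert = d_obs[:, None] + rng.normal(0.0, sigma_d, (3, ne))
M_post = M + K @ (Dpert - D)                 # updated ensemble
```

    The updated ensemble mean moves toward the true parameters and the ensemble spread contracts, which is the behavior the iterative scheme exploits.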

  5. A Separable Insertion Method to Calculate Atomic and Molecular Resonances on a FE-DVR Grid using Exterior Complex Scaling

    NASA Astrophysics Data System (ADS)

    Abeln, Brant Anthony

    The study of metastable electronic resonances, anion or neutral states of finite lifetime, in molecules is an important area of research where currently no theoretical technique is generally applicable. The role of theory is to calculate both the position and width, which is proportional to the inverse of the lifetime, of these resonances and how they vary with respect to nuclear geometry in order to generate potential energy surfaces. These surfaces are the basis of time-dependent models of the molecular dynamics where the system moves towards vibrational excitation or fragmentation. Three fundamental electronic processes that can be modeled this way are dissociative electronic attachment, vibrational excitation through electronic impact and autoionization. Currently, experimental investigation into these processes is being performed on polyatomic molecules while theoreticians continue their fifty-year-old search for robust methods to calculate them. The separable insertion method, investigated in this thesis, seeks to tackle the problem of calculating metastable resonances by using existing quantum chemistry tools along with a grid-based method employing exterior complex scaling (ECS). Modern quantum chemistry methods are extremely efficient at calculating ground and (bound) excited electronic states of atoms and molecules by utilizing Gaussian basis functions. These functions provide both a numerically fast and analytic solution to the necessary two-electron, six-dimensional integrals required in structure calculations. However, these computer programs, based on analytic Gaussian basis sets, cannot construct solutions that are not square-integrable, such as resonance wavefunctions. ECS, on the other hand, can formally calculate resonance solutions by rotating the asymptotic electronic coordinates into the complex plane. 
The complex Siegert energies for resonances, Eres = ER - iGamma/2 where ER is the real-valued position of the resonance and Gamma is the width of the resonance, can be found directly as an isolated pole in the complex energy plane. Unlike straight complex scaling, ECS on the electronic coordinates overcomes the non-analytic behavior of the nuclear attraction potential, as a function of complex [special characters omitted] where the sum is over each nucleus in a molecular system. Discouragingly, the Gaussian basis functions, which are computationally well-suited for bound electronic structure, fail at forming an effective basis set for ECS due to the derivative discontinuity generated by the complex coordinate rotation and the piecewise-defined contour. This thesis seeks to explore methods for implementing ECS indirectly without losing the numerical simplicity and power of Gaussian basis sets. The separable insertion method takes advantage of existing software by constructing an N^2-term separable potential of the target system using Gaussian functions to be inserted into a finite-element discrete variable representation (FE-DVR) grid that implements ECS. This work reports an exhaustive investigation into this approach for calculating resonances. This thesis shows that this technique is successful at describing an anion shape resonance of a closed-shell atom or molecule in the static-exchange approximation. The method is applied to the 2P Be-, 2Πg N2- and 2Πu CO2- shape resonances to calculate their complex Siegert energies. Additionally, many details on the exact construction of the separable potential and of the expansion basis are explored. The future work considers methods for faster convergence of the resonance energy, moving beyond the static-exchange approximation and applying this technique to polyatomic systems of interest.

  6. Decoupling of rotational and translational diffusion in supercooled colloidal fluids

    PubMed Central

    Edmond, Kazem V.; Elsesser, Mark T.; Hunter, Gary L.; Pine, David J.; Weeks, Eric R.

    2012-01-01

    We use confocal microscopy to directly observe 3D translational and rotational diffusion of tetrahedral clusters, which serve as tracers in colloidal supercooled fluids. We find that as the colloidal glass transition is approached, translational and rotational diffusion decouple from each other: Rotational diffusion remains inversely proportional to the growing viscosity whereas translational diffusion does not, decreasing by a much lesser extent. We quantify the rotational motion with two distinct methods, finding agreement between these methods, in contrast with recent simulation results. The decoupling coincides with the emergence of non-Gaussian displacement distributions for translation whereas rotational displacement distributions remain Gaussian. Ultimately, our work demonstrates that as the glass transition is approached, the sample can no longer be approximated as a continuum fluid when considering diffusion. PMID:23071311
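    The non-Gaussian displacement statistics mentioned above are commonly quantified with the non-Gaussian parameter; a sketch on synthetic 1D displacements (illustrative distributions, not the colloid data):

```python
# Non-Gaussian parameter alpha_2: ~0 for Gaussian displacement statistics,
# positive for heterogeneous (e.g. two-population) dynamics.
import numpy as np

def alpha2(dx):
    """1D non-Gaussian parameter of a displacement sample."""
    return np.mean(dx**4) / (3.0 * np.mean(dx**2) ** 2) - 1.0

rng = np.random.default_rng(3)
n = 100_000
dx_gauss = rng.normal(0.0, 1.0, n)           # homogeneous diffusion
# Equal-weight fast/slow mixture mimicking dynamical heterogeneity.
fast = rng.random(n) < 0.5
dx_mix = np.where(fast, rng.normal(0.0, 3.0, n), rng.normal(0.0, 1.0, n))
```

    For the mixture the exact value is alpha_2 = 123/75 - 1 = 0.64, while the Gaussian sample stays near zero.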

  7. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    NASA Astrophysics Data System (ADS)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveal significant spatial dependence of the resolution with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
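    A sketch of point-spread functions extracted from a linearized (Tikhonov-regularized) resolution matrix, using a small 1D smoothing operator as a stand-in for the ERT sensitivity matrix:

```python
# Resolution matrix R = (G^T G + lam*I)^{-1} G^T G and one of its PSF columns.
import numpy as np

n = 20
G = np.zeros((n, n))                         # 3-point moving-average "survey"
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            G[i, j] = 1.0 / 3.0

lam = 0.01
R = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ G)

psf = R @ np.eye(n)[:, 10]                   # impulse response of parameter 10
```

    Each column of R shows how a unit perturbation of one parameter is blurred in the estimated model; perfect resolution would give R equal to the identity.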

  8. Learning Inverse Rig Mappings by Nonlinear Regression.

    PubMed

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework to design inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows them to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often stops the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
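    A minimal Gaussian process regression sketch of the pose-to-rig mapping, assuming toy 1D training pairs (the paper maps full skeletal poses to rig parameters):

```python
# GP regression with a squared-exponential kernel: train on example pairs,
# then predict rig controls for new poses via the posterior mean.
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x_train = np.linspace(0.0, 3.0, 20)          # example poses (toy scalars)
y_train = np.sin(x_train)                    # example rig control values

noise = 1e-6
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
alpha = np.linalg.solve(K, y_train)          # one O(n^3) solve at train time

x_test = np.array([0.5, 1.5, 2.5])
y_pred = rbf(x_test, x_train) @ alpha        # posterior mean prediction
```

    With dense training data the posterior mean closely interpolates the example mapping.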

  9. Financial market dynamics: superdiffusive or not?

    NASA Astrophysics Data System (ADS)

    Devi, Sandhya

    2017-08-01

    The behavior of stock market returns over a period of 1-60 d has been investigated for S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails indicating that the stock returns do not follow a random walk model. In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from 1 d returns, lie in the range 1.4-1.65. The estimated inverse mean square deviations (beta) show a power law behavior in time with exponent values between -0.91 and -1.1 indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Planck (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from current estimated parameters are compared with the solutions of the FP equations. For negligible drift term, the inverse mean square deviations (betaFP) from the FP model follow a power law with exponent values between -1.25 and -1.48 indicating superdiffusion. When the drift term is non-negligible, the corresponding betaFP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit.
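    A sketch of the Tsallis q-Gaussian used in the fits above, with unit beta and a q value in the reported range (the paper estimates q and beta by maximum likelihood from the return data):

```python
# q-Gaussian density: power-law tails for q > 1, Gaussian limit as q -> 1.
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalized Tsallis q-Gaussian; tends to exp(-beta*x**2) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * x**2)
    base = np.maximum(1.0 - (1.0 - q) * beta * x**2, 0.0)
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

p_gauss = q_gaussian(x, 1.0, 1.0)
p_gauss /= np.sum(p_gauss) * dx              # numeric normalization
p_q = q_gaussian(x, 1.5, 1.0)                # q within the reported 1.4-1.65
p_q /= np.sum(p_q) * dx
```

    At q = 1.5 the tails decay as a power law (~x^-4 here), so the density at x = 5 is orders of magnitude above the Gaussian value, which is the fat-tail behavior reported above.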

  10. The Kolmogorov-Obukhov Statistical Theory of Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, Björn

    2013-08-01

    In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.
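    The NIG family is available in scipy; a sketch assuming the standard scipy parametrization norminvgauss(a, b) (the paper obtains NIG PDFs by projecting the invariant measure, not by direct fitting):

```python
# Evaluate a symmetric NIG density and check its basic heavy-tail properties.
import numpy as np
from scipy.stats import norminvgauss, norm

x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]
pdf = norminvgauss.pdf(x, a=1.0, b=0.0)      # symmetric NIG

mass = np.sum(pdf) * dx                      # should integrate to ~1
excess_kurt = norminvgauss.stats(1.0, 0.0, moments='k')
```

    The NIG density has semi-heavy (exponential) tails and positive excess kurtosis, which is why it can match intermittent velocity-increment PDFs better than a Gaussian.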

  11. Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.

    PubMed

    Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard

    2011-02-01

    The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
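    The compressive-additivity argument can be sketched with back-of-envelope arithmetic; the compression exponent p below is an assumed illustrative value, not the fitted input/output function:

```python
# Excess masking from compressive additivity: each masker alone gives 8 dB of
# masking; internal responses I**p are summed linearly across maskers.
import math

def combined_masking_db(n_maskers, single_db=8.0, p=0.25):
    """Masking (dB) when n equal maskers add after I**p compression."""
    # Summing n equal compressed intensities raises the compressed response
    # n-fold; undoing the compression turns that into n**(1/p) in intensity.
    return single_db + 10.0 * math.log10(n_maskers) / p

linear = 8.0 + 10.0 * math.log10(4)          # energy (linear) additivity
compressive = combined_masking_db(4)         # strongly compressive prediction
```

    With p = 0.25, four maskers predict about 32 dB of masking versus about 14 dB under linear additivity, illustrating how stronger compression produces larger excess masking.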

  12. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Process is its cubic learning complexity when dealing with a large database due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
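    A sketch of the locality idea described above: a query is answered by the Gaussian Process built from nearby samples only, which shrinks the covariance matrix that must be inverted (toy 1D data; the paper partitions a posture database into local regions):

```python
# Full GP vs. a local GP restricted to samples near the query point.
import numpy as np

def gp_predict(xtr, ytr, xq, ell=0.5, noise=1e-4):
    """Posterior mean of a squared-exponential GP at query points xq."""
    K = np.exp(-0.5 * (xtr[:, None] - xtr[None, :]) ** 2 / ell**2)
    K += noise * np.eye(xtr.size)
    ks = np.exp(-0.5 * (xq[:, None] - xtr[None, :]) ** 2 / ell**2)
    return ks @ np.linalg.solve(K, ytr)

x = np.linspace(0.0, 6.0, 60)
y = np.cos(x)
xq = np.array([1.0])                         # query deep inside the first region

full = gp_predict(x, y, xq)                  # all 60 samples: 60x60 solve
local = gp_predict(x[x < 3.0], y[x < 3.0], xq)   # local model: 30x30 solve
```

    Because the kernel decays quickly, distant samples barely influence the prediction, so the local model matches the full one at a fraction of the cubic cost.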

  13. An adaptive H-infinity controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H-infinity control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H-infinity stabilization.
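    The equivalence noted above can be illustrated numerically: the product of ridge Gaussian factors along orthonormal directions is a rotated and scaled 2D Gaussian (a toy check, not the paper's general proof):

```python
# Ridge Gaussian factors exp(-s*(w.x)^2) along orthonormal directions w1, w2
# multiply to a full rotated/scaled 2-D Gaussian exp(-x^T A x).
import numpy as np

angle = 0.3                                  # rotation of the ridge directions
w1 = np.array([np.cos(angle), np.sin(angle)])
w2 = np.array([-np.sin(angle), np.cos(angle)])
s1, s2 = 2.0, 0.5                            # per-direction scalings

g = np.linspace(-2.0, 2.0, 41)
X = np.stack(np.meshgrid(g, g), axis=-1)     # (41, 41, 2) grid of points

ridge_product = (np.exp(-s1 * (X @ w1) ** 2) *
                 np.exp(-s2 * (X @ w2) ** 2))

A = s1 * np.outer(w1, w1) + s2 * np.outer(w2, w2)   # rotated/scaled precision
quad = np.einsum('ijk,kl,ijl->ij', X, A, X)
full_gaussian = np.exp(-quad)
```

    The two surfaces agree to machine precision, since s1*(w1.x)^2 + s2*(w2.x)^2 = x^T A x by construction.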

  14. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    PubMed

    Kang; Ih; Kim; Kim

    2000-03-01

    In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall of a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, which validates the proposed Gaussian function approach.
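    A sketch of the angle-weighted average behind such predictions, comparing classical field-incidence weighting with an assumed Gaussian directional weighting for a single mass-law panel (illustrative parameters; the paper derives the weighting from ray tracing):

```python
# STL of a single panel from the mass law, averaged over incidence angle with
# two different directional weightings of the incident energy.
import numpy as np

rho_c = 415.0                                # characteristic impedance of air
m = 10.0                                     # panel surface mass, kg/m^2
omega = 2.0 * np.pi * 1000.0                 # 1 kHz

theta = np.linspace(0.0, np.radians(78.0), 2000)
tau = 1.0 / (1.0 + (omega * m * np.cos(theta) / (2.0 * rho_c)) ** 2)

def stl(weight):
    kernel = weight * np.sin(theta) * np.cos(theta)
    tau_avg = np.sum(tau * kernel) / np.sum(kernel)
    return -10.0 * np.log10(tau_avg)

stl_field = stl(np.ones_like(theta))                     # field incidence
stl_gauss = stl(np.exp(-((theta - 0.5) / 0.4) ** 2))     # assumed Gaussian peak
```

    A Gaussian weighting that de-emphasizes grazing incidence raises the predicted STL relative to the uniform field-incidence average, since grazing angles transmit most easily.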

  15. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route to constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
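    A sketch of the surrogate idea: a Gaussian Process emulates an "expensive" forward model and is refined where its predictive variance is largest (1D toy function standing in for, e.g., synthetic-seismogram calculations):

```python
# GP surrogate of a costly forward model, actively refined at the point of
# maximum posterior variance.
import numpy as np

def expensive_forward(m):
    """Stand-in for a costly simulation."""
    return np.sin(3.0 * m) + 0.5 * m

def gp_fit_predict(xtr, ytr, xq, ell=0.3, noise=1e-6):
    """Posterior mean and variance of a squared-exponential GP surrogate."""
    K = np.exp(-0.5 * (xtr[:, None] - xtr[None, :]) ** 2 / ell**2)
    K += noise * np.eye(xtr.size)
    ks = np.exp(-0.5 * (xq[:, None] - xtr[None, :]) ** 2 / ell**2)
    mean = ks @ np.linalg.solve(K, ytr)
    var = 1.0 + noise - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
    return mean, var

xq = np.linspace(0.0, 2.0, 201)
xtr = list(np.linspace(0.0, 2.0, 4))         # a few costly runs to start

mean0, _ = gp_fit_predict(np.array(xtr), expensive_forward(np.array(xtr)), xq)
err0 = np.max(np.abs(mean0 - expensive_forward(xq)))

for _ in range(6):                           # refine where the GP is least sure
    xa = np.array(xtr)
    _, var = gp_fit_predict(xa, expensive_forward(xa), xq)
    xtr.append(float(xq[int(np.argmax(var))]))

xa = np.array(xtr)
mean, _ = gp_fit_predict(xa, expensive_forward(xa), xq)
err = np.max(np.abs(mean - expensive_forward(xq)))
```

    After a handful of targeted refinements the surrogate's worst-case error drops well below that of the initial coarse fit, mirroring the refine-as-you-sample strategy described above.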

  16. Effect of central obscuration on the LDR point spread function

    NASA Technical Reports Server (NTRS)

    Vanzyl, Jakob J.

    1988-01-01

    It is well known that Gaussian apodization of an aperture reduces the sidelobe levels of its point spread function (PSF). In the limit where the standard deviation of the Gaussian function is much smaller than the diameter of the aperture, the sidelobes completely disappear. However, when Gaussian apodization is applied to the Large Deployable Reflector (LDR) array consisting of 84 hexagonal panels, it is found that the sidelobe level only decreases by about 2.5 dB. The reason for this is explained. The PSF is shown for an array consisting of 91 uniformly illuminated hexagonal apertures; this array is identical to the LDR array, except that the central hole in the LDR array is filled with seven additional panels. For comparison, the PSF of the uniformly illuminated LDR array is shown. Notice that it is already evident that the sidelobe structure of the LDR array is different from that of the full array of 91 panels. The PSFs of the same two arrays are shown, but with the illumination apodized with a Gaussian function to have 20 dB tapering at the edges of the arrays. While the sidelobes of the full array have decreased dramatically, those of the LDR array changed in structure, but stayed at almost the same level. This result is not completely surprising, since the Gaussian apodization tends to emphasize the contributions from the central portion of the array, exactly where the hole in the LDR array is located. The two most important conclusions are: the size of the central hole should be minimized, and a simple Gaussian apodization scheme to suppress the sidelobes in the PSF should not be used. A more suitable apodization scheme would be a Gaussian annular ring.
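    The effect of Gaussian apodization on a filled (unobstructed) 1D aperture can be sketched with an FFT; unlike the LDR array with its central hole, the taper here strongly suppresses the sidelobes (a 20 dB edge taper is assumed, as in the text):

```python
# Far-field power patterns of a 1-D aperture, uniform vs. Gaussian-tapered.
import numpy as np

n, pad = 512, 4096
x = np.linspace(-1.0, 1.0, n)                # normalized aperture coordinate

uniform = np.ones(n)
taper = 10.0 ** (-20.0 / 20.0 * x**2)        # Gaussian taper, 20 dB at edges

def far_field(aperture):
    a = np.zeros(pad)
    a[:n] = aperture                         # zero-pad before the FFT
    p = np.abs(np.fft.fftshift(np.fft.fft(a))) ** 2
    return p / p.max()

def peak_sidelobe(pattern, exclude=40):
    c = pad // 2
    mask = np.ones(pad, dtype=bool)
    mask[c - exclude:c + exclude + 1] = False    # cut out the main lobe
    return 10.0 * np.log10(pattern[mask].max())

sll_uniform = peak_sidelobe(far_field(uniform))
sll_taper = peak_sidelobe(far_field(taper))
```

    The tapered pattern trades a wider main lobe for a much lower sidelobe floor; it is the central obscuration, not the taper itself, that spoils this for the LDR array.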

  17. Geographically weighted regression model on poverty indicator

    NASA Astrophysics Data System (ADS)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix resulting from evaluating the Gaussian kernel function as a weighting matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The determination coefficient R2 is 68.64%. There are two categories of districts, influenced by different sets of significant factors.
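    One GWR fit can be sketched as Gaussian kernel weights feeding a weighted least-squares regression; the 1D locations, single covariate, and bandwidth below are assumptions for illustration:

```python
# Local (geographically weighted) regression with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(4)
n = 200
loc = rng.random(n)                          # spatial coordinate in [0, 1]
x = rng.normal(0.0, 1.0, n)                  # covariate
y = (1.0 + loc) * x                          # slope varies smoothly in space

def gwr_slope(u0, bandwidth=0.2):
    """Local slope at location u0 via Gaussian-kernel weighted least squares."""
    w = np.exp(-((loc - u0) / bandwidth) ** 2)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[1]
```

    Fitting at two different locations recovers two different local slopes, which is exactly the per-location model GWR provides.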

  18. Gaussian basis functions for highly oscillatory scattering wavefunctions

    NASA Astrophysics Data System (ADS)

    Mant, B. P.; Law, M. M.

    2018-04-01

    We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.

  19. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function together with the mean and variance of the number of overshoots, a frequency distribution for overshoots can be estimated.

  20. The influence of non-Gaussian distribution functions on the time-dependent perpendicular transport of energetic particles

    NASA Astrophysics Data System (ADS)

    Lasuik, J.; Shalchi, A.

    2018-06-01

    In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although there are known cases for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and different turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient, depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are of minor significance in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.

  1. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data: measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
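
The mean and variance calculations mentioned in the abstract have standard closed forms in one dimension. A sketch using those textbook formulas (not the authors' multivariate algorithms):

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_gaussian_moments(mu, sigma, a, b):
    """Mean and variance of N(mu, sigma^2) truncated to the interval [a, b]."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)                 # probability mass kept
    m = mu + sigma * (phi(alpha) - phi(beta)) / Z
    v = sigma ** 2 * (1.0 + (alpha * phi(alpha) - beta * phi(beta)) / Z
                      - ((phi(alpha) - phi(beta)) / Z) ** 2)
    return m, v
```

Truncating a standard normal symmetrically to [-1, 1] leaves the mean at 0 but shrinks the variance to about 0.291, illustrating how truncation tightens the distribution.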

  2. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view, adopting the concept of Gaussian process regression to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. That approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; neither multi-dimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest building the posterior distribution progressively: incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. Left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
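
The linear-Gaussian core of this approach - conditioning a Gaussian process prior on linearized travel-time integrals - can be sketched as follows. The kernel, grid, and ray geometry here are invented for illustration; for an exactly linear forward model the progressive (one-evidence-at-a-time) update agrees with the joint update, which is easy to verify numerically:

```python
import numpy as np

def gp_update(mean, cov, G, d, noise_var):
    """Condition a Gaussian prior N(mean, cov) on linear observations
    d = G m + noise (iid Gaussian noise); returns posterior mean and cov."""
    S = G @ cov @ G.T + noise_var * np.eye(len(d))
    gain = cov @ G.T @ np.linalg.inv(S)
    post_mean = mean + gain @ (d - G @ mean)
    post_cov = cov - gain @ G @ cov
    return post_mean, post_cov

# squared-exponential prior on slowness over a 1-D grid
n, dx = 30, 0.1
xg = dx * np.arange(n)
K = np.exp(-0.5 * (xg[:, None] - xg[None, :]) ** 2 / 0.3 ** 2)
mean0 = np.zeros(n)

# each travel time is an integral of slowness along a ray: a row of dx's
G = np.zeros((2, n))
G[0, :15] = dx          # ray sampling the first half of the grid
G[1, :] = dx            # ray crossing the whole grid
d = np.array([0.9, 2.1])

# joint update vs. progressively built posterior (one observation at a time)
m_joint, C_joint = gp_update(mean0, K, G, d, noise_var=1e-4)
m1, C1 = gp_update(mean0, K, G[:1], d[:1], noise_var=1e-4)
m_seq, C_seq = gp_update(m1, C1, G[1:], d[1:], noise_var=1e-4)
```

With a genuinely non-linear travel-time integral the two routes differ, and relinearizing between progressive updates is what compensates for the linearization error.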

  3. Exact Distributions of Intraclass Correlation and Cronbach's Alpha with Gaussian Data and General Covariance

    ERIC Educational Resources Information Center

    Kistner, Emily O.; Muller, Keith E.

    2004-01-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact…

  4. Detection of nonlinear transfer functions by the use of Gaussian statistics

    NASA Technical Reports Server (NTRS)

    Sheppard, J. G.

    1972-01-01

    The possibility of using on-line signal statistics to detect electronic equipment nonlinearities is discussed. The results of an investigation using Gaussian statistics are presented, and a nonlinearity test that uses ratios of the moments of a Gaussian random variable is developed and discussed. An outline for further investigation is presented.

  5. Inversion Estimate of California Methane Emissions Using a Bayesian Inverse Model with Multi-Tower Greenhouse Gas Monitoring Network and Aircraft Measurements

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Falk, M.; Chen, Y.; Herner, J.; Croes, B. E.; Vijayan, A.

    2017-12-01

    Methane (CH4) is an important short-lived climate pollutant (SLCP) and the second most important greenhouse gas (GHG) in California, accounting for 9% of the statewide GHG emissions inventory. Over the years, California has enacted several ambitious climate change mitigation goals, including the California Global Warming Solutions Act of 2006, which requires ARB to reduce statewide GHG emissions to the 1990 emission level by 2020, as well as Assembly Bill 1383, which requires implementation of a climate mitigation program to reduce statewide methane emissions by 40% from the 2013 levels. In order to meet these requirements, ARB has proposed a comprehensive SLCP Strategy with goals to reduce oil and gas related emissions and capture methane emissions from dairy operations and organic waste. Achieving these goals will require an accurate understanding of the sources of CH4 emissions. Since direct monitoring of CH4 emission sources at large spatial and temporal scales is challenging and resource intensive, we developed an inverse technique that combines an atmospheric three-dimensional (3D) transport model with atmospheric observations of CH4 concentrations from a regional tower network and aircraft measurements to gain insights into emission sources in California. In this study, we develop a comprehensive inversion estimate using available aircraft measurements from the CalNex airborne campaigns (May-June 2010) and three years of hourly continuous measurements from the ARB Statewide GHG Monitoring Network (2014-2016). The inversion analysis is conducted using two independent 3D Lagrangian models (WRF-STILT and WRF-FLEXPART), with a variety of bottom-up prior inputs from national and regional inventories, as well as two different probability density functions (Gaussian and lognormal). Altogether, our analysis provides a detailed picture of the spatially resolved CH4 emission sources and their temporal variation over a multi-year period.

  6. Stochastic Theory for the Clustering of Rapidly Settling, Low-Inertia Particle Pairs in Isotropic Turbulence - I

    NASA Astrophysics Data System (ADS)

    Gupta, Vijay; Rani, Sarma; Koch, Donald

    2017-11-01

    A stochastic theory is developed to predict the Radial Distribution Function (RDF) of monodisperse, rapidly settling, low-inertia particle pairs in isotropic turbulence. The theory is based on approximating the turbulent flow in a reference frame following an aerosol particle as a locally linear velocity field. In the first version of the theory (referred to as T1), the fluid velocity gradient tensor ``seen'' by the primary aerosol particle is further assumed to be Gaussian. Analytical closures are then derived for the drift and diffusive fluxes controlling the RDF, in the asymptotic limits of small particle Stokes number (St =τp /τη << 1) and large dimensionless settling velocity (Sv = gτp /uη >> 1). It is seen that the RDF for rapidly settling pairs has an inverse power-law dependence on pair separation r with an exponent, c1, that is proportional to St2. However, the c1 predicted by T1 for Sv >> 1 particles is higher than the c1 of even non-settling (Sv = 0) particles obtained from DNS of particle-laden isotropic turbulence. Thus, the Gaussian velocity gradient in T1 leads to the unphysical effect that gravity enhances pair clustering. To address this inconsistency, a second version (T2) was developed. Funding from the CBET Division of the National Science Foundation is gratefully acknowledged.

  7. Estimating crustal heterogeneity from double-difference tomography

    USGS Publications Warehouse

    Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.

    2006-01-01

    Seismic velocity parameters in limited but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf improves the double-difference tomographic result when using picking-difference data. We completed our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
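
The practical difference between a Gaussian and a hyperbolic-secant error model is how the misfit grows with residual size. A hypothetical sketch (the secant pdf here uses the standard normalization p(r) = (1/2) sech(πr/2), not necessarily the paper's parameterization):

```python
import numpy as np

def misfit(residuals, pdf="gaussian"):
    """Negative log-likelihood (up to additive constants) of standardized
    residuals under a Gaussian or a hyperbolic-secant error model."""
    r = np.asarray(residuals, dtype=float)
    if pdf == "gaussian":
        return float(np.sum(0.5 * r ** 2))
    # hyperbolic secant: p(r) = 0.5 / cosh(pi * r / 2), heavier tails
    return float(np.sum(np.log(np.cosh(np.pi * r / 2))))
```

For small residuals the two penalties are both nearly quadratic, but for a 10-sigma picking blunder the secant misfit grows roughly linearly (about 15) versus quadratically (50), so a single outlier dominates a Gaussian inversion far more.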

  8. Sheared Layers in the Continental Crust: Nonlinear and Linearized inversion for Ps receiver functions

    NASA Astrophysics Data System (ADS)

    Park, J. J.

    2017-12-01

    Sheared Layers in the Continental Crust: Nonlinear and Linearized inversion for Ps receiver functions. Jeffrey Park, Yale University. The interpretation of seismic receiver functions (RFs) in terms of isotropic and anisotropic layered structure can be complex. The relationship between structure and body-wave scattering is nonlinear. The anisotropy can involve more parameters than the observations can readily constrain. Finally, reflectivity-predicted layer reverberations are often not prominent in data, so that nonlinear waveform inversion can search in vain to match ghost signals. Multiple-taper correlation (MTC) receiver functions have uncertainties in the frequency domain that follow Gaussian statistics [Park and Levin, 2016a], so grid-searches for the best-fitting collections of interfaces can be performed rapidly to minimize weighted misfit variance. Tests for layer-reverberations can be performed in the frequency domain without reflectivity calculations, allowing flexible modelling of weak, but nonzero, reverberations. Park and Levin [2016b] linearized the hybridization of P and S body waves in an anisotropic layer to predict first-order Ps conversion amplitudes at crust and mantle interfaces. In an anisotropic layer, the P wave acquires small SV and SH components. To ensure continuity of displacement and traction at the top and bottom boundaries of the layer, shear waves are generated. Assuming hexagonal symmetry with an arbitrary symmetry axis, theory confirms the empirical stacking trick of phase-shifting transverse RFs by 90 degrees in back-azimuth [Shiomi and Park, 2008; Schulte-Pelkum and Mahan, 2014] to enhance 2-lobed and 4-lobed harmonic variation. Ps scattering is generated by sharp interfaces, so that RFs resemble the first derivative of the model. MTC RFs in the frequency domain can be manipulated to obtain a first-order reconstruction of the layered anisotropy, under the above modeling constraints and neglecting reverberations.
Examples from long-running continental stations will be discussed. Park, J., and V. Levin, 2016a. doi:10.1093/gji/ggw291. Park, J., and V. Levin, 2016b. doi:10.1093/gji/ggw323. Schulte-Pelkum, V., and Mahan, K. H., 2014. doi:10.1007/s00024-014-0853-4. Shiomi, K., & Park, J., 2008. doi:10.1029/2007JB005535.

  9. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.

  10. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including thousands of millions of variables, show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
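
The computational saving claimed for the low-rank-plus-diagonal structure comes from the Woodbury matrix identity, which inverts diag(d) + U Uᵀ at O(nk²) cost instead of O(n³). A generic sketch of that identity (not the COP algorithm itself):

```python
import numpy as np

def woodbury_inverse(d, U):
    """Inverse of diag(d) + U U^T via the Woodbury identity:
    D^-1 - D^-1 U (I_k + U^T D^-1 U)^-1 U^T D^-1, costing O(n k^2)."""
    Dinv = 1.0 / d                         # diagonal inverse is elementwise
    k = U.shape[1]
    M = np.eye(k) + (U.T * Dinv) @ U       # small k-by-k system
    correction = (Dinv[:, None] * U) @ np.linalg.solve(M, U.T * Dinv)
    return np.diag(Dinv) - correction
```

For n in the thousands and k a handful of components, only the k-by-k solve is nontrivial, which is why keeping the inverse covariance in this factored form pays off.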

  11. A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.

    PubMed

    Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo

    2016-01-01

    In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
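
The candidate-scoring-and-update loop described above can be sketched as a small class. The initial parameters and the online update rule here are illustrative stand-ins, not the paper's exact procedure:

```python
import math

class GaussianPeakSelector:
    """Score pulse-peak candidates by the Gaussian likelihood of their
    RR-normalized PTT; parameters adapt as peaks are accepted."""

    def __init__(self, mean=0.3, var=0.01):
        self.mean, self.var, self.n = mean, var, 1

    def score(self, ptt_norm):
        """Unnormalized Gaussian probability of a candidate's normalized PTT."""
        return math.exp(-0.5 * (ptt_norm - self.mean) ** 2 / self.var)

    def select(self, candidates):
        """Pick the most probable candidate, then update the model
        (a simple online mean/variance update, standing in for the
        paper's adaptive rule)."""
        best = max(candidates, key=self.score)
        self.n += 1
        delta = best - self.mean
        self.mean += delta / self.n
        self.var += (delta * (best - self.mean) - self.var) / self.n
        return best
```

Because the parameters are refreshed after every accepted peak, the probability function tracks slow drifts in the cardiac cycle, which is the adaptivity the abstract describes.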

  12. Equivalent peak resolution: characterization of the extent of separation for two components based on their relative peak overlap.

    PubMed

    Dvořák, Martin; Svobodová, Jana; Dubský, Pavel; Riesová, Martina; Vigh, Gyula; Gaš, Bohuslav

    2015-03-01

    Although the classical formula of peak resolution was derived to characterize the extent of separation only for Gaussian peaks of equal areas, it is often used even when the peaks follow non-Gaussian distributions and/or have unequal areas. This practice can result in misleading information about the extent of separation in terms of the severity of peak overlap. We propose here the use of the equivalent peak resolution value, a term based on relative peak overlap, to characterize the extent of separation that had been achieved. The definition of equivalent peak resolution is not constrained either by the form(s) of the concentration distribution function(s) of the peaks (Gaussian or non-Gaussian) or the relative area of the peaks. The equivalent peak resolution value and the classically defined peak resolution value are numerically identical when the separated peaks are Gaussian and have identical areas and SDs. Using our new freeware program, Resolution Analyzer, one can calculate both the classically defined and the equivalent peak resolution values. With the help of this tool, we demonstrate here that the classical peak resolution values mischaracterize the extent of peak overlap even when the peaks are Gaussian but have different areas. We show that under ideal conditions of the separation process, the relative peak overlap value is easily accessible by fitting the overall peak profile as the sum of two Gaussian functions. The applicability of the new approach is demonstrated on real separations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
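
One way to make "relative peak overlap" concrete is the shared area of two area-scaled Gaussians divided by their total area; this definition is an assumption for illustration (the paper's Resolution Analyzer freeware may define it differently):

```python
import numpy as np

def classical_resolution(t1, s1, t2, s2):
    """Classical Rs for Gaussian peaks with baseline widths w = 4*sigma."""
    return (t2 - t1) / (2.0 * (s1 + s2))

def relative_overlap(t1, s1, a1, t2, s2, a2, n=40001):
    """Shared area of two area-scaled Gaussian peaks over their total area
    (one plausible reading of 'relative peak overlap')."""
    lo = min(t1, t2) - 6 * max(s1, s2)
    hi = max(t1, t2) + 6 * max(s1, s2)
    x = np.linspace(lo, hi, n)
    g1 = a1 * np.exp(-0.5 * ((x - t1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g2 = a2 * np.exp(-0.5 * ((x - t2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    # uniform grid, so the dx factors cancel in the ratio
    return float(np.minimum(g1, g2).sum() / (g1 + g2).sum())
```

Two peak pairs with the same classical Rs but different areas produce different overlap values, which is exactly the mischaracterization the authors point out.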

  13. Zitterbewegung in time-reversal Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Huang, Tongyun; Ma, Tianxing; Wang, Li-Gang

    2018-06-01

    We perform a systematic study of the Zitterbewegung (ZB) effect of fermions, which are described by a Gaussian wave with broken spatial-inversion symmetry in a three-dimensional low-energy Weyl semimetal. Our results show that the motion of fermions near the Weyl points is characterized by rectilinear motion and ZB oscillation. The ZB oscillation is affected by the width of the Gaussian wave packet, the position of the Weyl node, and the chirality and anisotropy of the fermions. By introducing a one-dimensional cosine potential, the newly generated massless fermions have lower Fermi velocities, which results in a robust relativistic oscillation. Modulating the height and periodicity of the periodic potential demonstrates that the ZB effect of fermions in the different Brillouin zones exhibits quasi-periodic behavior. These results may provide an appropriate system for probing the Zitterbewegung effect experimentally.

  14. Non-Gaussian precision metrology via driving through quantum phase transitions

    NASA Astrophysics Data System (ADS)

    Huang, Jiahao; Zhuang, Min; Lee, Chaohong

    2018-03-01

    We propose a scheme to realize high-precision quantum interferometry with entangled non-Gaussian states by driving the system through quantum phase transitions. The beam splitting, in which an initial nondegenerate ground state evolves into a highly entangled state, is achieved by adiabatically driving the system from a nondegenerate regime to a degenerate one. Inversely, the beam recombination, in which the output state after interrogation becomes gradually disentangled, is accomplished by adiabatically driving the system from the degenerate regime to the nondegenerate one. The phase shift, which is accumulated in the interrogation process, can then be easily inferred via population measurement. We apply our scheme to Bose condensed atoms and trapped ions and find that Heisenberg-limited precision scalings can be approached. Our proposed scheme does not require single-particle resolved detection and is within the reach of current experiment techniques.

  15. Study of the intensity noise and intensity modulation in a hybrid soliton pulsed source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogru, Nuran; Oziazisi, M Sadetin

    2005-10-31

    The relative intensity noise (RIN) and small-signal intensity modulation (IM) of a hybrid soliton pulsed source (HSPS) with a linearly chirped Gaussian apodised fibre Bragg grating (FBG) are considered in the electric-field approximation. The HSPS is described by solving the dynamic coupled-mode equations. It is shown that consideration of the carrier density noise in the HSPS, in addition to the spontaneous noise, is necessary to accurately analyse noise in the mode-locked HSPS. It is also shown that the resonance peak spectral splitting (RPSS) of the IM near the frequency inverse to the round-trip time of light in the external cavity can be eliminated by selecting an appropriate linear chirp rate in the Gaussian apodised FBG. (laser applications and other topics in quantum electronics)

  16. Gaussian Boson Sampling.

    PubMed

    Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor

    2017-10-27

    Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
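
The Hafnian mentioned above sums, over all perfect matchings of the matrix indices, the product of the paired entries. A naive reference implementation (exponential time, suitable only for tiny matrices; computing the Hafnian is #P-hard in general, which is what makes the sampling problem classically hard):

```python
def hafnian(A):
    """Hafnian of a symmetric matrix given as a list of lists: the sum over
    perfect matchings of the product of paired entries. Naive recursion."""
    n = len(A)
    if n == 0:
        return 1          # empty matching
    if n % 2:
        return 0          # odd dimension has no perfect matching
    total = 0
    for j in range(1, n):                 # pair index 0 with each partner j
        rest = [k for k in range(1, n) if k != j]
        sub = [[A[r][c] for c in rest] for r in rest]
        total += A[0][j] * hafnian(sub)
    return total
```

For a 4-by-4 matrix the three matchings give haf(A) = a01·a23 + a02·a13 + a03·a12, the smallest nontrivial case.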

  17. MATHEMATICAL ROUTINES FOR ENGINEERS AND SCIENTISTS

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The purpose of this package is to provide the scientific and engineering community with a library of programs useful for performing routine mathematical manipulations. This collection of programs will enable scientists to concentrate on their work without having to write their own routines for solving common problems, thus saving considerable amounts of time. This package contains sixteen subroutines. Each is separately documented with descriptions of the invoking subroutine call, its required parameters, and a sample test program. The functions available include: maxima, minima, and sort of vectors; factorials; random number generator (uniform or Gaussian distribution); complementary error function; fast Fourier transform; Simpson's Rule integration; matrix determinant and inversion; Bessel function (J Bessel function for any order, and modified Bessel function for zero order); roots of a polynomial; roots of a non-linear equation; and the solution of first order ordinary differential equations using Hamming's predictor-corrector method. There is also a subroutine for using a dot matrix printer to plot a given set of y values for a uniformly increasing x value. This package is written in FORTRAN 77 (Super Soft Small System FORTRAN compiler) for batch execution and has been implemented on the IBM PC computer series under MS-DOS with a central memory requirement of approximately 28K of 8 bit bytes for all subroutines. This program was developed in 1986.

  18. Description of an α-cluster tail in 8Be and 20Ne: Delocalization of the α cluster by quantum penetration

    NASA Astrophysics Data System (ADS)

    Kanada-En'yo, Yoshiko

    2014-10-01

    We analyze the α-cluster wave functions in cluster states of 8Be and 20Ne by comparing the exact relative wave function obtained by the generator coordinate method (GCM) with various types of trial functions. For the trial functions, we adopt the fixed range shifted Gaussian of the Brink-Bloch (BB) wave function, the spherical Gaussian with the adjustable range parameter of the spherical Tohsaki-Horiuchi-Schuck-Röpke (sTHSR), the deformed Gaussian of the deformed THSR (dTHSR), and a function with the Yukawa tail (YT). The quality of the description of the exact wave function with a trial function is judged by the squared overlap between the trial function and the GCM wave function. A better result is obtained with the sTHSR wave function than the BB wave function, and further improvement can be made with the dTHSR wave function because these wave functions can describe the outer tail better. The YT wave function gives almost equal or even better quality than the dTHSR wave function, indicating that the outer tail of α-cluster states is characterized by a Yukawa-like tail rather than a Gaussian tail. In weakly bound α-cluster states with small α separation energy and low centrifugal and Coulomb barriers, the outer tail part is the slowly damping function described well by quantum penetration through the effective barrier. This outer tail characterizes the almost zero-energy free α gas behavior, i.e., the delocalization of the cluster.

  19. Some Modified Integrated Squared Error Procedures for Multivariate Normal Data.

    DTIC Science & Technology

    1982-06-01

    p-dimensional Gaussian. There are a number of measures of qualitative robustness, but the most important is the influence function; most of the other measures are derived from it. The influence function is simply proportional to the score function (Huber, 1981, p. 45). The influence function at the p-variate Gaussian distribution Np(μ, V) is given in closed form in the report (equation 3.6).

  20. A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry

    ERIC Educational Resources Information Center

    Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan

    2013-01-01

    A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…

  1. Leading non-Gaussian corrections for diffusion orientation distribution function.

    PubMed

    Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali

    2014-02-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.

  3. Elegant Gaussian beams for enhanced optical manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpmann, Christina, E-mail: c.alpmann@uni-muenster.de; Schöler, Christoph; Denz, Cornelia

    2015-06-15

    Generation of micro- and nanostructured complex light beams is attaining increasing impact in photonics and laser applications. In this contribution, we demonstrate the implementation and experimental realization of the relatively unknown, but highly versatile class of complex-valued Elegant Hermite- and Laguerre-Gaussian beams. These beams create higher trapping forces compared to standard Gaussian light fields due to their propagation-changing properties. We demonstrate optical trapping and alignment of complex functional particles as nanocontainers with standard and Elegant Gaussian light beams. Elegant Gaussian beams will inspire manifold applications in optical manipulation, direct laser writing, or microscopy, where the design of the point-spread function is relevant.

  4. Measurements and Analysis of Reverberation and Clutter Data

    DTIC Science & Technology

    2007-04-01

    triplet arrays and the DRDC array with combined omnidirectional and dipole sensors. A fast shallow water reverberation model was extended to ... Bistatic reverberation models are too slow for inversion, but model-data comparisons will be made using ray-based models, e.g. GSM [11], or normal-mode ... July 2000, pp. 1183–1188, European Commission, Luxembourg. Meeting held at Lyon, France. [36] Weinberg, H. and Keenan, R. E. (1996), Gaussian ray

  5. Using an iterative eigensolver to compute vibrational energies with phase-spaced localized basis functions.

    PubMed

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.

  6. Multi-pose facial correction based on Gaussian process with combined kernel function

    NASA Astrophysics Data System (ADS)

    Shi, Shuyan; Ji, Ruirui; Zhang, Fan

    2018-04-01

    In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process, which builds a nonlinear regression model between the frontal and side faces with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS PEAL R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
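
    A minimal sketch of Gaussian-process regression with a combined kernel — here a weighted sum of an RBF and a linear kernel on 1-D toy data. The paper's model regresses frontal faces on side faces, which is not reproduced here; the weights and length scale below are illustrative:

```python
import numpy as np

def rbf(a, b, length=1.0):
    # squared-exponential (RBF) kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def linear(a, b):
    return a[:, None] * b[None, :]

def combined(a, b, w=0.7):
    # weighted sum of a nonlinear and a linear kernel
    return w * rbf(a, b) + (1.0 - w) * linear(a, b)

# toy 1-D regression with the combined-kernel GP posterior mean
X = np.linspace(-1.0, 1.0, 20)               # training inputs
y = np.sin(2.0 * X) + 0.5 * X                # training targets
Xs = np.array([0.0, 0.5])                    # test inputs
K = combined(X, X) + 1e-6 * np.eye(len(X))   # jitter for stability
mu = combined(Xs, X) @ np.linalg.solve(K, y) # GP posterior mean
```

    Any positive-weighted sum of valid kernels is again a valid kernel, which is what makes this kind of combination well defined.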

  7. The weighted function method: A handy tool for flood frequency analysis or just a curiosity?

    NASA Astrophysics Data System (ADS)

    Bogdanowicz, Ewa; Kochanek, Krzysztof; Strupczewski, Witold G.

    2018-04-01

    The idea of the Weighted Function (WF) method for estimation of the Pearson type 3 (Pe3) distribution, introduced by Ma in 1984, has been revised and successfully applied to the shifted inverse Gaussian (IGa3) distribution. The conditions for WF applicability to a shifted distribution have also been formulated. The accuracy of WF flood quantiles for both the Pe3 and IGa3 distributions was assessed by Monte Carlo simulations under true and false distribution assumptions, versus the maximum likelihood (MLM), moment (MOM) and L-moments (LMM) methods. Three datasets of annual peak flows from Polish catchments serve as case studies to compare the performance of WF, MOM, MLM and LMM on real flood data. For the hundred-year flood, the WF method revealed explicit superiority only over the MLM, surpassing the MOM and especially the LMM, for both true and false distributional assumptions, with respect to relative bias and relative root mean square error. Overall, the WF method performs well for typical hydrological sample sizes and constitutes a good alternative for the estimation of upper flood quantiles.

  8. Application of Gaussian Process Modeling to Analysis of Functional Unreliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Youngblood

    2014-06-01

    This paper applies Gaussian Process (GP) modeling to analysis of the functional unreliability of a “passive system.” GPs have been used widely in many ways [1]. The present application uses a GP for emulation of a system simulation code. Such an emulator can be applied in several distinct ways, discussed below. All applications illustrated in this paper have precedents in the literature; the present paper is an application of GP technology to a problem that was originally analyzed [2] using neural networks (NN), and later [3, 4] by a method called “Alternating Conditional Expectations” (ACE). This exercise enables a multifaceted comparison of both the processes and the results. Given knowledge of the range of possible values of key system variables, one could, in principle, quantify functional unreliability by sampling from their joint probability distribution, and performing a system simulation for each sample to determine whether the function succeeded for that particular setting of the variables. Using previously available system simulation codes, such an approach is generally impractical for a plant-scale problem. It has long been recognized, however, that a well-trained code emulator or surrogate could be used in a sampling process to quantify certain performance metrics, even for plant-scale problems. “Response surfaces” were used for this many years ago. But response surfaces are at their best for smoothly varying functions; in regions of parameter space where key system performance metrics may behave in complex ways, or even exhibit discontinuities, response surfaces are not the best available tool. This consideration was one of several that drove the work in [2].
In the present paper, (1) the original quantification of functional unreliability using NN [2], and later ACE [3], is reprised using GP; (2) additional information provided by the GP about uncertainty in the limit surface, generally unavailable in other representations, is discussed; (3) a simple forensic exercise is performed, analogous to the inverse problem of code calibration, but with an accident management spin: given an observation about containment pressure, what can we say about the system variables? References 1. For an introduction to GPs, see (for example) Gaussian Processes for Machine Learning, C. E. Rasmussen and C. K. I. Williams (MIT, 2006). 2. Reliability Quantification of Advanced Reactor Passive Safety Systems, J. J. Vandenkieboom, PhD Thesis (University of Michigan, 1996). 3. Z. Cui, J. C. Lee, J. J. Vandenkieboom, and R. W. Youngblood, “Unreliability Quantification of a Containment Cooling System through ACE and ANN Algorithms,” Trans. Am. Nucl. Soc. 85, 178 (2001). 4. Risk and Safety Analysis of Nuclear Systems, J. C. Lee and N. J. McCormick (Wiley, 2011). See especially §11.2.4.
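
    The sampling scheme described above — draw system variables from their joint distribution, query a fast surrogate for success/failure, and count failures — can be sketched as follows. The surrogate here is a trivial stand-in threshold function, not a trained GP emulator:

```python
import random

def surrogate(x1, x2):
    # stand-in emulator: pretend the function fails when a simple
    # combination of the system variables exceeds a threshold
    return x1 + 0.5 * x2 > 2.0   # True = functional failure

random.seed(0)
n = 100_000
failures = sum(surrogate(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))
               for _ in range(n))
unreliability = failures / n     # Monte Carlo estimate
```

    With a real system code each sample would cost a full simulation, which is exactly why a cheap emulator makes this brute-force quantification feasible.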

  9. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    NASA Technical Reports Server (NTRS)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for a single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
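
    The blending step — combining per-channel Gaussian estimates weighted by the inverse of each channel's variance — can be sketched generically as follows. This is a standard inverse-variance fusion, not the flight algorithm, and the per-channel numbers are made up:

```python
import numpy as np

def blend_gaussian_estimates(means, variances):
    """Fuse independent Gaussian estimates by inverse-variance weighting:
    the product of the per-channel Gaussian pdfs is again Gaussian."""
    means = np.asarray(means, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)  # precisions
    fused_var = 1.0 / weights.sum()                     # always shrinks
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# seven hypothetical per-channel mass estimates (grams) and variances
m, v = blend_gaussian_estimates(
    [10.2, 9.8, 10.5, 10.0, 9.9, 10.3, 10.1],
    [0.4, 0.5, 0.9, 0.3, 0.6, 0.8, 0.5])
```

    The fused variance is smaller than any single channel's, which is the quantitative sense in which combining seven noisy channels improves certainty.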

  10. Two-time correlation function of an open quantum system in contact with a Gaussian reservoir

    NASA Astrophysics Data System (ADS)

    Ban, Masashi; Kitajima, Sachiko; Shibata, Fumiaki

    2018-05-01

    An exact formula of a two-time correlation function is derived for an open quantum system which interacts with a Gaussian thermal reservoir. It is provided in terms of functional derivative with respect to fictitious fields. A perturbative expansion and its diagrammatic representation are developed, where the small expansion parameter is related to a correlation time of the Gaussian thermal reservoir. The two-time correlation function of the lowest order is equivalent to that calculated by means of the quantum regression theorem. The result clearly shows that the violation of the quantum regression theorem is caused by a finiteness of the reservoir correlation time. By making use of an exactly solvable model consisting of a two-level system and a set of harmonic oscillators, it is shown that the two-time correlation function up to the first order is a good approximation to the exact one.

  11. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
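
    The core numerical step — approximating structured noise by the leading principal components of the residual matrix and subtracting them — can be sketched generically. This is a bare SVD projection on synthetic data, not the authors' full pipeline:

```python
import numpy as np

def remove_leading_pcs(residuals, n_remove):
    """Remove the n_remove leading principal components from a
    (time x voxel) residual matrix via SVD of the centered data."""
    X = residuals - residuals.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[:n_remove] = 0.0            # zero out leading components
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(1)
# toy "residuals": white noise plus one strong shared component
shared = rng.standard_normal((100, 1)) @ rng.standard_normal((1, 50))
R = 5.0 * shared + rng.standard_normal((100, 50))
cleaned = remove_leading_pcs(R, 1)   # strips the shared component
```

    After the projection, the dominant spatially shared signal is gone and the remaining residuals are much closer to white noise.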

  12. Tables Of Gaussian-Type Orbital Basis Functions

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1992-01-01

    NASA technical memorandum contains tables of estimated Hartree-Fock wave functions for atoms lithium through neon and potassium through krypton. Sets contain optimized Gaussian-type orbital exponents and coefficients, and are of near-Hartree-Fock quality. Orbital exponents optimized by minimizing restricted Hartree-Fock energy via scaled Newton-Raphson scheme in which Hessian is evaluated numerically by use of analytically determined gradients.

  13. Complete stability of delayed recurrent neural networks with Gaussian activation functions.

    PubMed

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2017-01-01

    This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and the algebraic properties of nonsingular M-matrices, some sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points with 0 ≤ k ≤ n, among which 2^k and 3^k − 2^k equilibrium points are locally exponentially stable and unstable, respectively. Moreover, it is concluded that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
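
    For n = 1 the count 3^k can be seen directly. With illustrative parameters (weight 2.2, center 2.0, chosen here for the sketch and not taken from the paper), the scalar equilibrium equation of a one-neuron network with a Gaussian activation has exactly 3 = 3^1 roots:

```python
import math

W, MU = 2.2, 2.0    # illustrative parameters chosen to give 3 equilibria

def f(x):
    # equilibrium condition of a 1-neuron network: -x + W*exp(-(x-MU)^2) = 0
    return -x + W * math.exp(-(x - MU) ** 2)

def bisect(a, b, tol=1e-10):
    # refine a sign-change bracket [a, b] down to width tol
    while b - a > tol:
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

# bracket sign changes on a grid, then refine each bracket
grid = [i * 0.01 for i in range(-100, 500)]
roots = [bisect(grid[i], grid[i + 1])
         for i in range(len(grid) - 1)
         if f(grid[i]) * f(grid[i + 1]) < 0]
# n = 1, k = 1: exactly 3^1 = 3 equilibrium points
```

    The bell shape of the Gaussian is what allows the line y = x to cross the activation curve up to three times per neuron, giving the 3^k combinatorics.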

  14. Operational quantification of continuous-variable correlations.

    PubMed

    Rodó, Carles; Adesso, Gerardo; Sanpera, Anna

    2008-03-21

    We quantify correlations (quantum and/or classical) between two continuous-variable modes as the maximal number of correlated bits extracted via local quadrature measurements. On Gaussian states, such "bit quadrature correlations" majorize entanglement, reducing to an entanglement monotone for pure states. For non-Gaussian states, such as photonic Bell states, photon-subtracted states, and mixtures of Gaussian states, the bit correlations are shown to be a monotonic function of the negativity. This quantification yields a feasible, operational way to measure non-Gaussian entanglement in current experiments by means of direct homodyne detection, without a complete state tomography.

  15. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386

  16. Gaussian Finite Element Method for Description of Underwater Sound Diffraction

    NASA Astrophysics Data System (ADS)

    Huang, Dehua

    A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.
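
    The core property quoted above — free-space diffraction keeps a Gaussian beam Gaussian — reduces propagation to a closed-form width law. A minimal sketch using standard beam optics; the parameter values are illustrative (roughly an ultrasonic beam in water) and are not from the dissertation:

```python
import math

def beam_radius(z, w0, wavelength):
    """1/e field radius of a Gaussian beam a distance z from its waist w0."""
    z_r = math.pi * w0 ** 2 / wavelength    # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

# e.g. a 1 mm waist at 1.5 MHz in water (c ~ 1500 m/s, so wavelength ~ 1 mm)
w = beam_radius(z=0.1, w0=1e-3, wavelength=1e-3)
```

    A superposition of such Gaussians, each propagated by this rule, is the essence of the Gaussian beam expansion the dissertation builds on.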

  17. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  19. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  1. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales as the inverse of the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
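
    The Gaussian part of this error budget is pure mode counting; the two contributions can be written schematically as below. The trispectrum term here is a caller-supplied placeholder, not the paper's expression:

```python
def ps_error_variance(P, n_modes, trispectrum_term=0.0):
    """Error variance of a power-spectrum estimate in one k bin: the
    Gaussian sample-variance term P^2/N_k plus a trispectrum
    contribution (zero for a purely Gaussian random field)."""
    return P ** 2 / n_modes + trispectrum_term

var_gauss = ps_error_variance(P=10.0, n_modes=400)              # 0.25
var_non_gauss = ps_error_variance(P=10.0, n_modes=400,
                                  trispectrum_term=0.5)         # 0.75
```

    The Gaussian term shrinks as more Fourier modes enter the bin, but the trispectrum term need not, which is why it dominates at large k.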

  2. Gaussian-Beam Laser-Resonator Program

    NASA Technical Reports Server (NTRS)

    Cross, Patricia L.; Bair, Clayton H.; Barnes, Norman

    1989-01-01

    Gaussian Beam Laser Resonator Program models laser resonators by use of Gaussian-beam-propagation techniques. Used to determine radii of beams as functions of position in laser resonators. Algorithm used in program has three major components. First, ray-transfer matrix for laser resonator must be calculated. Next, initial parameters of beam calculated. Finally, propagation of beam through optical elements computed. Written in Microsoft FORTRAN (Version 4.01).
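
    The three algorithm components listed above — form the ray-transfer (ABCD) matrix, set the initial beam parameter, propagate through the elements — can be sketched with the complex beam parameter q. This is generic Gaussian-beam algebra with illustrative values, not the FORTRAN program itself:

```python
import math

def free_space(d):
    return ((1.0, d), (0.0, 1.0))

def thin_lens(f):
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def propagate_q(q, M):
    # ABCD transform of the complex beam parameter q
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_radius(q, wavelength):
    # 1/e field radius from Im(1/q) = -wavelength / (pi * w^2)
    return math.sqrt(-wavelength / (math.pi * (1.0 / q).imag))

wavelength = 1.064e-6            # metres (illustrative choice)
w0 = 1e-4                        # waist radius
z_r = math.pi * w0 ** 2 / wavelength
q0 = complex(0.0, z_r)           # q at the waist
q1 = propagate_q(q0, free_space(z_r))   # one Rayleigh range downstream
q2 = propagate_q(q1, thin_lens(0.05))   # a lens changes R(z), not w(z)
```

    Chaining ABCD matrices element by element and reading the beam radius from q at each position is exactly the propagation loop such a resonator program performs.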

  3. Exact evaluations of some Meijer G-functions and probability of all eigenvalues real for the product of two Gaussian matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2015-11-01

    We provide a proof to a recent conjecture by Forrester (2014 J. Phys. A: Math. Theor. 47 065202) regarding the algebraic and arithmetic structure of Meijer G-functions which appear in the expression for probability of all eigenvalues real for the product of two real Gaussian matrices. In the process we come across several interesting identities involving Meijer G-functions.

  4. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the dependence of FWI on the initial model can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Illustrative numerical examples show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, can obtain more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI is strongly resistant to Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.

  5. Three-dimensional joint inversion for magnetotelluric resistivity and static shift distributions in complex media

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Meju, Max A.

    2006-05-01

    Accurate interpretation of magnetotelluric (MT) data in the presence of static shift arising from near-surface inhomogeneities is an unresolved problem in three-dimensional (3-D) inversion. While it is well known in 1-D and 2-D studies that static shift can lead to erroneous interpretation, how static shift can influence the result of 3-D inversion is not fully understood and is relevant to improved subsurface analysis. Using the synthetic data generated from 3-D models with randomly distributed heterogeneous overburden and elongate homogeneous overburden that are consistent with geological observations, this paper examines the effects of near-surface inhomogeneity on the accuracy of 3-D inversion models. It is found that small-scale and shallow depth structures are severely distorted while the large-scale structure is marginally distorted in 3-D inversion not accounting for static shift; thus the erroneous near-surface structure does degrade the reconstruction of smaller-scale structure at any depth. However, 3-D joint inversion for resistivity and static shift significantly reduces the artifacts caused by static shifts and improves the overall resolution, irrespective of whether a zero-sum or Gaussian distribution of static shifts is assumed. The 3-D joint inversion approach works equally well for situations where the shallow bodies are of small size or long enough to allow some induction such that the effects of near-surface inhomogeneity are manifested as a frequency-dependent shift rather than a constant shift.

  6. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and chi-square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit rejection regions for hypothesis tests. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
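    One hypothetical Gaussian flavour of such a normalized likelihood ratio can be sketched as follows. This illustrates the general idea only, not the exact formulae implemented in 'Sequitor'; the function name and all data values are invented:

```python
import math
import statistics

def change_index(base, new):
    """Likelihood of the new points under the base-sample fit, relative to
    their likelihood under their own fit, normalized per observation.
    Values near 1 indicate compatibility; a sharp drop signals change."""
    mu0, s0 = statistics.mean(base), statistics.stdev(base)
    mu1, s1 = statistics.mean(new), statistics.stdev(new)

    def loglik(xs, mu, s):
        # Gaussian log-likelihood of the sample xs under N(mu, s^2)
        return sum(-0.5 * math.log(2 * math.pi * s * s)
                   - (x - mu) ** 2 / (2 * s * s) for x in xs)

    return math.exp((loglik(new, mu0, s0) - loglik(new, mu1, s1)) / len(new))

base = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
stable = [10.0, 9.9, 10.1]      # compatible with the base series
shifted = [12.0, 12.2, 11.9]    # abrupt parameter change

ci_stable = change_index(base, stable)
ci_shifted = change_index(base, shifted)
```

As in the abstract, a substantial parameter change makes the index collapse toward zero, while compatible new values leave it near one.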

  7. Improvement of Mishchenko's T-matrix code for absorbing particles.

    PubMed

    Moroz, Alexander

    2005-06-10

    The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/-crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles, where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. A computer F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
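    The linear-algebra step in question can be sketched as follows (an illustrative re-implementation in Python, not the F77 routines themselves). Complex arithmetic matters here because absorbing particles make the T-matrix system complex-valued:

```python
import numpy as np

def solve_gauss(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting and
    backsubstitution; no explicit matrix inverse is formed."""
    A = A.astype(complex)   # astype copies; complex for absorbing media
    b = b.astype(complex)
    n = len(b)
    for k in range(n - 1):                        # forward elimination
        p = k + int(np.argmax(np.abs(A[k:, k])))  # partial pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):                # backsubstitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# hypothetical small complex system, standing in for a T-matrix block
A = np.array([[4.0 + 1.0j, 1.0, 0.5],
              [1.0, 3.0 - 2.0j, 1.0],
              [0.5, 1.0, 2.0 + 0.5j]])
b = np.array([1.0, 2.0 + 1.0j, 0.0])
x = solve_gauss(A, b)
```

Solving the system directly, rather than inverting the matrix and multiplying, is the standard way to avoid the extra error amplification of an explicit inverse.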

  8. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it allows the probability of propagation to be extrapolated either from partial data or purely theoretically. The propagation can be assumed frictional, which permits the average behaviour to be described by an energy line that corresponds to the loss of energy along the path. But the loss of energy can also be assumed to be a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.

  9. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
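    The square-root rule quoted above follows from a short argument in the discrete, exact-match case: given the target at site i, i.i.d. sampling from q finds it after a geometric number of trials with mean 1/q_i, so E[trials] = Σ_i p_i/q_i, which by Cauchy-Schwarz is minimized at q ∝ √p. A numerical sketch with a made-up target distribution:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.15, 0.1])   # hypothetical target distribution

def expected_trials(q):
    """E[trials] = sum_i p_i / q_i: given the target at site i, the number
    of i.i.d. draws from q until hitting i is geometric with mean 1/q_i."""
    return float(np.sum(p / q))

q_sqrt = np.sqrt(p) / np.sqrt(p).sum()   # optimal: q proportional to sqrt(p)
q_match = p.copy()                       # naive: search where the target is likely
q_unif = np.full_like(p, 1.0 / p.size)   # uninformed uniform search
```

The square-root rule attains the Cauchy-Schwarz lower bound (Σ_i √p_i)², and both the naive and uniform strategies do strictly worse on this example.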

  10. Precise Determination of the Absorption Maximum in Wide Bands

    ERIC Educational Resources Information Center

    Eriksson, Karl-Hugo; And Others

    1977-01-01

    A precise method of determining absorption maxima where Gaussian functions occur is described. The method is based on a logarithmic transformation of the Gaussian equation and is suited for a mini-computer. (MR)
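    The logarithmic transformation reduces peak finding to fitting a parabola: ln y = ln A − (x − x0)²/(2σ²) is quadratic in x, so the absorption maximum is recovered from the quadratic coefficients as x0 = −c1/(2c2). A sketch with invented band parameters (the original method targeted a mini-computer; NumPy stands in here):

```python
import numpy as np

# Hypothetical Gaussian absorption band sampled at 41 wavelengths (nm)
x = np.linspace(480.0, 560.0, 41)
x0_true, s, A = 517.3, 25.0, 0.82            # made-up band parameters
y = A * np.exp(-(x - x0_true) ** 2 / (2 * s ** 2))

xc = x - x.mean()                            # center abscissa for conditioning
c2, c1, c0 = np.polyfit(xc, np.log(y), 2)    # ln y is exactly a parabola
x0_est = x.mean() - c1 / (2 * c2)            # vertex of the fitted parabola
```

Because the whole band contributes to the fit, the estimate is far less sensitive to sampling than reading off the largest measured point, which is the advantage of the method for wide bands.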

  11. Effective one-dimensional approach to the source reconstruction problem of three-dimensional inverse optoacoustics

    NASA Astrophysics Data System (ADS)

    Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.

    2017-09-01

    The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodical progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.

  12. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which indicate the numbers of images in the corresponding lens systems.

  13. A semi-automated method for the detection of seismic anisotropy at depth via receiver function analysis

    NASA Astrophysics Data System (ADS)

    Licciardi, A.; Piana Agostinetti, N.

    2016-06-01

    Information about seismic anisotropy is embedded in the variation of the amplitude of the Ps pulses as a function of azimuth, on both the Radial and the Transverse components of teleseismic receiver functions (RF). We develop a semi-automatic method to constrain the presence and the depth of anisotropic layers beneath a single broad-band seismic station. An algorithm is specifically designed to avoid trial-and-error methods and subjective crustal parametrizations in RF inversions, providing a suitable tool for large data set analysis. The algorithm couples information extracted from a 1-D VS profile with a harmonic decomposition analysis of the RF data set. This information is used to determine the number of anisotropic layers and their approximate position at depth, which, in turn, can be used to, for example, narrow the search boundaries for layer thickness and S-wave velocity in a subsequent parameter space search. Here, the output of the algorithm is used to invert an RF data set by means of the Neighbourhood Algorithm (NA). To test our methodology, we apply the algorithm to both synthetic and observed data. We make use of synthetic RF with correlated Gaussian noise to investigate the resolution power for multiple and thin (1-3 km) anisotropic layers in the crust. The algorithm successfully identifies the number and position of anisotropic layers at depth prior to the NA inversion step. In the NA inversion, the strength of anisotropy and the orientation of the symmetry axis are correctly retrieved. The method is then applied to field measurements from station BUDO in the Tibetan Plateau. Two consecutive layers of anisotropy are automatically identified with our method in the first 25-30 km of the crust. The data are then inverted with the retrieved parametrization. The direction of the anisotropic axis in the uppermost layer correlates well with the orientation of the major planar structure in the area. The deeper anisotropic layer is associated with an older phase of crustal deformation. Our results are compared with previous anisotropic RF studies at the same station, showing strong similarities.

  14. Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    Artyushenko, V. M.; Volovach, V. I.

    2018-01-01

    We considered the problems of the synthesis and analysis of discriminators, when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case, the discriminator of the tracking meter should contain the nonlinear transformation unit, the characteristics of which are determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of the discriminators working under the above conditions are obtained. It is shown that the efficiency of non-linear processing depends on the ratio of power of FM noise to the power of Gaussian noise. The analysis of the information loss of signal transformation caused by the linear section of discriminatory characteristics of the unit of nonlinear transformations of the discriminator is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian noise and mismatch errors.

  15. What Can Be Learned from Inverse Statistics?

    NASA Astrophysics Data System (ADS)

    Ahlgren, Peter Toke Heden; Dahl, Henrik; Jensen, Mogens Høgh; Simonsen, Ingve

    One stylized fact of financial markets is an asymmetry between the most likely time to profit and to loss. This gain-loss asymmetry is revealed by inverse statistics, a method closely related to empirically finding first passage times. Many papers have presented evidence about the asymmetry, where it appears and where it does not. Also, various interpretations and explanations for the results have been suggested. In this chapter, we review the published results and explanations. We also examine the results and show that some are at best fragile. Similarly, we discuss the suggested explanations and propose a new model based on Gaussian mixtures. Apart from explaining the gain-loss asymmetry, this model also has the potential to explain other stylized facts such as volatility clustering, fat tails, and power law behavior of returns.

  16. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods. PMID:27807443

  17. On Nonlinear Functionals of Random Spherical Eigenfunctions

    NASA Astrophysics Data System (ADS)

    Marinucci, Domenico; Wigman, Igor

    2014-05-01

    We prove central limit theorems and Stein-like bounds for the asymptotic behaviour of nonlinear functionals of spherical Gaussian eigenfunctions. Our investigation combines asymptotic analysis of higher order moments for Legendre polynomials and, in addition, recent results on Malliavin calculus and total variation bounds for Gaussian subordinated fields. We discuss applications to geometric functionals like the defect and invariant statistics, e.g., polyspectra of isotropic spherical random fields. Both of these have relevance for applications, especially in an astrophysical environment.

  18. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.

  19. Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection

    NASA Astrophysics Data System (ADS)

    Amiri, Ali; Fathy, Mahmood

    2010-12-01

    This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.

  20. Variational method for calculating the binding energy of the base state of an impurity D- centered on a quantum dot of GaAs-Ga1-xAlxAs

    NASA Astrophysics Data System (ADS)

    Durán-Flórez, F.; Caicedo, L. C.; Gonzalez, J. E.

    2018-04-01

    In quantum mechanics it is very difficult to obtain exact solutions; it is therefore necessary to resort to tools and methods that facilitate the calculation of approximate solutions of these systems. One such method is the variational method, which consists in proposing a wave function that depends on several parameters adjusted to approach the exact solution. Authors in the past have performed calculations applying this method using exponential and Gaussian orbital functions with linear and quadratic correlation factors. In this paper, a Gaussian function with a linear correlation factor is proposed for the calculation of the binding energy of an impurity D- centered in a quantum dot of radius r; the Gaussian function depends on the radius of the quantum dot.

  1. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  2. Normal form decomposition for Gaussian-to-Gaussian superoperators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Palma, Giacomo; INFN, Pisa; Mari, Andrea

    2015-05-15

    In this paper, we explore the set of linear maps sending the set of quantum Gaussian states into itself. These maps are in general not positive, a feature which can be exploited as a test to check whether a given quantum state belongs to the convex hull of Gaussian states (if one of the considered maps sends it into a non-positive operator, the above state is certified not to belong to the set). Generalizing a result known to be valid under the assumption of complete positivity, we provide a characterization of these Gaussian-to-Gaussian (not necessarily positive) superoperators in terms of their action on the characteristic function of the inputs. For the special case of one-mode mappings, we also show that any Gaussian-to-Gaussian superoperator can be expressed as a concatenation of a phase-space dilatation, followed by the action of a completely positive Gaussian channel, possibly composed with a transposition. While a similar decomposition is shown to fail in the multi-mode scenario, we prove that it still holds at least under the further hypothesis of homogeneous action on the covariance matrix.

  3. Axial acoustic radiation force on a sphere in Gaussian field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Rongrong; Liu, Xiaozhou, E-mail: xzliu@nju.edu.cn; Gong, Xiufen

    2015-10-28

    Based on the finite series method, the acoustic radiation force resulting from a Gaussian beam incident on a spherical object is investigated analytically. When the particle deviates from the center of the beam, the Gaussian beam is expanded in spherical functions about the center of the particle and the expansion coefficients of the Gaussian beam are calculated. The analytical expression for the acoustic radiation force on spherical particles deviating from the Gaussian beam center is deduced. The dependence of the acoustic radiation force on the acoustic frequency and on the offset distance from the Gaussian beam center is investigated. Results are presented for Gaussian beams with different wavelengths, and it is shown that the interaction of a Gaussian beam with a sphere can result in an attractive axial force under specific operational conditions. The results indicate the capability of manipulating and separating spheres based on their mechanical and acoustical properties, and may provide a theoretical basis for the development of single-beam acoustical tweezers.

  4. Large fluctuations of the macroscopic current in diffusive systems: a numerical test of the additivity principle.

    PubMed

    Hurtado, Pablo I; Garrido, Pedro L

    2010-04-01

    Most systems, when pushed out of equilibrium, respond by building up currents of locally conserved observables. Understanding how microscopic dynamics determines the averages and fluctuations of these currents is one of the main open problems in nonequilibrium statistical physics. The additivity principle is a theoretical proposal that allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Using simulations, we validate this conjecture in a simple and general model of energy transport, both in the presence of a temperature gradient and in canonical equilibrium. In particular, we show that the current distribution displays a Gaussian regime for small current fluctuations, as prescribed by the central limit theorem, and non-Gaussian (exponential) tails for large current deviations, obeying in all cases the Gallavotti-Cohen fluctuation theorem. In order to facilitate a given current fluctuation, the system adopts a well-defined temperature profile different from that of the steady state and in accordance with the additivity hypothesis predictions. System statistics during a large current fluctuation are independent of the sign of the current, which implies that the optimal profile (as well as higher-order profiles and spatial correlations) is invariant upon current inversion. We also demonstrate that finite-time joint fluctuations of the current and the profile are well described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Brunner; E. Valeo

    Simulations of electron transport are carried out by solving the Fokker-Planck equation in the diffusive approximation. The system of a single laser hot spot, with open boundary conditions, is systematically studied by performing a scan over a wide range of the two relevant parameters: (1) the ratio of the stopping length to the width of the hot spot, and (2) the relative importance of heating through inverse Bremsstrahlung compared to thermalization through self-collisions. As for uniform illumination [J.P. Matte et al., Plasma Phys. Controlled Fusion 30 (1988) 1665], the bulk of the velocity distribution functions (VDFs) presents a super-Gaussian dependence. However, as a result of spatial transport, the tails are observed to be well represented by a Maxwellian. A similar dependence of the distributions is also found for multiple hot spot systems. For its relevance with respect to stimulated Raman scattering, the linear Landau damping of the electron plasma wave is estimated for such VDFs. Finally, the nonlinear Fokker-Planck simulations of the single laser hot spot system are also compared to the results obtained with the linear non-local hydrodynamic approach [A.V. Brantov et al., Phys. Plasmas 5 (1998) 2742], thus providing a quantitative limit to the latter method: the hydrodynamic approach presents more than 10% inaccuracy in the presence of temperature variations of the order delta T/T greater than or equal to 1%, and similar levels of deformation of the Gaussian shape of the Maxwellian background.

  6. Beam wander characteristics of flat-topped, dark hollow, cos and cosh-Gaussian, J0- and I0- Bessel Gaussian beams propagating in turbulent atmosphere: a review

    NASA Astrophysics Data System (ADS)

    Eyyuboğlu, Halil T.; Baykal, Yahya; Çil, Celal Z.; Korotkova, Olga; Cai, Yangjian

    2010-02-01

    In this paper we review our work on the evaluation of the root mean square (rms) beam wander characteristics of the flat-topped, dark hollow, cos- and cosh-Gaussian, J0-Bessel Gaussian and I0-Bessel Gaussian beams in atmospheric turbulence. Our formulation is based on the wave-treatment approach, where not only the beam sizes but also the source beam profiles are taken into account. In this approach the first and second statistical moments are obtained from the Rytov series under weak atmospheric turbulence conditions, and the beam sizes are determined as a function of the propagation distance. It is found that after propagating in atmospheric turbulence, under certain conditions, the collimated flat-topped, dark hollow, cos- and cosh-Gaussian, J0-Bessel Gaussian and I0-Bessel Gaussian beams have smaller rms beam wander than the Gaussian beam. The beam wander of these beams is analyzed against the propagation distance, source spot sizes, and specific beam parameters related to the individual beam, such as the relative amplitude factors of the constituent beams, the flatness parameters, the beam orders, the displacement parameters and the width parameters, and is compared against the corresponding Gaussian beam.

  7. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
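    A Gaussian plume model is linear in the source rate, which is what makes the emission-rate inversion a simple least-squares problem. A minimal sketch with made-up dispersion coefficients and geometry (not the configuration used in the study):

```python
import numpy as np

def plume(Q, x, y, h=1.0, u=3.0):
    """Ground-level concentration of a Gaussian plume from a point source
    of rate Q at height h, wind speed u along x; the dispersion widths
    sy(x), sz(x) below are hypothetical linear parametrisations."""
    sy = 0.08 * x
    sz = 0.06 * x
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * 2 * np.exp(-h ** 2 / (2 * sz ** 2)))   # ground reflection, z = 0

# Synthetic crosswind transect 100 m downwind of a source with Q = 0.5 g/s
y_obs = np.linspace(-30.0, 30.0, 31)
c_obs = plume(0.5, 100.0, y_obs)

# Because the concentration is linear in Q, the best-fit rate is a
# one-parameter least-squares projection onto the unit-rate plume.
c_unit = plume(1.0, 100.0, y_obs)
Q_hat = float(c_obs @ c_unit / (c_unit @ c_unit))
```

In the study's statistical framework the same linearity holds source by source, with the tracer data constraining the transport-model configuration and the uncertainty statistics.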

  8. Mean intensity of the fundamental Bessel-Gaussian beam in turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Lukin, Igor P.

    2017-11-01

    In this article the mean intensity of a fundamental Bessel-Gaussian optical beam in turbulent atmosphere is studied. The analysis is based on the solution of the equation for the transverse second-order mutual coherence function of a fundamental Bessel-Gaussian beam of optical radiation. Distributions of the mean intensity of a fundamental Bessel-Gaussian optical beam in the directions longitudinal and transverse to the propagation of optical radiation are investigated in detail. The influence of atmospheric turbulence on the change of the radius of the central part of the Bessel optical beam is estimated. Values of parameters are established at which it is possible to generate a nondiffracting pseudo-Bessel optical beam in turbulent atmosphere by means of a fundamental Bessel-Gaussian optical beam.

  9. A Gaussian framework for modeling effects of frequency-dependent attenuation, frequency-dependent scattering, and gating.

    PubMed

    Wear, Keith A

    2002-11-01

    For a wide range of applications in medical ultrasound, power spectra of received signals are approximately Gaussian. It has been established previously that an ultrasound beam with a Gaussian spectrum propagating through a medium with linear attenuation remains Gaussian. In this paper, Gaussian transformations are derived to model the effects of scattering (according to a power law, as is commonly applicable in soft tissues, especially over limited frequency ranges) and gating (with a Hamming window, a commonly used gate function). These approximations are shown to be quite accurate even for relatively broad band systems with fractional bandwidths approaching 100%. The theory is validated by experiments in phantoms consisting of glass particles suspended in agar.
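    The invariance property cited here can be checked numerically: multiplying a Gaussian spectrum by a linear-attenuation factor exp(-beta*f*z) yields another Gaussian of the same width whose centre is downshifted to fc - beta*sigma^2*z, since the exponents add and the square can be completed. A sketch with illustrative parameter values (not those of the phantom experiments):

```python
import numpy as np

f = np.linspace(0.0, 15.0, 150001)   # frequency axis, MHz
fc, sigma = 5.0, 1.0                 # made-up centre frequency and bandwidth
beta, z = 0.1, 4.0                   # made-up attenuation slope and depth

S0 = np.exp(-(f - fc) ** 2 / (2 * sigma ** 2))   # Gaussian power spectrum
Sz = S0 * np.exp(-beta * f * z)                  # after linear attenuation

# Completing the square predicts a Gaussian peaked at fc - beta*sigma**2*z
fc_shifted = float(f[np.argmax(Sz)])             # expect 5.0 - 0.1*1*4 = 4.6
```

The same completing-the-square bookkeeping is what lets the paper chain further Gaussian transformations for power-law scattering and Hamming gating.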

  10. Poly-Gaussian model of randomly rough surface in rarefied gas flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksenova, Olga A.; Khalidov, Iskander A.

    2014-12-09

    Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.

  11. Tests for Gaussianity of the MAXIMA-1 cosmic microwave background map.

    PubMed

    Wu, J H; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D

    2001-12-17

    Gaussianity of the cosmological perturbations is one of the key predictions of standard inflation, but it is violated by other models of structure formation such as cosmic defects. We present the first test of the Gaussianity of the cosmic microwave background (CMB) on subdegree angular scales, where deviations from Gaussianity are most likely to occur. We apply the methods of moments, cumulants, the Kolmogorov test, the χ² test, and Minkowski functionals in eigen, real, Wiener-filtered, and signal-whitened spaces, to the MAXIMA-1 CMB anisotropy data. We find that the data, which probe angular scales between 10 arcmin and 5 deg, are consistent with Gaussianity. These results show consistency with the standard inflation and place constraints on the existence of cosmic defects.

  12. Solute transport in aquifers: The comeback of the advection dispersion equation and the First Order Approximation

    NASA Astrophysics Data System (ADS)

    Fiori, A.; Zarlenga, A.; Jankovic, I.; Dagan, G.

    2017-12-01

    Natural gradient steady flow of mean velocity U takes place in heterogeneous aquifers of random logconductivity Y = ln K, characterized by the normal univariate PDF f(Y) and autocorrelation ρ_Y, of variance σ_Y² and horizontal integral scale I. Solute transport is quantified by the Breakthrough Curve (BTC) M at planes at distance x from the injection plane. The study builds on the extensive 3D numerical simulations of flow and transport of Jankovic et al. (2017) for different conductivity structures. The present study further explores the predictive capabilities of the Advection Dispersion Equation (ADE), with macrodispersivity α_L given by the First Order Approximation (FOA), by checking its applicability in a quantitative manner. After a discussion of the suitable boundary conditions for the ADE, we find that the ADE-FOA solution is a sufficiently accurate predictor for applications, the many other sources of uncertainty prevailing in practice notwithstanding. We checked by least squares and by comparison of travel times of quantiles of M that the analytical Inverse Gaussian M with α_L = σ_Y² I is indeed able to fit well the bulk of the simulated BTCs. It tends to underestimate the late arrival time of the thin and persistent tail. The tail is better reproduced by the semi-analytical MIMSCA model, which also allows for a physical explanation of the success of the Inverse Gaussian solution. Examination of the pertinent longitudinal mass distribution shows that it is different from the Gaussian one commonly used in the analysis of field experiments, and it captures the main features of the plume measurements of the MADE experiment. The results strengthen the confidence in the applicability of the ADE and the FOA to predicting longitudinal spreading in solute transport through heterogeneous aquifers of stationary random structure.
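    As a rough illustration of the ADE-FOA prediction discussed above: the 1-D advection-dispersion equation with constant velocity U and macrodispersion coefficient D = α_L·U has the Inverse Gaussian first-passage density as its flux-averaged BTC. A minimal sketch (all parameter values are illustrative, not from the study) that checks normalization and the mean arrival time x/U:

```python
import numpy as np

U = 1.0        # mean velocity (illustrative units)
sigY2 = 0.5    # log-conductivity variance sigma_Y^2 (illustrative)
I = 2.0        # horizontal integral scale (illustrative)
x = 50.0       # distance of the control plane from the injection plane

aL = sigY2 * I     # FOA macrodispersivity alpha_L = sigma_Y^2 * I
D = aL * U         # longitudinal macrodispersion coefficient

t = np.linspace(1e-3, 300.0, 300000)
# Inverse Gaussian (first-passage) breakthrough curve of the 1-D ADE
btc = x / np.sqrt(4.0 * np.pi * D * t ** 3) * np.exp(-(x - U * t) ** 2 / (4.0 * D * t))

mass = np.trapz(btc, t)        # total recovered mass fraction, ~1
tmean = np.trapz(t * btc, t)   # mean arrival time, ~ x/U = 50
print(mass, tmean)
```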

  13. On uncertainty quantification in hydrogeology and hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  14. Matrix elements of explicitly correlated Gaussian basis functions with arbitrary angular momentum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, Tennesse; Varga, Kálmán

    2016-05-14

    A new algorithm for calculating the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions for quantum-mechanical calculations of atoms with arbitrary angular momentum is presented. The calculations are checked on several excited states of three and four electron systems. The presented formalism can be used as unified framework for high accuracy calculations of properties of small atoms and molecules.

  15. Effects of scale-dependent non-Gaussianity on cosmological structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Miller, Amber; Shandera, Sarah

    2008-04-15

    The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.

  16. Bivariate-t distribution for transition matrix elements in Breit-Wigner to Gaussian domains of interacting particle systems.

    PubMed

    Kota, V K B; Chavda, N D; Sahu, R

    2006-04-01

    Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends below the Gaussian regime into the BW regime down to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.

  17. Coherence degree of the fundamental Bessel-Gaussian beam in turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Lukin, Igor P.

    2017-11-01

    In this article the coherence of a fundamental Bessel-Gaussian optical beam in turbulent atmosphere is analyzed. The analysis is based on the solution of the equation for the transverse second-order mutual coherence function of a fundamental Bessel-Gaussian optical beam. The behavior of the coherence degree of a fundamental Bessel-Gaussian optical beam is examined as a function of the beam parameters and the characteristics of the turbulent atmosphere. It is revealed that at low levels of turbulent fluctuations the coherence degree of a fundamental Bessel-Gaussian optical beam has a characteristic oscillating appearance. At high levels of fluctuations the coherence degree is described by a single-scale decreasing curve which, as the level of fluctuations along the propagation path increases, approaches the corresponding characteristic of a spherical optical wave.

  18. Coherence of the vortex Bessel-Gaussian beam in turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Lukin, Igor P.

    2017-11-01

    In this paper a theoretical study of the coherent properties of vortex Bessel-Gaussian optical beams propagating in turbulent atmosphere is developed. The approach is based on the analytical solution of the equation for the transverse second-order mutual coherence function of the optical field. The behavior of the integral scale of the coherence degree of vortex Bessel-Gaussian optical beams is considered as a function of the beam parameters and the characteristics of the turbulent atmosphere. It is shown that the integral scale of the coherence degree of a vortex Bessel-Gaussian optical beam depends essentially on the topological charge of the vortex beam: as the topological charge increases, the integral scale of the coherence degree decreases.

  19. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, it should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do so, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each), and a set of sensors placed at the centre of each field at several heights, as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. 
Each numerical experiment covered a period of 28 days. The meteorological dataset of the fluxnet FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field-trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor heights. The uncertainties and systematic biases are evaluated and discussed.
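    Under the linearity implicit in the approach above (time-averaged concentrations respond linearly to the source strengths), the multi-source inversion step reduces to a linear least-squares problem. A toy sketch, with a made-up dispersion matrix standing in for the forward WindTrax/FIDES runs and hypothetical emission values:

```python
import numpy as np

# Hypothetical dispersion matrix D[i, j]: time-averaged concentration at sensor i
# per unit emission of field j (in a real study it comes from a forward run of a
# Lagrangian stochastic or Gaussian dispersion model).
rng = np.random.default_rng(0)
n_sensors, n_fields = 12, 9   # 9 plots (3 x 3); sensors at several heights/locations
D = rng.uniform(0.05, 1.0, size=(n_sensors, n_fields))

S_true = np.array([5.0, 1.0, 0.5, 4.0, 2.0, 0.1, 3.0, 0.2, 1.5])  # true emissions (arbitrary units)
C = D @ S_true + rng.normal(0.0, 0.001, n_sensors)                # measured mean concentrations + noise

# Multi-source inversion: least-squares solution of C = D S
S_hat, *_ = np.linalg.lstsq(D, C, rcond=None)
print(np.round(S_hat, 2))   # close to S_true
```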

  20. Investigating Einstein-Podolsky-Rosen steering of continuous-variable bipartite states by non-Gaussian pseudospin measurements

    NASA Astrophysics Data System (ADS)

    Xiang, Yu; Xu, Buqing; Mišta, Ladislav; Tufarelli, Tommaso; He, Qiongyi; Adesso, Gerardo

    2017-10-01

    Einstein-Podolsky-Rosen (EPR) steering is an asymmetric form of correlations which is intermediate between quantum entanglement and Bell nonlocality, and can be exploited as a resource for quantum communication with one untrusted party. In particular, steering of continuous-variable Gaussian states has been extensively studied theoretically and experimentally, as a fundamental manifestation of the EPR paradox. While most of these studies focused on quadrature measurements for steering detection, two recent works revealed that there exist Gaussian states which are only steerable by suitable non-Gaussian measurements. In this paper we perform a systematic investigation of EPR steering of bipartite Gaussian states by pseudospin measurements, complementing and extending previous findings. We first derive the density-matrix elements of two-mode squeezed thermal Gaussian states in the Fock basis, which may be of independent interest. We then use such a representation to investigate steering of these states as detected by a simple nonlinear criterion, based on second moments of the correlation matrix constructed from pseudospin operators. This analysis reveals previously unexplored regimes where non-Gaussian measurements are shown to be more effective than Gaussian ones to witness steering of Gaussian states in the presence of local noise. We further consider an alternative set of pseudospin observables, whose expectation value can be expressed more compactly in terms of Wigner functions for all two-mode Gaussian states. However, according to the adopted criterion, these observables are found to be always less sensitive than conventional Gaussian observables for steering detection. Finally, we investigate continuous-variable Werner states, which are non-Gaussian mixtures of Gaussian states, and find that pseudospin measurements are always more effective than Gaussian ones to reveal their steerability. 
Our results provide useful insights on the role of non-Gaussian measurements in characterizing quantum correlations of Gaussian and non-Gaussian states of continuous-variable quantum systems.

  1. Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.

    PubMed

    Ginzburg, D; Mann, A

    2014-03-10

    A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.

  2. Laser plasma x-ray line spectra fitted using the Pearson VII function

    NASA Astrophysics Data System (ADS)

    Michette, A. G.; Pfauntsch, S. J.

    2000-05-01

    The Pearson VII function, which is more general than the Gaussian, Lorentzian and other profiles, is used to fit the x-ray spectral lines produced in a laser-generated plasma, instead of the more usual, but computationally expensive, Voigt function. The mean full-width half-maximum of the fitted lines is 0.102+/-0.014 nm, entirely consistent with the value expected from geometrical considerations, and the fitted line profiles are generally inconsistent with being either Lorentzian or Gaussian.
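    For reference, the Pearson VII profile has the closed form P(x) = A·[1 + ((x − x0)/w)²·(2^(1/m) − 1)]^(−m), which reduces to a Lorentzian for m = 1 and approaches a Gaussian as m → ∞; the width parameter w is the half-width at half-maximum, so FWHM = 2w for every m. A small check (parameter values illustrative, except the 0.102 nm FWHM quoted above):

```python
import numpy as np

def pearson7(x, x0, w, m, amplitude=1.0):
    """Pearson VII profile; w is the half-width at half-maximum, m the shape exponent.
    m = 1 gives a Lorentzian; m -> infinity approaches a Gaussian."""
    return amplitude * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

x0, w = 0.0, 0.051   # FWHM = 2*w = 0.102 nm, matching the mean fitted width above
for m in (1.0, 2.5, 100.0):
    peak = pearson7(x0, x0, w, m)
    half = pearson7(x0 + w, x0, w, m)
    print(m, peak, half / peak)   # half/peak is 0.5 for every m: w is the HWHM by construction
```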

  3. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  4. Lower bounds to energies for cusped-gaussian wavefunctions. [hydrogen atom ground state

    NASA Technical Reports Server (NTRS)

    Eaves, J. O.; Walsh, B. C.; Steiner, E.

    1974-01-01

    Calculations for the ground states of H, He, and Be, conducted by Steiner and Sykes (1972), show that the inclusion of a very small number of cusp functions can lead to a substantial enhancement of the quality of the Gaussian basis used in molecular wavefunction computations. The properties of the cusped-Gaussian basis are investigated by a calculation of lower bounds to the ground-state energy of the hydrogen atom.

  5. Restoration algorithms for imaging through atmospheric turbulence

    DTIC Science & Technology

    2017-02-18

    the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed… with w_i(ξ) = G_σ(|F(v_i)(ξ)|^p) / Σ_{j=1}^{M} G_σ(|F(v_j)(ξ)|^p), where F denotes the Fourier transform (ξ are the frequencies) and G_σ is a Gaussian filter of… a combination of SIFT [26] and ORSA [14] algorithms) in order to remove affine transformations (translations, rotations and homothety). The authors
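    The frequency-domain weighting scheme in the snippet above (each frame weighted by a Gaussian-smoothed power of its Fourier magnitude, normalized across frames) can be sketched as follows; the p and σ values are illustrative, and random arrays stand in for registered frames:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_burst_accumulation(frames, p=11, sigma=3.0):
    """Weighted Fourier accumulation of registered frames (sketch of the scheme
    quoted above; p and sigma are illustrative choices)."""
    specs = [np.fft.fft2(v) for v in frames]
    # G_sigma(|F(v_i)(xi)|^p): Gaussian-smoothed p-th power of each magnitude spectrum
    mags = [gaussian_filter(np.abs(F) ** p, sigma) for F in specs]
    norm = np.sum(mags, axis=0)
    weights = [m / norm for m in mags]   # w_i(xi), summing to 1 at each frequency
    fused = np.sum([w * F for w, F in zip(weights, specs)], axis=0)
    return np.real(np.fft.ifft2(fused)), weights

rng = np.random.default_rng(1)
frames = [rng.standard_normal((32, 32)) for _ in range(4)]   # stand-ins for registered frames
u, weights = fourier_burst_accumulation(frames)
print(np.allclose(np.sum(weights, axis=0), 1.0))   # the weights form a partition of unity
```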

  6. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In the general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image of size N × N are stored in vectors x and y of length N². They are related to one another by the linear relation y = Hx, where H is a matrix of size N² × N² that contains the elements of the instrument response. This matrix presents particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution relation. The presence of noise complicates the resolution of the problem. It is shown that minimum variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms, because of computer limitations.
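    As a sketch of the Richardson-Lucy algorithm mentioned above for photodetected (Poisson-statistics) images, here is a minimal 1-D implementation; the 2-D case is analogous, and the PSF and scene below are purely illustrative:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """Richardson-Lucy iteration for y = h * x under Poisson statistics (1-D sketch).
    psf must be non-negative and sum to 1; iterates stay non-negative."""
    x = np.full_like(y, y.mean())   # flat positive initial guess
    psf_mirror = psf[::-1]          # correlation = convolution with the mirrored PSF
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_mirror, mode="same")
    return x

# Toy example: two point sources blurred by a Gaussian PSF
n = 64
truth = np.zeros(n); truth[20] = 5.0; truth[40] = 3.0
t = np.arange(-8, 9)
psf = np.exp(-t ** 2 / (2 * 2.0 ** 2)); psf /= psf.sum()
y = np.convolve(truth, psf, mode="same")

x_hat = richardson_lucy(y, psf)
print(np.all(x_hat >= 0))   # non-negativity is preserved by the multiplicative update
```

Note how the update is multiplicative: starting from a positive guess, the estimate can never go negative, which is one reason Richardson-Lucy is favored for photon-counting data.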

  7. Propagation of Bessel-Gaussian beams through a double-apertured fractional Fourier transform optical system.

    PubMed

    Tang, Bin; Jiang, Chun; Zhu, Haibin

    2012-08-01

    Based on the scalar diffraction theory and the fact that a hard-edged aperture function can be expanded into a finite sum of complex Gaussian functions, an approximate analytical solution for Bessel-Gaussian (BG) beams propagating through a double-apertured fractional Fourier transform (FrFT) system is derived in the cylindrical coordinate. By using the approximate analytical formulas, the propagation properties of BG beams passing through a double-apertured FrFT optical system have been studied in detail by some typical numerical examples. The results indicate that the double-apertured FrFT optical system provides a convenient way for controlling the properties of the BG beams by properly choosing the optical parameters.

  8. Assessment of refractive index of pigments by Gaussian fitting of light backscattering data in context of the liquid immersion method.

    PubMed

    Niskanen, Ilpo; Peiponen, Kai-Erik; Räty, Jukka

    2010-05-01

    Using a multifunction spectrophotometer, the refractive index of a pigment can be estimated by measuring the backscattering of light from the pigment in immersion liquids having slightly different refractive indices. A simple theoretical Gaussian function model related to the optical path distribution is introduced that makes it possible to describe quantitatively the backscattering signal from transparent pigments using a set of only a few immersion liquids. With the aid of the data fitting by a Gaussian function, the measurement time of the refractive index of the pigment can be reduced. The backscattering measurement technique is suggested to be useful in industrial measurement environments of pigments.
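    Since the logarithm of a Gaussian is a quadratic, a Gaussian model of the backscattering signal versus immersion-liquid refractive index can be fitted from only a few liquids by simple polynomial regression, with the extremum locating the pigment's index. A sketch under that assumption (all values hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical backscattering signal vs immersion-liquid refractive index,
# modeled by a Gaussian whose extremum locates the pigment index n_p.
n_p, s, A = 1.562, 0.020, 1.0                      # illustrative parameters
n_liq = np.array([1.52, 1.54, 1.56, 1.58, 1.60])   # a set of only a few immersion liquids
signal = A * np.exp(-(n_liq - n_p) ** 2 / (2 * s ** 2))

# The log of a Gaussian is a quadratic, so three coefficients suffice
c2, c1, c0 = np.polyfit(n_liq, np.log(signal), 2)
n_est = -c1 / (2 * c2)   # extremum of the fitted parabola
print(n_est)             # recovers n_p = 1.562
```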

  9. Fuzzy Logic Controller Design for A Robot Grasping System with Different Membership Functions

    NASA Astrophysics Data System (ADS)

    Ahmad, Hamzah; Razali, Saifudin; Rusllim Mohamed, Mohd

    2013-12-01

    This paper investigates the effects of the membership function on object grasping for a three-fingered gripper system. The performance of three commonly used membership functions is compared to identify their behavior in lifting an object of defined shape. MATLAB Simulink and SimMechanics toolboxes are used to examine the performance. Our preliminary results suggest that the Gaussian membership function surpassed the two other membership functions, triangular and trapezoidal, especially in terms of firmer grasping and lower time consumption during operations. Therefore, the Gaussian membership function could be the best choice when time consumption and firmness of grasp are considered.
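    The three membership functions compared above can be written down in a few lines; the centers, widths and breakpoints below are illustrative. One common argument for the Gaussian choice is that it is smooth everywhere, whereas the triangular and trapezoidal functions have corners that propagate into the controller output.

```python
import numpy as np

def gaussian_mf(x, c, s):
    """Gaussian membership function, peak 1 at center c."""
    return np.exp(-(x - c) ** 2 / (2 * s ** 2))

def triangular_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and plateau [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

x = np.linspace(0.0, 1.0, 101)
g = gaussian_mf(x, 0.5, 0.1)
t = triangular_mf(x, 0.3, 0.5, 0.7)
z = trapezoidal_mf(x, 0.2, 0.4, 0.6, 0.8)
print(g.max(), t.max(), z.max())   # all three peak at 1.0, as membership grades must
```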

  10. Inversion of Airborne Electromagnetic Data: Application to Oil Sands Exploration

    NASA Astrophysics Data System (ADS)

    Cristall, J.; Farquharson, C. G.; Oldenburg, D. W.

    2004-05-01

    In general, three-dimensional inversion of airborne electromagnetic data for models of the conductivity variation in the Earth is currently impractical because of the large amount of computation time that it requires. At the other extreme, one-dimensional imaging techniques based on transforming the observed data as a function of measurement time or frequency at each location to values of conductivity as a function of depth are very fast. Such techniques can provide an image that, in many circumstances, is a fair, qualitative representation of the subsurface. However, this is not the same as a model that is known to reproduce the observations to a level considered appropriate for the noise in the data. This makes it hard to assess the quality and reliability of the images produced by the transform techniques until other information such as bore-hole logs is obtained. A compromise between these two interpretation strategies is to retain the approximation of a one-dimensional variation of conductivity beneath each observation location, but to invert the corresponding data as functions of time or frequency, taking advantage of all available aspects of inversion methodology. For example, using an automatic method such as the GCV or L-curve criteria for determining how well to fit a set of data when the actual amount of noise is not known, even when there are clear multi-dimensional effects in the data; using something other than a sum-of-squares measure for the misfit, for example the Huber M-measure, which affords a robust fit to data that contain non-Gaussian noise; and using an l1-norm or similar measure of model structure that enables piecewise constant, blocky models to be constructed. 
These features, as well as the basic concepts of minimum-structure inversion, result in a flexible and powerful interpretation procedure that, because of the one-dimensional approximation, is sufficiently rapid to be a viable alternative to the imaging techniques presently in use. We provide an example that involves the interpretation of an airborne time-domain electromagnetic data-set from an oil sands exploration project in Alberta. The target is the layer that potentially contains oil sands. This layer is relatively resistive, with its resistivity increasing with increasing hydrocarbon content, and is sandwiched between two more conductive layers. This is quite different from the classical electromagnetic geophysics scenario of looking for a conductive mineral deposit in resistive shield rocks. However, inverting the data enabled the depth, thickness and resistivity of the target layer to be well determined. As a consequence, it is concluded that airborne electromagnetic surveys, when combined with inversion procedures, can be a very cost-effective way of mapping even fairly subtle conductivity variations over large areas.
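    The Huber M-measure mentioned above replaces the sum of squares with a misfit that grows only linearly for large residuals, and is typically minimized by iteratively reweighted least squares (IRLS). A self-contained sketch on a straight-line fit with two gross outliers (data, threshold and scale estimator are illustrative choices, not the authors' implementation):

```python
import numpy as np

def huber_irls(A, y, k=1.345, n_iter=50):
    """Iteratively reweighted least squares with the Huber M-measure (sketch).
    k is the Huber threshold in units of the robust scale estimate."""
    m = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the ordinary LS fit
    for _ in range(n_iter):
        r = y - A @ m
        # Robust scale from the median absolute deviation (MAD)
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r) / scale
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))   # Huber weights: 1 inside, k/|u| outside
        sw = np.sqrt(w)
        m = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return m

# Straight line with two gross outliers (non-Gaussian noise)
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[3] += 30.0
y[7] -= 25.0
A = np.column_stack([x, np.ones_like(x)])

m_ols = np.linalg.lstsq(A, y, rcond=None)[0]
m_hub = huber_irls(A, y)
print("OLS  :", m_ols)    # slope/intercept pulled away from (2, 1) by the outliers
print("Huber:", m_hub)    # stays close to (2, 1)
```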

  11. Trajectory following and stabilization control of fully actuated AUV using inverse kinematics and self-tuning fuzzy PID.

    PubMed

    Hammad, Mohanad M; Elshenawy, Ahmed K; El Singaby, M I

    2017-01-01

    In this work a design for self-tuning non-linear Fuzzy Proportional Integral Derivative (FPID) controller is presented to control position and speed of Multiple Input Multiple Output (MIMO) fully-actuated Autonomous Underwater Vehicles (AUV) to follow desired trajectories. Non-linearity that results from the hydrodynamics and the coupled AUV dynamics makes the design of a stable controller a very difficult task. In this study, the control scheme in a simulation environment is validated using dynamic and kinematic equations for the AUV model and hydrodynamic damping equations. An AUV configuration with eight thrusters and an inverse kinematic model from a previous work is utilized in the simulation. In the proposed controller, Mamdani fuzzy rules are used to tune the parameters of the PID. Nonlinear fuzzy Gaussian membership functions are selected to give better performance and response in the non-linear system. A control architecture with two feedback loops is designed such that the inner loop is for velocity control and outer loop is for position control. Several test scenarios are executed to validate the controller performance including different complex trajectories with and without injection of ocean current disturbances. A comparison between the proposed FPID controller and the conventional PID controller is studied and shows that the FPID controller has a faster response to the reference signal and more stable behavior in a disturbed non-linear environment.

  12. Trajectory following and stabilization control of fully actuated AUV using inverse kinematics and self-tuning fuzzy PID

    PubMed Central

    Elshenawy, Ahmed K.; El Singaby, M.I.

    2017-01-01

    In this work a design for self-tuning non-linear Fuzzy Proportional Integral Derivative (FPID) controller is presented to control position and speed of Multiple Input Multiple Output (MIMO) fully-actuated Autonomous Underwater Vehicles (AUV) to follow desired trajectories. Non-linearity that results from the hydrodynamics and the coupled AUV dynamics makes the design of a stable controller a very difficult task. In this study, the control scheme in a simulation environment is validated using dynamic and kinematic equations for the AUV model and hydrodynamic damping equations. An AUV configuration with eight thrusters and an inverse kinematic model from a previous work is utilized in the simulation. In the proposed controller, Mamdani fuzzy rules are used to tune the parameters of the PID. Nonlinear fuzzy Gaussian membership functions are selected to give better performance and response in the non-linear system. A control architecture with two feedback loops is designed such that the inner loop is for velocity control and outer loop is for position control. Several test scenarios are executed to validate the controller performance including different complex trajectories with and without injection of ocean current disturbances. A comparison between the proposed FPID controller and the conventional PID controller is studied and shows that the FPID controller has a faster response to the reference signal and more stable behavior in a disturbed non-linear environment. PMID:28683071

  13. The spatial sensitivity of Sp converted waves—scattered-wave kernels and their applications to receiver-function migration and inversion

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2018-03-01

    We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.

  14. Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media

    NASA Astrophysics Data System (ADS)

    Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen

    2018-06-01

    A new class of pulse source, with correlation modeled by the convolution of two legitimate temporal correlation functions, is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a pulse source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train into a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed laser material processing.

  15. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak, (1-2)σ, level of non-Gaussianity of the foreground-corrected masked Planck 2015 maps.
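    As a minimal illustration of a cumulant-based Hermite expansion (a classical low-order Gram-Charlier-type series, far below the 12th order used in the paper), a near-Gaussian PDF can be written with probabilists' Hermite polynomial corrections whose coefficients are standardized cumulants:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_pdf(x, sigma0, kappa3=0.0, kappa4=0.0):
    """Low-order Hermite (Gram-Charlier A) expansion of a near-Gaussian PDF.

    Coefficients of the probabilists' Hermite polynomials He_n are the
    standardized cumulants kappa3 (skewness) and kappa4 (excess kurtosis).
    """
    u = x / sigma0
    gauss = np.exp(-0.5 * u**2) / (sigma0 * np.sqrt(2.0 * np.pi))
    # He_0 + (kappa3/3!) He_3 + (kappa4/4!) He_4, evaluated via hermeval
    coeffs = [1.0, 0.0, 0.0, kappa3 / 6.0, kappa4 / 24.0]
    return gauss * hermeval(u, coeffs)

x = np.linspace(-4.0, 4.0, 401)
p_gauss = gram_charlier_pdf(x, 1.0)             # reduces to a pure Gaussian
p_skew = gram_charlier_pdf(x, 1.0, kappa3=0.2)  # mildly skewed correction
```

The Hermite corrections integrate to zero against the Gaussian weight, so normalization is preserved; the paper's discrepancy functions Δ_k generalize exactly this structure to higher order.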

  16. Effects of blood pressure and sex on the change of wave reflection: evidence from Gaussian fitting method for radial artery pressure waveform.

    PubMed

    Liu, Chengyu; Zhao, Lina; Liu, Changchun

    2014-01-01

    An early return of the reflected component in the arterial pulse has been recognized as an important indicator of cardiovascular risk. This study aimed to determine the effects of blood pressure and sex on the change of wave reflection using a Gaussian fitting method. One hundred and ninety subjects were enrolled. They were classified into four blood pressure categories based on systolic blood pressure (i.e., ≤ 110, 111-120, 121-130 and ≥ 131 mmHg). Each blood pressure category was also stratified by sex. Electrocardiogram (ECG) and radial artery pressure waveform (RAPW) signals were recorded for each subject. Ten consecutive pulse episodes from the RAPW signal were extracted and normalized. Each normalized pulse episode was fitted by three Gaussian functions. The peak positions and peak heights of the first and second Gaussian functions, as well as the peak position interval and peak height ratio, were used as the evaluation indices of wave reflection. Two-way ANOVA results showed that with increased blood pressure, the peak position of the second Gaussian significantly shortened (P < 0.01), the peak height of the first Gaussian significantly decreased (P < 0.01) and the peak height of the second Gaussian significantly increased (P < 0.01), inducing a significantly decreased peak position interval and a significantly increased peak height ratio (both P < 0.01). Sex had no significant effect on any evaluation index (all P > 0.05). Moreover, the interaction between sex and blood pressure also had no significant effect on any evaluation index (all P > 0.05). These results showed that blood pressure has a significant effect on the change of wave reflection when using the recently developed Gaussian fitting method, whereas sex has no significant effect. The results also suggest that the Gaussian fitting method could be used as a new approach for assessing arterial wave reflection.
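    The three-Gaussian pulse decomposition and the derived reflection indices can be sketched as follows; the component heights, positions and widths are hypothetical, since the study fits them to each measured pulse:

```python
import numpy as np

def gaussian(t, h, mu, sigma):
    return h * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

# Hypothetical parameters for a normalized radial pulse (time as a fraction
# of the cardiac cycle); the paper estimates these per beat by fitting.
components = [
    (1.00, 0.15, 0.05),  # first Gaussian: direct (ejected) wave
    (0.45, 0.40, 0.10),  # second Gaussian: reflected wave
    (0.25, 0.70, 0.12),  # third Gaussian: late/diastolic component
]

t = np.linspace(0.0, 1.0, 500)
pulse = sum(gaussian(t, *p) for p in components)

# Wave-reflection indices used in the study, read off the fitted parameters:
h1, mu1, _ = components[0]
h2, mu2, _ = components[1]
peak_position_interval = mu2 - mu1   # earlier reflection -> smaller interval
peak_height_ratio = h2 / h1          # stronger reflection -> larger ratio
```

With real data the three-Gaussian model would be fitted by nonlinear least squares to each normalized pulse episode; the two indices then summarize the timing and magnitude of the reflected wave.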

  17. Metasurface-assisted orbital angular momentum carrying Bessel-Gaussian Laser: proposal and simulation.

    PubMed

    Zhou, Nan; Wang, Jian

    2018-05-23

    Bessel-Gaussian beams have distinct properties of suppressed diffraction divergence and self-reconstruction. In this paper, we propose and simulate a metasurface-assisted orbital angular momentum (OAM) carrying Bessel-Gaussian laser. The laser can be regarded as a Fabry-Perot cavity formed by one partially transparent output plane mirror and the other metasurface-based reflector mirror. The gain medium of Nd:YVO4 enables the lasing wavelength at 1064 nm with an 808 nm laser serving as the pump. The sub-wavelength structure of the metasurface facilitates flexible spatial light manipulation. The compact metasurface-based reflector provides the combined phase functions of an axicon and a spherical mirror. By appropriately selecting the size of the output mirror and inserting a mode-selection element in the laser cavity, different orders of OAM-carrying Bessel-Gaussian lasing modes are achievable. The lasing Bessel-Gaussian0, Bessel-Gaussian01+, Bessel-Gaussian02+ and Bessel-Gaussian03+ modes have high fidelities of ~0.889, ~0.889, ~0.881 and ~0.879, respectively. The metasurface fabrication tolerance and the dependence of threshold power and output lasing power on the length of the gain medium, beam radius of the pump and transmittance of the output mirror are also discussed. The obtained results show successful implementation of a metasurface-assisted OAM-carrying Bessel-Gaussian laser with favorable performance. The metasurface-assisted OAM-carrying Bessel-Gaussian laser may find wide OAM-enabled communication and non-communication applications.

  18. Photonic generation of FCC-compliant UWB pulses based on modified Gaussian quadruplet and incoherent wavelength-to-time conversion

    NASA Astrophysics Data System (ADS)

    Mu, Hongqian; Wang, Muguang; Tang, Yu; Zhang, Jing; Jian, Shuisheng

    2018-03-01

    A novel scheme for the generation of FCC-compliant UWB pulses is proposed, based on a modified Gaussian quadruplet and incoherent wavelength-to-time conversion. The modified Gaussian quadruplet is synthesized as a linear sum of a broad Gaussian pulse and two narrow Gaussian pulses with the same pulse-width and peak amplitude. Within a specific parameter range, an FCC-compliant UWB pulse with spectral power efficiency higher than 39.9% can be achieved. In order to realize the designed waveform, a UWB generator based on spectral shaping and incoherent wavelength-to-time mapping is proposed. The spectral shaper is composed of a Gaussian filter and a programmable filter. Single-mode fiber functions as both the dispersive device and the transmission medium. Balanced photodetection is employed to combine linearly the broad Gaussian pulse and the two narrow Gaussian pulses, and at the same time to suppress the pulse pedestals that result in low-frequency components. The proposed UWB generator can be reconfigured for UWB doublets by operating the programmable filter as a single-band Gaussian filter. The feasibility of the proposed UWB generator is demonstrated experimentally. Measured UWB pulses match well with simulation results. An FCC-compliant quadruplet with 10-dB bandwidth of 6.88 GHz, fractional bandwidth of 106.8% and power efficiency of 51% is achieved.
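    A minimal sketch of the modified-quadruplet idea, assuming illustrative pulse widths and delays (the paper's FCC-compliant parameters are not reproduced here) and a simple subtractive combination standing in for balanced photodetection:

```python
import numpy as np

def gauss(t, amp, t0, width):
    return amp * np.exp(-((t - t0) ** 2) / (2.0 * width ** 2))

# Hypothetical parameters (ns): one broad Gaussian combined with two narrow
# Gaussians of equal width and peak amplitude, as in balanced detection.
t = np.linspace(-1.0, 1.0, 2001)                     # time axis, ns
dt = t[1] - t[0]
broad = gauss(t, 1.0, 0.0, 0.20)
narrow = gauss(t, 1.0, -0.12, 0.05) + gauss(t, 1.0, 0.12, 0.05)
quadruplet = broad - narrow                          # modified Gaussian quadruplet

# Balanced detection subtracts the two branches, suppressing the common
# low-frequency pedestal: the net waveform area (DC content) is reduced.
dc_broad = broad.sum() * dt
dc_quad = abs(quadruplet.sum() * dt)
```

Reducing the DC/low-frequency content is what pushes the spectrum up into the FCC UWB mask; the actual design tunes the width ratio and delays to maximize in-mask power efficiency.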

  19. Second order Pseudo-gaussian shaper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beche, Jean-Francois

    2002-11-22

    The purpose of this document is to provide a calculation spreadsheet for the design of second-order pseudo-Gaussian shapers. A very useful reference is C.H. Mosher, ''Pseudo-Gaussian Transfer Functions with Superlative Recovery'', IEEE TNS Volume 23, p. 226-228 (1976). Fred Goulding and Don Landis have studied the structure of those filters and their implementation, and this document outlines the calculation leading to the relation between the coefficients of the filter. The general equation of the second-order pseudo-Gaussian filter is f(t) = P_0 · e^(-3kt) · sin^2(kt), where the parameter k is a normalization factor.
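    The shaper formula can be evaluated directly; setting df/dt = 0 gives tan(kt) = 2/3, so the first output peak falls at t = arctan(2/3)/k, which a numerical scan confirms:

```python
import numpy as np

def shaper(t, p0=1.0, k=1.0):
    """Second-order pseudo-Gaussian shaper: f(t) = P0 * exp(-3kt) * sin^2(kt)."""
    return p0 * np.exp(-3.0 * k * t) * np.sin(k * t) ** 2

# df/dt = 0  =>  2 cos(kt) = 3 sin(kt)  =>  tan(kt) = 2/3 (first lobe),
# so the peaking time is t_peak = arctan(2/3) / k.
k = 1.0
t = np.linspace(0.0, np.pi / k, 100001)
t_peak_numeric = t[np.argmax(shaper(t, k=k))]
t_peak_analytic = np.arctan(2.0 / 3.0) / k
```

Knowing the peaking time in closed form is what makes a design spreadsheet practical: k can be chosen to place the peak at the desired shaping time.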

  20. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF models are used and further developed to describe this behavior.

  1. Quantum non-Gaussianity and quantification of nonclassicality

    NASA Astrophysics Data System (ADS)

    Kühn, B.; Vogel, W.

    2018-05-01

    The algebraic quantification of nonclassicality, which naturally arises from the quantum superposition principle, is related to properties of regular nonclassicality quasiprobabilities. The latter are obtained by non-Gaussian filtering of the Glauber-Sudarshan P function. They yield lower bounds for the degree of nonclassicality. We also derive bounds for convex combinations of Gaussian states for certifying quantum non-Gaussianity directly from the experimentally accessible nonclassicality quasiprobabilities. Other quantum-state representations, such as s-parametrized quasiprobabilities, insufficiently indicate or even fail to directly uncover detailed information on the properties of quantum states. As an example, our approach is applied to multi-photon-added squeezed vacuum states.

  2. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and the field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.

  3. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
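    The core idea, replacing an expensive forward with a fast approximation whose modeling error is quantified probabilistically and folded into the likelihood, can be sketched on a toy 1-D problem (an analytic stand-in replaces the trained network, and all model parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive, accurate forward (full-waveform + picking).
def forward_accurate(m):
    return np.sin(m) + 0.1 * m**2

# Stand-in for the fast approximate forward (the trained network in the
# paper); here it simply misses the 0.1*m^2 term, creating a modeling error.
def forward_fast(m):
    return np.sin(m)

# 1) Quantify the modeling error probabilistically from a training ensemble.
m_train = rng.uniform(-2.0, 2.0, 500)
err = forward_accurate(m_train) - forward_fast(m_train)
err_mean, err_var = err.mean(), err.var()

# 2) Metropolis sampling with the fast forward, folding the modeling-error
#    statistics into the likelihood (total variance = data + modeling error).
m_true, sigma_obs = 0.8, 0.05
d_obs = forward_accurate(m_true) + rng.normal(0.0, sigma_obs)
total_var = sigma_obs**2 + err_var

def log_like(m):
    r = d_obs - (forward_fast(m) + err_mean)
    return -0.5 * r**2 / total_var

samples, m = [], 0.0
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.3)   # uniform prior on [-2, 2]
    if abs(prop) <= 2.0 and np.log(rng.uniform()) < log_like(prop) - log_like(m):
        m = prop
    samples.append(m)
posterior = np.array(samples[5000:])  # discard burn-in
```

Because the error statistics widen the likelihood, the cheap forward yields a posterior that remains consistent with the true parameter instead of being over-confidently biased.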

  4. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent probability density functions of the normal distribution. It is described by its mean m and standard deviation s. A smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean-square weighted deviations based on the sums of the total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity for nighttime operation to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. This last conversion is performed so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtraction of the Gaussian model from the raw data produces a moonless image as shown in Fig. 1c. 
This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
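    The activity's pipeline (peak-centered 2-D Gaussian model, subtraction, thresholding at 0.07) can be sketched on a synthetic frame; the scene parameters below are illustrative, not the actual camera data:

```python
import numpy as np

# Synthetic grayscale night-sky frame: a 2-D Gaussian "moon" plus a faint
# cloud patch (values in [0, 1], as for a double-type image).
ny, nx = 120, 160
y, x = np.mgrid[0:ny, 0:nx].astype(float)

def gauss2d(x, y, amp, x0, y0, sx, sy):
    return amp * np.exp(-(((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2) / 2.0)

moon = gauss2d(x, y, 0.9, 80.0, 60.0, 6.0, 6.0)
cloud = np.zeros((ny, nx))
cloud[20:50, 10:70] = 0.15
frame = moon + cloud

# Fit the moon as in the activity: the mean (x0, y0) is the brightest pixel;
# the widths are intensity-weighted RMS deviations in a window about the peak.
iy, ix = np.unravel_index(np.argmax(frame), frame.shape)
win = np.s_[iy - 20:iy + 21, ix - 20:ix + 21]
w = frame[win] / frame[win].sum()
sx = np.sqrt((w * (x[win] - ix) ** 2).sum())
sy = np.sqrt((w * (y[win] - iy) ** 2).sum())

model = gauss2d(x, y, frame[iy, ix], ix, iy, sx, sy)
moonless = np.clip(frame - model, 0.0, None)

# Cloud cover = fraction of pixels exceeding the 0.07 threshold.
cloud_cover = (moonless > 0.07).mean()
```

Subtracting the fitted Gaussian removes the moon's glow so that only cloud pixels survive the threshold; here the recovered cover equals the known cloud fraction of the synthetic frame.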

  5. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information of the measurement is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
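    The key property, a surrogate that reports its own approximation error, can be sketched with a minimal from-scratch Gaussian process regression; the toy forward model and hyperparameters below stand in for the groundwater transport simulator:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_predict(x_train, y_train, x_test, length=1.0, var=1.0, noise=1e-6):
    """GP posterior mean and standard deviation; the std is the
    approximation-error estimate that the Bayesian formula can absorb."""
    K = rbf_kernel(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, length, var)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf_kernel(x_test, x_test, length, var) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy forward model standing in for the contaminant-transport simulator.
f = lambda m: np.sin(3.0 * m)
x_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x_test = np.linspace(0.0, 2.0, 50)
mean, std = gp_predict(x_train, f(x_train), x_test, length=0.5)
# std is near zero at the training points and grows between them, so the
# posterior can be widened exactly where the surrogate is least trustworthy.
```

Adaptive design then adds training points where the predictive std (weighted by the data misfit) is largest, concentrating accuracy near the posterior mode rather than globally.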

  6. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of Adaptive Vector Quantizers or Self Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
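    The RBSDM recall step, an output formed as a normalized sum of Gaussians centered on stored patterns, can be sketched as follows; the two-input/two-output mapping is a hypothetical stand-in for the camera-to-joint task:

```python
import numpy as np

def rbsdm_recall(query, patterns, targets, sigma=0.5):
    """Radial basis sparse distributed memory recall: the output is a
    normalized sum of multivariate Gaussians centered on stored patterns."""
    d2 = ((patterns - query) ** 2).sum(axis=1)           # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                 # Gaussian activations
    return (w[:, None] * targets).sum(axis=0) / w.sum()  # weighted average

# Toy stand-in for the camera-to-joint mapping: stored (image-plane coords ->
# joint coords) pairs; the real task uses two cameras and three joints.
rng = np.random.default_rng(1)
patterns = rng.uniform(-1.0, 1.0, size=(200, 2))         # learned input patterns
targets = np.stack([patterns.sum(axis=1),                # hypothetical smooth
                    patterns[:, 0] - patterns[:, 1]], 1) # joint functions
query = np.array([0.2, -0.3])
joints = rbsdm_recall(query, patterns, targets, sigma=0.2)
```

Recall interpolates smoothly between stored patterns, which is what lets the memory generalize joint coordinates for target points it has never seen; accuracy improves with pattern density and a well-chosen sigma.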

  7. Extended wave-packet model to calculate energy-loss moments of protons in matter

    NASA Astrophysics Data System (ADS)

    Archubi, C. D.; Arista, N. R.

    2017-12-01

    In this work we introduce modifications to the wave-packet method proposed by Kaneko to calculate the energy-loss moments of a projectile traversing a target which is represented in terms of Gaussian functions for the momentum distributions of electrons in the atomic shells. These modifications are introduced using the Levine and Louie technique to take into account the energy gaps corresponding to the different atomic levels of the target. We use the extended wave-packet model to evaluate the stopping power, the energy straggling, the inverse mean free path, and the ionization cross sections for protons in several targets, obtaining good agreement for all these quantities over an extensive energy range that covers the low-, intermediate-, and high-energy regions. The extended wave-packet model proposed here provides a method to calculate in a very straightforward way all the significant terms of the inelastic interaction of light ions with any element of the periodic table.

  8. Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Steinwolf, Alexander

    2005-01-01

    The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well for both the smooth and stepped conditions. The piecewise-Gaussian approximation additionally offers convenience in use once the model is constructed.

  9. Recording from two neurons: second-order stimulus reconstruction from spike trains and population coding.

    PubMed

    Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R

    2010-10-01

    We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by expanding our variables in a convenient set of basis functions. This requires approximating the spike-train four-point functions by combinations of two-point functions, using relations that would hold exactly for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution to stimulus reconstruction of the second-order kernels, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
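    The Gaussian-process relation invoked here is Wick factorization of the four-point function into pairwise two-point functions, which can be checked numerically on genuinely Gaussian data (the covariance matrix below is illustrative):

```python
import numpy as np

# For zero-mean Gaussian processes, Wick's theorem reduces the four-point
# function to a sum of products of two-point functions:
#   <x1 x2 x3 x4> = <x1x2><x3x4> + <x1x3><x2x4> + <x1x4><x2x3>.
rng = np.random.default_rng(42)
n = 200_000
cov = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.4, 0.2],
                [0.2, 0.4, 1.0, 0.3],
                [0.1, 0.2, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), cov, size=n)

four_point = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]).mean()
c = lambda i, j: (x[:, i] * x[:, j]).mean()
wick = c(0, 1) * c(2, 3) + c(0, 2) * c(1, 3) + c(0, 3) * c(1, 2)
# For truly Gaussian data the two estimates agree up to sampling error;
# for spike trains the residual measures the non-Gaussian (connected) part.
```

In the reconstruction method this factorization is an approximation for spike-train statistics, and its adequacy is exactly what the authors verify empirically.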

  10. Disappearance of Anisotropic Intermittency in Large-amplitude MHD Turbulence and Its Comparison with Small-amplitude MHD Turbulence

    NASA Astrophysics Data System (ADS)

    Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua

    2018-03-01

    Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
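    Multi-order structure functions of the kind analyzed here can be computed as sketched below; the 1-D synthetic signal with an assumed power-law spectrum is illustrative, not output of the MHD simulation:

```python
import numpy as np

def structure_functions(b, orders, lags):
    """Multi-order structure functions S_q(l) = <|b(x+l) - b(x)|^q>."""
    return np.array([[np.mean(np.abs(b[lag:] - b[:-lag]) ** orders[q])
                      for lag in lags] for q in range(len(orders))])

# Synthetic 1-D "magnetic field" signal: a random-phase field built from
# Fourier modes with an assumed k^(-5/3) energy spectrum.
rng = np.random.default_rng(3)
n = 4096
k = np.fft.rfftfreq(n, d=1.0 / n)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)          # |amp|^2 ~ k^(-5/3)
phases = np.exp(2j * np.pi * rng.uniform(size=k.size))
b = np.fft.irfft(amp * phases, n)

orders = [1, 2, 3, 4]
lags = [1, 2, 4, 8, 16, 32]
S = structure_functions(b, orders, lags)
# In log-log space, the slope of S_q versus lag gives the scaling exponent
# zeta(q); mono- vs multifractality shows in whether zeta(q) is linear in q.
```

In the paper this computation is done on increments parallel and perpendicular to the local magnetic field, and the q-dependence of the fitted exponents distinguishes the two scaling regimes.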

  11. Separation of the low-frequency atmospheric variability into non-Gaussian multidimensional sources by Independent Subspace Analysis

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Ribeiro, Andreia

    2016-04-01

    An efficient nonlinear method of statistical source separation of space-distributed non-Gaussian data is proposed. The method relies on so-called Independent Subspace Analysis (ISA) and is tested on a long time-series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique builds upon projection pursuit, looking for data projections of enhanced interest. In order to accomplish the decomposition, we maximize measures of the sources' non-Gaussianity by contrast functions which are given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The issuing residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, representing an advantage with respect to the Independent Components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated into the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated. 
The non-Gaussian tested sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the clusters' centroids of the joint probability density function tend to be located. That favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading model's non-Gaussian dyad is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet-stream and 2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. Such triadic synergies are quantified by an information-theoretic measure: the Interaction Information. The dominant model's triad occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North-American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.

  12. On the evaluation of derivatives of Gaussian integrals

    NASA Technical Reports Server (NTRS)

    Helgaker, Trygve; Taylor, Peter R.

    1992-01-01

    We show that by a suitable change of variables, the derivatives of molecular integrals over Gaussian-type functions required for analytic energy derivatives can be evaluated with significantly less computational effort than current formulations. The reduction in effort increases with the order of differentiation.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  14. Evolution of CMB spectral distortion anisotropies and tests of primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Dimastrogiovanni, Emanuela; Amin, Mustafa A.; Kamionkowski, Marc

    2017-04-01

    Anisotropies in distortions to the frequency spectrum of the cosmic microwave background (CMB) can be created through spatially varying heating processes in the early Universe. For instance, the dissipation of small-scale acoustic modes does create distortion anisotropies, in particular for non-Gaussian primordial perturbations. In this work, we derive approximations that allow describing the associated distortion field. We provide a systematic formulation of the problem using Fourier-space window functions, clarifying and generalizing previous approximations. Our expressions highlight the fact that the amplitudes of the spectral-distortion fluctuations induced by non-Gaussianity depend also on the homogeneous value of those distortions. Absolute measurements are thus required to obtain model-independent distortion constraints on primordial non-Gaussianity. We also include a simple description for the evolution of distortions through photon diffusion, showing that these corrections can usually be neglected. Our formulation provides a systematic framework for computing higher order correlation functions of distortions with CMB temperature anisotropies and can be extended to describe correlations with polarization anisotropies.

  15. Four tails problems for dynamical collapse theories

    NASA Astrophysics Data System (ADS)

    McQueen, Kelvin J.

    2015-02-01

    The primary quantum mechanical equation of motion entails that measurements typically do not have determinate outcomes, but result in superpositions of all possible outcomes. Dynamical collapse theories (e.g. GRW) supplement this equation with a stochastic Gaussian collapse function, intended to collapse the superposition of outcomes into one outcome. But the Gaussian collapses are imperfect in a way that leaves the superpositions intact. This is the tails problem. There are several ways of making this problem more precise. But many authors dismiss the problem without considering the more severe formulations. Here I distinguish four distinct tails problems. The first (bare tails problem) and second (structured tails problem) exist in the literature. I argue that while the first is a pseudo-problem, the second has not been adequately addressed. The third (multiverse tails problem) reformulates the second to account for recently discovered dynamical consequences of collapse. Finally the fourth (tails problem dilemma) shows that solving the third by replacing the Gaussian with a non-Gaussian collapse function introduces new conflict with relativity theory.

  16. Ionospheric scintillation by a random phase screen Spectral approach

    NASA Technical Reports Server (NTRS)

    Rufenach, C. L.

    1975-01-01

    The theory developed by Briggs and Parkin, given in terms of an anisotropic Gaussian correlation function, is extended to a spectral description specified as a continuous function of spatial wavenumber with an intrinsic outer scale as would be expected from a turbulent medium. Two spectral forms were selected for comparison: (1) a power-law variation in wavenumber with a constant three-dimensional index equal to 4, and (2) Gaussian spectral variation. The results are applied to the F-region ionosphere with an outer-scale wavenumber of 2 per km (approximately equal to the Fresnel wavenumber) for the power-law variation, and 0.2 per km for the Gaussian spectral variation. The power-law form with a small outer-scale wavenumber is consistent with recent F-region in-situ measurements, whereas the Gaussian form is mathematically convenient and hence was mostly used in developments predating the recent in-situ measurements. Some comparison with microwave scintillation in equatorial areas is made.

  17. Design and implementation of an optical Gaussian noise generator

    NASA Astrophysics Data System (ADS)

    Zão, Leonardo; Loss, Gustavo; Coelho, Rosângela

    2009-08-01

    A design of a fast and accurate optical Gaussian noise generator is proposed and demonstrated. The noise sample generation is based on the Box-Muller algorithm. The functions implementation was performed on a high-speed Altera Stratix EP1S25 field-programmable gate array (FPGA) development kit. It enabled the generation of 150 million 16-bit noise samples per second. The Gaussian noise generator required only 7.4% of the FPGA logic elements, 1.2% of the RAM memory, 0.04% of the ROM memory, and a laser source. The optical pulses were generated by a laser source externally modulated by the data bit samples using the frequency-shift keying technique. The accuracy of the noise samples was evaluated for different sequence sizes and confidence intervals. The noise sample pattern was validated by the Bhattacharyya distance (Bd) and the autocorrelation function. The results showed that the proposed design of the optical Gaussian noise generator is very promising for evaluating the performance of optical communications channels with very low bit-error-rate values.
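    The Box-Muller transform named in this record maps pairs of uniform variates to pairs of independent standard-normal samples. A minimal floating-point sketch follows (the FPGA design above uses fixed-point 16-bit quantization, which is not reproduced here):

    ```python
    import math
    import random

    def box_muller(n, seed=0):
        """Generate n standard-normal samples via the Box-Muller transform."""
        rng = random.Random(seed)
        out = []
        while len(out) < n:
            u1 = 1.0 - rng.random()          # shift to (0, 1] so log() is safe
            u2 = rng.random()
            r = math.sqrt(-2.0 * math.log(u1))
            out.append(r * math.cos(2.0 * math.pi * u2))
            out.append(r * math.sin(2.0 * math.pi * u2))
        return out[:n]

    samples = box_muller(100000)
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    ```

    With 100,000 samples the empirical mean and variance converge to 0 and 1, mirroring the statistical validation (autocorrelation, Bhattacharyya distance) described in the abstract.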

  18. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  19. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  20. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.

  1. Large-scale 3D galaxy correlation function and non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele

    We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects --- wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f{sub NL} is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.

  2. Comparison between photon annihilation-then-creation and photon creation-then-annihilation thermal states: Non-classical and non-Gaussian properties

    NASA Astrophysics Data System (ADS)

    Xu, Xue-Xiang; Yuan, Hong-Chun; Wang, Yan

    2014-07-01

    We investigate the nonclassical properties of states obtained by applying arbitrary-number photon annihilation-then-creation (AC) and creation-then-annihilation (CA) operations to the thermal state (TS), whose normalization factors are related to the polylogarithm function. We then compare their quantum characters, such as photon number distribution, average photon number, Mandel Q-parameter, purity and the Wigner function. Because of the noncommutativity between the annihilation operator and the creation operator, the ACTS and the CATS have different nonclassical properties. It is found that nonclassical properties are exhibited more strongly after AC than after CA. In addition, we examine their non-Gaussianity. The result shows that the ACTS can present slightly stronger non-Gaussianity than the CATS.

  3. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    NASA Astrophysics Data System (ADS)

    Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.

    2014-12-01

    In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we want to investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Principal Orthogonal Decomposition where non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain the exact same second-order statistics including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES, and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (like in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with both the non-Gaussian and newly constructed Gaussian fields, respectively, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields. The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimations there is, on the other hand, a tendency for non-Gaussian effects to increase the overall dynamical load, and hence they can be of importance in wind energy load estimations.

  4. An effective introduction to structural crystallography using 1D Gaussian atoms

    NASA Astrophysics Data System (ADS)

    Smith, Emily; Evans, Gwyndaf; Foadi, James

    2017-11-01

    The most important quantitative aspects of computational structural crystallography can be introduced in a satisfactory way using 1D truncated and periodic Gaussian functions to represent the atoms in a crystal lattice. This paper describes in detail and demonstrates 1D structural crystallography starting with the definition of such truncated Gaussians. The availability of the computer programme CRONE makes possible the repetition of the examples provided in the paper as well as the creation of new ones.
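    The pedagogical appeal of 1D Gaussian atoms is that the structure factors have a closed form, so numerical and analytic results can be compared directly. The sketch below illustrates this for a hypothetical two-atom cell (cell length, positions, widths and "atomic numbers" are all invented for illustration; this is not the CRONE programme):

    ```python
    import cmath
    import math

    A = 10.0                            # 1D unit-cell length (arbitrary units)
    ATOMS = [(2.0, 6.0), (7.5, 8.0)]    # (position x_j, atomic number Z_j)
    SIGMA = 0.3                         # Gaussian width of each atom

    def density(x, n_images=3):
        """Electron density of periodic Gaussian atoms (sum over lattice images)."""
        rho = 0.0
        for xj, Z in ATOMS:
            for n in range(-n_images, n_images + 1):
                d = x - xj - n * A
                rho += Z / (SIGMA * math.sqrt(2 * math.pi)) * math.exp(-d * d / (2 * SIGMA ** 2))
        return rho

    def F_numeric(h, m=2000):
        """Structure factor F(h) by midpoint-rule integration over one cell."""
        dx = A / m
        return sum(density((k + 0.5) * dx) * cmath.exp(-2j * math.pi * h * (k + 0.5) * dx / A)
                   for k in range(m)) * dx

    def F_analytic(h):
        """Closed form: the Fourier transform of a Gaussian atom is Gaussian in h."""
        return sum(Z * math.exp(-2 * math.pi ** 2 * SIGMA ** 2 * h ** 2 / A ** 2)
                   * cmath.exp(-2j * math.pi * h * xj / A) for xj, Z in ATOMS)
    ```

    The Gaussian fall-off of F(h) with resolution h is exactly the behaviour the paper exploits to introduce structure factors and atomic form factors in one dimension.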

  5. Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2012-06-01

    A novel class of optical breathers, called elegant Ince-Gaussian breathers, are presented in this paper. They are exact analytical solutions to Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince-polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers via comparing the analytical solutions with numerical simulation of the nonlocal nonlinear Schrödinger equation.

  6. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
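    The simulation pipeline in this record (transform an ensemble, estimate per-coefficient Gaussian statistics, draw new coefficients, invert) can be sketched with a hand-rolled Haar wavelet in place of whatever wavelet the paper actually used, and with synthetic decaying transients standing in for measured gunfire records:

    ```python
    import math
    import random

    def haar_fwd(x):
        """Full Haar wavelet decomposition; len(x) must be a power of two."""
        x = list(x)
        levels = []
        while len(x) > 1:
            s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
            d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
            levels.append(d)
            x = s
        levels.append(x)                  # final approximation coefficient
        return levels

    def haar_inv(levels):
        """Inverse of haar_fwd: rebuild the signal from its wavelet levels."""
        x = list(levels[-1])
        for d in reversed(levels[:-1]):
            nxt = []
            for s_i, d_i in zip(x, d):
                nxt.append((s_i + d_i) / math.sqrt(2))
                nxt.append((s_i - d_i) / math.sqrt(2))
            x = nxt
        return x

    rng = random.Random(0)
    N = 64
    # Ensemble of toy single-round transients: decaying random oscillations
    ensemble = [[math.exp(-0.1 * t) * rng.gauss(0.0, 1.0) for t in range(N)]
                for _ in range(200)]

    # Per-coefficient mean and standard deviation across the ensemble
    tr = [haar_fwd(rec) for rec in ensemble]
    stats = []
    for lev in range(len(tr[0])):
        level_stats = []
        for k in range(len(tr[0][lev])):
            vals = [t[lev][k] for t in tr]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            level_stats.append((m, s))
        stats.append(level_stats)

    # Simulated round: Gaussian coefficients with the estimated mean and std
    sim_levels = [[rng.gauss(m, s) for (m, s) in level] for level in stats]
    simulated = haar_inv(sim_levels)
    ```

    The second-order PDF correction via a zero-memory nonlinearity mentioned in the abstract is omitted from this sketch.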

  7. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
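    The scenario-generation core of this method, inverse transform sampling from a fitted GMM's CDF, can be sketched as follows. The two-component mixture parameters here are invented placeholders for the fitted forecast-error distribution, and the CDF is inverted by bisection rather than any particular closed form:

    ```python
    import math
    import random

    # Hypothetical 2-component GMM of forecast errors: (weight, mean, std)
    GMM = [(0.7, 0.0, 0.05), (0.3, 0.02, 0.15)]

    def gmm_cdf(x):
        """Mixture CDF: weighted sum of Gaussian CDFs via the error function."""
        return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2))))
                   for w, m, s in GMM)

    def gmm_ppf(u, lo=-2.0, hi=2.0, tol=1e-10):
        """Invert the CDF by bisection (the inverse transform method)."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if gmm_cdf(mid) < u:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Monte Carlo: each uniform draw becomes one forecasting-error scenario
    rng = random.Random(42)
    scenarios = [gmm_ppf(rng.random()) for _ in range(2000)]
    ```

    Each sampled error, added to the base forecast, yields one wind power scenario from which ramps can then be extracted with the swinging door algorithm.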

  8. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
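    The Monte Carlo error-propagation idea, perturbing the data with 5% Gaussian deviates and re-running the unfold 100 times, can be illustrated with a toy linear unfold (the response matrix, true spectrum, and least-squares solver below are illustrative stand-ins, not the UFO algorithm):

    ```python
    import math
    import random

    # Toy response matrix R (3 detectors x 2 spectral bins) and a true spectrum
    R = [[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]]
    TRUE = [4.0, 2.0]
    DATA = [sum(R[i][j] * TRUE[j] for j in range(2)) for i in range(3)]

    def lstsq2(R, y):
        """Solve the 2-unknown normal equations (R^T R) x = R^T y directly."""
        a = sum(r[0] * r[0] for r in R); b = sum(r[0] * r[1] for r in R)
        d = sum(r[1] * r[1] for r in R)
        p = sum(R[i][0] * y[i] for i in range(3))
        q = sum(R[i][1] * y[i] for i in range(3))
        det = a * d - b * b
        return [(d * p - b * q) / det, (a * q - b * p) / det]

    rng = random.Random(1)
    solutions = []
    for _ in range(100):                 # 100 random data sets, as in the study
        noisy = [y * (1 + 0.05 * rng.gauss(0, 1)) for y in DATA]  # 5% deviates
        solutions.append(lstsq2(R, noisy))

    # Spread of the unfolded spectra = Monte Carlo uncertainty estimate
    means = [sum(s[j] for s in solutions) / len(solutions) for j in range(2)]
    stds = [math.sqrt(sum((s[j] - means[j]) ** 2 for s in solutions)
                      / len(solutions)) for j in range(2)]
    ```

    The standard deviations across the 100 perturbed unfolds play the role of UFO's Monte Carlo uncertainty estimate.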

  9. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

    This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
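    Generating an undersampling mask by the inverse transform method, as this record does for several distributions, is simplest for the Cauchy case because its inverse CDF is closed-form. A sketch under assumed parameters (128 phase-encode lines, 0.4 undersampling ratio, scale chosen arbitrarily):

    ```python
    import math
    import random

    def cauchy_mask(n_lines=128, ratio=0.4, gamma=15.0, seed=7):
        """Random undersampling mask: draw phase-encode line indices from a
        Cauchy distribution centred on the k-space centre, via the inverse
        transform method, until the target undersampling ratio is reached."""
        rng = random.Random(seed)
        centre = n_lines // 2
        chosen = set()
        target = int(ratio * n_lines)
        while len(chosen) < target:
            u = rng.random()
            x = centre + gamma * math.tan(math.pi * (u - 0.5))  # Cauchy inverse CDF
            k = int(round(x))
            if 0 <= k < n_lines:
                chosen.add(k)
        return sorted(chosen)

    mask = cauchy_mask()
    ```

    The heavy tails of the Cauchy distribution keep the densely sampled region near the k-space centre (where most image energy lies) while still scattering lines into the periphery, which is what makes such masks suitable for CS reconstruction.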

  10. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  11. Detection of Fiber Layer-Up Lamination Order of CFRP Composite Using Thermal-Wave Radar Imaging

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Liu, Junyan; Liu, Yang; Wang, Yang; Gong, Jinlong

    2016-09-01

    In this paper, thermal-wave radar imaging (TWRI) is used as a nondestructive inspection method to evaluate carbon-fiber-reinforced-polymer (CFRP) composite. An inverse methodology that combines TWRI with a numerical optimization technique is proposed to determine the fiber layer-up lamination sequences of anisotropic CFRP composite. A 7-layer CFRP laminate [0°/45°/90°/0°]_s is heated by a chirp-modulated Gaussian laser beam, and then the finite element method (FEM) is employed to calculate the temperature field of the CFRP laminates. The phase based on lock-in correlation between the reference chirp signal and the thermal-wave signal is used to obtain the phase image of TWRI, and the least squares method is applied to construct the cost function that minimizes the square of the difference between the phase from TWRI inspection and from numerical calculation. A hybrid algorithm that combines simulated annealing with the Nelder-Mead simplex search method is employed to solve the reconstructed cost function and find the global optimal solution for the layer-up sequences of the CFRP composite. The result shows the feasibility of estimating the fiber layer-up lamination sequences of CFRP composite with optimal discrete and constraint conditions.

  12. The Development of a Noncontact Letter Input Interface “Fingual” Using Magnetic Dataset

    NASA Astrophysics Data System (ADS)

    Fukushima, Taishi; Miyazaki, Fumio; Nishikawa, Atsushi

    We have newly developed a noncontact letter input interface called “Fingual”. Fingual uses a glove mounted with inexpensive and small magnetic sensors. Using the glove, users can input letters by forming the finger alphabets, a kind of sign language. The proposed method uses a dataset which consists of magnetic field measurements and the corresponding letter information. In this paper, we show two recognition methods using the dataset. The first method uses the Euclidean norm; the second additionally uses a Gaussian function as a weighting function. We then conducted verification experiments on the recognition rate of each method in two situations: in one, subjects used their own dataset; in the other, they used another person's dataset. As a result, the proposed method could recognize letters at a high rate in both situations, even though it is better to use one's own dataset than another person's. Though Fingual needs to collect a magnetic dataset for each letter in advance, its feature is the ability to recognize letters without complicated calculations such as inverse problems. This paper shows the results of the recognition experiments, and shows the utility of the proposed system “Fingual”.
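    The two recognition methods described, nearest reference by Euclidean norm, and Gaussian-weighted scoring, can be sketched as follows. The reference vectors, letters, and the weighting width sigma are all hypothetical placeholders for the glove's magnetic-field dataset:

    ```python
    import math

    # Hypothetical dataset: letter -> reference magnetic-field vectors
    DATASET = {
        "A": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
        "B": [[0.0, 1.0, 0.3], [0.1, 0.9, 0.2]],
    }

    def classify_norm(x):
        """First method: pick the letter of the closest reference vector
        under the Euclidean norm."""
        best, letter = float("inf"), None
        for ltr, refs in DATASET.items():
            for r in refs:
                d = math.dist(x, r)
                if d < best:
                    best, letter = d, ltr
        return letter

    def classify_gauss(x, sigma=0.5):
        """Second method: score each letter by summing Gaussian weights
        exp(-d^2 / 2 sigma^2) over its reference vectors."""
        scores = {}
        for ltr, refs in DATASET.items():
            scores[ltr] = sum(math.exp(-math.dist(x, r) ** 2 / (2 * sigma ** 2))
                              for r in refs)
        return max(scores, key=scores.get)
    ```

    The Gaussian weighting makes the score degrade smoothly with distance, so several near-matching references can outvote a single accidental close match, without solving any inverse problem.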

  13. Parsing the roles of the frontal lobes and basal ganglia in task control using multivoxel pattern analysis

    PubMed Central

    Kehagia, Angie A.; Ye, Rong; Joyce, Dan W.; Doyle, Orla M.; Rowe, James B.; Robbins, Trevor W.

    2017-01-01

    Cognitive control has traditionally been associated with the prefrontal cortex, based on observations of deficits in patients with frontal lesions. However, evidence from patients with Parkinson’s disease (PD) indicates that subcortical regions also contribute to control under certain conditions. We scanned 17 healthy volunteers while they performed a task switching paradigm that previously dissociated performance deficits arising from frontal lesions in comparison with PD, as a function of the abstraction of the rules that are switched. From a multivoxel pattern analysis by Gaussian Process Classification (GPC), we then estimated the forward (generative) model to infer regional patterns of activity that predict Switch / Repeat behaviour between rule conditions. At 1000 permutations, Switch / Repeat classification accuracy for concrete rules was significant in the basal ganglia, but at chance in the frontal lobe. The inverse pattern was obtained for abstract rules, whereby the conditions were successfully discriminated in the frontal lobe but not in the basal ganglia. This double dissociation highlights the difference between cortical and subcortical contributions to cognitive control and demonstrates the utility of multivariate approaches in investigations of functions that rely on distributed and overlapping neural substrates. PMID:28387585

  14. On the robustness of the q-Gaussian family

    NASA Astrophysics Data System (ADS)

    Sicuro, Gabriele; Tempesta, Piergiulio; Rodríguez, Antonio; Tsallis, Constantino

    2015-12-01

    We introduce three deformations, called α-, β- and γ-deformation respectively, of a N-body probabilistic model, first proposed by Rodríguez et al. (2008), having q-Gaussians as N → ∞ limiting probability distributions. The proposed α- and β-deformations are asymptotically scale-invariant, whereas the γ-deformation is not. We prove that, for both α- and β-deformations, the resulting deformed triangles still have q-Gaussians as limiting distributions, with a value of q independent (dependent) on the deformation parameter in the α-case (β-case). In contrast, the γ-case, where we have used the celebrated Q-numbers and the Gauss binomial coefficients, yields other limiting probability distribution functions, outside the q-Gaussian family. These results suggest that scale-invariance might play an important role regarding the robustness of the q-Gaussian family.

  15. Strength functions, entropies, and duality in weakly to strongly interacting fermionic systems.

    PubMed

    Angom, D; Ghosh, S; Kota, V K B

    2004-01-01

    We revisit statistical wave function properties of finite systems of interacting fermions in the light of strength functions and their participation ratio and information entropy. For weakly interacting fermions in a mean-field with random two-body interactions of increasing strength λ, the strength functions F_k(E) are well known to change, in the regime where level fluctuations follow Wigner's surmise, from Breit-Wigner to Gaussian form. We propose an ansatz for the function describing this transition which we use to investigate the participation ratio ξ_2 and the information entropy S_info during this crossover, thereby extending the known behavior valid in the Gaussian domain into much of the Breit-Wigner domain. Our method also allows us to derive the scaling law λ_d ∼ 1/√m (m is the number of fermions) for the duality point λ = λ_d, where F_k(E), ξ_2, and S_info in both the weak (λ = 0) and strong mixing (λ = ∞) basis coincide. As an application, the ansatz function for strength functions is used in describing the Breit-Wigner to Gaussian transition seen in neutral atoms CeI to SmI with valence electrons changing from 4 to 8.

  16. An automated method for depth-dependent crustal anisotropy detection with receiver function

    NASA Astrophysics Data System (ADS)

    Licciardi, Andrea; Piana Agostinetti, Nicola

    2015-04-01

    Crustal seismic anisotropy can be generated by a variety of geological factors (e.g. alignment of minerals/cracks, presence of fluids, etc.). In the case of the transversely isotropic media approximation, information about the strength and orientation of the anisotropic symmetry axis (including dip) can be extracted from the analysis of P-to-S conversions by means of teleseismic receiver functions (RF). Classically this has been achieved through probabilistic inversion encoding a forward solver for anisotropic media. This approach strongly relies on a priori choices regarding Earth's crust parameterization and velocity structure, requires an extensive knowledge of the RF method and involves time-consuming trial and error steps. We present an automated method for reducing the non-uniqueness in this kind of inversion and for retrieving depth-dependent seismic anisotropy parameters in the crust with a resolution of some hundreds of meters. The method involves a multi-frequency approach (for better absolute Vs determination) and the decomposition of the RF data-set in its azimuthal harmonics (to separate the effects of the isotropic and anisotropic components). A first inversion of the isotropic component (zero-order harmonics) by means of a Reversible jump Markov Chain Monte Carlo (RjMCMC) provides the posterior probability distribution for the position of the velocity jumps at depth, from which information on the number of layers and the S-wave velocity structure below a broadband seismic station can be extracted. This information, together with that encoded in the first-order harmonic, is jointly used in an automated way to: (1) determine the number of anisotropic layers and their approximate position at depth, and (2) narrow the search boundaries for layer thickness and S-wave velocity. Finally, an inversion is carried out with a Neighbourhood Algorithm (NA), where the free parameters are represented by the anisotropic structure beneath the seismic station. 
We tested the method against synthetic RF with correlated Gaussian noise to investigate the resolution power for multiple and thin (1-5 km) anisotropic layers in the crust. The algorithm correctly retrieves the true models for the number and the position of the anisotropic layers, their strength and orientation of the anisotropic symmetry axis, although the trend direction is better constrained than the dip angle. The method is then applied to a real data-set and the results compared with previous RF studies.
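The azimuthal-harmonic decomposition described above can be illustrated with a small least-squares fit. The sketch below decomposes receiver-function amplitudes at one delay time, observed as a function of event back-azimuth, into a zeroth harmonic (isotropic response) and a first harmonic (whose amplitude and phase relate to anisotropy strength and symmetry-axis trend). All data, coefficients, and the simple cos/sin parameterization are hypothetical stand-ins, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.deg2rad(rng.uniform(0, 360, 40))        # event back-azimuths (rad)
a0_true, a1_true, b1_true = 0.30, 0.10, -0.05    # assumed harmonic coefficients
# synthetic RF amplitudes at one delay time, with small Gaussian noise
amp = a0_true + a1_true * np.cos(phi) + b1_true * np.sin(phi) \
      + 0.01 * rng.standard_normal(phi.size)

# least-squares fit of A(phi) = a0 + a1*cos(phi) + b1*sin(phi)
G = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
coef, *_ = np.linalg.lstsq(G, amp, rcond=None)
a0, a1, b1 = coef

strength = np.hypot(a1, b1)               # first-harmonic amplitude
trend = np.rad2deg(np.arctan2(b1, a1))    # apparent symmetry-axis trend (deg)
print(a0, strength, trend)
```

The zeroth-order coefficient `a0` feeds the isotropic RjMCMC inversion, while `strength` and `trend` summarize the first-order harmonic used to locate anisotropic layers.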

  17. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators.

    PubMed

    Xu, Wenjun; Chen, Jie; Lau, Henry Y K; Ren, Hongliang

    2017-09-01

    Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in minimally invasive surgery because of its enhanced maneuverability in tortuous environments. The TSM, however, exhibits high nonlinearities, and conventional analytical kinematics models are insufficient to achieve high accuracy. To account for the system nonlinearities, we applied a data-driven approach to encode the system inverse kinematics. Three regression methods, extreme learning machine (ELM), Gaussian mixture regression (GMR) and K-nearest neighbors regression (KNNR), were implemented to learn a nonlinear mapping from the robot 3D position states to the control inputs. The performance of the three algorithms was evaluated in both simulation and physical trajectory tracking experiments. KNNR performed the best in the tracking experiments, with the lowest RMSE of 2.1275 mm. The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model the tendon-driven flexible manipulator. Copyright © 2016 John Wiley & Sons, Ltd.
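A minimal sketch of the KNNR idea (not the authors' implementation): sample control-input/position pairs from a forward model, then estimate the inverse kinematics of a query position as the mean control input of its k nearest neighbours in position space. The toy forward model and all parameters below are assumptions standing in for the real TSM:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(u):
    # hypothetical smooth nonlinear forward kinematics: control inputs -> 3D position
    return np.stack([np.sin(u[:, 0]) + 0.3 * u[:, 1]**2,
                     np.cos(u[:, 1]) * u[:, 0],
                     u[:, 0] * u[:, 1]], axis=1)

U_train = rng.uniform(-1, 1, (2000, 2))   # sampled control inputs
X_train = forward(U_train)                # resulting tip positions

def knn_inverse(x_query, k=5):
    # mean of the k training inputs whose positions are closest to the query
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argpartition(d, k)[:k]
    return U_train[idx].mean(axis=0)

u_true = np.array([[0.4, -0.2]])
x_goal = forward(u_true)[0]
u_pred = knn_inverse(x_goal)
pos_err = np.linalg.norm(forward(u_pred[None])[0] - x_goal)
print(u_pred, pos_err)
```

With a dense enough training set, commanding `u_pred` drives the toy manipulator close to the goal position, which is the property exploited in the tracking experiments.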

  18. Short-term prediction of chaotic time series by using RBF network with regression weights.

    PubMed

    Rojas, I; Gonzalez, J; Cañas, A; Diaz, A F; Rojas, F J; Rodriguez, M

    2000-10-01

    We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced, which eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility with respect to function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights are used to replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, in which it is possible to create a new hidden unit and also to detect and remove inactive units. A salient feature of the network is that the method used for calculating the overall output is the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated using the problem of short-term prediction of chaotic time series.
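The two ingredients above, an asymmetric pseudo-Gaussian basis and regression weights combined by a weighted average, can be sketched as follows. The function names, centres, and weight values are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    # pseudo-Gaussian: separate scale parameters on each side of the centre c,
    # removing the symmetry restriction of the standard Gaussian RBF
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-5, 5, 1001)
y = pseudo_gaussian(x, c=0.0, sigma_left=0.5, sigma_right=2.0)

# overall output as a weighted average of receptive-field outputs, with
# linear regression weights w_i(x) = a_i + b_i * x replacing constant weights
centers = np.array([-2.0, 0.0, 2.0])
a = np.array([1.0, 0.5, -0.5])
b = np.array([0.1, -0.2, 0.3])
phi = np.stack([pseudo_gaussian(x, c, 0.5, 2.0) for c in centers])
w = a[:, None] + b[:, None] * x[None, :]
y_net = (w * phi).sum(axis=0) / phi.sum(axis=0)
print(y[500], y_net.shape)
```

The basis peaks at 1 at its centre but decays at different rates on each side, which is exactly the extra flexibility the PG modification provides.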

  19. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…

  20. Fresnel zone plate with apodized aperture for hard X-ray Gaussian beam optics.

    PubMed

    Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio; Itabashi, Seiichi; Oda, Masatoshi

    2017-05-01

    Fresnel zone plates with apodized apertures [apodization FZPs (A-FZPs)] have been developed to realise Gaussian beam optics in the hard X-ray region. The designed zone depth of A-FZPs gradually decreases from the center to peripheral regions. Such a zone structure forms a Gaussian-like smooth-shouldered aperture function which optically behaves as an apodization filter and produces a Gaussian-like focusing spot profile. Optical properties of two types of A-FZP, i.e. a circular type and a one-dimensional type, have been evaluated by using a microbeam knife-edge scan test, and have been carefully compared with those of normal FZP optics. Advantages of using A-FZPs are introduced.
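The optical principle behind apodization can be illustrated in one dimension: in scalar diffraction the focal-plane field is the Fourier transform of the aperture function, so a hard-edged aperture (normal FZP) produces sinc-like sidelobes while a Gaussian-like smooth-shouldered aperture (A-FZP) yields a Gaussian-like spot without them. This is a generic Fourier-optics sketch with assumed dimensions, not the A-FZP design code:

```python
import numpy as np

N = 4096
x = np.linspace(-8, 8, N)                   # aperture coordinate, arbitrary units
hard = (np.abs(x) <= 1.0).astype(float)     # sharp-edged aperture
apod = np.exp(-x**2 / (2 * 0.5**2))         # Gaussian-like apodized aperture

def focal_intensity(aperture):
    # far-field (focal-plane) intensity: squared magnitude of the FFT, normalized
    f = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture)))
    i = np.abs(f) ** 2
    return i / i.max()

def max_sidelobe(I):
    # walk down the main lobe from the peak, then report the largest value beyond it
    j = int(np.argmax(I))
    while j + 1 < I.size and I[j + 1] < I[j]:
        j += 1
    return I[j:].max()

I_hard = focal_intensity(hard)
I_apod = focal_intensity(apod)
print(max_sidelobe(I_hard), max_sidelobe(I_apod))
```

The hard aperture's first sidelobe sits at a few percent of the peak, whereas the apodized aperture's sidelobes are suppressed to numerical noise, mirroring the smoother knife-edge profiles reported for A-FZPs.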

  1. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman-filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
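The contrast between the two truncation strategies can be seen in a one-dimensional ensemble sketch for a variable constrained to be non-negative (think sea-ice concentration bounded below by 0). Discarding the forbidden mass, used here as a simplified stand-in for pdf-truncation schemes, leaves zero probability of the boundary value; moving it into a delta at the boundary, as the abstract proposes, does not. The prior mean and spread are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
prior = rng.normal(loc=0.2, scale=0.5, size=100_000)  # Gaussian prior ensemble

# (a) truncation by rejection: drop forbidden members and keep the rest,
#     so P(x = 0) is exactly zero
trunc = prior[prior >= 0.0]

# (b) delta at the boundary: move forbidden mass into a point mass at 0,
#     so P(x = 0) equals the prior probability of the forbidden region
delta = np.maximum(prior, 0.0)

print((trunc == 0.0).mean())   # no members sit exactly on the boundary
print((delta == 0.0).mean())   # a finite fraction sits exactly at 0
```

The fraction of ensemble members exactly at the boundary in case (b) approximates Phi(-0.2/0.5) ≈ 0.34, the prior mass below zero, which is precisely the delta weight the proposed method carries through Bayes' theorem.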

  2. Gaussian representation of high-intensity focused ultrasound beams.

    PubMed

    Soneson, Joshua E; Myers, Matthew R

    2007-11-01

    A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
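The stated constraint, harmonic beamwidths varying inversely with the square root of the harmonic number, can be written down directly. The fundamental beamwidth and amplitudes below are assumed values for illustration, not from the paper:

```python
import numpy as np

w1 = 1.0e-3                      # assumed fundamental beamwidth (m)
r = np.linspace(0, 3e-3, 500)    # transverse coordinate (m)

def harmonic_profile(n, p_n=1.0):
    # Gaussian transverse profile of harmonic n with the constrained
    # beamwidth w_n = w1 / sqrt(n)
    w_n = w1 / np.sqrt(n)
    return p_n * np.exp(-r**2 / w_n**2)

profiles = {n: harmonic_profile(n) for n in (1, 2, 3)}
# higher harmonics are transversely narrower, as the constraint requires
print(profiles[1][100], profiles[2][100], profiles[3][100])
```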

  3. Generation of Ince-Gaussian beams in highly efficient, nanosecond Cr, Nd:YAG microchip lasers

    NASA Astrophysics Data System (ADS)

    Dong, J.; Ma, J.; Ren, Y. Y.; Xu, G. Z.; Kaminskii, A. A.

    2013-08-01

    Direct generation of higher-order Ince-Gaussian (IG) beams from laser-diode end-pumped Cr, Nd:YAG self-Q-switched microchip lasers was achieved with high efficiency and high repetition rate. An average output power of over 2 W was obtained at an absorbed pump power of 8.2 W, corresponding to an optical-to-optical efficiency of 25%. Various IG modes with nanosecond pulse width and peak power of over 2 kW were obtained in laser-diode pumped Cr, Nd:YAG microchip lasers under different pump power levels by applying a tilted, large area pump beam. The effect of the inversion population distribution induced by the tilted pump beam and nonlinear absorption of Cr4+-ions for different pump power levels on the oscillation of higher-order IG modes in Cr, Nd:YAG microchip lasers is addressed. The higher-order IG mode oscillation has a great influence on the laser performance of Cr, Nd:YAG microchip lasers.

  4. Anomalous time delays and quantum weak measurements in optical micro-resonators

    PubMed Central

    Asano, M.; Bliokh, K. Y.; Bliokh, Y. P.; Kofman, A. G.; Ikuta, R.; Yamamoto, T.; Kivshar, Y. S.; Yang, L.; Imoto, N.; Özdemir, Ş.K.; Nori, F.

    2016-01-01

    Quantum weak measurements, wavepacket shifts and optical vortices are universal wave phenomena, which originate from fine interference of multiple plane waves. These effects have attracted considerable attention in both classical and quantum wave systems. Here we report on a phenomenon that brings together all the above topics in a simple one-dimensional scalar wave system. We consider inelastic scattering of Gaussian wave packets with parameters close to a zero of the complex scattering coefficient. We demonstrate that the scattered wave packets experience anomalously large time and frequency shifts in such near-zero scattering. These shifts reveal close analogies with the Goos–Hänchen beam shifts and quantum weak measurements of the momentum in a vortex wavefunction. We verify our general theory by an optical experiment using the near-zero transmission (near-critical coupling) of Gaussian pulses propagating through a nano-fibre with a side-coupled toroidal micro-resonator. Measurements demonstrate the amplification of the time delays from the typical inverse-resonator-linewidth scale to the pulse-duration scale. PMID:27841269

  5. Antimicrobial peptides and induced membrane curvature: geometry, coordination chemistry, and molecular engineering

    PubMed Central

    Schmidt, Nathan W.; Wong, Gerard C. L.

    2013-01-01

    Short cationic, amphipathic antimicrobial peptides are multi-functional molecules that have roles in host defense as direct microbicides and modulators of the immune response. While a general mechanism of microbicidal activity involves the selective disruption and permeabilization of cell membranes, the relationships between peptide sequence and membrane activity are still under investigation. Here, we review the diverse functions that AMPs collectively have in host defense, and show that these functions can be multiplexed with a membrane mechanism of activity derived from the generation of negative Gaussian membrane curvature. As AMPs preferentially generate this curvature in model bacterial cell membranes, the selective generation of negative Gaussian curvature provides AMPs with a broad mechanism to target microbial membranes. The amino acid constraints placed on AMPs by the geometric requirement to induce negative Gaussian curvature are consistent with known AMP sequences. This ‘saddle-splay curvature selection rule’ is not strongly restrictive, so AMPs have significant compositional freedom to multiplex membrane activity with other useful functions. The observation that certain proteins involved in cellular processes which require negative Gaussian curvature contain domains with motifs similar to those of AMPs suggests this rule may be applicable to other curvature-generating proteins. Since our saddle-splay curvature design rule is based upon both a mechanism of activity and the existing motifs of natural AMPs, we believe it will assist the development of synthetic antimicrobials. PMID:24778573

  6. Effect of Coulomb friction on orientational correlation and velocity distribution functions in a sheared dilute granular gas.

    PubMed

    Gayen, Bishakhdatta; Alam, Meheboob

    2011-08-01

    From particle simulations of a sheared frictional granular gas, we show that the Coulomb friction can have dramatic effects on orientational correlation as well as on both the translational and angular velocity distribution functions even in the Boltzmann (dilute) limit. The dependence of orientational correlation on friction coefficient (μ) is found to be nonmonotonic, and the Coulomb friction plays a dual role of enhancing or diminishing the orientational correlation, depending on the value of the tangential restitution coefficient (which characterizes the roughness of particles). From the sticking limit (i.e., with no sliding contact) of rough particles, decreasing the Coulomb friction is found to reduce the density and spatial velocity correlations which, together with diminished orientational correlation for small enough μ, are responsible for the transition from non-Gaussian to Gaussian distribution functions in the double limit of small friction (μ→0) and nearly elastic particles (e→1). This double limit in fact corresponds to perfectly smooth particles, and hence the Maxwellian (Gaussian) is indeed a solution of the Boltzmann equation for a frictional granular gas in the limit of elastic collisions and zero Coulomb friction at any roughness. The high-velocity tails of both distribution functions seem to follow stretched exponentials even in the presence of Coulomb friction, and the related velocity exponents deviate strongly from a Gaussian with increasing friction.

  7. Non-Gaussian bias: insights from discrete density peaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch

    2013-09-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.

  8. Analysis of Flow and Transport in non-Gaussian Heterogeneous Formations Using a Generalized Sub-Gaussian Model

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Neuman, S. P.

    2016-12-01

    Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensively by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.

  9. Thermal Diagnostics with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory: A Validated Method for Differential Emission Measure Inversions

    NASA Astrophysics Data System (ADS)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.; Testa, P.; Chen, F.; Peter, H.; Malanushenko, A.

    2015-07-01

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.
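The core computational idea, a positivity-constrained DEM recovered by solving a linear program, can be sketched on a toy problem. The response matrix, channel count, and temperature binning below are synthetic stand-ins (not AIA's calibrated responses); the observed counts y are modelled as y = K @ dem and the summed absolute residual is minimised subject to dem >= 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_ch, n_T = 6, 20                            # channels, temperature bins
K = rng.uniform(0.0, 1.0, (n_ch, n_T))       # synthetic temperature response matrix
dem_true = np.zeros(n_T)
dem_true[8:12] = [1.0, 3.0, 2.0, 0.5]        # assumed "true" DEM
y = K @ dem_true                             # simulated channel counts

# variables: [dem (n_T), t (n_ch)]; minimise sum(t) subject to |K dem - y| <= t
c = np.concatenate([np.zeros(n_T), np.ones(n_ch)])
A_ub = np.block([[K, -np.eye(n_ch)],
                 [-K, -np.eye(n_ch)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_T + n_ch))

dem_fit = res.x[:n_T]
print(res.status, np.abs(K @ dem_fit - y).sum())
```

Because the LP enforces dem >= 0 through the variable bounds, the recovered solution is positive definite by construction, the property the abstract highlights; the speed of LP solvers is what makes routine DEM-map production feasible.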

  10. Empirical investigation into depth-resolution of Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, N.; Ogaya, X.

    2017-12-01

    We investigate the depth-resolution of MT data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated Magnetotelluric (MT) station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions about error statistics have been proved to strongly bias the results obtained in geophysical inversions. In particular, the number of resolved layers at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm for reconstructing the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes approach, we also inverted for the hyper-parameters associated with each error-statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a number of resolved layers larger than expected from the retrieved lithostratigraphy. Moreover, comparison with the inversion of synthetic resistivity data obtained from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics with substantial correlation length.

  11. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations are determined using the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only for a measurement station that is positioned such that the wind direction at the measurement station is leeward of the plant. Using the inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
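The inverse transform sampling step used to complete the fragmented dataset can be sketched generically: given emission rates recovered for well-covered wind directions, synthetic rates for the uncovered directions are drawn from the empirical distribution of the recovered ones. The lognormal stand-in for measured H2S emission rates is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# stand-in for emission rates recovered by the inverse dispersion technique
measured = rng.lognormal(mean=1.0, sigma=0.6, size=300)

# sorted sample defines the empirical quantile function (inverse empirical CDF)
q = np.sort(measured)

def sample_empirical(n):
    # inverse transform sampling: push uniform deviates through the
    # empirical quantile function
    u = rng.uniform(0, 1, n)
    return np.quantile(q, u)

synthetic = sample_empirical(10_000)
print(np.median(measured), np.median(synthetic))
```

The synthetic draws reproduce the cumulative frequency distribution of the measured rates, which is the validation criterion the study applies to its completed dataset.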

  12. THERMAL DIAGNOSTICS WITH THE ATMOSPHERIC IMAGING ASSEMBLY ON BOARD THE SOLAR DYNAMICS OBSERVATORY: A VALIDATED METHOD FOR DIFFERENTIAL EMISSION MEASURE INVERSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.

  13. Wigner distribution function and entropy of the damped harmonic oscillator within the theory of the open quantum systems

    NASA Technical Reports Server (NTRS)

    Isar, Aurelian

    1995-01-01

    The harmonic oscillator with dissipation is studied within the framework of the Lindblad theory for open quantum systems. By using the Wang-Uhlenbeck method, the Fokker-Planck equation, obtained from the master equation for the density operator, is solved for the Wigner distribution function, subject to either the Gaussian type or the delta-function type of initial conditions. The obtained Wigner functions are two-dimensional Gaussians with different widths. Then a closed expression for the density operator is extracted. The entropy of the system is subsequently calculated and its temporal behavior shows that this quantity relaxes to its equilibrium value.

  14. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    PubMed

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  15. Progress in calculating the potential energy surface of H3+.

    PubMed

    Adamowicz, Ludwik; Pavanello, Michele

    2012-11-13

    The most accurate electronic structure calculations are performed using wave function expansions in terms of basis functions explicitly dependent on the inter-electron distances. In our recent work, we use such basis functions to calculate a highly accurate potential energy surface (PES) for the H3+ ion. The functions are explicitly correlated Gaussians, which include inter-electron distances in the exponent. Key to obtaining the high accuracy in the calculations has been the use of the analytical energy gradient determined with respect to the Gaussian exponential parameters in the minimization of the Rayleigh-Ritz variational energy functional. The effective elimination of linear dependences between the basis functions and the automatic adjustment of the positions of the Gaussian centres to the changing molecular geometry of the system are the keys to the success of the computational procedure. After adiabatic and relativistic corrections are added to the PES and with an effective accounting of the non-adiabatic effects in the calculation of the rotational/vibrational states, the experimental H3+ rovibrational spectrum is reproduced at the 0.1 cm^-1 accuracy level up to 16,600 cm^-1 above the ground state.

  16. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
    The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
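A generic Gaussian-process prediction sketch illustrates the kind of model the report describes (this is textbook GP regression with an assumed squared-exponential covariance, not the project's code, data, or covariance classes):

```python
import numpy as np

rng = np.random.default_rng(6)

def cov(a, b, ell=0.5, sig2=1.0):
    # squared-exponential covariance between two sets of 1D locations
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# noisy observations of a smooth spatial process (sin is a stand-in truth)
x_obs = np.linspace(0, 5, 25)
y_obs = np.sin(x_obs) + 0.05 * rng.standard_normal(x_obs.size)

# GP (kriging) posterior mean at a new location
x_new = np.array([2.5])
K = cov(x_obs, x_obs) + 0.05**2 * np.eye(x_obs.size)  # observation noise on diagonal
k_star = cov(x_new, x_obs)
y_pred = k_star @ np.linalg.solve(K, y_obs)
print(y_pred[0], np.sin(2.5))
```

The dense Cholesky/solve step here scales as O(n^3), which is exactly the bottleneck the report's computational innovations (partially observed grids, rational exact arithmetic) are designed to work around.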

  17. Anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field: two-loop approximation.

    PubMed

    Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L

    2005-01-01

    The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to delta(t-t') k^(4-d-2epsilon). It is shown that the scalar field is intermittent already for small epsilon, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in epsilon. The practical calculation is accomplished to order epsilon^2 (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.

  18. A novel Gaussian-Sinc mixed basis set for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.

    2015-08-14

    A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the “localized” and “delocalized” regions. A basis set for each region is combined to make a new basis methodology—a lattice of orthonormal sinc functions is used to represent the “delocalized” regions and the atom-centered Gaussian functions are used to represent the “localized” regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensional separated methodology. Additionally, the Sinc basis is translationally invariant, which allows for the Coulomb singularity to be placed anywhere including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree–Fock energies for atoms up to neon, the diatomic systems H2, O2, and N2, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.

  19. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography

  20. Performance assessment of density functional methods with Gaussian and Slater basis sets using 7σ orbital momentum distributions of N2O

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Pang, Wenning; Duffy, Patrick

    2012-12-01

    Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP, and X3LYP) and the Hartree-Fock (HF) method has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, obtained from a Fourier transform into momentum space of single point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions of this orbital occur in the small momentum region (i.e. large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to that of newer models such as X3LYP and SAOP.
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.

  1. Long-range corrected density functional theory with accelerated Hartree-Fock exchange integration using a two-Gaussian operator [LC-ωPBE(2Gau)].

    PubMed

    Song, Jong-Won; Hirao, Kimihiko

    2015-10-14

    Since the advent of hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of long-range corrected hybrid scheme for density functional theory a decade later, the applicability of the hybrid functional has been further amplified due to the resulting increased performance on orbital energy, excitation energy, non-linear optical property, barrier height, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active applications of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error function operator, reduces computational time dramatically (e.g., about 14 times acceleration in C diamond calculation using periodic boundary condition) and enables lower scaling with system size, while maintaining the improved features of the long-range corrected density functional theory.

  2. Generation of high-energy neutron beam by fragmentation of relativistic heavy nuclei

    NASA Astrophysics Data System (ADS)

    Yurevich, Vladimir

    2016-09-01

    The phenomenon of multiple production of neutrons in reactions with heavy nuclei induced by high-energy protons and light nuclei is analyzed using a Moving Source Model. The Lorentz transformation of the obtained neutron distributions is used to study the neutron characteristics in inverse kinematics, where relativistic heavy nuclei bombard a light-mass target. The neutron beam generated at 0° has a Gaussian shape with a maximum at the energy of the projectile nucleons and an energy resolution σE/E < 4% above 6 GeV.

  3. Research and application of spectral inversion technique in frequency domain to improve resolution of converted PS-wave

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong

    2017-06-01

    Multi-wave exploration is an effective means of improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted wave data are characterized by a low signal-to-noise ratio and low resolution, because conventional deconvolution technology is easily affected by frequency range limits, and there is limited scope for improving its resolution. The spectral inversion technique can identify λ/8 thin layers, and its breakthrough regarding band range limits has greatly improved seismic resolution. The difficulty associated with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least squares inversion algorithm with a total variation constraint, which uses the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. We simulate a Gaussian fit of the amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher resolution converted wave. We successfully apply the proposed inversion technology to the processing of high-resolution data from the Penglai region to obtain higher resolution converted wave data, which we also verify in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.

  4. Novel palmprint representations for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Dong, Jiwen; Li, Jinping; Wang, Lei

    2015-02-01

    In this paper, we propose a novel palmprint recognition algorithm. Firstly, the palmprint images are represented by anisotropic filters. The filters are built on Gaussian functions along one direction, and on the second derivative of Gaussian functions in the orthogonal direction. This choice is also motivated by the optimal joint spatial and frequency localization of the Gaussian kernel. Therefore, they can better approximate the edges or lines of palmprint images. A palmprint image is processed with a bank of anisotropic filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, subspace analysis is then applied to the feature vectors for dimension reduction as well as class separability. Experimental results on a public palmprint database show that accuracy is improved by the proposed novel representations, compared with the Gabor representation.

  5. A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Kolb, J.; Lekic, V.

    2012-12-01

    Analysis of P-S and S-P conversions allows us to map receiver side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional Hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of both background noise - which is relatively easy to characterize - and signal-generated noise - which is much more difficult to quantify - we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional Hierarchical Bayesian approach has been successfully used previously in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method we use a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. In order to parameterize the receiver function, we model it as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima.
Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
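
    The sum-of-Gaussians parameterization used for the receiver function can be sketched directly; the pulse list below (a unit direct arrival plus a smaller, later conversion) is a hypothetical illustration, not the authors' data or code:

```python
import numpy as np

def receiver_function(t, pulses):
    """Receiver function modeled as a sum of Gaussian pulses; each pulse
    is an (arrival_time, amplitude, width) triple, mirroring the unknowns
    sampled by a transdimensional MCMC."""
    rf = np.zeros_like(t)
    for t0, amp, width in pulses:
        rf += amp * np.exp(-0.5 * ((t - t0) / width) ** 2)
    return rf

t = np.linspace(0.0, 30.0, 3001)
# Hypothetical pulses: unit direct arrival at t=0, smaller conversion at 4.5 s.
pulses = [(0.0, 1.0, 0.3), (4.5, 0.25, 0.4)]
rf = receiver_function(t, pulses)
```

    In the transdimensional setting the number of pulses, together with each triple, is itself sampled, so the model complexity adapts to the data rather than being fixed in advance.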

  6. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy

    1993-01-01

    Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1,2,...j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
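
    The likelihood comparison behind the procedure can be sketched for the simplest case, a Gaussian mean with (assumed) known unit variance; the function names are invented, and the full method also covers variances, Poisson and chi-square means, and trends:

```python
import math

def gaussian_loglik(xs, mu, sigma):
    """Sum of log N(x | mu, sigma^2) over the sample xs."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma**2)
               - (x - mu)**2 / (2.0 * sigma**2) for x in xs)

def no_change_probability(old_obs, new_obs, sigma=1.0):
    """Likelihood ratio of the new observations under the previously
    estimated mean versus under the mean re-estimated from the augmented
    set; a progressive decrease signals a parameter change."""
    mu_old = sum(old_obs) / len(old_obs)
    pooled = list(old_obs) + list(new_obs)
    mu_pooled = sum(pooled) / len(pooled)
    ll_old = gaussian_loglik(new_obs, mu_old, sigma)
    ll_pooled = gaussian_loglik(new_obs, mu_pooled, sigma)
    return math.exp(ll_old - ll_pooled)

baseline = [0.1, -0.2, 0.05, 0.0, -0.1]
p_shift = no_change_probability(baseline, [1.9, 2.1, 2.0])   # mean jumped
p_same = no_change_probability(baseline, [0.0, 0.1, -0.05])  # no change
```

    A small ratio says the old parameters explain the new observations poorly, which is the cue to restart estimation from the new data alone.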

  7. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.

  8. The Form, and Some Robustness Properties of Integrated Distance Estimators for Linear Models, Applied to Some Published Data Sets.

    DTIC Science & Technology

    1982-06-01

    observation in our framework is the pair (y,x) with x considered given. The influence function for σ² at the Gaussian distribution with mean xβ and variance ... This influence function is bounded in the residual y-xβ, and redescends to an asymptote greater than ... A version of the influence function for β at the Gaussian distribution, given the x_j and x, is defined as the normalized difference (see Barnett and

  9. Cosine-Gaussian Schell-model sources.

    PubMed

    Mei, Zhangrong; Korotkova, Olga

    2013-07-15

    We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
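
    The qualitative behavior of such a degree of coherence can be sketched with a generic cosine-times-Gaussian form; the constants below are illustrative and need not match the paper's exact convention:

```python
import math

def cgsm_doc(dx, delta=1.0, n=1):
    """Sketch of a cosine-Gaussian Schell-model degree of coherence as a
    function of the transverse separation dx (coherence width delta,
    order parameter n); illustrative form, not the paper's exact one."""
    return (math.cos(n * math.sqrt(2.0 * math.pi) * dx / delta)
            * math.exp(-dx**2 / (2.0 * delta**2)))
```

    Unlike a plain Gaussian Schell-model source, the cosine factor lets the degree of coherence oscillate through zero, which is what reshapes the far-field spectral density into a dark-hollow profile.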

  10. Intermittent nature of solar wind turbulence near the Earth's bow shock: phase coherence and non-Gaussianity.

    PubMed

    Koga, D; Chian, A C-L; Miranda, R A; Rempel, E L

    2007-04-01

    The link between phase coherence and non-Gaussian statistics is investigated using magnetic field data observed in the solar wind turbulence near the Earth's bow shock. The phase coherence index Cphi, which characterizes the degree of phase correlation (i.e., nonlinear wave-wave interactions) among scales, displays a behavior similar to kurtosis and reflects a departure from Gaussianity in the probability density functions of magnetic field fluctuations. This demonstrates that nonlinear interactions among scales are the origin of intermittency in the magnetic field turbulence.
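
    The kurtosis-style diagnostic of departure from Gaussianity can be illustrated on synthetic data (the bursty series below is invented, standing in for intermittent magnetic-field fluctuations):

```python
import random
import statistics

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; zero for a Gaussian sample."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean(((x - mu) / sd) ** 4 for x in xs) - 3.0

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
# Intermittency stand-in: occasional large-amplitude bursts.
bursty = [g * (3.0 if random.random() < 0.05 else 1.0) for g in gauss]
kg = excess_kurtosis(gauss)   # near zero
kb = excess_kurtosis(bursty)  # clearly positive: heavy tails
```

    The phase coherence index goes a step further than kurtosis by attributing such heavy tails to phase correlation among scales rather than measuring the tails alone.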

  11. Multidimensional Hermite-Gaussian quadrature formulae and their application to nonlinear estimation

    NASA Technical Reports Server (NTRS)

    Mcreynolds, S. R.

    1975-01-01

    A simplified technique is proposed for calculating multidimensional Hermite-Gaussian quadratures that involves taking the square root of a matrix by the Cholesky algorithm rather than computation of the eigenvectors of the matrix. Ways of reducing the dimension, number, and order of the quadratures are set forth. If the function f(x) under the integral sign is not well approximated by a low-order algebraic expression, the order of the quadrature may be reduced by factoring f(x) into an expression that is nearly algebraic and one that is Gaussian.
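
    A minimal sketch of the Cholesky-based construction for an expectation under a multivariate Gaussian (the function name and test integrand are illustrative assumptions): 1D Gauss-Hermite nodes are tensored, then mapped through the Cholesky factor of the covariance instead of its eigenvectors.

```python
import numpy as np

def gauss_hermite_expectation(f, mu, cov, order=8):
    """E[f(X)] for X ~ N(mu, cov) using a tensor-product Gauss-Hermite
    rule; the covariance is factored by a Cholesky decomposition rather
    than an eigendecomposition."""
    d = len(mu)
    z, w = np.polynomial.hermite.hermgauss(order)   # 1D nodes and weights
    L = np.linalg.cholesky(cov)
    nodes = np.stack([g.ravel() for g in np.meshgrid(*([z] * d), indexing="ij")])
    wgrid = np.stack([g.ravel() for g in np.meshgrid(*([w] * d), indexing="ij")])
    weights = np.prod(wgrid, axis=0)                # product weights
    x = mu[:, None] + np.sqrt(2.0) * (L @ nodes)    # map nodes to N(mu, cov)
    return float((weights * f(x)).sum() / np.pi ** (d / 2.0))

mu = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
# For this Gaussian, E[x0 * x1] = mu0*mu1 + cov01 = -1.4.
val = gauss_hermite_expectation(lambda x: x[0] * x[1], mu, cov)
```

    Because the rule is exact for low-order polynomials, the quadratic test integrand recovers the analytic moment; for strongly non-polynomial integrands the abstract's factoring trick (splitting off a Gaussian factor) keeps the required order low.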

  12. Statistical description of turbulent transport for flux driven toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.

    2017-06-01

    A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.

  13. Fractional Fourier transform of truncated elliptical Gaussian beams.

    PubMed

    Du, Xinyue; Zhao, Daomu

    2006-12-20

    Based on the fact that a hard-edged elliptical aperture can be expanded approximately as a finite sum of complex Gaussian functions in tensor form, an analytical expression for an elliptical Gaussian beam (EGB) truncated by an elliptical aperture and passing through a fractional Fourier transform system is derived by use of vector integration. The approximate analytical results provide more convenience for studying the propagation and transformation of truncated EGBs than the usual way by using the integral formula directly, and the efficiency of numerical calculation is significantly improved.

  14. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  15. Lensing of the CMB: non-Gaussian aspects.

    PubMed

    Zaldarriaga, M

    2001-06-01

    We compute the small angle limit of the three- and four-point function of the cosmic microwave background (CMB) temperature induced by the gravitational lensing effect by the large-scale structure of the universe. We relate the non-Gaussian aspects presented in this paper with those in our previous studies of the lensing effects. We interpret the statistics proposed in previous work in terms of different configurations of the four-point function and show how they relate to the statistic that maximizes the S/N.

  16. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  17. Phase retrieval of images using Gaussian radial bases.

    PubMed

    Trahan, Russell; Hyland, David

    2013-12-20

    Here, the possibility of a noniterative solution to the phase retrieval problem is explored. A new look is taken at the phase retrieval problem that reveals that knowledge of a diffraction pattern's frequency components is enough to recover the image without projective iterations. This occurs when the image is formed using Gaussian bases that give the convenience of a continuous Fourier transform existing in a compact form where square pixels do not. The Gaussian bases are appropriate when circular apertures are used to detect the diffraction pattern because of their optical transfer functions, as discussed briefly. An algorithm is derived that is capable of recovering an image formed by Gaussian bases from only the Fourier transform's modulus, without background constraints. A practical example is shown.

  18. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

  19. Probabilistic analysis and fatigue damage assessment of offshore mooring system due to non-Gaussian bimodal tension processes

    NASA Astrophysics Data System (ADS)

    Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng

    2017-08-01

    Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.

  20. Gaussian-windowed frame based method of moments formulation of surface-integral-equation for extended apertures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shlivinski, A., E-mail: amirshli@ee.bgu.ac.il; Lomakin, V., E-mail: vlomakin@eng.ucsd.edu

    2016-03-01

    Scattering or coupling of an electromagnetic beam-field at a surface discontinuity separating two homogeneous or inhomogeneous media with different propagation characteristics is formulated using a surface integral equation, which is solved by the Method of Moments with the aid of the Gabor-based Gaussian window frame set of basis and testing functions. The application of the Gaussian window frame provides (i) a mathematically exact and robust tool for spatial-spectral phase-space formulation and analysis of the problem; (ii) a system of linear equations in a transmission-line like form relating mode-like wave objects of one medium with mode-like wave objects of the second medium; (iii) furthermore, an appropriate setting of the frame parameters yields mode-like wave objects that blend plane wave properties (as if solving in the spectral domain) with Green's function properties (as if solving in the spatial domain); and (iv) a representation of the scattered field with Gaussian-beam propagators that may be used in many large (in terms of wavelengths) systems.

  1. Log-amplitude variance and wave structure function: A new perspective for Gaussian beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, W.B.; Ricklin, J.C.; Andrews, L.C.

    1993-04-01

    Two naturally linked pairs of nondimensional parameters are identified such that either pair, together with wavelength and path length, completely specifies the diffractive propagation environment for a lowest-order paraxial Gaussian beam. Both parameter pairs are intuitive, and within the context of locally homogeneous and isotropic turbulence they reflect the long-recognized importance of the Fresnel zone size in the behavior of Rytov propagation statistics. These parameter pairs, called, respectively, the transmitter and receiver parameters, also provide a change in perspective in the analysis of optical turbulence effects on Gaussian beams by unifying a number of behavioral traits previously observed or predicted, and they create an environment in which the determination of limiting interrelationships between beam forms is especially simple. The fundamental nature of the parameter pairs becomes apparent in the derived analytical expressions for the log-amplitude variance and the wave structure function. These expressions verify general optical turbulence-related characteristics predicted for Gaussian beams, provide additional insights into beam-wave behavior, and are convenient tools for beam-wave analysis. 22 refs., 10 figs., 2 tabs.

  2. DC and analog/RF performance optimisation of source pocket dual work function TFET

    NASA Astrophysics Data System (ADS)

    Raad, Bhagwan Ram; Sharma, Dheeraj; Kondekar, Pravin; Nigam, Kaushal; Baronia, Sagar

    2017-12-01

    We present a systematic study of a source pocket tunnel field-effect transistor (SP TFET) with a dual work function single gate material, using uniform and Gaussian doping profiles in the drain region, for ultra-low power, high frequency, high speed applications. For this, a n+ doped region is created near the source/channel junction to decrease the depletion width, resulting in an improved ON-state current. The dual work function of the double gate is used to enhance the device performance in terms of DC and analog/RF parameters. Further, to improve the high frequency performance of the device, a Gaussian doping profile is considered in the drain region with different characteristic lengths, which decreases the gate to drain capacitance and leads to a drastic improvement in analog/RF figures of merit. Furthermore, the optimisation is performed with different concentrations for the uniform and Gaussian drain doping profiles and for various sectional lengths of the lower work function of the gate electrode. Finally, the effect of temperature variation on the device performance is demonstrated.

  3. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
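
    The ZMNL step for matching a target marginal distribution can be sketched as follows (the exponential target is an invented example; the full method must also compensate the distortion the transform induces in the spectrum):

```python
import math
import random

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def zmnl_exponential(gaussian_series, rate=1.0):
    """Zero-memory nonlinear (ZMNL) transform: push each standard-Gaussian
    sample through the target's inverse CDF (here an exponential with the
    given rate), so the marginal pdf matches the target exactly."""
    return [-math.log(1.0 - std_normal_cdf(z)) / rate
            for z in gaussian_series]

random.seed(1)
g = [random.gauss(0.0, 1.0) for _ in range(50000)]
x = zmnl_exponential(g)
```

    Because the map is monotone and memoryless, it fixes the one-point statistics sample by sample while preserving the ordering of the underlying Gaussian series.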

  4. A Prediction Model for Functional Outcomes in Spinal Cord Disorder Patients Using Gaussian Process Regression.

    PubMed

    Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid

    2016-01-01

    Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate post-treatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a prediction method for postoperative functional outcomes based on a novel use of Gaussian process regression. The proposed method specifically accounts for the restricted value range of the target variables by modeling the Gaussian process with a truncated normal distribution, which significantly improves the prediction results. The prediction is made with the assistance of target-tracking examinations using a highly portable and inexpensive handgrip device, which greatly contributes to the prediction performance. The proposed method has been validated on a dataset collected from a clinical pilot cohort of 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, namely the Oswestry disability index and target-tracking scores, from the patient's preoperative information, with mean absolute errors of 0.079 and 0.014 (out of 1.0), respectively.
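    The truncated-normal idea can be illustrated with a small sketch: if a Gaussian predictive distribution for a bounded score has mean and standard deviation (mu, sigma), truncating it to the valid range [0, 1] yields a prediction that respects the bounds. The numbers below are hypothetical, and this shows only the truncation step, not the full regression model from the paper.

    ```python
    from scipy import stats

    # Hypothetical Gaussian predictive mean/std for a score that must lie in [0, 1].
    mu, sigma = 0.9, 0.2

    # scipy's truncnorm takes standardized truncation bounds (a, b).
    a, b = (0.0 - mu) / sigma, (1.0 - mu) / sigma
    pred_mean = stats.truncnorm.mean(a, b, loc=mu, scale=sigma)
    # Unlike the raw Gaussian mean, pred_mean is pulled back inside [0, 1]
    # because the upper tail beyond 1 is cut off.
    ```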

  5. Effect of asymmetric concentration profile on thermal conductivity in Ge/SiGe superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, Konstanze R., E-mail: konstanze.hahn@dsf.unica.it; Cecchi, Stefano; Colombo, Luciano

    2016-05-16

    The effect of the chemical composition in Si/Ge-based superlattices on their thermal conductivity has been investigated using molecular dynamics simulations. Simulation cells of Ge/SiGe superlattices have been generated with different concentration profiles such that the Si concentration follows a step-like, a tooth-saw, a Gaussian, and a gamma-type function in the direction of the heat flux. The step-like and tooth-saw profiles mimic ideally sharp interfaces, whereas Gaussian and gamma-type profiles are smooth functions imitating atomic diffusion at the interface as obtained experimentally. Symmetry effects have been investigated comparing the symmetric profiles of the step-like and the Gaussian function to the asymmetric profiles of the tooth-saw and the gamma-type function. At longer sample length and similar degree of interdiffusion, the thermal conductivity is found to be lower in asymmetric profiles. Furthermore, it is found that with smooth concentration profiles where atomic diffusion at the interface takes place, the thermal conductivity is higher compared to systems with atomically sharp concentration profiles.

  6. Realistic continuous-variable quantum teleportation with non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, F.; De Siena, S.; CNR-INFM Coherentia, Napoli, Italy, and CNISM and INFN Sezione di Napoli, Gruppo Collegato di Salerno, Baronissi, SA

    2010-01-15

    We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.

  7. Separation of components from a scale mixture of Gaussian white noises

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria

    2010-05-01

    The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
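    A minimal sketch of the separation idea, under illustrative assumptions (a smoothed lognormal volatility and a moving-average volatility estimator, neither taken from the paper): generate Xt = Vt Zt, estimate the volatility, and check that the absolute values of the recovered white noise are nearly uncorrelated.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Slowly varying volatility: a smoothed, exponentiated Gaussian (illustrative).
    s = np.convolve(rng.standard_normal(n + 3999), np.ones(4000), mode="valid") / np.sqrt(4000)
    v = np.exp(0.3 * s)
    z = rng.standard_normal(n)   # Gaussian white noise
    x = v * z                    # scale mixture of Gaussian white noises

    # Crude volatility estimate: moving average of |x|, rescaled by 1/E|Z| = sqrt(pi/2).
    win = 401
    v_hat = np.convolve(np.abs(x), np.ones(win) / win, mode="same") * np.sqrt(np.pi / 2)
    z_hat = x / v_hat            # estimated white-noise component

    # Diagnostic in the spirit of the paper: |z_hat| should be (nearly) uncorrelated.
    a = np.abs(z_hat) - np.abs(z_hat).mean()
    rho1 = (a[:-1] * a[1:]).mean() / a.var()
    ```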

  8. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning

    NASA Astrophysics Data System (ADS)

    Zuberi, M. AH; Pratt, R. G.

    2018-04-01

    The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. 
Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
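    The scale-separation idea behind the SSP can be caricatured in one dimension: penalizing the spatial derivatives of the gradient amounts to solving a screened Poisson-type system for the smoothed update. The sketch below uses a first-difference operator and an arbitrary smoothing weight `lam`; it illustrates Sobolev-type smoothing in general, not the authors' SSIP construction.

    ```python
    import numpy as np

    def sobolev_precondition(grad, lam=10.0):
        """1-D sketch: smooth a gradient by solving (I + lam * D^T D) g_s = g,
        where D is a first-difference operator. This damps the high-wavenumber
        content of the update without changing the objective function."""
        n = grad.size
        D = np.diff(np.eye(n), axis=0)     # (n-1, n) first-difference matrix
        A = np.eye(n) + lam * D.T @ D
        return np.linalg.solve(A, grad)

    rng = np.random.default_rng(6)
    true_background = np.sin(np.linspace(0, 2 * np.pi, 200))  # smooth target update
    noisy_grad = true_background + 0.5 * rng.standard_normal(200)
    g_s = sobolev_precondition(noisy_grad)
    ```

    The smoothed gradient retains the low-wavenumber (background) component while suppressing the high-wavenumber noise, which is the kind of constrained scale separation the abstract describes.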

  9. Selective excitation for spectral editing and assignment in separated local field experiments of oriented membrane proteins

    NASA Astrophysics Data System (ADS)

    Koroloff, Sophie N.; Nevzorov, Alexander A.

    2017-01-01

    Spectroscopic assignment of NMR spectra for oriented uniformly labeled membrane proteins embedded in their native-like bilayer environment is essential for their structure determination. However, sequence-specific assignment in oriented-sample (OS) NMR is often complicated by insufficient resolution and spectral crowding. Therefore, the assignment process is usually done by a laborious and expensive "shotgun" method involving multiple selective labeling of amino acid residues. Presented here is a strategy to overcome poor spectral resolution in crowded regions of 2D spectra by selecting resolved "seed" residues via soft Gaussian pulses inserted into spin-exchange separated local-field experiments. The Gaussian pulse places the selected polarization along the z-axis while dephasing the other signals before the evolution of the 1H-15N dipolar couplings. The transfer of magnetization is accomplished via mismatched Hartmann-Hahn conditions to the nearest-neighbor peaks via the proton bath. By optimizing the length and amplitude of the Gaussian pulse, one can also achieve a phase inversion of the closest peaks, thus providing an additional phase contrast. From the superposition of the selective spin-exchanged SAMPI4 onto the fully excited SAMPI4 spectrum, the 15N sites that are directly adjacent to the selectively excited residues can be easily identified, thereby providing a straightforward method for initiating the assignment process in oriented membrane proteins.

  10. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.

  11. On the insufficiency of arbitrarily precise covariance matrices: non-Gaussian weak-lensing likelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2018-01-01

    We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for S8 is shifted without broadening, but we find no significant reduction in the tension with S8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.

  12. Q (Alpha) Function and Squeezing Effect

    NASA Technical Reports Server (NTRS)

    Yunjie, Xia; Xianghe, Kong; Kezhu, Yan; Wanping, Chen

    1996-01-01

    The relation between squeezing and the Q(alpha) function is discussed in this paper. By means of the Q function, the squeezing of a field with a Gaussian Q(alpha) function or a negative P(alpha) function is also discussed in detail.

  13. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
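    Drawing the two random variables from a joint Gaussian with correlation coefficient ρ, as described above, can be sketched with a Cholesky factorization; the parameter values below are illustrative, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    rho, sigma_J, sigma_h = -0.5, 1.0, 0.3   # illustrative values

    # Covariance of the joint Gaussian for (J_ij, h_i); sample via Cholesky.
    cov = np.array([[sigma_J**2,              rho * sigma_J * sigma_h],
                    [rho * sigma_J * sigma_h, sigma_h**2]])
    L = np.linalg.cholesky(cov)
    samples = rng.standard_normal((100_000, 2)) @ L.T  # columns: J draws, h draws

    emp_rho = np.corrcoef(samples.T)[0, 1]   # should recover rho
    ```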

  14. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
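    A minimal sketch of the two-component idea (a parametric mean plus a GP on the residuals), with synthetic data and an RBF kernel chosen purely for illustration; the paper's actual model also shares information across subjects via mixed effects, which is omitted here.

    ```python
    import numpy as np

    def rbf(a, b, ell=1.0, sf=1.0):
        """Squared-exponential covariance between two sets of 1-D inputs."""
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 4.0, 40)
    y = 0.5 * x + np.sin(x) + 0.1 * rng.standard_normal(40)  # trend + nonlinearity + noise

    # Parametric component: least-squares linear trend (explanatory part).
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta

    # Nonparametric component: GP regression on the residuals (nonlinear part).
    noise = 0.1
    K = rbf(x, x) + noise**2 * np.eye(x.size)
    xs = np.linspace(0.0, 4.0, 200)
    Xs = np.column_stack([np.ones_like(xs), xs])
    mu = Xs @ beta + rbf(xs, x) @ np.linalg.solve(K, resid)  # combined prediction
    ```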

  15. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions which have been shown to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption on the noise distribution. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, the proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
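    The contrast between MSE and MCC can be shown with a toy location-estimation problem: the correntropy objective downweights samples far from the current estimate, so gross outliers barely move it. The fixed-point iteration below is a standard way to maximize correntropy, used here as an illustration rather than the paper's exact algorithm.

    ```python
    import numpy as np

    def mcc_location(x, sigma=1.0, iters=50):
        """Location estimate maximizing the correntropy
        sum_i exp(-(x_i - m)^2 / (2 sigma^2)) via fixed-point iteration."""
        m = np.median(x)
        for _ in range(iters):
            w = np.exp(-((x - m) ** 2) / (2.0 * sigma**2))
            m = np.sum(w * x) / np.sum(w)
        return m

    rng = np.random.default_rng(3)
    clean = rng.standard_normal(500)                   # Gaussian samples around 0
    data = np.concatenate([clean, np.full(50, 50.0)])  # plus 10% gross outliers

    mse_est = data.mean()         # MSE-optimal location, dragged far off by outliers
    mcc_est = mcc_location(data)  # MCC estimate, nearly unaffected
    ```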

  16. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.

  17. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.

  18. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), components with coherent high frequency in the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape that gives deconvolution results similar to those of the Gaussian shape. Through calculation of the tight-focusing process using a radially polarized beam, such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained through our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut with size smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, which is expected to have potential practical applications in high-resolution imaging of biological samples.
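    The convolve-then-deconvolve simulation described under Methods can be sketched in one dimension with a Gaussian PSF and a generic Wiener deconvolution (a stand-in for the prior-based deconvolution in the abstract; the donut PSF and 2-D imaging are omitted):

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=1e-3):
        """Frequency-domain Wiener deconvolution with a known PSF."""
        H = np.fft.fft(np.fft.ifftshift(psf))
        G = np.fft.fft(blurred)
        return np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + nsr)))

    n = 256
    x = np.linspace(-8.0, 8.0, n)
    psf = np.exp(-x**2 / (2 * 0.5**2))
    psf /= psf.sum()                      # normalized Gaussian-shaped PSF

    # Two nearby peaks, merged together by the blur:
    signal = np.exp(-(x - 1) ** 2 / (2 * 0.3**2)) + np.exp(-(x + 1) ** 2 / (2 * 0.3**2))
    H = np.fft.fft(np.fft.ifftshift(psf))
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
    restored = wiener_deconvolve(blurred, psf)
    ```

    The restored profile separates the two peaks again; sharper (or donut-shaped) PSFs change how much high-frequency content survives the `|H|**2 + nsr` regularization.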

  19. Gaussian windows: A tool for exploring multivariate data

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
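    The weighting scheme described above is straightforward to sketch: each data point receives a multivariate Gaussian weight, and the weighted sample mean and covariance summarize the local structure. The window location and shape below are arbitrary illustrative choices.

    ```python
    import numpy as np

    def gaussian_window_view(data, center, shape_cov):
        """Weight each point with a multivariate Gaussian window, then return
        the weighted sample mean and covariance -- the 'local view' of the data."""
        diff = data - center
        prec = np.linalg.inv(shape_cov)
        w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, prec, diff))
        w /= w.sum()
        mean = w @ data
        centered = data - mean
        cov = (w[:, None] * centered).T @ centered
        return mean, cov

    rng = np.random.default_rng(4)
    data = rng.multivariate_normal([0.0, 0.0], [[2.0, 1.0], [1.0, 2.0]], size=5000)
    mean, cov = gaussian_window_view(data, center=np.zeros(2), shape_cov=4.0 * np.eye(2))
    ```

    An eigendecomposition of `cov` then plays the role of the local principal-components description mentioned in the abstract.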

  20. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  1. Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.

    PubMed

    Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko

    2010-06-28

    We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed instead, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions as in the GFC. To minimize the error in the gradient, not just in the energy, the derived functions of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains the near-linear scaling of the GFC and does not affect the overall efficiency of the GFC approach.

  2. Anisotropic non-gaussianity from rotational symmetry breaking excited initial states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashoorioon, Amjad; Casadio, Roberto; Dipartimento di Fisica e Astronomia, Alma Mater Università di Bologna,via Irnerio 46, 40126 Bologna

    2016-12-01

    If the initial quantum state of the primordial perturbations broke rotational invariance, that would be seen as a statistical anisotropy in the angular correlations of the cosmic microwave background radiation (CMBR) temperature fluctuations. This can be described by a general parameterisation of the initial conditions that takes into account the possible direction-dependence of both the amplitude and the phase of particle creation during inflation. The leading effect in the CMBR two-point function is typically a quadrupole modulation, whose coefficient is analytically constrained here to be |B|≲0.06. The CMBR three-point function then acquires enhanced non-Gaussianity, especially for the local configurations. In the large occupation number limit, a distinctive prediction is a modulation of the non-Gaussianity around a mean value depending on the angle that short and long wavelength modes make with the preferred direction. The maximal variations with respect to the mean value occur for the configurations which are coplanar with the preferred direction, and the amplitude of the non-Gaussianity increases (decreases) for the short wavelength modes aligned with (perpendicular to) the preferred direction. For a high scale model of inflation with maximally pumped up isotropic occupation and ϵ≃0.01 the difference between these two configurations is about 0.27, which could be detectable in the future. For purely anisotropic particle creation, the non-Gaussianity can be larger and its anisotropic feature very sharp. The non-Gaussianity can then reach f_NL∼30 in the preferred direction while disappearing from the correlations in the orthogonal plane.

  3. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. 
The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
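    The low-rank step (a randomized SVD of the Fisher information built from Hessian-vector products) can be sketched as follows, using a diagonal toy operator in place of a PDE-based Hessian; this follows the standard randomized range-finder recipe, not necessarily the authors' exact implementation.

    ```python
    import numpy as np

    def randomized_eigh(matvec, n, k, p=10, seed=0):
        """Top-k eigenpairs of a symmetric PSD operator from matrix-vector
        products only: randomized range finder plus Rayleigh-Ritz."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((n, k + p))               # random probes
        Y = np.column_stack([matvec(Omega[:, i]) for i in range(k + p)])
        Q, _ = np.linalg.qr(Y)                                # orthonormal range basis
        B = Q.T @ np.column_stack([matvec(Q[:, i]) for i in range(k + p)])
        vals, vecs = np.linalg.eigh(B)                        # small dense problem
        idx = np.argsort(vals)[::-1][:k]
        return vals[idx], Q @ vecs[:, idx]

    # Toy stand-in for a Hessian with a rapidly decaying spectrum.
    n = 200
    d = 10.0 / (np.arange(1, n + 1) ** 2)
    vals, U = randomized_eigh(lambda v: d * v, n, k=5)        # diagonal operator
    ```

    Only `k + p` operator applications are needed, which is why the approach is attractive when each Hessian-vector product costs two extra PDE solves.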

  4. Cigar-shaped quarkonia under strong magnetic field

    NASA Astrophysics Data System (ADS)

    Suzuki, Kei; Yoshida, Tetsuya

    2016-03-01

    Heavy quarkonia in a homogeneous magnetic field are analyzed by using a potential model with constituent quarks. To obtain anisotropic wave functions and the corresponding eigenvalues, the cylindrical Gaussian expansion method is applied, in which the anisotropic wave functions are expanded in a Gaussian basis in cylindrical coordinates. Deformation of the wave functions and the mass shifts of the S-wave heavy quarkonia (ηc, J/ψ, ηc(2S), ψ(2S) and bottomonia) are examined over a wide range of external magnetic field strengths. The spatial structure of the wave functions changes drastically as adjacent energy levels cross each other. Possible observables in heavy-ion collision experiments and future lattice QCD simulations are also discussed.

  5. Propagation properties of hollow sinh-Gaussian beams through fractional Fourier transform optical systems

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Jiang, ShengBao; Jiang, Chun; Zhu, Haibin

    2014-07-01

    A hollow sinh-Gaussian (HsG) beam is an appropriate model for describing dark-hollow beams. Based on the Collins integral formula and the fact that a hard-edged-aperture function can be expanded into a finite sum of complex Gaussian functions, the propagation properties of a HsG beam passing through fractional Fourier transform (FRFT) optical systems with and without apertures have been studied in detail through some typical numerical examples. The results obtained using the approximate analytical formula are in good agreement with those obtained using numerical integral calculation. Further, the studies indicate that the normalized intensity distribution of the HsG beam in the FRFT plane is closely related not only to the fractional order but also to the beam order and the truncation parameter. FRFT optical systems provide a convenient way of shaping laser beams.

  6. A sharp interpolation between the Hölder and Gaussian Young inequalities

    NASA Astrophysics Data System (ADS)

    da Pelo, Paolo; Lanconelli, Alberto; Stan, Aurel I.

    2016-03-01

    We prove a very general sharp inequality of the Hölder-Young-type for functions defined on infinite dimensional Gaussian spaces. We begin by considering a family of commutative products for functions which interpolates between the pointwise and Wick products; this family arises naturally in the context of stochastic differential equations, through Wong-Zakai-type approximation theorems, and plays a key role in some generalizations of the Beckner-type Poincaré inequality. We then obtain a crucial integral representation for that family of products which is employed, together with a generalization of the classic Young inequality due to Lieb, to prove our main theorem. We stress that our main inequality contains as particular cases the Hölder inequality and Nelson’s hyper-contractive estimate, thus providing a unified framework for two fundamental results of the Gaussian analysis.

  7. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
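    The key analytic step mentioned above, that the Fourier transform of a position-space Gaussian primitive is again a Gaussian in momentum space, is easy to verify numerically. A small sketch (illustrative exponent, unitary Fourier convention assumed; this is not code from PAREMD):

```python
import numpy as np

alpha = 0.7                      # Gaussian exponent (hypothetical value)
x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]

def ft_numeric(p):
    # Direct quadrature of (1/sqrt(2*pi)) * \int exp(-alpha x^2) e^{-ipx} dx;
    # the integrand decays to ~0 at the grid edges, so a simple sum suffices.
    integrand = np.exp(-alpha * x**2 - 1j * p * x)
    return integrand.sum() * dx / np.sqrt(2.0 * np.pi)

def ft_analytic(p):
    # Closed form: a Gaussian in momentum space with exponent 1/(4*alpha).
    return np.exp(-p**2 / (4.0 * alpha)) / np.sqrt(2.0 * alpha)

ps = np.array([0.0, 0.5, 1.0, 2.0])
num = np.array([ft_numeric(p) for p in ps])
ana = ft_analytic(ps)
```

    Because the transform of a Gaussian is a Gaussian, the momentum-space basis can be written down analytically rather than computed by numerical transforms, which is the property the package exploits.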

  8. Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.

    PubMed

    Morsnowski, André; Maune, Steffen

    2016-10-01

    Two approaches to modeling the test-retest statistics of a localization experiment, one based on Gaussian distributions and one on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing-impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-erratically from all 12 directions with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa are not considered), although they are common for hearing-impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since they perform differently on the investigated measures, no general recommendation can be provided. The presented test-retest statistics enable paired test comparisons for localization experiments.
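    The first model (Gaussian errors around a perfect estimator, snapped to the 12 loudspeaker directions, no front-back inversions) can be sketched as a small Monte Carlo simulation. The noise levels and the RMS error measure below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
speakers = np.arange(0, 360, 30)            # 12 loudspeakers, 30 deg apart

def simulate_run(sigma_deg, repetitions=5):
    """Model: perceived angle = true angle + Gaussian error, snapped to the
    nearest loudspeaker.  sigma_deg is the model's only free parameter."""
    errors = []
    for true_angle in speakers:
        perceived = true_angle + rng.normal(0.0, sigma_deg, size=repetitions)
        # Snap each response to the closest loudspeaker direction (mod 360).
        snapped = np.round(perceived / 30.0) * 30.0 % 360
        diff = (snapped - true_angle + 180) % 360 - 180   # signed angular error
        errors.extend(diff)
    return np.sqrt(np.mean(np.square(errors)))            # RMS error, degrees

rms_good = simulate_run(sigma_deg=10.0)     # accurate localizer
rms_poor = simulate_run(sigma_deg=45.0)     # poor localizer
```

    Repeating such simulated runs yields a distribution of any chosen measure under the null hypothesis of unchanged ability, against which an observed test-retest difference can be compared.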

  9. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem that involves minimizing the misfit between simulated and observed data. Many gradient and evolutionary strategy based optimization algorithms have been proposed to solve this problem, but they typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology, referred to as GP-VARS, is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found iteratively, via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS finds history-match solutions in approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
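    The GP proxy idea can be sketched in a few lines of numpy: train on a handful of expensive "simulator" runs, then predict (with uncertainty) anywhere in parameter space. The kernel, test function and hyperparameters below are illustrative, not those of GP-VARS:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d2 = np.subtract.outer(X1, X2) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP proxy (RBF kernel)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, var

# Hypothetical "simulator" response as a function of one reservoir parameter.
f = lambda x: np.sin(3.0 * x) + 0.5 * x
X = np.linspace(0.0, 2.0, 8)              # 8 expensive simulator runs
mean, var = gp_predict(X, f(X), np.linspace(0.0, 2.0, 50))
```

    The posterior variance is what makes the proxy usable inside Bayesian optimization: acquisition functions trade off low predicted misfit against high predictive uncertainty.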

  10. On the contribution of G20 and G30 in the Time-Averaged Paleomagnetic Field: First results from a new Giant Gaussian Process inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Khokhlov, A.; Hulot, G.; Johnson, C. L.

    2013-12-01

    It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.

  11. Double Wigner distribution function of a first-order optical system with a hard-edge aperture.

    PubMed

    Pan, Weiqing

    2008-01-01

    The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding a hard-aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are also compared with numerical integral results; the comparison shows that the analytical approach is both valid and computationally advantageous.

  12. Comparative Analysis of Membership Function on Mamdani Fuzzy Inference System for Decision Making

    NASA Astrophysics Data System (ADS)

    harliana, Putri; Rahim, Robbi

    2017-12-01

    A membership function is a curve that maps input data points to a membership value (degree of membership) between 0 and 1. One way to obtain membership values is through a functional approximation. Several membership functions can be used in a Mamdani fuzzy inference system, including triangular, trapezoidal, singleton, sigmoid and Gaussian functions. This paper discusses only three of them: triangular, trapezoidal and Gaussian. These three membership functions are compared to see how their parameter values and results differ. The case study in this paper is the admission of students to a popular school. Three variables are used: students' report grades, IQ score and parents' income, from which the if-then rules are then created.
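    The three membership functions compared in the paper can be written down directly. A minimal numpy sketch, with hypothetical break-points for a "high IQ" fuzzy set (the paper's actual parameter values are not given here):

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangle rising from a to a single peak at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def trapezoidal(x, a, b, c, d):
    """Flat top of full membership between b and c, sloping at both ends."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def gaussian(x, mean, sigma):
    """Smooth bell curve centred on `mean` with spread `sigma`."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Hypothetical fuzzification of an IQ score of 105 for a "high IQ" set.
iq = 105.0
mu_tri = triangular(iq, 90.0, 110.0, 130.0)
mu_trap = trapezoidal(iq, 90.0, 100.0, 120.0, 130.0)
mu_gauss = gaussian(iq, 110.0, 10.0)
```

    The same crisp input thus receives a different degree of membership under each shape, which is exactly the parameter-sensitivity the comparison examines.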

  13. Aperture-averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped and dark hollow beams

    NASA Astrophysics Data System (ADS)

    Eyyuboğlu, Halil T.

    2015-03-01

    Aperture-averaged scintillation requires the evaluation of a rather complicated irradiance covariance function. Here we develop a much simpler numerical method based on our earlier introduced semi-analytic approach. Using this method, we calculate the aperture-averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped and dark hollow beams. For comparison, the principles of equal source beam power and of normalizing the aperture-averaged scintillation with respect to received power are applied. Our results indicate that for fully coherent beams, upon adjusting the aperture sizes to capture 10% and 20% of the equal source power, the Gaussian beam needs the largest aperture opening, yielding the lowest aperture-averaged scintillation, whilst the opposite occurs for annular Gaussian and dark hollow beams. When assessed on the basis of received-power-normalized aperture-averaged scintillation at fixed propagation distance and aperture size, annular Gaussian and dark hollow beams seem to have the lowest scintillation. Just as in the case of point-like scintillation, partially coherent beams offer less aperture-averaged scintillation than fully coherent beams, but this performance improvement relies on larger aperture openings. Upon normalizing the aperture-averaged scintillation with respect to received power, however, fully coherent beams become more advantageous than partially coherent ones.

  14. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than, or comparable to, those produced by several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
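    The partition step, finding where adjacent Gaussian components of the fitted mixture intersect, reduces to a quadratic equation after taking logarithms of the two weighted densities. A sketch with two hypothetical components (the weights, means and standard deviations are made up, and the EM fitting itself is not shown):

```python
import numpy as np

def gauss_pdf(x, w, mu, sigma):
    """Weighted Gaussian component w * N(x; mu, sigma)."""
    return w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def intersection(w1, mu1, s1, w2, mu2, s2):
    """Gray levels where w1*N(mu1,s1) = w2*N(mu2,s2): equating the logs of
    both sides yields a quadratic a*x^2 + b*x + c = 0."""
    a = 0.5 / s2**2 - 0.5 / s1**2
    b = mu1 / s1**2 - mu2 / s2**2
    c = 0.5 * mu2**2 / s2**2 - 0.5 * mu1**2 / s1**2 + np.log((w1 * s2) / (w2 * s1))
    return np.real(np.roots([a, b, c]))

# Two hypothetical components on a 0..255 gray-level axis: a dark mode and a
# bright mode; the crossing between the modes partitions the dynamic range.
roots = intersection(0.6, 80.0, 15.0, 0.4, 180.0, 30.0)
threshold = roots[(roots > 80.0) & (roots < 180.0)][0]
```

    Each such crossing point becomes an interval boundary, so the number of intervals adapts to the number of mixture components the histogram supports.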

  15. Hybrid [¹⁸F]-FDG PET/MRI including non-Gaussian diffusion-weighted imaging (DWI): preliminary results in non-small cell lung cancer (NSCLC).

    PubMed

    Heusch, Philipp; Köhler, Jens; Wittsack, Hans-Joerg; Heusner, Till A; Buchbender, Christian; Poeppel, Thorsten D; Nensa, Felix; Wetter, Axel; Gauler, Thomas; Hartung, Verena; Lanzman, Rotem S

    2013-11-01

    To assess the feasibility of non-Gaussian DWI as part of an FDG-PET/MRI protocol in patients with histologically proven non-small cell lung cancer. 15 consecutive patients with histologically proven NSCLC (mean age 61 ± 11 years) were included in this study and underwent whole-body FDG-PET/MRI following whole-body FDG-PET/CT. As part of the whole-body FDG-PET/MRI protocol, an EPI-sequence with 5 b-values (0, 100, 500, 1000 and 2000 s/mm(2)) was acquired for DWI of the thorax during free-breathing. Volume of interest (VOI) measurements were performed to determine the maximum and mean standardized uptake value (SUV(max); SUV(mean)). A region of interest (ROI) was manually drawn around the tumor on b=0 images and then transferred to the corresponding parameter maps to assess ADC(mono), D(app) and K(app). To assess the goodness of the mathematical fit, R(2) was calculated for monoexponential and non-Gaussian analysis. Spearman's correlation coefficients were calculated to compare SUV values and diffusion coefficients. A Student's t-test was performed to compare the monoexponential and non-Gaussian diffusion fitting (R(2)). T staging was equal between FDG-PET/CT and FDG-PET/MRI in 12 of 15 patients. For NSCLC, mean ADC(mono) was 2.11 ± 1.24 × 10(-3) mm(2)/s, Dapp was 2.46 ± 1.29 × 10(-3) mm(2)/s and mean Kapp was 0.70 ± 0.21. The non-Gaussian diffusion analysis (R(2)=0.98) provided a significantly better mathematical fitting to the DWI signal decay than the monoexponential analysis (R(2)=0.96) (p<0.001). SUV(max) and SUV(mean) of NSCLC were 13.5 ± 7.6 and 7.9 ± 4.3 for FDG-PET/MRI. ADC(mono) as well as Dapp exhibited a significant inverse correlation with the SUV(max) (ADC(mono): R=-0.67; p<0.01; Dapp: R=-0.69; p<0.01) as well as with SUV(mean) assessed by FDG-PET/MRI (ADC(mono): R=-0.66; p<0.01; Dapp: R=-0.69; p<0.01). Furthermore, Kapp exhibited a significant correlation with SUV(max) (R=0.72; p<0.05) and SUV(mean) as assessed by FDG-PET/MRI (R=0.71; p<0.005). 
Simultaneous PET and non-Gaussian diffusion acquisitions are feasible. Non-Gaussian diffusion parameters show a good correlation with SUV and might provide additional information beyond monoexponential ADC, especially as non-Gaussian diffusion exhibits better mathematical fitting to the decay of the diffusion signal than monoexponential DWI. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
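    The two signal models being compared can be written explicitly: the monoexponential model is S(b) = S0·exp(−b·ADC), while the non-Gaussian (kurtosis) model is S(b) = S0·exp(−b·Dapp + b²·Dapp²·Kapp/6). A sketch fitting synthetic, noise-free data with scipy; the parameter values are illustrative, chosen near the means reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

def kurtosis_model(b, s0, d_app, k_app):
    """Non-Gaussian (kurtosis) DWI signal decay; valid for b < 3/(d_app*k_app)."""
    return s0 * np.exp(-b * d_app + (b**2) * (d_app**2) * k_app / 6.0)

def mono_model(b, s0, adc):
    """Monoexponential (Gaussian) decay for comparison."""
    return s0 * np.exp(-b * adc)

b = np.array([0.0, 100.0, 500.0, 1000.0, 2000.0])     # b-values in s/mm^2
# Synthetic tumour signal with hypothetical parameters (D in mm^2/s).
s = kurtosis_model(b, 1.0, 2.4e-3, 0.6)

popt, _ = curve_fit(kurtosis_model, b, s, p0=[1.0, 1e-3, 0.5])
s0_fit, d_fit, k_fit = popt
popt_m, _ = curve_fit(mono_model, b, s, p0=[1.0, 1e-3])
sse_kurt = np.sum((kurtosis_model(b, *popt) - s) ** 2)
sse_mono = np.sum((mono_model(b, *popt_m) - s) ** 2)
```

    On such data the kurtosis model fits essentially exactly while the monoexponential model leaves a residual, mirroring the R² comparison reported in the study.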

  16. The exact eigenfunctions and eigenvalues of a two-dimensional rigid rotor obtained using Gaussian wave packet dynamics

    NASA Technical Reports Server (NTRS)

    Reimers, J. R.; Heller, E. J.

    1985-01-01

    Exact eigenfunctions for a two-dimensional rigid rotor are obtained using Gaussian wave packet dynamics. The wave functions are obtained by propagating, without approximation, an infinite set of Gaussian wave packets that collectively have the correct periodicity, being coherent states appropriate to this rotational problem. This result leads to a numerical method for the semiclassical calculation of rovibrational molecular eigenstates. Also, a simple, almost classical, approximation to full wave packet dynamics is shown to give exact results: this leads to an a posteriori justification of the De Leon-Heller spectral quantization method.

  17. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  19. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook

    2015-03-07

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
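    A Lagrange-sinc basis on a uniform grid is a sinc discrete variable representation: overlaps are diagonal and the kinetic-energy matrix has a known closed form (the Colbert-Miller expression). A self-contained sketch solving the 1D harmonic oscillator with such a basis, not the Kohn-Sham machinery of the paper (hbar = m = 1; the grid spacing plays the role of the "scaling factor" and its value here is illustrative):

```python
import numpy as np

h = 0.2                               # grid spacing ("scaling factor")
k = np.arange(-30, 31)                # grid indices, x in [-6, 6]
x = k * h

# Colbert-Miller sinc-DVR kinetic matrix: pi^2/(6h^2) on the diagonal,
# (-1)^(i-j) / (h^2 (i-j)^2) off the diagonal.
i, j = np.meshgrid(k, k, indexing="ij")
diff = i - j
denom = np.where(diff == 0, 1, diff**2)          # avoid dividing by zero
T = np.where(diff == 0, np.pi**2 / (6.0 * h**2),
             ((-1.0) ** diff) / (h**2 * denom))

H = T + np.diag(0.5 * x**2)                      # potential V(x) = x^2/2
energies = np.sort(np.linalg.eigvalsh(H))[:3]    # should approach 0.5, 1.5, 2.5
```

    Decreasing `h` (with the box enlarged accordingly) systematically drives the eigenvalues toward the exact values, which is the basis-set-limit behavior the paper emphasizes.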

  20. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in the participating medium based on the frequency-domain radiative transfer equation (FD-RTE) to probe the internal structure of the medium is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient contain smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  1. Ince Gaussian beams in strongly nonlocal nonlinear media

    NASA Astrophysics Data System (ADS)

    Deng, Dongmei; Guo, Qi

    2008-07-01

    Based on the Snyder-Mitchell model that describes beam propagation in strongly nonlocal nonlinear media, closed forms of Ince-Gaussian (IG) beams have been found. The transverse structures of the IG beams are described by the product of Ince polynomials and a Gaussian function. Depending on the input power of the beams, the IG beams can be either a soliton state or a breather state. The IG beams constitute the exact and continuous transition modes between Hermite-Gaussian beams and Laguerre-Gaussian beams. IG vortex beams can be constructed by a linear combination of the even and odd IG beams. The transverse intensity pattern of IG vortex beams consists of elliptic rings, whose number and ellipticity can be controlled, and a phase displaying a number of in-line vortices, each with a unitary topological charge. The analytical solutions of the IG beams are confirmed by numerical simulations of the nonlocal nonlinear Schrödinger equation.

  2. Sufficient condition for a quantum state to be genuinely quantum non-Gaussian

    NASA Astrophysics Data System (ADS)

    Happ, L.; Efremov, M. A.; Nha, H.; Schleich, W. P.

    2018-02-01

    We show that the expectation value of the operator Ô ≡ exp(−c x̂²) + exp(−c p̂²), defined by the position and momentum operators x̂ and p̂ with a positive parameter c, can serve as a tool to identify quantum non-Gaussian states, that is, states that cannot be represented as a mixture of Gaussian states. Our condition can be readily tested employing a highly efficient homodyne detection which, unlike quantum-state tomography, requires the measurements of only two orthogonal quadratures. We demonstrate that our method is even able to detect quantum non-Gaussian states with positive-definite Wigner functions. This situation cannot be addressed in terms of the negativity of the phase-space distribution. Moreover, we demonstrate that our condition can characterize quantum non-Gaussianity for the class of superposition states consisting of a vacuum and integer multiples of four photons under more than 50% signal attenuation.

  3. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
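    The extinction-time statistic itself is straightforward to estimate by direct Monte Carlo, which is also how the paper validates its GCM results. A minimal Euler-Maruyama sketch of a Lotka-Volterra-type model driven by multiplicative Gaussian white noise (all coefficients, noise intensities and thresholds below are illustrative; the paper's model and the STGA machinery are not reproduced):

```python
import numpy as np

def extinction_time(a=1.0, b=1.0, c=1.0, d=1.0, s1=0.5, s2=0.5,
                    x0=1.0, y0=1.0, dt=0.01, t_max=20.0, eps=1e-3, rng=None):
    """Euler-Maruyama simulation of a stochastic predator-prey model;
    returns the first time either species drops below eps (or t_max)."""
    if rng is None:
        rng = np.random.default_rng()
    x, y, t = x0, y0, 0.0
    while t < t_max:
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)   # Wiener increments
        x += x * (a - b * y) * dt + s1 * x * dw1          # prey
        y += y * (-c + d * x) * dt + s2 * y * dw2         # predator
        t += dt
        if x < eps or y < eps:
            return t
    return t_max

rng = np.random.default_rng(42)
times = [extinction_time(rng=rng) for _ in range(50)]
mean_ext = np.mean(times)
```

    Averaging over many such realizations gives the mean extinction time; the GCM/STGA approach obtains the same statistics far more cheaply by propagating one-step transition probabilities instead of individual trajectories.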

  4. AUTONOMOUS GAUSSIAN DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
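    The decomposition problem AGD automates can be sketched with an ordinary least-squares Gaussian fit, using detected peak positions as the initial guesses that AGD instead derives from derivative spectroscopy and machine learning. Everything below (the two-component synthetic "spectrum" and its parameters) is illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian components (amplitude, centre, width each)."""
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

# Synthetic "absorption spectrum": two blended Gaussian components.
x = np.linspace(-50.0, 50.0, 1001)
spec = two_gaussians(x, 1.0, -8.0, 5.0, 0.6, 10.0, 8.0)

# Crude automated initial guesses from the local maxima of the spectrum.
peaks, _ = find_peaks(spec, height=0.1)
p0 = [spec[peaks[0]], x[peaks[0]], 5.0, spec[peaks[1]], x[peaks[1]], 5.0]
popt, _ = curve_fit(two_gaussians, x, spec, p0=p0)
```

    With good initial guesses the nonlinear fit converges reliably; supplying those guesses automatically, for an unknown number of components and in the presence of noise, is the hard part that AGD addresses.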

  5. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty, thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, as those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under uncertainty. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  6. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in Bayesian inverse modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate the approximation error for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating the approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is properly incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, with no further original model simulations required.
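    The core idea, treating the GP's predictive variance as extra observation noise in the likelihood, fits in a few lines. A minimal numpy sketch (the observation values, variances and surrogate predictions are made-up numbers, and the GP itself is not constructed here):

```python
import numpy as np

def log_likelihood(obs, pred_mean, obs_var, surrogate_var=0.0):
    """Gaussian log-likelihood with the surrogate's predictive variance
    added to the observation-error variance: the approximation error is
    treated as an extra, state-dependent noise term."""
    total_var = obs_var + surrogate_var
    r = obs - pred_mean
    return -0.5 * np.sum(r**2 / total_var + np.log(2.0 * np.pi * total_var))

obs = np.array([1.02, 0.97, 1.05])           # hypothetical measurements
pred = np.array([1.10, 0.90, 1.00])          # surrogate predictions
ll_naive = log_likelihood(obs, pred, obs_var=0.01)
ll_aware = log_likelihood(obs, pred, obs_var=0.01, surrogate_var=0.04)
```

    Where the surrogate is uncertain, the error-aware likelihood is flatter, so MCMC samples are not pulled toward artifacts of the surrogate, which is the bias mechanism the study illustrates.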

  7. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMAES), to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES explores the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that different regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
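    The generalized Gaussian misfit mentioned above can be sketched as follows (a schematic illustration, not the authors' code); the shape parameter beta selects the norm, with beta = 2 giving the usual least-squares misfit and beta = 1 a robust L1-type misfit:

```python
import numpy as np
from math import gamma, log

def gen_gaussian_logpdf(r, alpha, beta):
    """Log-density of the generalized Gaussian distribution with scale
    alpha and shape beta; beta = 2 recovers the Gaussian (L2 misfit)
    and beta = 1 the Laplace distribution (L1 misfit)."""
    norm = log(beta / (2.0 * alpha * gamma(1.0 / beta)))
    return norm - np.abs(r / alpha) ** beta

resid = np.array([0.0, 1.0, -3.0])            # hypothetical data residuals
l2_misfit = -gen_gaussian_logpdf(resid, 1.0, 2.0).sum()
l1_misfit = -gen_gaussian_logpdf(resid, 1.0, 1.0).sum()
```

The outlier residual of -3 is penalized quadratically under beta = 2 but only linearly under beta = 1, which is why the choice of norm changes which equivalent models survive.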

  8. Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior

    NASA Astrophysics Data System (ADS)

    Dumitru, Mircea; Djafari, Ali-Mohammad

    2015-01-01

    Recent developments in chronobiology require analysis of the variation of the periodic components of signals expressing biological rhythms, and a precise estimation of the periodic components vector is needed. The classical approaches, based on FFT methods, are inefficient given the particularities of the data (short length). In this paper we propose a new method using sparsity prior information (a reduced number of non-zero components). The considered law is the Student-t distribution, viewed as the marginal of an Infinite Gaussian Scale Mixture (IGSM) defined via hidden variables representing the inverse variances, modelled by a Gamma distribution. The hyperparameters are given conjugate priors, i.e. Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and the Posterior Mean (PM). For the PM estimator, the posterior law is approximated by a separable one via Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show results on synthetic data in cancer treatment applications.
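    The Student-t prior as a Gaussian scale mixture can be illustrated by simulation (a hedged sketch; parameter names are ours, not the paper's): drawing the hidden inverse variances from a Gamma distribution and then sampling conditionally Gaussian values yields Student-t marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 6.0, 200_000

# Hidden variables: inverse variances (precisions) from a Gamma
# distribution with shape nu/2 and rate nu/2 (scale = 2/nu).
lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)

# Conditionally Gaussian draws given the precisions; the marginal of x
# is Student-t with nu degrees of freedom, whose variance is
# nu / (nu - 2) = 1.5 for nu = 6.
x = rng.normal(0.0, 1.0 / np.sqrt(lam))
sample_var = x.var()
```

The heavy tails of the Student-t (relative to a single Gaussian) are what enforce sparsity on the periodic components.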

  9. Inverse models of plate coupling and mantle rheology: Towards a direct link between large-scale mantle flow and mega thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Ratnaswamy, V.; Stadler, G.; Rudi, J.; Liu, X.; Ghattas, O.

    2017-12-01

    We are developing high-resolution inverse models for plate motions and mantle flow to recover the degree of mechanical coupling between plates and the non-linear and plastic parameters governing viscous flow within the lithosphere and mantle. We have developed adjoint versions of the Stokes equations with fully non-linear viscosity, with a cost function that measures the fit to plate motions and to regional constraints on effective upper mantle viscosity (from post-glacial rebound and post-seismic relaxation). In our earlier work, we demonstrated that when the temperature field is known, the strength of plate boundaries and the yield stress and strain-rate exponent in the upper mantle are recoverable. As the plate boundary coupling drops below a threshold, the uncertainty of the inferred parameters increases because plate motion becomes insensitive to plate coupling. Comparing the trade-offs between inferred rheological parameters found from a Gaussian approximation of the parameter distribution and from MCMC sampling, we found that the Gaussian approximation, which is significantly cheaper to compute, is often a good approximation. We have extended our earlier method so that we can recover normal and shear stresses within the zones defining the interface between subducting and over-riding plates, as determined through seismic constraints (using the Slab1.0 model). We find that subduction zones with low seismic coupling correspond to low inferred values of mechanical coupling. By fitting plate motion data in the optimization scheme, we find that Tonga and the Marianas have the lowest values of mechanical coupling and Chile and Sumatra the highest among the subduction zones we have studied. Moreover, because of the nature of the high-resolution adjoint models, the subduction zones with the lowest coupling have back-arc extension. Globally we find that the non-linear stress-strain exponent, n, is about 3.0 ± 0.25 (in the upper mantle and lithosphere) and the pressure-independent yield stress is 150 ± 25 MPa. The stress in the shear zones is just tens of MPa, and in preliminary models we find that both the shear and the normal stresses are elevated in the coupled compared to the uncoupled subduction zones.

  10. Parts-based geophysical inversion with application to water flooding interface detection and geological facies detection

    NASA Astrophysics Data System (ADS)

    Zhang, Junwei

    I built a parts-based and manifold-based mathematical learning model for the geophysical inverse problem and applied this approach to two problems. The first is related to the detection of the oil-water encroachment front during the water flooding of an oil reservoir. In this application, I propose a new 4D inversion approach based on the Gauss-Newton method to invert time-lapse cross-well resistance data. The goal of this study is to image the position of the oil-water encroachment front in a heterogeneous clayey sand reservoir. The approach is based on explicitly connecting the change of resistivity to the petrophysical properties controlling the position of the front (porosity and permeability) and to the saturation of the water phase through a petrophysical resistivity model accounting for bulk and surface conductivity contributions and saturation. The distributions of permeability and porosity are also inverted using the time-lapse resistivity data in order to better reconstruct the position of the oil-water encroachment front. In our synthetic test case, we obtain a better position of the front, with the by-products of porosity and permeability inferences near the flow trajectory and close to the wells. The numerical simulations show that the position of the front is recovered well, but the distribution of the recovered porosity and permeability is only fair. A comparison with a commercial code based on a classical Gauss-Newton approach, with no information provided by the two-phase flow model, fails to recover the position of the front. The new approach could also be used for time-lapse monitoring of various processes in geothermal fields and oil and gas reservoirs using a combination of geophysical methods. A paper has been published in Geophysical Journal International on this topic, and I am the first author of this paper. 
The second application is related to the detection of geological facies boundaries and their deformation to satisfy geophysical data and prior distributions. We pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case study, performing a joint inversion of gravity and galvanometric resistivity data with the stations all located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable, and their shapes are inverted as well. We use the level set approach to deform the facies boundaries while preserving prior topological properties of the facies throughout the inversion. With the additional help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The results of the inversion technique are encouraging when applied to a second synthetic case study, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries. A paper has been submitted to Geophysics on this topic, and I am the first author of this paper. During this thesis, I also worked on the time-lapse inversion problem of gravity data in collaboration with Marios Karaoulis, and a paper was published in Geophysical Journal International on this topic. I also worked on the time-lapse inversion of cross-well geophysical data (seismic and resistivity) using both a structural approach, the cross-gradient approach, and a petrophysical approach. A paper was published in Geophysics on this topic.

  11. Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.

    2018-07-01

    Geostatistical analysis was introduced over half a century ago to quantify seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.

  12. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that also penalizes the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via the low-rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
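    A scalar version of this variational Gaussian approximation can be sketched as follows (illustrative only, with a unit Gaussian prior x ~ N(0, 1) and y ~ Poisson(exp(x)); the simple Newton/fixed-point alternation is our stand-in, not the authors' algorithm). The key closed form is E_q[exp(x)] = exp(m + v/2) for q = N(m, v):

```python
import math

def elbo(m, v, y):
    """Evidence lower bound for y ~ Poisson(exp(x)), prior x ~ N(0, 1),
    variational posterior q = N(m, v); E_q[exp(x)] = exp(m + v/2)."""
    expect_rate = math.exp(m + v / 2)
    loglik = y * m - expect_rate - math.lgamma(y + 1)
    kl = 0.5 * (v + m * m - 1.0 - math.log(v))   # KL(q || prior)
    return loglik - kl

def fit(y, iters=50):
    m, v = 0.0, 1.0
    for _ in range(iters):
        # Newton step in m:  d/dm ELBO = y - exp(m + v/2) - m
        g = y - math.exp(m + v / 2) - m
        h = -math.exp(m + v / 2) - 1.0
        m -= g / h
        # Fixed point in v:  1/v = 1 + exp(m + v/2)
        v = 1.0 / (1.0 + math.exp(m + v / 2))
    return m, v

m, v = fit(y=5)
```

The fitted mean sits between 0 (the prior) and log 5 (the maximum likelihood value), and the posterior variance shrinks below the prior variance of 1, illustrating the covariance penalty mentioned above.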

  13. A DS-UWB Cognitive Radio System Based on Bridge Function Smart Codes

    NASA Astrophysics Data System (ADS)

    Xu, Yafei; Hong, Sheng; Zhao, Guodong; Zhang, Fengyuan; di, Jinshan; Zhang, Qishan

    This paper proposes a direct-sequence UWB cognitive radio system based on a bridge-function smart sequence matrix and the Gaussian pulse. Because the system uses the bridge-function smart code as its spreading sequence, the zero correlation zones (ZCZs) of the bridge-function sequences' auto-correlation functions can reduce multipath fading of the pulse interference. The modulated signal was sent through the IEEE 802.15.3a UWB channel. We analyze the ZCZs' suppression of multipath interference (MPI), one of the main sources of interference in the system. The simulation in SIMULINK/MATLAB is described in detail. The results show that the system performs better than one employing a Walsh sequence square matrix, and this was verified in principle by the formulas.

  14. Estimating Mixture of Gaussian Processes by Kernel Smoothing

    PubMed Central

    Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin

    2014-01-01

    When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675

  15. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

    As an important measurement technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in ultra-precision engineering. However, the traditional algorithms for recovering surface topography have flaws and limitations. In this paper, we propose a new algorithm to solve these problems, combining the Fourier transform with an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant but varies with scanning position. The interference signal is first processed by the Fourier transform; the positive-frequency part is then selected and shifted back to the center of the amplitude-frequency curve. In order to restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is then compared with the traditional algorithms. It is shown that the aforementioned drawbacks are effectively overcome; the relative error is less than 0.8%.
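    The envelope-extraction step can be sketched with the closely related analytic-signal variant (a simplified stand-in for the paper's frequency-shift plus polynomial fit; the signal parameters are invented): zeroing the negative frequencies and inverse-transforming gives the fringe envelope, whose peak locates the surface height.

```python
import numpy as np

# Synthetic white-light interferogram: a cosine carrier under a Gaussian
# envelope centred at the (invented) surface height z0.
z = np.linspace(-5.0, 5.0, 2001)            # scan position, arbitrary units
z0, period, width = 1.2, 0.3, 1.0
signal = np.exp(-((z - z0) / width) ** 2) * np.cos(2 * np.pi * (z - z0) / period)

# Analytic-signal envelope: zero the negative-frequency half of the
# spectrum, double the positive half, and take the magnitude of the
# inverse transform.
n = z.size
spec = np.fft.fft(signal)
half = (n + 1) // 2
spec[half:] = 0.0
spec[1:half] *= 2.0
envelope = np.abs(np.fft.ifft(spec))

z_peak = z[np.argmax(envelope)]             # estimated surface height
```

In the paper's scheme a polynomial fit to this amplitude curve refines the peak location beyond the scan-grid resolution.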

  16. Propagation effects in the generation process of high-order vortex harmonics.

    PubMed

    Zhang, Chaojin; Wu, Erheng; Gu, Mingliang; Liu, Chengpu

    2017-09-04

    We numerically study the propagation of a Laguerre-Gaussian beam through polar molecular media via the exact solution of the full-wave Maxwell-Bloch equations, without the rotating-wave and slowly-varying-envelope approximations. It is found that, beyond the coexistence of odd-order and even-order vortex harmonics due to the inversion asymmetry of the system, the propagation effect results in an intensity enhancement of the high-order vortex harmonics. Moreover, orbital momentum successfully transfers from the fundamental laser driver to the vortex harmonics, whose topological charge number is directly proportional to the harmonic order.

  17. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline) europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression that computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because the Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate the parametrization, we carried out calculations on 96 europium(III) complexes selected from the Cambridge Structural Database 2003 and compared the predicted ground-state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better at predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which is 0.3564 Å for the original SMLC, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that the model is most applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light-conversion molecular devices.

  18. Intelligent estimation of noise and blur variances using ANN for the restoration of ultrasound images.

    PubMed

    Uddin, Muhammad Shahin; Halder, Kalyan Kumar; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-11-01

    Ultrasound (US) imaging is a widely used clinical diagnostic tool in medical imaging. It is a safe, economical, painless, portable, and noninvasive real-time tool compared to other imaging modalities. However, the image quality of US imaging is severely affected by speckle noise and blur introduced during acquisition. In order to ensure a high-quality clinical diagnosis, US images must be restored by reducing their speckle noise and blur. In general, speckle noise is modeled as a multiplicative noise following a Rayleigh distribution and blur as a Gaussian function. To this end, we propose an intelligent estimator based on artificial neural networks (ANNs) to estimate the variances of noise and blur, which, in turn, are used to obtain an image without discernible distortions. A set of statistical features computed from the image and its complex wavelet sub-bands is used as input to the ANN. In the proposed method, we solve the inverse Rayleigh function numerically for speckle reduction and use the Richardson-Lucy algorithm for de-blurring. The performance of this method is compared with that of traditional methods by applying them to synthetic, physical-phantom and clinical data, which confirms better restoration results for the proposed method.
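    The Richardson-Lucy de-blurring step can be sketched in one dimension (a generic textbook implementation with an assumed Gaussian PSF and synthetic data, not the authors' code):

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30):
    """1-D Richardson-Lucy deconvolution with a known PSF."""
    est = np.full_like(blurred, blurred.mean())   # flat non-negative start
    psf_flip = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid division by zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Assumed Gaussian blur kernel and a two-spike synthetic scene.
x = np.arange(-10, 11)
psf = np.exp(-x ** 2 / (2 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros(101)
truth[40], truth[60] = 1.0, 0.5
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative updates keep the estimate non-negative, which suits intensity images; in the paper the PSF variance itself comes from the ANN estimator.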

  19. Scilab software as an alternative low-cost computing in solving the linear equations problem

    NASA Astrophysics Data System (ADS)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used in both teaching and research. These packages include licensed (proprietary) and open-source (non-proprietary) software. One reason to use such a package is the complexity of mathematical functions (e.g., linear problems); moreover, the number of variables in linear or non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, it could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, Scilab was used for activities related to the mathematical models. In the experiment, four numerical methods, Gaussian elimination, Gauss-Jordan, inverse matrix, and lower-upper (LU) decomposition, were implemented. The results of this study show that routines for these numerical methods were created and explored using Scilab procedures, and that these routines can then be exploited as teaching material.
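    One of the four methods, Gaussian elimination with partial pivoting, can be sketched outside Scilab as well; here is a plain Python version for comparison (illustrative teaching code, not from the paper):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]         # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                         # eliminate below pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

The same two-phase structure (forward elimination, then back substitution) is what a Scilab routine for this method would implement.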

  20. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  1. The density compression ratio of shock fronts associated with coronal mass ejections

    NASA Astrophysics Data System (ADS)

    Kwon, Ryun-Young; Vourlidas, Angelos

    2018-02-01

    We present a new method to extract the three-dimensional electron density profile and density compression ratio of shock fronts associated with coronal mass ejections (CMEs) observed in white light coronagraph images. We demonstrate the method with two examples of fast halo CMEs (~2000 km s⁻¹) observed on 2011 March 7 and 2014 February 25. Our method uses the ellipsoid model to derive the three-dimensional geometry and kinematics of the fronts. The density profiles of the sheaths are modeled with double-Gaussian functions with four free parameters, and the electrons are distributed within thin shells behind the front. The modeled densities are integrated along the lines of sight to be compared with the observed brightness in COR2-A, and a χ² approach is used to obtain the optimal parameters for the Gaussian profiles. The upstream densities are obtained from both the inversion of the brightness in a pre-event image and an empirical model. Then the density ratio and Alfvénic Mach number are derived. We find that the density compression peaks around the CME nose, and decreases at larger position angles. The behavior is consistent with a driven shock at the nose and a freely propagating shock wave at the CME flanks. Interestingly, we find that the supercritical region extends over a large area of the shock and lasts longer (several tens of minutes) than past reports. It follows that CME shocks are capable of accelerating energetic particles in the corona over extended spatial and temporal scales and are likely responsible for the wide longitudinal distribution of these particles in the inner heliosphere. Our results also demonstrate the power of multi-viewpoint coronagraphic observations and forward modeling in remotely deriving key shock properties in an otherwise inaccessible regime.
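    The χ² fitting step can be sketched with a toy one-dimensional stand-in (invented component positions, widths and noise, not the authors' coronagraph pipeline): because the two Gaussian amplitudes enter linearly, one can grid-search the widths and solve for the amplitudes by least squares at each grid point.

```python
import numpy as np

# Hypothetical double-Gaussian profile: two Gaussian components at fixed
# (assumed known) positions along the profile coordinate r.
def double_gaussian(r, a1, w1, a2, w2):
    return (a1 * np.exp(-((r - 1.10) / w1) ** 2)
            + a2 * np.exp(-((r - 1.30) / w2) ** 2))

r = np.linspace(1.0, 2.0, 200)
rng = np.random.default_rng(1)
data = double_gaussian(r, 3.0, 0.05, 1.0, 0.30) + rng.normal(0.0, 0.02, r.size)

# Chi-square grid search over the two widths; for each width pair the
# amplitudes are obtained by linear least squares.
best_chi2, best_params = np.inf, None
for w1 in np.linspace(0.02, 0.10, 17):
    for w2 in np.linspace(0.10, 0.50, 17):
        G = np.column_stack([np.exp(-((r - 1.10) / w1) ** 2),
                             np.exp(-((r - 1.30) / w2) ** 2)])
        amps, *_ = np.linalg.lstsq(G, data, rcond=None)
        chi2 = np.sum((data - G @ amps) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_params = chi2, (amps[0], w1, amps[1], w2)

a1_fit, w1_fit, a2_fit, w2_fit = best_params
```

Splitting the four free parameters into linear amplitudes and grid-searched widths keeps the χ² minimization cheap and robust.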

  2. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/ hyperactivity disorder.

    PubMed

    Wolfers, Thomas; Onnink, A Marten H; Zwiers, Marcel P; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C; Kan, Cornelis C; Slaats-Willemse, Dorine; Buitelaar, Jan K; Franke, Barbara

    2015-09-01

    Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network are associated with ADHD and RTV. We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging-derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest with ADHD could be observed. Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD.
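    The ex-Gaussian model can be illustrated by simulation (a hedged sketch with invented RT parameters in milliseconds): an ex-Gaussian reaction time is a Gaussian component plus an exponential tail with mean τ, and the moments give simple estimators because the third central moment equals 2τ³.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, tau, n = 400.0, 40.0, 100.0, 200_000

# Ex-Gaussian RT = Gaussian component + exponential "lapse" tail.
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Method-of-moments recovery: mean = mu + tau, and the third central
# moment of the ex-Gaussian equals 2 * tau^3.
m3 = np.mean((rt - rt.mean()) ** 3)
tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
mu_hat = rt.mean() - tau_hat
```

In practice τ is usually fitted by maximum likelihood rather than moments, but the simulation shows how the exponential tail alone carries the skew attributed to attentional lapses.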

  3. Modeling of high‐frequency seismic‐wave scattering and propagation using radiative transfer theory

    USGS Publications Warehouse

    Zeng, Yuehua

    2017-01-01

    This is a study of the nonisotropic scattering process based on radiative transfer theory and its application to the observation of the M 4.3 aftershock recording of the 2008 Wells earthquake sequence in Nevada. Given a wide range of recording distances from 29 to 320 km, the data provide a unique opportunity to discriminate scattering models based on their distance‐dependent behaviors. First, we develop a stable numerical procedure to simulate nonisotropic scattering waves based on the 3D nonisotropic scattering theory proposed by Sato (1995). By applying the simulation method to the inversion of M 4.3 Wells aftershock recordings, we find that a nonisotropic scattering model, dominated by forward scattering, provides the best fit to the observed high‐frequency direct S waves and S‐wave coda velocity envelopes. The scattering process is governed by a Gaussian autocorrelation function, suggesting a Gaussian random heterogeneous structure for the Nevada crust. The model successfully explains the common decay of seismic coda independent of source–station locations as a result of energy leaking from multiple strong forward scattering, instead of backscattering governed by the diffusion solution at large lapse times. The model also explains the pulse‐broadening effect in the high‐frequency direct and early arriving S waves, as other studies have found, and could be very important to applications of high‐frequency wave simulation in which scattering has a strong effect. We also find that regardless of its physical implications, the isotropic scattering model provides the same effective scattering coefficient and intrinsic attenuation estimates as the forward scattering model, suggesting that the isotropic scattering model is still a viable tool for the study of seismic scattering and intrinsic attenuation coefficients in the Earth.

  4. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/hyperactivity disorder

    PubMed Central

    Wolfers, Thomas; Onnink, A. Marten H.; Zwiers, Marcel P.; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C.; Kan, Cornelis C.; Slaats-Willemse, Dorine; Buitelaar, Jan K.; Franke, Barbara

    2015-01-01

    Background Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network are associated with ADHD and RTV. Methods We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging–derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. Results We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest with ADHD could be observed. Limitations Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. Conclusion This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD. PMID:26079698

  5. The Strategy for Time Dependent Quantum Mechanical Calculations Using a Gaussian Wave Packet Representation of the Wave Function.

    DTIC Science & Technology

    1985-01-01

    …a number of problems chosen so that the risk of SHM break-down was minimized. A beautiful example is the absorption coefficient of a… the approximation… We consider here the case of one normalized Gaussian, to isolate the effects of LilA from those of the neglect of the interaction…

  6. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  7. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ɛ. The conditional PDF G(δT|ɛ) of δT under a constant ɛ is found to be of Gaussian form, and its variance σ_T² for different values of ɛ follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
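
    The mechanism described, Gaussian conditional statistics whose variance follows an exponential distribution, can be checked numerically: a scale mixture of centred normals with exponentially distributed variance is exactly a Laplace (two-sided exponential) law. The sketch below is an illustrative Monte Carlo check of this composition, not the paper's convection data.

```python
import math
import random

rng = random.Random(42)

# Sample the variance from an exponential distribution (rate lam),
# then sample the fluctuation conditionally as N(0, variance).
lam = 1.0
n = 200_000
samples = []
for _ in range(n):
    var = rng.expovariate(lam)              # sigma_T^2 ~ Exp(lam)
    samples.append(rng.gauss(0.0, math.sqrt(var)))

# Theory: the marginal is Laplace with scale b = 1/sqrt(2*lam),
# so the tail P(X > x) = 0.5 * exp(-x / b) is exponential.
b = 1.0 / math.sqrt(2.0 * lam)
x = 1.0
empirical_tail = sum(1 for s in samples if s > x) / n
predicted_tail = 0.5 * math.exp(-x / b)
```

    The empirical tail probability matches the exponential prediction, mirroring how the Gaussian-conditioned modes convolve into an exponential-tailed P(δT).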

  8. Assessment of a Three-Dimensional Line-of-Response Probability Density Function System Matrix for PET

    PubMed Central

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-01-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. 
For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction. PMID:23032702

  9. Wavelength interrogation of fiber Bragg grating sensors based on crossed optical Gaussian filters.

    PubMed

    Cheng, Rui; Xia, Li; Zhou, Jiaao; Liu, Deming

    2015-04-15

    Conventional intensity-modulated measurements must operate within the linear range of a filter or interferometric response to ensure linear detection. Here, we present a wavelength interrogation system for fiber Bragg grating sensors in which the linear transition is achieved with crossed Gaussian transmissions. This unique filtering characteristic makes the responses of the two branch detections follow Gaussian functions with the same parameters except for a delay. The subtraction of these two delayed Gaussian responses (in dB) ultimately leads to a linear behavior, which is exploited for the sensor wavelength determination. Besides its flexibility and inherent power insensitivity, the proposal also shows potential for a much wider operational range. Interrogation of a strain-tuned grating was accomplished, with a wide sensitivity tuning range from 2.56 to 8.7 dB/nm achieved.
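
    The linearization claimed above follows because the logarithm of a Gaussian is quadratic in wavelength, so the dB-difference of two identical Gaussians offset by a delay is exactly linear. A small numerical sketch, with illustrative filter parameters rather than those of the paper:

```python
import math

def gaussian_transmission_db(lam, center, width):
    """Transmission of a Gaussian filter, in dB, at wavelength lam (nm)."""
    t = math.exp(-((lam - center) ** 2) / (2.0 * width ** 2))
    return 10.0 * math.log10(t)

# Two crossed Gaussian filters: same width, centers offset by a delay.
c1, c2, w = 1549.0, 1551.0, 1.5   # assumed values, in nm

def response_diff_db(lam):
    """dB-subtraction of the two delayed Gaussian branch responses."""
    return gaussian_transmission_db(lam, c1, w) - gaussian_transmission_db(lam, c2, w)

# The quadratic terms cancel, leaving a line with
# slope = 10/ln(10) * (c1 - c2) / w^2  (dB/nm).
slope = 10.0 / math.log(10.0) * (c1 - c2) / (w ** 2)
```

    The slope, and hence the interrogation sensitivity in dB/nm, is set by the center offset and the filter width, which is why tuning those parameters tunes the sensitivity.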

  10. Effect of exponential density transition on self-focusing of q-Gaussian laser beam in collisionless plasma

    NASA Astrophysics Data System (ADS)

    Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.

    2018-05-01

    In this work, nonlinear aspects of a high-intensity q-Gaussian laser beam propagating in collisionless plasma with an upward density ramp of exponential profile are studied. We have employed the nonlinearity in the dielectric function of the plasma by considering ponderomotive nonlinearity. The differential equation governing the dimensionless beam width parameter is obtained by using the Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and solved numerically by using the fourth-order Runge-Kutta method. The effect of the exponential density ramp profile on self-focusing of the q-Gaussian laser beam for various values of q is systematically carried out and compared with results for a Gaussian laser beam propagating in collisionless plasma of uniform density. It is found that the exponential plasma density ramp causes the laser beam to become more focused and gives reasonably interesting results.

  11. The effect of halo nuclear density on reaction cross-section for light ion collision

    NASA Astrophysics Data System (ADS)

    Hassan, M. A. M.; Nour El-Din, M. S. M.; Ellithi, A.; Ismail, E.; Hosny, H.

    2015-08-01

    In the framework of the optical limit approximation (OLA), the reaction cross-section for halo nucleus-stable nucleus collisions at intermediate energy has been studied. The projectile nuclei are taken to be a one-neutron halo (1NHP) and a two-neutron halo (2NHP). The calculations are carried out for Gaussian-Gaussian (GG), Gaussian-Oscillator (GO), and Gaussian-2S (G2S) densities for each considered projectile. As targets, the stable nuclei with mass numbers in the range 4-28 are used. An analytic expression for the phase shift function has been derived. The zero-range approximation is considered in the calculations. Also, the in-medium effect is studied. The obtained results are analyzed and compared with the geometrical reaction cross-section and the available experimental data.

  12. Tensor non-Gaussianity from axion-gauge-fields dynamics: parameter search

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Fujita, Tomohiro; Komatsu, Eiichiro

    2018-06-01

    We calculate the bispectrum of scale-invariant tensor modes sourced by spectator SU(2) gauge fields during inflation in a model containing a scalar inflaton, a pseudoscalar axion and SU(2) gauge fields. A large bispectrum is generated in this model at tree-level as the gauge fields contain a tensor degree of freedom, and its production is dominated by self-coupling of the gauge fields. This is a unique feature of non-Abelian gauge theory. The shape of the tensor bispectrum is approximately equilateral for 3 ≲ m_Q ≲ 4, where m_Q is an effective dimensionless mass of the SU(2) field normalised by the Hubble expansion rate during inflation. The amplitude of non-Gaussianity of the tensor modes, characterised by the ratio B_h/P_h², is inversely proportional to the energy density fraction of the gauge field. This ratio can be much greater than unity, whereas the ratio from the vacuum fluctuation of the metric is of order unity. The bispectrum is effective at constraining large-m_Q regions of the parameter space, whereas the power spectrum constrains small-m_Q regions.

  13. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
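
    The branch-splitting idea can be illustrated on f(x) = x², whose inverse is two-valued. Below, a sign-based discriminating function splits the sampled data into branches, and a least-squares line fitted per branch stands in for the Sugeno approximator; this is a toy sketch of the strategy, not the authors' fuzzy-inference implementation.

```python
def fit_line(pairs):
    """Ordinary least-squares fit x ~= a*y + b over (y, x) pairs."""
    n = len(pairs)
    sy = sum(y for y, _ in pairs)
    sx = sum(x for _, x in pairs)
    syy = sum(y * y for y, _ in pairs)
    syx = sum(y * x for y, x in pairs)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Sample the forward function f(x) = x^2 on a grid over [-1, 1].
data = [(x * x, x) for x in [i / 100.0 for i in range(-100, 101)]]

# Discriminating function: the sign of x splits the data into two branches.
branches = {
    "upper": [(y, x) for y, x in data if x >= 0],
    "lower": [(y, x) for y, x in data if x < 0],
}

# One local approximator per branch (a crude line here; Sugeno rules in the paper).
models = {name: fit_line(pts) for name, pts in branches.items()}

def inverse(y):
    """Return all branch values of the multi-valued inverse at y."""
    return {name: a * y + b for name, (a, b) in models.items()}
```

    Each branch model recovers one value of the inverse; richer per-branch approximators (such as the recursive least-squares Sugeno systems of the paper) sharpen the accuracy without changing the overall strategy.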

  14. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.

  15. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and it is hard for them to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance.
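
    The kernel target alignment criterion the method optimizes can be written compactly as A(K, y) = ⟨K, yyᵀ⟩_F / (‖K‖_F ‖yyᵀ‖_F). The sketch below evaluates it on a toy two-class set and picks the Gaussian width by a plain grid search, which stands in for the paper's global d.c. optimization; data and the candidate grid are assumptions for illustration.

```python
import math

def gaussian_kernel(xs, gamma):
    """Gram matrix K[i][j] = exp(-gamma * (xi - xj)^2) for 1-D inputs."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

def alignment(K, ys):
    """Kernel target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    n = len(ys)
    num = sum(K[i][j] * ys[i] * ys[j] for i in range(n) for j in range(n))
    kf = math.sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
    return num / (kf * n)   # ||yy^T||_F = n for labels in {-1, +1}

# Toy data: two well-separated 1-D clusters with labels -1 / +1.
xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
ys = [-1, -1, -1, 1, 1, 1]

# Grid search over the kernel width (stand-in for the SSGO solver).
best_gamma = max([0.01, 0.1, 1.0, 10.0],
                 key=lambda g: alignment(gaussian_kernel(xs, g), ys))
```

    A width that is far too small makes every point look alike and drives the alignment toward zero, which is exactly the behavior a global optimizer over this criterion exploits.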

  16. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L =1

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-01

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L =1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  18. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. Inverse problems are often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of the net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. 
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.

  19. Crossing statistics of laser light scattered through a nanofluid.

    PubMed

    Arshadi Pirlar, M; Movahed, S M S; Razzaghi, D; Karimzadeh, R

    2017-09-01

    In this paper, we investigate the crossing statistics of speckle patterns formed in the Fresnel diffraction region by a laser beam scattering through a nanofluid. We extend zero-crossing statistics to assess the dynamical properties of the nanofluid. According to the joint probability density function of the laser beam fluctuation and its time derivative, the theoretical frameworks for Gaussian and non-Gaussian regimes are revisited. We count the number of crossings not only at zero level but also at all available thresholds to determine the average speed of the moving particles. Because crossing statistics are determined in a probabilistic framework, Gaussianity is not assumed a priori; therefore, even in the presence of deviations from Gaussian fluctuation, this modified approach is capable of computing relevant quantities, such as the mean speed, more precisely. A generalized total crossing, which represents the weighted summation of crossings over all thresholds to quantify small deviations from Gaussian statistics, is introduced. This criterion can also manipulate the contribution of noise and trends to infer reliable physical quantities. The characteristic time scale for having successive crossings at a given threshold is defined. In our experimental setup, we find that increasing the sample temperature leads to more consistency between Gaussian and perturbative non-Gaussian predictions. The maximum number of crossings does not necessarily occur at the mean level, indicating that we should take into account other levels in addition to the zero level to achieve more accurate assessments.

  20. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.

  1. Invariant polarimetric contrast parameters of light with Gaussian fluctuations in three dimensions.

    PubMed

    Réfrégier, Philippe; Roche, Muriel; Goudail, François

    2006-01-01

    We propose a rigorous definition of the minimal set of parameters that characterize the difference between two partially polarized states of light whose electric fields vary in three dimensions with Gaussian fluctuations. Although two such states are a priori defined by eighteen parameters, we demonstrate that the performance of processing tasks such as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by three scalar functions of these parameters. These functions define a "polarimetric contrast" that simplifies the analysis and the specification of processing techniques on polarimetric signals and images. This result can also be used to analyze the definition of the degree of polarization of a three-dimensional state of light with Gaussian fluctuations by comparing it, with respect to its polarimetric contrast parameters, with totally depolarized light. We show that these contrast parameters are a simple function of the degrees of polarization previously proposed by Barakat [Opt. Acta 30, 1171 (1983)] and Setälä et al. [Phys. Rev. Lett. 88, 123902 (2002)]. Finally, we analyze the dimension of the set of contrast parameters in different particular situations.

  2. Visualizing bacterial tRNA identity determinants and antideterminants using function logos and inverse function logos

    PubMed Central

    Freyhult, Eva; Moulton, Vincent; Ardell, David H.

    2006-01-01

    Sequence logos are stacked bar graphs that generalize the notion of consensus sequence. They employ entropy statistics very effectively to display variation in a structural alignment of sequences of a common function, while emphasizing its over-represented features. Yet sequence logos cannot display features that distinguish functional subclasses within a structurally related superfamily nor do they display under-represented features. We introduce two extensions to address these needs: function logos and inverse logos. Function logos display subfunctions that are over-represented among sequences carrying a specific feature. Inverse logos generalize both sequence logos and function logos by displaying under-represented, rather than over-represented, features or functions in structural alignments. To make inverse logos, a compositional inverse is applied to the feature or function frequency distributions before logo construction, where a compositional inverse is a mathematical transform that makes common features or functions rare and vice versa. We applied these methods to a database of structurally aligned bacterial tDNAs to create highly condensed, bird's-eye views of potentially all so-called identity determinants and antideterminants that confer specific amino acid charging or initiator function on tRNAs in bacteria. We recovered both known and a few potentially novel identity elements. Function logos and inverse logos are useful tools for exploratory bioinformatic analysis of structure–function relationships in sequence families and superfamilies. PMID:16473848
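
    The central transform, a compositional inverse that makes common features rare and vice versa before logo construction, can be stated concretely. One natural choice (an assumption here; the paper defines its own transform) is to invert each frequency and renormalize:

```python
def compositional_inverse(freqs, eps=1e-9):
    """Invert a frequency distribution: common entries become rare and
    vice versa, then renormalize so the result sums to 1.
    (Illustrative transform, not necessarily the paper's exact definition.)"""
    inv = {k: 1.0 / max(v, eps) for k, v in freqs.items()}
    total = sum(inv.values())
    return {k: v / total for k, v in inv.items()}

# Base frequencies at one hypothetical tRNA alignment site (made-up values).
site = {"A": 0.70, "C": 0.15, "G": 0.10, "U": 0.05}
inverted = compositional_inverse(site)
```

    In the inverted distribution the rarest base now dominates, which is what lets an inverse logo emphasize under-represented features when fed into standard logo construction.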

  3. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    NASA Astrophysics Data System (ADS)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  4. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H(3) where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm(-1)) obtained in the calculation with 1000 Gaussians are the most accurate results to date.

  5. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  6. Propagation-invariant beams with quantum pendulum spectra: from Bessel beams to Gaussian beam-beams.

    PubMed

    Dennis, Mark R; Ring, James D

    2013-09-01

    We describe a new class of propagation-invariant light beams with Fourier transform given by an eigenfunction of the quantum mechanical pendulum. These beams, whose spectra (restricted to a circle) are doubly periodic Mathieu functions in azimuth, depend on a field strength parameter. When the parameter is zero, pendulum beams are Bessel beams, and as the parameter approaches infinity, they resemble transversely propagating one-dimensional Gaussian wave packets (Gaussian beam-beams). Pendulum beams are the eigenfunctions of an operator that interpolates between the squared angular momentum operator and the linear momentum operator. The analysis reveals connections with Mathieu beams, and insight into the paraxial approximation.

  7. Probing the statistics of primordial fluctuations and their evolution

    NASA Technical Reports Server (NTRS)

    Gaztanaga, Enrique; Yokoyama, Jun'ichi

    1993-01-01

    The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and a Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
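    The negative binomial count distribution referred to above can be written down explicitly. A minimal sketch in one common parameterization (not necessarily the paper's), where nbar is the mean count in a cell and xi plays the role of the volume-averaged correlation; the variance is nbar*(1 + xi*nbar), so xi -> 0 recovers Poisson and, for large nbar, near-Gaussian behavior:

```python
import math

def negbin_pmf(n, nbar, xi):
    """Negative binomial probability of counting n galaxies in a cell,
    with mean nbar and variance nbar*(1 + xi*nbar). Computed via log-gamma
    to stay stable for non-integer shape parameter r = 1/xi."""
    r = 1.0 / xi                        # shape parameter
    p = xi * nbar / (1.0 + xi * nbar)   # per-count 'success' probability
    return math.exp(math.lgamma(n + r) - math.lgamma(r) - math.lgamma(n + 1)
                    + r * math.log(1.0 - p) + n * math.log(p))
```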

  8. Rational-operator-based depth-from-defocus approach to scene reconstruction.

    PubMed

    Li, Ang; Staunton, Richard; Tjahjadi, Tardi

    2013-09-01

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.

  9. Time-Harmonic Gaussian Beams: Exact Solutions of the Helmholtz Equation in Free Space

    NASA Astrophysics Data System (ADS)

    Kiselev, A. P.

    2017-12-01

    An exact solution of the Helmholtz equation u_xx + u_yy + u_zz + k²u = 0 is presented, which describes propagation of monochromatic waves in free space. The solution has the form of a superposition of plane waves with a specific weight function dependent on a certain free parameter a. As ka → ∞, the solution is localized in the Gaussian manner in the vicinity of a certain straight line and asymptotically coincides with the famous approximate solution known as the fundamental mode of a paraxial Gaussian beam. The asymptotics of the aforementioned exact solution does not include a backward wave.

  10. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate markedly from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values.
Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.
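    Of the four RF models, only the multi-Gaussian reference case (model A) can be generated with a simple spectral method; the non-multi-Gaussian variants require dedicated geostatistical simulation. A minimal sketch of a periodic multi-Gaussian ln T field with an isotropic exponential covariance, where the grid size, variance, and correlation length are illustrative, not the paper's setup:

```python
import numpy as np

def multigaussian_lnT_field(n, var, corr_len, seed=0):
    """Generate an n-by-n multi-Gaussian log-transmissivity field with an
    isotropic exponential covariance via the spectral (FFT) method on a
    periodic grid: filter white noise with the square root of the
    covariance's eigenvalue spectrum."""
    rng = np.random.default_rng(seed)
    d = np.minimum(np.arange(n), n - np.arange(n))   # periodic lag distances
    h = np.hypot(d[:, None], d[None, :])             # isotropic lag
    cov = var * np.exp(-h / corr_len)                # exponential covariance
    spec = np.maximum(np.fft.fft2(cov).real, 0.0)    # circulant eigenvalues
    white = rng.standard_normal((n, n))
    return np.fft.ifft2(np.sqrt(spec) * np.fft.fft2(white)).real
```

By construction, extreme values of such a field are spatially uncorrelated, which is exactly the property the non-multi-Gaussian models B, C, and D relax.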

  11. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. 
We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
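    The mixture representation used by an MDN is easy to state concretely. A minimal sketch, where the weights, means, and widths stand in for hypothetical network outputs for one input seismogram:

```python
import math

def mdn_conditional_pdf(t, alphas, mus, sigmas):
    """Evaluate the MDN approximation of a marginal posterior pdf,
    p(t | data) = sum_i alpha_i * N(t; mu_i, sigma_i**2).
    The alphas must be non-negative and sum to one (typically enforced by
    a softmax output layer in the network)."""
    total = 0.0
    for a, m, s in zip(alphas, mus, sigmas):
        total += a * math.exp(-0.5 * ((t - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return total
```

Evaluating this pdf on a grid of candidate parameter values gives the marginal posterior for one Earth-structure parameter conditioned on a seismogram.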

  12. Identification of high-permeability subsurface structures with multiple point geostatistics and normal score ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo

    2017-05-01

    Alluvial aquifers are often characterized by the presence of braided highly permeable paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance to predict flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information like groundwater levels can constrain the channel positions using inverse modeling or data assimilation, but the method should be able to handle non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an extension of approximately 3 km². A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform the one with a multi-Gaussian model and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site.
The synthetic results showed that multiple point geostatistics and ERT could improve aquifer characterization only with a much larger number of conditioning piezometric heads. This shows that state-of-the-art stochastic methods need to be supported by abundant and high-quality subsurface data.
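    The normal score transform at the heart of the NS-EnKF is a rank-based mapping to a standard normal. A minimal sketch (the exact plotting position and the treatment of ties vary between implementations):

```python
from statistics import NormalDist

def normal_score_transform(values):
    """Map each sample to the standard-normal quantile of its empirical
    rank, so that arbitrarily (non-Gaussian) distributed parameters become
    marginally Gaussian before the EnKF update. The mapping is monotone,
    so it can be inverted after the update via the stored ranks."""
    nd = NormalDist()
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)  # plotting-position quantile
    return scores
```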

  13. Magnetism in all-carbon nanostructures with negative Gaussian curvature.

    PubMed

    Park, Noejung; Yoon, Mina; Berber, Savas; Ihm, Jisoon; Osawa, Eiji; Tománek, David

    2003-12-05

    We apply the ab initio spin density functional theory to study magnetism in all-carbon nanostructures. We find that particular systems, which are related to schwarzite and contain no undercoordinated carbon atoms, carry a net magnetic moment in the ground state. We postulate that, in this and other nonalternant aromatic systems with negative Gaussian curvature, unpaired spins can be introduced by sterically protected carbon radicals.

  14. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H₂ and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  15. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
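    Comparing distributions through characteristic functions, as described above, can be illustrated with a toy statistic. A minimal sketch (an illustrative squared-difference distance over a grid of t values, not the authors' test):

```python
import cmath
import random

def empirical_cf(xs, t):
    """Empirical characteristic function phi_hat(t) = mean(exp(i*t*x))."""
    return sum(cmath.exp(1j * t * x) for x in xs) / len(xs)

def gaussian_cf(t, mu, sigma):
    """Characteristic function of N(mu, sigma**2): exists in closed form
    even when a density might not."""
    return cmath.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def cf_distance(xs, mu, sigma, ts):
    """Sum of squared differences between the empirical CF of the sample
    and a candidate Gaussian CF, over a grid of frequencies ts."""
    return sum(abs(empirical_cf(xs, t) - gaussian_cf(t, mu, sigma)) ** 2 for t in ts)
```

The same comparison works for sub-Gaussian or stable candidates, since their characteristic functions are also available in closed form while their densities are not.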

  16. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and the crossing level. Using the predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
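    This kind of simulation is straightforward to reproduce in outline. A minimal sketch using an AR(1) process (exponential autocorrelation), with the autocorrelation parameter and crossing level chosen for illustration rather than taken from the report:

```python
import math
import random

def overshoot_durations(rho, level, n=200_000, seed=42):
    """Simulate a stationary Gaussian AR(1) process
    x_k = rho * x_{k-1} + e_k (unit marginal variance) and collect the
    lengths of runs the process spends above the crossing level."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)   # innovation s.d. for unit variance
    x, run, durations = 0.0, 0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, sd)
        if x > level:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    return durations

durs = overshoot_durations(rho=0.9, level=1.0)
```

The empirical distribution of `durs` is the quantity whose parameters the report estimates by the method of moments.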

  17. Receiver function HV ratio: a new measurement for reducing non-uniqueness of receiver function waveform inversion

    NASA Astrophysics Data System (ADS)

    Chong, Jiajun; Chu, Risheng; Ni, Sidao; Meng, Qingjun; Guo, Aizhi

    2018-02-01

    It is known that a receiver function places relatively weak constraints on absolute seismic wave velocity, and joint inversion of the receiver function with surface wave dispersion has therefore been widely applied to reduce the trade-off of velocity with interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear-wave velocity. In this study, we propose to measure the receiver function HV ratio, which takes advantage of the amplitude information of the receiver function to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear-wave velocity in the depth range it samples, and can help to reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and real data application proved the feasibility of the joint inversion.

  18. Droxidopa and Reduced Falls in a Trial of Parkinson Disease Patients With Neurogenic Orthostatic Hypotension.

    PubMed

    Hauser, Robert A; Heritier, Stephane; Rowse, Gerald J; Hewitt, L Arthur; Isaacson, Stuart H

    2016-01-01

    Droxidopa is a prodrug of norepinephrine indicated for the treatment of orthostatic dizziness, lightheadedness, or the "feeling that you are about to black out" in adult patients with symptomatic neurogenic orthostatic hypotension caused by primary autonomic failure including Parkinson disease (PD). The objective of this study was to compare fall rates in PD patients with symptomatic neurogenic orthostatic hypotension randomized to droxidopa or placebo. Study NOH306 was a 10-week, phase 3, randomized, placebo-controlled, double-blind trial of droxidopa in PD patients with symptomatic neurogenic orthostatic hypotension that included assessments of falls as a key secondary end point. In this report, the principal analysis consisted of a comparison of the rate of patient-reported falls from randomization to end of study in droxidopa versus placebo groups. A total of 225 patients were randomized; 222 patients were included in the safety analyses, and 197 patients provided efficacy data and were included in the falls analyses. The 92 droxidopa patients reported 308 falls, and the 105 placebo patients reported 908 falls. In the droxidopa group, the fall rate was 0.4 falls per patient-week; in the placebo group, the rate was 1.05 falls per patient-week (prespecified Wilcoxon rank sum P = 0.704; post hoc Poisson-inverse Gaussian test P = 0.014), yielding a relative risk reduction of 77% using the Poisson-inverse Gaussian model. Fall-related injuries occurred in 16.7% of droxidopa-treated patients and 26.9% of placebo-treated patients. Treatment with droxidopa appears to reduce falls in PD patients with symptomatic neurogenic orthostatic hypotension, but this finding must be confirmed.

  19. Droxidopa and Reduced Falls in a Trial of Parkinson Disease Patients With Neurogenic Orthostatic Hypotension

    PubMed Central

    Hauser, Robert A.; Heritier, Stephane; Rowse, Gerald J.; Hewitt, L. Arthur; Isaacson, Stuart H.

    2016-01-01

    Objectives Droxidopa is a prodrug of norepinephrine indicated for the treatment of orthostatic dizziness, lightheadedness, or the “feeling that you are about to black out” in adult patients with symptomatic neurogenic orthostatic hypotension caused by primary autonomic failure including Parkinson disease (PD). The objective of this study was to compare fall rates in PD patients with symptomatic neurogenic orthostatic hypotension randomized to droxidopa or placebo. Methods Study NOH306 was a 10-week, phase 3, randomized, placebo-controlled, double-blind trial of droxidopa in PD patients with symptomatic neurogenic orthostatic hypotension that included assessments of falls as a key secondary end point. In this report, the principal analysis consisted of a comparison of the rate of patient-reported falls from randomization to end of study in droxidopa versus placebo groups. Results A total of 225 patients were randomized; 222 patients were included in the safety analyses, and 197 patients provided efficacy data and were included in the falls analyses. The 92 droxidopa patients reported 308 falls, and the 105 placebo patients reported 908 falls. In the droxidopa group, the fall rate was 0.4 falls per patient-week; in the placebo group, the rate was 1.05 falls per patient-week (prespecified Wilcoxon rank sum P = 0.704; post hoc Poisson-inverse Gaussian test P = 0.014), yielding a relative risk reduction of 77% using the Poisson-inverse Gaussian model. Fall-related injuries occurred in 16.7% of droxidopa-treated patients and 26.9% of placebo-treated patients. Conclusions Treatment with droxidopa appears to reduce falls in PD patients with symptomatic neurogenic orthostatic hypotension, but this finding must be confirmed. PMID:27332626

  20. A Model-Based Evaluation of the Inverse Gaussian Transit-Time Distribution Method for Inferring Anthropogenic Carbon Storage in the Ocean

    NASA Astrophysics Data System (ADS)

    He, Yan-Chun; Tjiputra, Jerry; Langehaug, Helene R.; Jeansson, Emil; Gao, Yongqi; Schwinger, Jörg; Olsen, Are

    2018-03-01

    The Inverse Gaussian approximation of transit time distribution method (IG-TTD) is widely used to infer the anthropogenic carbon (Cant) concentration in the ocean from measurements of transient tracers such as chlorofluorocarbons (CFCs) and sulfur hexafluoride (SF6). Its accuracy relies on the validity of several assumptions, notably (i) a steady state ocean circulation, (ii) a prescribed age tracer saturation history, e.g., a constant 100% saturation, (iii) a prescribed constant degree of mixing in the ocean, (iv) a constant surface ocean air-sea CO2 disequilibrium with time, and (v) that preformed alkalinity can be sufficiently estimated by salinity or salinity and temperature. Here, these assumptions are evaluated using simulated "model-truth" of Cant. The results give the IG-TTD method a range of uncertainty from 7.8% to 13.6% (11.4 Pg C to 19.8 Pg C) due to the above assumptions, which is about half of the uncertainty derived in previous model studies. Assumptions (ii), (iv), and (iii) are the three largest sources of uncertainty, accounting for 5.5%, 3.8%, and 3.0%, respectively, while assumptions (i) and (v) only contribute about 0.6% and 0.7%. Regionally, the Southern Ocean contributes the largest uncertainty, of 7.8%, while the North Atlantic contributes about 1.3%. Our findings demonstrate that the spatial dependency of Δ/Γ and temporal changes in tracer saturation and air-sea CO2 disequilibrium have a strong compensating effect on the estimated Cant. The values of these parameters should be quantified to reduce the uncertainty of IG-TTD; this is increasingly important under a changing ocean climate.
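    The inverse Gaussian transit-time distribution itself has a simple closed form in the mean age Γ and width Δ. A minimal sketch in the form common in the tracer literature; the parameter values in the comments are illustrative, with Δ/Γ = 1 a frequently assumed ratio:

```python
import math

def ig_ttd(t, gamma, delta):
    """Inverse Gaussian transit-time distribution with mean age gamma and
    width delta:
    G(t) = sqrt(gamma**3 / (4*pi*delta**2*t**3))
           * exp(-gamma*(t - gamma)**2 / (4*delta**2*t)).
    It integrates to one over t > 0 and has mean gamma; for
    delta/gamma = 1 it is broad and skewed toward old waters."""
    return (math.sqrt(gamma**3 / (4.0 * math.pi * delta**2 * t**3))
            * math.exp(-gamma * (t - gamma) ** 2 / (4.0 * delta**2 * t)))
```

In the IG-TTD method this distribution is convolved with a tracer's atmospheric history to match an observed tracer concentration, which fixes Γ (given an assumed Δ/Γ) and hence the inferred Cant.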
