Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.
1987-09-18
breakdown point and influence function. The proposed sequence of predictive encoders attains a strictly positive breakdown point and a uniformly bounded influence function, at the expense of increased mean difference-squared distortion and differential entropy at the Gaussian nominal source.
INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE
INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
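The per-puff contribution described above can be sketched with the standard Gaussian puff kernel. This is the generic textbook form, not INPUFF's exact implementation; the function name, argument layout and the ground-reflection image term are illustrative assumptions.

```python
import math

def puff_concentration(q_kg, dx, dy, z_receptor, h_puff, sx, sy, sz):
    """Concentration contribution (mass / m^3) of one Gaussian puff at a receptor.

    q_kg: pollutant mass carried by the puff; (dx, dy): horizontal offsets of the
    receptor from the puff centre (m); z_receptor: receptor height above ground;
    h_puff: puff centre height; sx, sy, sz: dispersion sigmas (m).
    A ground-reflection image term is included in the vertical factor.
    """
    norm = q_kg / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    horiz = math.exp(-dx ** 2 / (2.0 * sx ** 2)) * math.exp(-dy ** 2 / (2.0 * sy ** 2))
    vert = (math.exp(-(z_receptor - h_puff) ** 2 / (2.0 * sz ** 2))
            + math.exp(-(z_receptor + h_puff) ** 2 / (2.0 * sz ** 2)))
    return norm * horiz * vert

# The total field at a receptor is then the sum of this kernel over all
# puffs at every time step, as the abstract describes.
```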
Full-wave generalizations of the fundamental Gaussian beam.
Seshadri, S R
2009-12-01
The basic full wave corresponding to the fundamental Gaussian beam was discovered for the outwardly propagating wave in a half-space by the introduction of a source in the complex space. There is a class of extended full waves all of which reduce to the same fundamental Gaussian beam in the appropriate limit. For the extended full Gaussian waves that include the basic full Gaussian wave as a special case, the sources are in the complex space on different planes transverse to the propagation direction. The sources are cylindrically symmetric Gaussian distributions centered at the origin of the transverse planes, the axis of symmetry being the propagation direction. For the special case of the basic full Gaussian wave, the source is a point source. The radiation intensity of the extended full Gaussian waves is determined and their characteristics are discussed and compared with those of the fundamental Gaussian beam. The extended full Gaussian waves are also obtained for the oppositely propagating outwardly directed waves in the second half-space. The radiation intensity distributions in the two half-spaces have reflection symmetry about the midplane. The radiation intensity distributions of the various extended full Gaussian waves are not significantly different. The power carried by the extended full Gaussian waves is evaluated and compared with that of the fundamental Gaussian beam.
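For reference, the fundamental Gaussian beam that all of these full waves reduce to obeys the standard paraxial relations. A minimal sketch of those textbook relations (not of the full-wave generalization itself; function names are illustrative):

```python
import math

def rayleigh_range(w0, wavelength):
    """Rayleigh range z_R = pi * w0^2 / lambda of a fundamental Gaussian beam."""
    return math.pi * w0 ** 2 / wavelength

def beam_width(z, w0, wavelength):
    """1/e^2 beam radius w(z) = w0 * sqrt(1 + (z / z_R)^2)."""
    zr = rayleigh_range(w0, wavelength)
    return w0 * math.sqrt(1.0 + (z / zr) ** 2)

def on_axis_intensity(z, w0, wavelength, i0=1.0):
    # Power conservation: the peak intensity falls as (w0 / w(z))^2
    return i0 * (w0 / beam_width(z, w0, wavelength)) ** 2
```

At one Rayleigh range the width has grown by a factor of sqrt(2) and the on-axis intensity has halved, which is the usual sanity check on these formulas.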
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources for combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
Launching and controlling Gaussian beams from point sources via planar transformation media
NASA Astrophysics Data System (ADS)
Odabasi, Hayrettin; Sainath, Kamalesh; Teixeira, Fernando L.
2018-02-01
Based on operations prescribed under the paradigm of complex transformation optics (CTO) [F. Teixeira and W. Chew, J. Electromagn. Waves Appl. 13, 665 (1999), 10.1163/156939399X01104; F. L. Teixeira and W. C. Chew, Int. J. Numer. Model. 13, 441 (2000), 10.1002/1099-1204(200009/10)13:5%3C441::AID-JNM376%3E3.0.CO;2-J; H. Odabasi, F. L. Teixeira, and W. C. Chew, J. Opt. Soc. Am. B 28, 1317 (2011), 10.1364/JOSAB.28.001317; B.-I. Popa and S. A. Cummer, Phys. Rev. A 84, 063837 (2011), 10.1103/PhysRevA.84.063837], it was recently shown in [G. Castaldi, S. Savoia, V. Galdi, A. Alù, and N. Engheta, Phys. Rev. Lett. 110, 173901 (2013), 10.1103/PhysRevLett.110.173901] that a complex source point (CSP) can be mimicked by parity-time (PT) transformation media. Such a coordinate transformation has a mirror symmetry for the imaginary part, and results in a balanced loss/gain metamaterial slab. A CSP produces a Gaussian beam and, consequently, a point source placed at the center of such a metamaterial slab produces a Gaussian beam propagating away from the slab. Here, we extend the CTO analysis to nonsymmetric complex coordinate transformations as put forth in [S. Savoia, G. Castaldi, and V. Galdi, J. Opt. 18, 044027 (2016), 10.1088/2040-8978/18/4/044027] and verify that, by using simply a (homogeneous) doubly anisotropic gain-media metamaterial slab, one can still mimic a CSP and produce a Gaussian beam. In addition, we show that Gaussian-like beams can be produced by point sources placed outside the slab as well. By making use of the extra degrees of freedom (the real and imaginary parts of the coordinate transformation) provided by CTO, the near-zero requirement on the real part of the resulting constitutive parameters can be relaxed to facilitate potential realization of Gaussian-like beams. We illustrate how beam properties such as peak amplitude and waist location can be controlled by a proper choice of (complex-valued) CTO Jacobian elements.
In particular, the beam waist location may be moved bidirectionally by allowing for negative entries in the Jacobian (equivalent to inducing negative refraction effects). These results are then interpreted in light of the ensuing CSP location.
Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.
Tinkelman, Igor; Melamed, Timor
2005-06-01
The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in an anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2015-03-01
Aperture-averaged scintillation requires the evaluation of a rather complicated irradiance covariance function. Here we develop a much simpler numerical method based on our earlier introduced semi-analytic approach. Using this method, we calculate the aperture-averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped and dark hollow beams. For comparison, the principles of equal source beam power and normalizing the aperture-averaged scintillation with respect to received power are applied. Our results indicate that for fully coherent beams, upon adjusting the aperture sizes to capture 10 and 20% of the equal source power, the Gaussian beam needs the largest aperture opening, yielding the lowest aperture-averaged scintillation, whilst the opposite occurs for annular Gaussian and dark hollow beams. When assessed on the basis of received-power-normalized aperture-averaged scintillation, fixed propagation distance and aperture size, annular Gaussian and dark hollow beams seem to have the lowest scintillation. Just like the case of point-like scintillation, partially coherent beams will offer less aperture-averaged scintillation in comparison to fully coherent beams. But this performance improvement relies on larger aperture openings. Upon normalizing the aperture-averaged scintillation with respect to received power, fully coherent beams become more advantageous than partially coherent ones.
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
NASA Astrophysics Data System (ADS)
Wang, I. T.
A general method for determining the effective transport wind speed, ū, in the Gaussian plume equation is discussed. Physical arguments are given for using the generalized ū instead of the often adopted release-level wind speed with the plume diffusion equation. Simple analytical expressions for ū applicable to low-level point releases and a wide range of atmospheric conditions are developed. A non-linear plume kinematic equation is derived using these expressions. Crosswind-integrated SF6 concentration data from the 1983 PNL tracer experiment are used to evaluate the proposed analytical procedures along with the usual approach of using the release-level wind speed. Results of the evaluation are briefly discussed.
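The role of ū can be sketched with the crosswind-integrated Gaussian plume formula. The abstract does not reproduce the paper's actual analytical expressions for ū, so the plume-weighted average below is only a plausible illustration of a "generalized" transport speed; `u_effective` and `cwic` are hypothetical names.

```python
import math

def u_effective(u_profile, sz, h, z_top=200.0, n=2000):
    """Plume-weighted mean transport speed: u(z) averaged over the Gaussian
    vertical concentration shape (with a ground-reflection image term)."""
    def shape(z):
        return (math.exp(-(z - h) ** 2 / (2.0 * sz ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sz ** 2)))
    dz = z_top / n
    num = sum(u_profile(i * dz) * shape(i * dz) for i in range(n + 1)) * dz
    den = sum(shape(i * dz) for i in range(n + 1)) * dz
    return num / den

def cwic(q, u_bar, sz, z, h):
    """Crosswind-integrated concentration of a continuous point release,
    using the effective transport speed u_bar in place of a release-level wind."""
    return (q / (math.sqrt(2.0 * math.pi) * sz * u_bar)) * (
        math.exp(-(z - h) ** 2 / (2.0 * sz ** 2))
        + math.exp(-(z + h) ** 2 / (2.0 * sz ** 2)))
```

For a height-independent wind profile the weighted average collapses to the release-level speed, which is why the distinction only matters once the plume samples a sheared profile.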
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
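The kernel-density ingredient of the approach above can be illustrated in one dimension. This is only the generic Gaussian-kernel KDE with Silverman's rule-of-thumb bandwidth, not the paper's multivariate beamformer formulation; the point is that, unlike a second-order (Gaussian) fit, the estimator recovers a non-Gaussian (here bimodal) pdf.

```python
import math, random, statistics

def gaussian_kde(samples, x):
    """1-D kernel density estimate with a Gaussian kernel.

    Bandwidth follows Silverman's rule of thumb; this rule tends to
    oversmooth strongly multimodal data, but the modes still survive here.
    """
    n = len(samples)
    h = 1.06 * statistics.pstdev(samples) * n ** (-0.2)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# Bimodal test data: a single-Gaussian fit would put its peak at 0,
# midway between the true modes at -3 and +3.
rng = random.Random(0)
data = ([rng.gauss(-3.0, 1.0) for _ in range(1000)]
        + [rng.gauss(3.0, 1.0) for _ in range(1000)])
```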
NASA Astrophysics Data System (ADS)
Červený, Vlastislav; Pšenčík, Ivan
2017-08-01
Integral superposition of Gaussian beams is a useful generalization of the standard ray theory. It removes some of the deficiencies of the ray theory, such as its failure to properly describe the behaviour of waves in caustic regions. It also leads to a more efficient computation of seismic wavefields since it does not require the time-consuming two-point ray tracing. We present the formula for a high-frequency elementary Green function expressed in terms of the integral superposition of Gaussian beams for inhomogeneous, isotropic or anisotropic, layered structures, based on the dynamic ray tracing (DRT) in Cartesian coordinates. For the evaluation of the superposition formula, it is sufficient to solve the DRT in Cartesian coordinates just for the point-source initial conditions. Moreover, instead of seeking 3 × 3 paraxial matrices in Cartesian coordinates, it is sufficient to seek just 3 × 2 parts of these matrices. The presented formulae can be used for the computation of the elementary Green function corresponding to an arbitrary direct, multiply reflected/transmitted, unconverted or converted, independently propagating elementary wave of any of the three modes, P, S1 and S2. Receivers distributed along or in the vicinity of a target surface may be situated at an arbitrary part of the medium, including ray-theory shadow regions. The elementary Green function formula can be used as a basis for the computation of wavefields generated by various types of point sources (explosive, moment tensor).
Classification and Discrimination of Sources with Time-Varying Frequency and Spatial Spectra
2007-04-01
sensitivity enhancement by impulse noise excision," in Proc. IEEE Nat. Radar Conf., pp. 252-256, 1997. [7] M. Turley, "Impulse noise rejection in HF...specific time-frequency points or regions, where one or more signals reside, enhances signal-to-noise ratio (SNR) and allows source discrimination and...source separation. The proposed algorithm is developed assuming deterministic signals with additive white complex Gaussian noise. 6. Estimation of FM
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
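The quoted 90% error radius (6.1 arc minutes) is much smaller than the 90% containment radius of a single-photon Gaussian point spread function with sigma = 12 arc minutes, because the source-position estimate averages many detected events. A sketch of the standard 2-D isotropic Gaussian containment relation (illustrative functions, not the experiment's analysis code):

```python
import math

def containment_radius(sigma, frac):
    """Radius enclosing a fraction `frac` of a 2-D isotropic Gaussian PSF,
    from P(r <= R) = 1 - exp(-R^2 / (2 sigma^2))."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - frac))

def centroid_sigma(sigma_psf, n_events):
    """Standard deviation of the centroid position estimate from n_events photons."""
    return sigma_psf / math.sqrt(n_events)
```

With sigma = 12 arc minutes, the single-photon 90% containment radius is about 25.8 arc minutes, while a centroid built from ~100 or more events can reach the few-arc-minute level reported above.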
In-Situ Wave Observations in the High Resolution Air-Sea Interaction DRI
2008-09-30
Program (CDIP) Harvest buoy located in 204 m depth off Point Conception. The initial sea surface is assumed Gaussian and homogeneous, with spectral...of simulated sea surface elevation. Right panels: corresponding observed frequency-directional wave spectra (source: CDIP). Upper panels: Typical
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. 
The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
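The integrated-GP construction can be illustrated numerically: place a covariance kernel on the rate process and obtain the sea-level covariance by integrating it twice. This is a crude quadrature sketch under an assumed squared-exponential rate kernel; the paper's EIV machinery, GIA correction and hyperparameter inference are not modeled, and `k_rate`/`k_integrated` are illustrative names.

```python
import math

def k_rate(u, v, ell=1.0, var=1.0):
    """Squared-exponential covariance assumed for the rate process f(t)."""
    return var * math.exp(-0.5 * ((u - v) / ell) ** 2)

def k_integrated(s, t, n=200):
    """Cov(F(s), F(t)) for F(t) = integral of f(u) du from 0 to t,
    approximated by a double trapezoid rule over [0, s] x [0, t]."""
    hs, ht = s / n, t / n
    total = 0.0
    for i in range(n + 1):
        wi = 0.5 if i in (0, n) else 1.0
        for j in range(n + 1):
            wj = 0.5 if j in (0, n) else 1.0
            total += wi * wj * k_rate(i * hs, j * ht)
    return total * hs * ht
```

Two properties worth checking: the integrated kernel is symmetric, and the marginal variance grows with time, reflecting accumulated rate uncertainty — the model then reads rates off as the derivative of the fitted integrated process.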
Effect of beam types on the scintillations: a review
NASA Astrophysics Data System (ADS)
Baykal, Yahya; Eyyuboglu, Halil T.; Cai, Yangjian
2009-02-01
When different incidences are launched in atmospheric turbulence, it is known that the intensity fluctuations exhibit different characteristics. In this paper we review our work on the evaluation of the scintillation index of general beam types when such optical beams propagate in horizontal atmospheric links in the weak fluctuations regime. Variation of scintillation indices versus the source and medium parameters is examined for flat-topped-Gaussian, cosh-Gaussian, cos-Gaussian, annular, elliptical Gaussian, circular (i.e., stigmatic) and elliptical (i.e., astigmatic) dark hollow, lowest order Bessel-Gaussian and laser array beams. For the flat-topped-Gaussian beam, scintillation is larger than the single Gaussian beam scintillation when the source sizes are much less than the Fresnel zone, but becomes smaller for source sizes much larger than the Fresnel zone. The cosh-Gaussian beam has lower on-axis scintillations at smaller source sizes and longer propagation distances as compared to Gaussian beams, where focusing imposes more reduction on the cosh-Gaussian beam scintillations than on those of the Gaussian beam. Intensity fluctuations of a cos-Gaussian beam show favorable behaviour against a Gaussian beam at lower propagation lengths. At longer propagation lengths, the annular beam becomes advantageous. In focused cases, the scintillation index of the annular beam is lower than the scintillation index of Gaussian and cos-Gaussian beams starting at earlier propagation distances. Cos-Gaussian beams are advantageous at relatively large source sizes while the reverse is valid for annular beams. Scintillations of a stigmatic or astigmatic dark hollow beam can be smaller when compared to stigmatic or astigmatic Gaussian, annular and flat-topped beams under conditions that are closely related to the beam parameters.
Intensity fluctuation of an elliptical Gaussian beam can also be smaller than that of a circular Gaussian beam depending on the propagation length and the ratio of the beam waist size along the long axis to that along the short axis (i.e., astigmatism). Comparing against the fundamental Gaussian beam on an equal source size and equal power basis, it is observed that the scintillation index of the lowest order Bessel-Gaussian beam is lower at large source sizes and large width parameters. However, for excessively large width parameters and beyond certain propagation lengths, the advantage of the lowest order Bessel-Gaussian beam seems to be lost. Compared to the Gaussian beam, the laser array beam exhibits lower scintillation at long propagation ranges and at some midrange radial displacement parameters. When compared among themselves, laser array beams tend to have reduced scintillations for larger numbers of beamlets, longer wavelengths, midrange radial displacement parameters, intermediate Gaussian source sizes, larger inner scales and smaller outer scales of turbulence. The number of beamlets used does not seem to be so effective in this improvement of the scintillations.
Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao
2014-10-06
Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, linearizing then the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated especially regarding the evaluation of two-time correlators, which are of major importance to compute observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection to the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.
A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.
Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian
2017-05-01
This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak location is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients of the Gaussian peak point. The proposed method is tested via simulated and measured spectra from an energy dispersive X-ray spectrometer, and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from the background information, and also effectively distinguish overlapping peaks in EDXRF spectrum.
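The background insensitivity claimed above follows from the Mexican hat wavelet's vanishing zeroth and first moments: a linear background contributes essentially nothing to the wavelet coefficient at the peak. A numerical sketch of this property (direct quadrature of the CWT; the paper's exact parameter-recovery formulas are not reproduced, and the peak/background functions are made up for illustration):

```python
import math

def mexican_hat(t):
    # Second derivative of a Gaussian; zero mean and zero first moment
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

def wavelet_coeff(f, b, a, lo=-50.0, hi=50.0, n=4000):
    """CWT of f at translation b and scale a by direct quadrature,
    using the (1/a) * integral f(x) * psi((x - b) / a) dx convention."""
    h = (hi - lo) / n
    s = sum(f(lo + k * h) * mexican_hat((lo + k * h - b) / a)
            for k in range(n + 1))
    return s * h / a

def peak(x):
    # Gaussian spectral peak: amplitude 10, centre 5, sigma 2
    return 10.0 * math.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)

def with_background(x):
    # Same peak on a linear background, which the wavelet suppresses
    return peak(x) + 0.3 * x + 2.0
```

With the scale matched to the peak width, the coefficient at the peak point is unchanged (to quadrature accuracy) by the added linear background, which is what lets the method read peak intensity without a separate background fit.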
Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response
NASA Astrophysics Data System (ADS)
Floerchinger, Stefan; Wiedemann, Urs Achim
2014-08-01
An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1/N^(n-1). This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
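The 1/N^(n-1) scaling has a simple cumulant bookkeeping behind it: cumulants add over independent sources and pick up a factor c^n under rescaling by c, so the n-th cumulant of the normalized density (1/N) * sum of N sources is N * kappa_n / N^n = kappa_n / N^(n-1). A sketch with an exponential source weight chosen purely for concreteness (the collision geometry and impact-parameter effects of the paper are not modeled):

```python
from math import factorial

def kappa_exponential(n, rate=1.0):
    """n-th cumulant of one Exponential(rate) source weight: (n-1)! / rate^n."""
    return factorial(n - 1) / rate ** n

def kappa_normalized_sum(n, n_sources, rate=1.0):
    """n-th cumulant of (1/N) * sum of N independent identical sources.

    Additivity gives N * kappa_n for the sum; rescaling by 1/N multiplies
    the n-th cumulant by N**(-n), leaving kappa_n / N**(n - 1).
    """
    return n_sources * kappa_exponential(n, rate) / n_sources ** n
```

Doubling N therefore suppresses the connected n-point function by exactly 2^(n-1), which is the scaling the abstract quotes.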
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations and these are found to exhibit an exponential distribution.
An algorithm for separation of mixed sparse and Gaussian sources
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition. PMID:28414814
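The core distinction MIPReSt relies on — Gaussian sources being invisible to nongaussianity-seeking ICA — can be illustrated with excess kurtosis, a standard nongaussianity measure. This is only the measurement ingredient, not the MIPReSt reproducibility machinery; the Laplace source stands in for a generic sparse signal.

```python
import math, random

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3: ~0 for Gaussian data,
    positive for sparse (heavy-tailed) data such as speech."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

def laplace_sample(rng):
    """Unit Laplace draw by inverse-CDF sampling (a simple sparse source)."""
    u = rng.random() - 0.5
    if abs(u) == 0.5:  # guard the measure-zero log(0) edge case
        u = 0.0
    return math.copysign(-math.log(1.0 - 2.0 * abs(u)), u)

rng = random.Random(7)
sparse = [laplace_sample(rng) for _ in range(20000)]
gauss = [rng.gauss(0.0, 1.0) for _ in range(20000)]
```

A decomposition driven by such a statistic can rank the sparse direction but has nothing to grip on a Gaussian one, which is why the Gaussian subspace must be handled by PCA instead.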
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors are capable of accurately demodulating the SPAD-based PAM signals.
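The variance-stabilization step can be illustrated directly. The sketch below applies the standard generalized Anscombe transform to simulated Poisson-Gaussian counts; the rate, noise level, and unit gain are hypothetical, and the paper's detector chain is not modelled.

```python
import numpy as np

def gat(x, sigma, gain=1.0, mu=0.0):
    """Generalized Anscombe transform: maps Poisson-Gaussian data to
    approximately unit-variance Gaussian data (standard textbook form,
    not code from the paper)."""
    arg = gain * x + 0.375 * gain**2 + sigma**2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# Simulate SPAD-like data: Poisson photon counts plus Gaussian thermal noise.
rng = np.random.default_rng(0)
lam, sigma = 20.0, 2.0
x = rng.poisson(lam, 200_000) + rng.normal(0.0, sigma, 200_000)
y = gat(x, sigma)
# After the transform, the sample variance should be close to 1
# regardless of the signal level lam.
```

This is why the transformed channel can be treated as AWGN and handed to a hard-decision detector, as the abstract describes.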
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented for specific data from a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
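The propagation-of-distributions procedure can be sketched for a toy two-resistance model. The resistance values, uncertainties, and correlation below are invented for illustration (not the paper's SPRT data); the sketch draws correlated Gaussian inputs, propagates them through a resistance ratio, and checks the result against the first-order law of propagation of uncertainty.

```python
import numpy as np

# Monte Carlo propagation of correlated Gaussian inputs through a
# measurement model (illustrative values, not the paper's SPRT data).
rng = np.random.default_rng(0)

# Resistance ratio W = R(t) / R_tp with correlated resistances.
mean = np.array([25.000, 27.500])            # ohm: R_tp, R_t (hypothetical)
u = np.array([50e-6, 60e-6])                 # standard uncertainties, ohm
rho = 0.8                                    # assumed correlation
cov = np.array([[u[0]**2,          rho * u[0] * u[1]],
                [rho * u[0] * u[1], u[1]**2]])

draws = rng.multivariate_normal(mean, cov, size=1_000_000)
W = draws[:, 1] / draws[:, 0]
u_mc = W.std()                               # Monte Carlo uncertainty of W

# Compare with the first-order law of propagation of uncertainty (GUM):
c = np.array([-mean[1] / mean[0]**2, 1.0 / mean[0]])   # sensitivities
u_lpu = np.sqrt(c @ cov @ c)
# For this nearly linear model the two uncertainties agree closely,
# which is the kind of validation the abstract describes.
```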
Generalized expression for optical source fields
NASA Astrophysics Data System (ADS)
Kamacıoğlu, Canan; Baykal, Yahya
2012-09-01
A generalized optical beam expression is developed that represents the majority of existing optical source fields, such as Bessel, Laguerre-Gaussian, dark hollow, bottle, super-Gaussian, Lorentz, super-Lorentz, flat-topped, Hermite-sinusoidal-Gaussian, sinusoidal-Gaussian, annular, Gauss-Legendre and vortex beams, as well as their higher-order modes and their truncated, elegant and elliptical versions. Source intensity profiles derived from the generalized optical source beam fields are checked to match the intensity profiles of many individual known beam types. Source intensities for several interesting beam combinations are presented. Our generalized optical source beam field expression can be used to examine both the source characteristics and the propagation properties of many different optical beams in a single formulation.
The effect of beamwidth on the analysis of electron-beam-induced current line scans
NASA Astrophysics Data System (ADS)
Luke, Keung L.
1995-04-01
A real electron beam has finite width, which has been almost universally ignored in electron-beam-induced current (EBIC) theories. Obvious examples are point-source-based EBIC analyses, which neglect both the finite volume of electron-hole carriers generated by an energetic electron beam of negligible width and the beamwidth when it is no longer negligible. Gaussian source-based analyses are more realistic, but the beamwidth has not been included, partly because the generation volume is usually much larger than the beamwidth; this, however, is not always the case. In this article Donolato's Gaussian source-based EBIC equation is generalized to include the beamwidth of a Gaussian beam. This generalized equation is then used to study three problems: (1) the effect of beamwidth on EBIC line scans and on effective diffusion lengths, with the results applied to the analysis of the EBIC data of Dixon, Williams, Das, and Webb; (2) unresolved questions raised by others concerning the applicability of the Watanabe-Actor-Gatos method to real EBIC data for evaluating surface recombination velocity; (3) the effect of beamwidth on the methods proposed recently by the author to determine the surface recombination velocity and to discriminate between the Everhart-Hoff and Kanaya-Okayama ranges as to which is the correct one to use for analyzing EBIC line scans.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
  3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
  3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
  3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
  3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
  4.1. The case of a non-degenerate minimum point ([137], I)
  4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
  5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
  5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
  5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
  5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
  6.1. General situations
  6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
  6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
  7.1. Homogeneous fields and fields with constant dispersion
  7.2. Finitely many maximum points of dispersion
  7.3. Manifold of maximum points of dispersion
  7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
  8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
  8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
  8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
  8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
  8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
Bibliography
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation, in the setting where the underlying computer models are extremely expensive, so that determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
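A minimal Gaussian-process surrogate for a failure boundary can be sketched as follows, assuming a one-dimensional toy limit-state function and a plain RBF kernel; the paper's experimental-design criterion and batch point selection are not reproduced here.

```python
import numpy as np

# Minimal GP surrogate for failure probability estimation (illustrative
# sketch only; not the paper's design criterion or solver).
def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(xtr, ytr, xte, noise=1e-6):
    """GP posterior mean and variance at test points (unit prior variance)."""
    K = rbf(xtr, xtr) + noise * np.eye(len(xtr))
    Ks = rbf(xtr, xte)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.maximum(var, 0.0)

# "Expensive" limit-state function g(x); failure when g(x) < 0.
g = lambda x: 0.65 - x**2          # toy model: failure for |x| > sqrt(0.65)

xtr = np.linspace(-1.0, 1.0, 12)   # a small training design
xte = np.linspace(-1.0, 1.0, 2001)
mean, var = gp_posterior(xtr, g(xtr), xte)

# Estimate P(failure) under x ~ U(-1, 1) from the cheap surrogate mean
# instead of the expensive model; true value is 1 - sqrt(0.65).
p_fail = np.mean(mean < 0.0)
```

The surrogate replaces further calls to the expensive model; in the paper, new training points would then be chosen where they most reduce uncertainty about the failure boundary.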
Modeling of dispersion near roadways based on the vehicle-induced turbulence concept
NASA Astrophysics Data System (ADS)
Sahlodin, Ali M.; Sotudeh-Gharebagh, Rahmat; Zhu, Yifang
A mathematical model is developed for dispersion near roadways by incorporating vehicle-induced turbulence (VIT) into Gaussian dispersion modeling using computational fluid dynamics (CFD). The model is based on the Gaussian plume equation, in which the roadway is regarded as a series of point sources. The Gaussian dispersion parameters are modified by simulating the roadway with CFD in order to evaluate turbulent kinetic energy (TKE) as a measure of VIT. The model was evaluated against experimental carbon monoxide concentrations downwind of two major freeways reported in the literature. Good agreement was achieved between the model results and the literature data. A significant difference was observed between the model results with and without VIT, and the difference is especially large for data very close to the freeways. This model, after evaluation with additional data, may be used as a framework for predicting dispersion and deposition from any roadway under different traffic (vehicle type and speed) conditions.
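Treating the roadway as a series of point sources can be sketched directly from the Gaussian plume equation. The dispersion curves, emission rate, and geometry below are hypothetical placeholders, and the CFD-modified dispersion parameters described above are not included.

```python
import numpy as np

def plume_point(Q, u, x, y, z, H, sy, sz):
    """Gaussian plume concentration from one point source, with ground
    reflection. sy, sz are callables giving dispersion vs downwind x."""
    sig_y, sig_z = sy(x), sz(x)
    return (Q / (2 * np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * (np.exp(-(z - H)**2 / (2 * sig_z**2))
               + np.exp(-(z + H)**2 / (2 * sig_z**2))))

# Roadway modeled as point sources spaced along the y-axis (all numbers
# are hypothetical; no vehicle-induced turbulence correction here).
sy = lambda x: 0.32 * x**0.78        # illustrative power-law dispersion
sz = lambda x: 0.24 * x**0.71
Q_per_source = 0.005                  # g/s per roadway segment
u, H = 2.0, 0.5                       # wind speed (m/s), source height (m)

y_sources = np.arange(-200.0, 201.0, 10.0)
receptor = dict(x=50.0, y=0.0, z=1.5)  # 50 m downwind, breathing height
c = sum(plume_point(Q_per_source, u, receptor["x"],
                    receptor["y"] - ys, receptor["z"], H, sy, sz)
        for ys in y_sources)
# c is the summed receptor concentration from all roadway segments.
```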
Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi
2016-07-01
The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and is widely used in practice. To clarify the performance of CD learning, in this paper we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units, and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
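A single CD-1 update for a Gaussian-visible/Bernoulli-hidden RBM (unit-variance visibles) looks like the following sketch of the standard rule, with made-up layer sizes and learning rate; it is not the paper's analysis code.

```python
import numpy as np

# One CD-1 update for a Gaussian-visible / Bernoulli-hidden RBM
# (unit-variance visibles; sizes and learning rate are arbitrary).
rng = np.random.default_rng(0)
n_v, n_h, batch = 8, 4, 100
W = 0.01 * rng.normal(size=(n_v, n_h))   # weights
a = np.zeros(n_v)                        # visible biases
b = np.zeros(n_h)                        # hidden biases

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

v0 = rng.normal(size=(batch, n_v))                 # data batch
ph0 = sigmoid(v0 @ W + b)                          # P(h=1 | v0)
h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sampled hidden states
v1 = h0 @ W.T + a + rng.normal(size=(batch, n_v))  # Gaussian reconstruction
ph1 = sigmoid(v1 @ W + b)                          # P(h=1 | v1)

eps = 0.01
dW = eps * (v0.T @ ph0 - v1.T @ ph1) / batch       # CD-1 gradient estimate
W += dW
```

Iterating this update is what the abstract analyzes: its stable points coincide with those of ML learning in the Gaussian-Gaussian case and extract the leading components of the data.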
Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment
NASA Astrophysics Data System (ADS)
Tsang, Mankei
2018-02-01
I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.
NASA Astrophysics Data System (ADS)
Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.
2011-04-01
MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: One in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions and another one in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data exhibiting global coverage but with a rather coarse resolution on the one hand and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007 test flights were performed over two coal-fired powerplants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions as stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of delivering reliable estimates for strong point source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting-type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of which θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ≈ 2.51 times the squared PSF width σ_PSF,39². While this number is shown to be a good choice in many cases, a dynamic formula is given for cases of lower count numbers, which favour larger on-source regions. The recipe used to arrive at this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials, and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
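Equation (17) of Li & Ma (1983), which the on/off detection test above is built on, is short enough to state in code; the count values in the example are hypothetical.

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Eq. (17) of Li & Ma (1983): significance of an on-source excess
    given on/off counts and exposure ratio alpha = t_on / t_off."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

# Hypothetical example: 150 on-source counts against an expected
# background of alpha * 1000 = 100, i.e. an excess of 50 counts.
s = li_ma_significance(n_on=150, n_off=1000, alpha=0.1)
```

Choosing the on-source radius θ changes both the signal fraction entering n_on and the background in the region, which is exactly the trade-off the paper optimizes.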
Gaussian Finite Element Method for Description of Underwater Sound Diffraction
NASA Astrophysics Data System (ADS)
Huang, Dehua
A new method for solving diffraction problems is presented in this dissertation. It is based on the use of Gaussian diffraction theory. The Rayleigh integral is used to prove the core of Gaussian theory: the diffraction field of a Gaussian is described by a Gaussian function. The parabolic approximation used by previous authors is not necessary to this proof. Comparison of the Gaussian beam expansion and Fourier series expansion reveals that the Gaussian expansion is a more general and more powerful technique. The method combines the Gaussian beam superposition technique (Wen and Breazeale, J. Acoust. Soc. Am. 83, 1752-1756 (1988)) and the Finite element solution to the parabolic equation (Huang, J. Acoust. Soc. Am. 84, 1405-1413 (1988)). Computer modeling shows that the new method is capable of solving for the sound field even in an inhomogeneous medium, whether the source is a Gaussian source or a distributed source. It can be used for horizontally layered interfaces or irregular interfaces. Calculated results are compared with experimental results by use of a recently designed and improved Gaussian transducer in a laboratory water tank. In addition, the power of the Gaussian Finite element method is demonstrated by comparing numerical results with experimental results from use of a piston transducer in a water tank.
NASA Astrophysics Data System (ADS)
Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.
2018-05-01
Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the embedded foreground and background point sources appearing in the given fields, especially in the infrared. We report our study of subtracting point sources from Herschel images with Kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the Kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied with much higher resolution than previously, ultimately leading to better foreground subtraction.
A Parametric Study of Fine-scale Turbulence Mixing Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James; Freund, Jonathan B.
2002-01-01
The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and faster decaying at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
NASA Technical Reports Server (NTRS)
Smith, G. L.; Green, R. N.; Young, G. R.
1974-01-01
The NIMBUS-G environmental monitoring satellite has an instrument (a gas correlation spectrometer) onboard for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients, and the source location.
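Because the plume concentration is linear in the source strength, estimating that one parameter from noisy measurements reduces to a least-squares projection. The sketch below uses invented plume parameters and measurement geometry and estimates only the source strength, whereas the paper also estimates wind, diffusion coefficients, and source location.

```python
import numpy as np

# Estimating a point-source strength Q from noisy measurements of a
# Gaussian plume (all plume parameters and geometry are hypothetical).
def plume(x, y, Q, u=3.0, sy=lambda x: 0.2 * x, sz=lambda x: 0.12 * x, H=10.0):
    """Ground-level plume concentration with ground reflection."""
    return (Q / (2 * np.pi * u * sy(x) * sz(x))
            * np.exp(-y**2 / (2 * sy(x)**2))
            * 2 * np.exp(-H**2 / (2 * sz(x)**2)))

rng = np.random.default_rng(0)
Q_true = 40.0
xs = np.repeat([200.0, 400.0, 800.0], 5)            # downwind distances, m
ys = np.tile([-50.0, -20.0, 0.0, 20.0, 50.0], 3)    # crosswind offsets, m
shape = plume(xs, ys, 1.0)      # concentration per unit source strength
meas = Q_true * shape + rng.normal(0.0, 0.02 * (Q_true * shape).max(),
                                   xs.size)          # 2% noise, illustrative

# Concentration is linear in Q, so the least-squares estimate is a
# one-parameter projection of the data onto the unit-strength plume shape:
Q_hat = shape @ meas / (shape @ shape)
```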
A two-point diagnostic for the H II galaxy Hubble diagram
NASA Astrophysics Data System (ADS)
Leaf, Kyle; Melia, Fulvio
2018-03-01
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λcold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
NASA Astrophysics Data System (ADS)
Lee, Hye-In; Pak, Soojong; Lee, Jae-Joon; Mace, Gregory N.; Jaffe, Daniel Thomas
2017-06-01
We developed observation control software for the IGRINS (Immersion Grating Infrared Spectrograph) slit-viewing camera module, which points the astronomical target onto the spectroscopy slit and sends tracking feedback to the telescope control system (TCS). The point spread function (PSF) image does not follow a symmetric Gaussian profile. In addition, bright targets are easily saturated and appear as a donut shape. It is not trivial to define and find the center of the asymmetric PSF, especially when most of the stellar PSF falls inside the slit. We developed a center balancing algorithm (CBA) that derives the expected center position along the slit-width axis by referencing the stray flux ratios on the upper and lower sides of the slit. We compared the accuracy of the CBA with that of a two-dimensional Gaussian fitting (2DGA) through simulations in order to evaluate the center-finding algorithms. These methods were then verified with observational data. In this poster, we present the results of our tests and suggest a new algorithm for centering targets in the slit image of a spectrograph.
Optimal pupil design for confocal microscopy
NASA Astrophysics Data System (ADS)
Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.
2010-02-01
Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler, confocal line-scanning microscope may accelerate the translation of confocal microscopy in clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the full pupil, or a divided pupil, or theta configuration, with half used for transmission and half for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens, which focuses the source to a line. The variable parameter is the fill-factor, h, the ratio of the 1/e² diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point-scanning are 0.90 (full aperture) and 0.66 (half-aperture). For line-scanning, the fill-factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line-source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field-of-view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.
Gaussian Decomposition of Laser Altimeter Waveforms
NASA Technical Reports Server (NTRS)
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components, assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
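The initialization step, estimating component centers and half-widths from consecutive inflection points of a smoothed waveform, can be sketched as follows. The ranking and Levenberg-Marquardt refinement described above are omitted, and the single-Gaussian test pulse is synthetic.

```python
import numpy as np

def inflection_estimates(t, w, smooth=7, floor=0.01):
    """Initial Gaussian parameter guesses from inflection points of a
    smoothed waveform (initialization step only; no refinement)."""
    kern = np.ones(smooth) / smooth
    ws = np.convolve(w, kern, mode="same")        # boxcar-smoothed copy
    d2 = np.gradient(np.gradient(ws, t), t)       # second derivative
    idx = np.where(np.diff(np.sign(d2)) != 0)[0]  # inflection points
    idx = idx[ws[idx] > floor * ws.max()]         # ignore baseline wiggles
    comps = []
    # A Gaussian's two inflection points sit one sigma either side of its
    # peak, so consecutive pairs yield center, sigma and amplitude guesses.
    for i0, i1 in zip(idx[:-1:2], idx[1::2]):
        comps.append((0.5 * (t[i0] + t[i1]),      # center estimate
                      0.5 * (t[i1] - t[i0]),      # half-width (sigma)
                      ws[(i0 + i1) // 2]))        # amplitude estimate
    return comps

# Noiseless single-return test pulse: one component near t = 5, sigma 0.8.
t = np.linspace(0.0, 10.0, 1001)
w = np.exp(-0.5 * ((t - 5.0) / 0.8)**2)
comps = inflection_estimates(t, w)
```

These guesses would then seed the non-negative least-squares and Levenberg-Marquardt stages of the full decomposition.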
Cosine-Gaussian Schell-model sources.
Mei, Zhangrong; Korotkova, Olga
2013-07-15
We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1993-01-01
One of the crucial aspects of density perturbations produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects tend to be non-Gaussian. The three-point correlation function of the temperature anisotropy of the cosmic microwave background radiation (CBR) provides a sensitive test of this aspect of the primordial density field. In this paper, this function is calculated in the general context of various allowed non-Gaussian models. It is shown that the Cosmic Background Explorer and the forthcoming South Pole and balloon CBR anisotropy data may be able to provide a crucial test of the Gaussian nature of the perturbations.
NASA Astrophysics Data System (ADS)
Kelkboom, Emile J. C.; Breebaart, Jeroen; Buhan, Ileana; Veldhuis, Raymond N. J.
2010-04-01
Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from, or binding a key to, a biometric sample. The achieved protection depends on the size of the key and its closeness to being random. In the literature, a large variation in the reported key lengths can be observed at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work we determine the analytical relationship between the system performance and the theoretical maximum key size given a biometric source modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed across all channels and the channels are independent. We also determine the effect of parameters such as the source capacity, the number of enrolment and verification samples, and the operating-point selection on the maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and its convenience for its users.
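A back-of-the-envelope version of the parallel-Gaussian-channel model can illustrate the qualitative effect of sample counts. The effective-SNR formula and all numbers below are illustrative assumptions, not the expressions derived in the paper.

```python
import math

# Sketch of the maximum key size for N independent, identical Gaussian
# channels (illustrative model only; the paper's exact expressions differ
# and all numbers here are made up).
def max_key_bits(n_channels, snr, n_enrol, n_verif):
    """snr = between-class over within-class variance per channel."""
    # Averaging n_enrol enrolment and n_verif verification samples shrinks
    # the within-class (noise) variance seen when templates are compared:
    snr_eff = snr * (n_enrol * n_verif) / (n_enrol + n_verif)
    # Gaussian channel capacity per channel, summed over all channels:
    return n_channels * 0.5 * math.log2(1.0 + snr_eff)

k11 = max_key_bits(n_channels=30, snr=15.0, n_enrol=1, n_verif=1)
k66 = max_key_bits(n_channels=30, snr=15.0, n_enrol=6, n_verif=6)
# More enrolment/verification samples raise the effective SNR and hence
# the extractable key size, matching the qualitative trend in the paper.
```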
NASA Astrophysics Data System (ADS)
Cao, Xiaochao; Fang, Feiyun; Wang, Zhaoying; Lin, Qiang
2017-10-01
We report a study of the dynamical evolution of ultrashort time-domain dark hollow Gaussian (TDHG) pulses beyond the slowly varying envelope approximation in homogeneous plasma. Using the complex-source-point model, an analytical formula is proposed for describing TDHG pulses based on oscillating electric dipoles, which is an exact solution of Maxwell's equations. The numerical simulations show relativistic longitudinal self-compression (RSC) due to the relativistic mass variation of the moving electrons. The influences of the plasma oscillation frequency and the collision effect on the dynamics of TDHG pulses in plasma are considered. Furthermore, we analyze the evolution of the instantaneous energy density of the TDHG pulses both on and off the axis.
Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang
2015-01-14
Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between the point-like sound source and Gaussian beam in ambient air. The lens is constructed by a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying desired sound responses. The experiments operated at audible regime agree well with the theoretical predictions. This compact device could be useful in daily life applications, such as for medical and detection purposes.
Propagation properties of cylindrical sinc Gaussian beam
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.; Bayraktar, Mert
2016-09-01
We investigate the propagation properties of a cylindrical sinc Gaussian beam in turbulent atmosphere. Since an analytic solution is hardly derivable, the study is carried out with the aid of random phase screens. Evolutions of the beam intensity profile, beam size and kurtosis parameter are analysed. It is found that on the source plane the cylindrical sinc Gaussian beam has a dark hollow appearance, with side lobes starting to emerge as the width parameter and Gaussian source size increase. During propagation, beams with small width and Gaussian source size exhibit off-axis behaviour, losing the dark hollow shape and accumulating the intensity asymmetrically on one side, whereas those with large width and Gaussian source size retain the dark hollow appearance even at long propagation distances. It is seen that the beams with large widths expand more in beam size than the ones with small widths; the structure constant values chosen do not seem to alter this situation. The kurtosis parameters of the beams with small widths are seen to be larger than those of the beams with large widths. Again, the choice of the structure constant does not change this trend.
Assessment of DPOAE test-retest difference curves via hierarchical Gaussian processes.
Bao, Junshu; Hanson, Timothy; McMillan, Garnett P; Knight, Kristin
2017-03-01
Distortion product otoacoustic emissions (DPOAE) testing is a promising alternative to behavioral hearing tests and auditory brainstem response testing of pediatric cancer patients. The central goal of this study is to assess whether significant changes in the DPOAE frequency/emissions curve (DP-gram) occur in pediatric patients in a test-retest scenario. This is accomplished through the construction of normal reference charts, or credible regions, that DP-gram differences lie in, as well as contour probabilities that measure how abnormal (or in a certain sense rare) a test-retest difference is. A challenge is that the data were collected over varying frequencies, at different time points from baseline, and on possibly one or both ears. A hierarchical structural equation Gaussian process model is proposed to handle the different sources of correlation in the emissions measurements, wherein both subject-specific random effects and variance components governing the smoothness and variability of each child's Gaussian process are coupled together. © 2016, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei
2018-02-01
For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single-Gaussian model of the sea background in the Fourier domain is put forward. The amplitude-spectrum sequences at each frequency point of pure-seawater images in the Fourier domain, being more stable than the gray-value sequences of the background pixels in the spatial domain, are modeled as Gaussian. Next, a probability weighting matrix is built from the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed using the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
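The per-frequency background model described in this abstract can be sketched roughly as follows. This is a minimal illustration on synthetic data: the frame size, training length, injected target energy and z-score threshold are all assumptions, not values from the paper, and the probability weighting matrix and shape-feature steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training: amplitude spectra of N pure-seawater frames (synthetic stand-ins
# for the |FFT| sequences of the image sequence).
frames = rng.normal(100.0, 5.0, size=(50, 32, 32))
mu = frames.mean(axis=0)               # per-frequency-point mean
sigma = frames.std(axis=0) + 1e-9      # per-frequency-point std

# Detection: a new frame's amplitude spectrum; flag frequency points whose
# z-score under the background Gaussian exceeds a threshold.
test = mu.copy()
test[10, 10] += 60.0                   # injected "ship" energy at one bin
z = np.abs(test - mu) / sigma
foreground = z > 4.0

print(int(foreground.sum()), bool(foreground[10, 10]))  # -> 1 True
```

Only the injected bin deviates from the background model, so it is the single foreground point flagged.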
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
The estimation of pollutant concentration in agricultural drainage is the key point of quantitative research on agricultural non-point source pollution load. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, while the pollutant concentration in the agricultural drainage is regarded as the response to that impulse input. The migration and transformation of pollutants in soil is expressed by the inverse Gaussian probability density function, and its behaviour at different crop growth periods is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated with this model. The results show that the simulated results agree approximately with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
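The impulse-response idea above can be sketched numerically: each fertilization/irrigation event is an impulse whose response is an inverse Gaussian density. All event timings, masses and distribution parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

def inv_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian (Wald) density, used here as the soil transfer function."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    tp = t[pos]
    out[pos] = np.sqrt(lam / (2 * np.pi * tp**3)) * np.exp(
        -lam * (tp - mu) ** 2 / (2 * mu**2 * tp))
    return out

# Impulse inputs: (day, pollutant mass) of combined fertilization/irrigation
# events -- purely illustrative values, as are mu and lam below.
events = [(0.0, 1.0), (30.0, 0.6)]
days = np.arange(0.0, 120.0, 0.5)
conc = sum(m * inv_gaussian_pdf(days - t0, mu=15.0, lam=40.0) for t0, m in events)

# The density integrates to ~1, so each event's response carries its full mass.
mass = inv_gaussian_pdf(days, 15.0, 40.0).sum() * 0.5
print(round(float(mass), 2))
```

Changing `mu` and `lam` per growth period, as the abstract describes, reshapes the lag and spread of the drainage response without changing the total transported mass.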
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2018-01-01
We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.
The changing source of X-ray reflection in the radio-intermediate Seyfert 1 galaxy III Zw 2
NASA Astrophysics Data System (ADS)
Gonzalez, A. G.; Waddell, S. G. H.; Gallo, L. C.
2018-03-01
We report on X-ray observations of the radio-intermediate, X-ray bright Seyfert 1 galaxy, III Zw 2, obtained with XMM-Newton, Suzaku, and Swift over the past 17 yr. The source brightness varies significantly over yearly time-scales, but more modestly over periods of days. Pointed observations with XMM-Newton in 2000 and Suzaku in 2011 show spectral differences despite comparable X-ray fluxes. The Suzaku spectra are consistent with a power-law continuum and a narrow Gaussian emission feature at ˜6.4 keV, whereas the earlier XMM-Newton spectrum requires a broader Gaussian profile and soft-excess below ˜2 keV. A potential interpretation is that the primary power-law emission, perhaps from a jet base, preferentially illuminates the inner accretion disc in 2000, but the distant torus in 2011. The interpretation could be consistent with the hypothesized precessing radio jet in III Zw 2 that may have originated from disc instabilities due to an ongoing merging event.
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method for the statistical source separation of space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA), and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique is projection pursuit: looking for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources; sources are thus sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it minimizes the mean square of the residuals of certain nonlinear regressions. The resulting residuals, after spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated.
The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the cluster centroids of the joint probability density function tend to be located. This favors a better splitting of the QG3 model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The model's leading non-Gaussian dyad is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet stream and 2) the meridional meandering of the jet stream. Triadic sources, obtained from maximized third-order cross cumulants between pairwise uncorrelated components, reveal situations of triadic wave resonance and nonlinear triadic teleconnections, possible only thanks to joint non-Gaussianity. Such triadic synergies are accounted for by an information-theoretic measure: the interaction information. The model's dominant triad occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, and is also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
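The univariate entropy of the generalized Gaussian family used above has a well-known closed form. The sketch below states it and checks that the β = 2 member reproduces the ordinary Gaussian entropy; the parameterization (scale α, shape β) is the standard one and the numerical values are illustrative.

```python
import math

def gg_entropy(alpha, beta):
    """Differential entropy (nats) of the generalized Gaussian
    f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta):
    h = 1/beta + ln(2*alpha*Gamma(1/beta)/beta)."""
    return 1.0 / beta + math.log(2.0 * alpha * math.gamma(1.0 / beta) / beta)

# Sanity check: beta = 2 with alpha = sigma*sqrt(2) is N(0, sigma^2),
# whose entropy is 0.5 * ln(2*pi*e*sigma^2).
sigma = 1.7
h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)
print(abs(gg_entropy(sigma * math.sqrt(2.0), 2.0) - h_gauss) < 1e-12)  # True
```

Setting β = 1 gives the Laplacian member (entropy 1 + ln 2α), so sweeping β spans the speech-shaped range the abstract mentions.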
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
New approaches to probing Minkowski functionals
NASA Astrophysics Data System (ADS)
Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.
2013-10-01
We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit as the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity: local, equilateral, orthogonal and enfolded, as well as non-Gaussianity caused by unsubtracted point sources. Closed-form results for the next-order corrections to the MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative method of reconstruction of the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.
Clustering of Multispectral Airborne Laser Scanning Data Using Gaussian Decomposition
NASA Astrophysics Data System (ADS)
Morsy, S.; Shaker, A.; El-Rabbany, A.
2017-09-01
With the evolution of LiDAR technology, multispectral airborne laser scanning systems are currently available. The first operational multispectral airborne LiDAR sensor, the Optech Titan, acquires LiDAR point clouds at three different wavelengths (1.550, 1.064, 0.532 μm), allowing the acquisition of different spectral information of the land surface. Consequently, recent studies are devoted to using the radiometric information (i.e., intensity) of the LiDAR data along with the geometric information (e.g., height) for classification purposes. In this study, a data clustering method based on Gaussian decomposition is presented. First, a ground filtering mechanism is applied to separate non-ground from ground points. Then, three normalized difference vegetation indices (NDVIs) are computed for both non-ground and ground points, followed by histogram construction for each NDVI. The Gaussian function model is used to decompose the histograms into a number of Gaussian components. The maximum likelihood estimate of the Gaussian components is then optimized using the Expectation-Maximization algorithm. The intersection points of adjacent Gaussian components are subsequently used as threshold values, so that the different classes can be separated. This method is used to classify the terrain of an urban area in Oshawa, Ontario, Canada, into four main classes, namely roofs, trees, asphalt and grass. It is shown that the proposed method achieves an overall accuracy of up to 95.1% using different NDVIs.
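The decomposition-and-threshold step can be sketched with a two-component one-dimensional mixture. This is a toy illustration on synthetic "NDVI" values: the class means, spreads, weights and iteration count are assumptions, and the study itself decomposes real histograms into more components.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic NDVI values from two land-cover classes (illustrative parameters).
ndvi = np.concatenate([rng.normal(0.2, 0.05, 4000), rng.normal(0.6, 0.08, 6000)])

# Two-component Expectation-Maximization for a 1D Gaussian mixture.
mu, sd, w = np.array([0.1, 0.7]), np.array([0.1, 0.1]), np.array([0.5, 0.5])
for _ in range(200):
    pdf = w * np.exp(-0.5 * ((ndvi[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)     # E-step: responsibilities
    n = r.sum(axis=0)
    mu = (r * ndvi[:, None]).sum(axis=0) / n     # M-step: means
    sd = np.sqrt((r * (ndvi[:, None] - mu) ** 2).sum(axis=0) / n)
    w = n / len(ndvi)

# Threshold = intersection of the two weighted components between their means.
x = np.linspace(mu[0], mu[1], 10001)
comp = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
threshold = x[np.argmin(np.abs(comp[:, 0] - comp[:, 1]))]
print(mu.round(2), round(float(threshold), 3))
```

Everything below the threshold is assigned to the first class and everything above to the second, which is exactly how the intersection points serve as class boundaries in the abstract.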
Consistency relations for sharp inflationary non-Gaussian features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris
If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three-point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.
1979-01-01
The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
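The first approach in the list, the per-point Gaussian maximum likelihood classifier, can be sketched as follows. This is a minimal two-class, two-band illustration on synthetic data; the class statistics, priors and sample sizes are assumptions, not the Corn Belt data of the study.

```python
import numpy as np

def gaussian_ml_classify(X, means, covs, priors):
    """Per-point Gaussian maximum likelihood: assign each pixel's feature
    vector to the class maximizing the Gaussian log-likelihood plus log prior."""
    scores = []
    for m, cov, p in zip(means, covs, priors):
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = X - m
        maha = np.einsum('ij,jk,ik->i', d, inv, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (maha + logdet) + np.log(p))
    return np.argmax(np.stack(scores, axis=1), axis=1)

rng = np.random.default_rng(2)
# Two synthetic "crop" classes in a 2-band feature space (illustrative values).
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2) * 2.0]
X = np.vstack([rng.multivariate_normal(m, c, 200) for m, c in zip(means, covs)])
labels = gaussian_ml_classify(X, means, covs, priors=[0.5, 0.5])
print((labels[:200] == 0).mean(), (labels[200:] == 1).mean())
```

The per-field variant in the study applies the same rule to field-aggregated statistics rather than to individual pixels.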
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F; Park, J; Barraclough, B
2016-06-15
Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation.
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
NASA Astrophysics Data System (ADS)
Ehret, G.; Kiemle, C.; Rapp, M.
2017-12-01
The practical implementation of the Paris Agreement (COP21) would vastly profit from an independent, reliable and global measurement system for greenhouse gas emissions, in particular of CO2, in order to complement and cross-check national efforts. Most fossil-fuel CO2 emissions emanate from large sources such as cities and power plants. These emissions increase the local CO2 abundance in the atmosphere by 1-10 parts per million (ppm), a signal significantly larger than the variability from natural sources and sinks over the local source domain. Despite these large signals, they are only sparsely sampled by the ground-based network, which calls for satellite measurements. However, none of the existing and forthcoming passive satellite instruments operating in the NIR spectral domain can measure CO2 emissions at night, in low-sunlight conditions, or in high-latitude regions in winter. The resulting sparse coverage of passive spectrometers is a serious limitation, particularly for the Northern Hemisphere, since these regions exhibit substantial emissions during the winter as well as at other times of the year. In contrast, CO2 measurements by an Integrated Path Differential Absorption (IPDA) Lidar are largely immune to these limitations, and initial results from airborne application look promising. In this study, we discuss the implications for a space-borne IPDA Lidar system. A Gaussian plume model is used to simulate the CO2 distribution of large power plants downstream of the source. The space-borne measurements are simulated by applying a simple forward model based on a Gaussian error distribution. Besides the sampling frequency, the sampling geometry (e.g. the measurement distance to the emitting source) and the error of the measurement itself strongly impact the flux inversion performance. We discuss the results by incorporating Gaussian plume and mass-budget approaches to quantify the emission rates.
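The Gaussian plume model invoked above has a standard closed form; the sketch below evaluates the ground-level, ground-reflected concentration downwind of a point source. All numerical values are illustrative assumptions, and the linear growth of the dispersion widths is a simplification of the stability-class parameterizations used in practice.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, sy_a=0.08, sz_a=0.06):
    """Gaussian plume concentration (kg/m^3) with ground reflection.
    Q: emission rate (kg/s), u: wind speed (m/s), H: effective stack height (m).
    Dispersion widths grow linearly with downwind distance x (simplified;
    real applications use stability-class sigma curves)."""
    sy, sz = sy_a * x, sz_a * x
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))

# Centerline, ground-level concentration 2 km downwind of a large power plant
# (Q, u and H are illustrative numbers).
c = gaussian_plume(x=2000.0, y=0.0, z=0.0, Q=500.0, u=5.0, H=100.0)
print(c)
```

Dividing such a concentration field by the local air density converts it to the ppm-scale enhancement that the abstract compares against the natural background variability.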
Electrical source of pseudothermal light
NASA Astrophysics Data System (ADS)
Kuusela, Tom A.
2018-06-01
We describe a simple and compact electrical version of a pseudothermal light source. The source is based on electrical white noise whose spectral properties are tailored by analog filters. This signal is used to drive a light-emitting diode. The type of second-order coherence of the output light can be either Gaussian or Lorentzian, and the intensity distribution can be either Gaussian or non-Gaussian. The output light field is similar in all viewing angles, and thus, there is no need for a small aperture or optical fiber in temporal coherence analysis.
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
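The analytical simplification the abstract relies on is the closure of Gaussians under convolution: the convolution of two Gaussians is again a Gaussian whose variance is the sum of the two variances. The one-dimensional check below (grid and widths are arbitrary choices) verifies this numerically.

```python
import numpy as np

def gauss(x, s):
    """Unit-area Gaussian of width s."""
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
s1, s2 = 1.5, 2.0                       # basis-function width and PSF width
numeric = np.convolve(gauss(x, s1), gauss(x, s2), mode='same') * dx
analytic = gauss(x, np.hypot(s1, s2))   # resulting width: sqrt(s1^2 + s2^2)
print(float(np.max(np.abs(numeric - analytic))) < 1e-6)  # True
```

Because the blurred basis functions remain Gaussian with a known width, degradation never leaves the GRBF model, which is what reduces deconvolution to solving for the control-point weights.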
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friberg, Ari T.; Visser, Taco D.; Wolf, Emil
A reciprocity inequality is derived, involving the effective size of a planar, secondary, Gaussian Schell-model source and the effective angular spread of the beam that the source generates. The analysis is shown to imply that a fully spatially coherent source of that class (which generates the lowest-order Hermite-Gaussian laser mode) has certain minimal properties. (c) 2000 Optical Society of America.
The point-spread function of fiber-coupled area detectors
Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.
2012-01-01
The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
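The solid-angle form of the PSF quoted above, proportional to h/(h² + r²)^(3/2), is in fact a Moffat profile with β = 3/2 and core width α = h, which is why the abstract concludes that a Moffat function best describes fiber-coupled detectors. A quick numerical check of its inverse-cube tail (the radii are chosen for illustration, with the ~27 µm height taken from the abstract):

```python
import numpy as np

h = 27.0  # effective source height above the pixel plane, in microns

def solid_angle_psf(r, h=h):
    """Solid angle subtended by a unit-area pixel at radius r from the beam
    impact point, for a point source at height h: h / (h^2 + r^2)**1.5."""
    return h / (h**2 + r**2) ** 1.5

r_near, r_far = 270.0, 2700.0            # both well beyond h, so r >> h holds
ratio = solid_angle_psf(r_near) / solid_angle_psf(r_far)
print(ratio)  # ~10^3: a tenfold radius increase gives a ~1000-fold falloff
```

Far from the impact point the h² term is negligible and the profile decays as r⁻³, the inverse cube law measured in the tails.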
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.; Baykal, Yahya; Çil, Celal Z.; Korotkova, Olga; Cai, Yangjian
2010-02-01
In this paper we review our work on the evaluation of the root mean square (rms) beam wander characteristics of the flat-topped, dark hollow, cos- and cosh Gaussian, J0-Bessel Gaussian and I0-Bessel Gaussian beams in atmospheric turbulence. Our formulation is based on the wave-treatment approach, where not only the beam sizes but the source beam profiles are taken into account as well. In this approach the first and second statistical moments are obtained from the Rytov series under weak atmospheric turbulence conditions and the beam size is determined as a function of the propagation distance. It is found that after propagating in atmospheric turbulence, under certain conditions, the collimated flat-topped, dark hollow, cos- and cosh Gaussian, J0-Bessel Gaussian and I0-Bessel Gaussian beams have smaller rms beam wander compared to that of the Gaussian beam. The beam wander of these beams is analyzed against the propagation distance, the source spot sizes, and specific beam parameters related to the individual beam, such as the relative amplitude factors of the constituent beams, the flatness parameters, the beam orders, the displacement parameters and the width parameters, and is compared against the corresponding Gaussian beam.
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference side of the reconciliation is uncontrollable. In order to tune the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that optimal secret key rates can be achieved by selecting the parameters of the tuned LOCM in the derived regions.
Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media
NASA Astrophysics Data System (ADS)
Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen
2018-06-01
A new class of pulse source, whose correlation is modeled by the convolution of two legitimate temporal correlation functions, is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train into a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed-laser material processing.
Uncertainty Propagation for Terrestrial Mobile Laser Scanner
NASA Astrophysics Data System (ADS)
Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas
2016-06-01
Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and for registration of the system. For both applications, uncertainty analysis of the 3D points is of great interest but rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and computes a covariance matrix per 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner relative to the vehicle, and the direct georeferencing system. We assume that all the uncertainties follow Gaussian distributions. The variances of the laser scanner measurements (two angles and one distance) are usually provided by the manufacturers, as is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all sources of uncertainty, we applied uncertainty propagation to compute the variance-covariance matrix of every obtained 3D point. Such an uncertainty analysis makes it possible to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
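The first-order propagation step described above can be sketched as follows; the spherical parameterization (range, elevation, azimuth) and the diagonal sensor covariance are illustrative assumptions, not the paper's exact calibration chain:

```python
import numpy as np

def spherical_to_cartesian_cov(r, theta, phi, var_r, var_theta, var_phi):
    """First-order (Jacobian) propagation of (range, elevation, azimuth)
    variances to the covariance of the Cartesian point they define."""
    # Cartesian coordinates of the measured point
    x = r * np.cos(theta) * np.cos(phi)
    y = r * np.cos(theta) * np.sin(phi)
    z = r * np.sin(theta)
    # Jacobian of (x, y, z) with respect to (r, theta, phi)
    J = np.array([
        [np.cos(theta)*np.cos(phi), -r*np.sin(theta)*np.cos(phi), -r*np.cos(theta)*np.sin(phi)],
        [np.cos(theta)*np.sin(phi), -r*np.sin(theta)*np.sin(phi),  r*np.cos(theta)*np.cos(phi)],
        [np.sin(theta),              r*np.cos(theta),               0.0],
    ])
    S = np.diag([var_r, var_theta, var_phi])  # independent sensor errors assumed
    return np.array([x, y, z]), J @ S @ J.T   # point and its 3x3 covariance

point, C = spherical_to_cartesian_cov(10.0, 0.3, 1.2, 1e-4, 1e-8, 1e-8)
```

The resulting per-point covariance is what would then be combined, in the same first-order fashion, with the calibration and georeferencing covariances, and visualized as an error ellipsoid.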
Complete stability of delayed recurrent neural networks with Gaussian activation functions.
Liu, Peng; Zeng, Zhigang; Wang, Jun
2017-01-01
This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of nonsingular M-matrices, sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points for some 0 ≤ k ≤ n, among which 2^k equilibrium points are locally exponentially stable and 3^k - 2^k are unstable. Moreover, it is concluded that all states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
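The 3^k count can be illustrated numerically in the simplest case of a single neuron (n = 1, so at most three equilibria). The scalar dynamics x' = -x + w*exp(-((x - mu)/sigma)^2) and the parameter values below are an illustrative assumption, not the paper's exact network model:

```python
import numpy as np

def equilibrium_count(w, mu=2.5, sigma=1.0, lo=-2.0, hi=8.0, n=100001):
    """Count zeros of f(x) = -x + w*exp(-((x-mu)/sigma)^2), the equilibrium
    condition of one neuron with a Gaussian activation, by sign changes
    of f on a fine grid."""
    x = np.linspace(lo, hi, n)
    f = -x + w * np.exp(-((x - mu) / sigma)**2)
    return int(np.count_nonzero(np.diff(np.sign(f))))

three = equilibrium_count(4.0)  # strong Gaussian gain: three equilibria
one = equilibrium_count(1.0)    # weak gain: a single equilibrium
```

Varying the gain w moves the system between the k = 1 regime (3 equilibria, 2 of them stable) and the k = 0 regime (1 equilibrium), consistent with the 3^k / 2^k counting.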
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option of running the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
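The least-squares fit at the heart of such a scale-factor scheme reduces to a one-line formula. This sketch uses the generic objective sum((lam*omega_i - nu_i)^2); the frequencies below are synthetic, not FREQ's actual reference set:

```python
import numpy as np

def optimal_scale_factor(computed, reference):
    """Least-squares scale factor lam minimizing sum((lam*computed - reference)**2),
    whose closed form is lam = <computed, reference> / <computed, computed>."""
    computed = np.asarray(computed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.dot(computed, reference) / np.dot(computed, computed))

# synthetic check: computed frequencies uniformly overestimate by 1/0.96,
# so the optimal scale factor should come back as exactly 0.96
ref = np.array([1200.0, 1600.0, 3000.0, 3600.0])   # cm^-1, hypothetical
calc = ref / 0.96
lam = optimal_scale_factor(calc, ref)
```

In a real workflow the reference values would be the benchmark zero-point energies (or fundamentals) for the selected molecule set, and separate factors would be fitted for each target quantity.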
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
Distinguishing whether a signal corresponds to a single source or to a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or resolving binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, the l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence the robustness of super-resolution image recovery, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
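A minimal sketch of the sparse-recovery idea, here with an l1-norm constraint solved by the iterative shrinkage-thresholding algorithm (ISTA) on a 1D grid; the PSF width, source positions and regularization weight are illustrative assumptions, not the paper's settings:

```python
import numpy as np

n, sigma = 60, 2.0
grid = np.arange(n)
# PSF matrix: column j holds a Gaussian PSF centred on grid point j
A = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * sigma**2))

# two sources separated by less than the PSF FWHM (~4.7 samples)
x_true = np.zeros(n)
x_true[22], x_true[26] = 1.0, 0.8
y = A @ x_true                        # noiseless measurement

# ISTA: proximal gradient descent on 0.5*||A x - y||^2 + lam*||x||_1
L = np.linalg.norm(A, 2)**2           # Lipschitz constant of the smooth part
lam = 0.01
x = np.zeros(n)
for _ in range(10000):
    g = x + A.T @ (y - A @ x) / L                            # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
```

With an exact PSF and noiseless data the recovered support concentrates on the two sub-Rayleigh sources; perturbing the PSF used in A relative to the one that generated y is precisely the robustness question the abstract raises.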
Distinguishing one from many using super-resolution compressive sensing
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; ...
2018-05-14
NASA Astrophysics Data System (ADS)
Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula
2018-01-01
The sub-threshold model of the Gaussian Doped Double Gate JunctionLess (GD-DG-JL) FET, including the source/drain depletion length, is formulated in the present work under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of the Gaussian straggle, channel length, oxide and channel thickness, and a high-k gate dielectric has been studied using extensive TCAD device simulation.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from f_NL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 h^-1 Mpc down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ_8 and the primordial skewness f_NL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
Jet Noise Physics and Modeling Using First-principles Simulations
NASA Technical Reports Server (NTRS)
Freund, Jonathan B.
2003-01-01
An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice, (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source, and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.
Beam wander of dark hollow, flat-topped and annular beams
NASA Astrophysics Data System (ADS)
Eyyuboğlu, H. T.; Çil, C. Z.
2008-11-01
Benefiting from the earlier derivations for the Gaussian beam, we formulate beam wander for dark hollow (DH) and flat-topped (FT) beams, also covering the annular Gaussian (AG) beam as a special case. Via graphical illustrations, beam wander variations of these beams are analyzed and compared among themselves and to the fundamental Gaussian beam against changes in propagation length, amplitude factor, source size, wavelength of operation, inner and outer scales of turbulence. These comparisons show that in relation to the fundamental Gaussian beam, DH and FT beams will exhibit less beam wander, particularly at small primary beam source sizes, lower amplitude factors of the secondary beam and higher beam orders. Furthermore, DH and FT beams will continue to preserve this advantageous position all throughout the considered range of wavelengths, inner and outer scales of turbulence. FT beams, in particular, are observed to have the smallest beam wander values among all, up to certain source sizes.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
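Because the measured concentration is linear in the release strengths once candidate locations are fixed, the strength-estimation step of such an inversion reduces to least squares. The simplified plume kernel and the source/receptor geometry below are illustrative assumptions, not the paper's adjoint dispersion model:

```python
import numpy as np

def plume_kernel(receptor, source, u=2.0, k=0.08):
    """Simplified steady-state Gaussian-plume coupling (illustrative only):
    downwind (x) dilution with a crosswind (y) Gaussian spread that grows
    linearly with downwind distance."""
    dx = receptor[0] - source[0]
    if dx <= 0.0:            # a receptor upwind of the source sees nothing
        return 0.0
    sigma_y = k * dx
    dy = receptor[1] - source[1]
    return np.exp(-0.5 * (dy / sigma_y)**2) / (np.sqrt(2.0*np.pi) * sigma_y * u)

# three simultaneous releases (locations assumed known here) and a receptor grid
sources = [(0.0, 0.0), (0.0, 60.0), (0.0, -40.0)]
q_true = np.array([3.0, 1.5, 2.2])                     # release rates, hypothetical
receptors = [(50.0 + 30.0*i, -80.0 + 12.0*j) for i in range(4) for j in range(14)]

G = np.array([[plume_kernel(r, s) for s in sources] for r in receptors])
c = G @ q_true                                         # noise-free measurements
q_hat, *_ = np.linalg.lstsq(G, c, rcond=None)          # recovered strengths
```

As the abstract notes for the full method, noise-free measurements that are exactly described by the dispersion model give back the true release parameters; the harder joint location-plus-strength problem makes the system nonlinear in the location variables.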
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary such as wavelets or curvelets, and then applying a VST on the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is then proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask, and the resulting gaps have to be interpolated: an extension to inpainting is then proposed. The method, applied on simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
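The classical prototype of such a VST is the Anscombe transform (MS-VSTS builds a multiscale generalization of the same idea). A quick numerical check, on synthetic counts rather than LAT data, that it stabilizes the Poisson variance near one:

```python
import numpy as np

rng = np.random.default_rng(2)

def anscombe(x):
    """Anscombe variance-stabilizing transform: maps Poisson(mu) samples to
    approximately Gaussian values with variance ~1 once mu is not too small."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# the stabilized sample variance should sit near 1 across a wide range of means,
# even though the raw Poisson variance equals the mean itself
stab_var = {mu: anscombe(rng.poisson(mu, 200_000)).var() for mu in (5.0, 20.0, 100.0)}
```

After stabilization, ordinary Gaussian significance tests can be applied to the transformed coefficients, which is exactly the role the VST plays inside the MS-VSTS detection step.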
Contaminant transport from point source on water surface in open channel flow with bed absorption
NASA Astrophysics Data System (ADS)
Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian
2018-06-01
Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for the typical case of a point source released on the free water surface of a channel flow with bed absorption is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution approaches a Gaussian distribution at large time, unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical applications with strict environmental standards.
Scattering of aerosol particles by a Hermite-Gaussian beam in marine atmosphere.
Huang, Qingqing; Cheng, Mingjian; Guo, Lixin; Li, Jiangting; Yan, Xu; Liu, Songhua
2017-07-01
Based on the complex-source-point method and the generalized Lorenz-Mie theory, the scattering properties and polarization of aerosol particles illuminated by a Hermite-Gaussian (HG) beam in the marine atmosphere are investigated. The influences of beam mode, beam width, and humidity on the scattered field are analyzed numerically. Results indicate that as the HG beam mode numbers u (v) increase, the radar cross section of aerosol particles alternately attains maximum and minimum values in the forward and backward scattering, respectively, because of the special petal-shaped distribution of the HG beam. The forward and backward scattering of aerosol particles decreases with increasing beam waist. When the beam waist is smaller than the radius of the aerosol particle, a minimum value is observed in the forward direction. The scattering properties of aerosol particles illuminated by the HG beam are more sensitive to changes in relative humidity than those for the plane wave or the Gaussian beam (GB). The HG beam thus shows superiority over the plane wave and the GB in detecting changes in the relative humidity of marine atmospheric aerosol. The effects of relative humidity on the polarization of the HG beam have also been numerically analyzed in detail.
NASA Astrophysics Data System (ADS)
Monfared, Yashar E.; Ponomarenko, Sergey A.
2017-10-01
We explore theoretically and numerically extreme event excitation in stimulated Raman scattering in gases. We consider gas-filled hollow-core photonic crystal fibers as a particular system realization. We show that moderate amplitude pump fluctuations obeying Gaussian statistics lead to the emergence of heavy-tailed non-Gaussian statistics as coherent seed Stokes pulses are amplified on propagation along the fiber. We reveal the crucial role that coherent memory effects play in causing non-Gaussian statistics of the system. We discover that extreme events can occur even at the initial stage of stimulated Raman scattering when one can neglect energy depletion of an intense, strongly fluctuating Gaussian pump source. Our analytical results in the undepleted pump approximation explicitly illustrate power-law probability density generation as the input pump noise is transferred to the output Stokes pulses.
Distillation of squeezing from non-Gaussian quantum states.
Heersink, J; Marquardt, Ch; Dong, R; Filip, R; Lorenz, S; Leuchs, G; Andersen, U L
2006-06-30
We show that single copy distillation of squeezing from continuous variable non-Gaussian states is possible using linear optics and conditional homodyne detection. A specific non-Gaussian noise source, corresponding to a random linear displacement, is investigated experimentally. Conditioning the signal on a tap measurement, we observe probabilistic recovery of squeezing.
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg² at 0.6 < z < 1.0, for which cross-correlations can be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and their cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which FASTICA, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we obtain GBT-WiggleZ cross-correlation measurements similar to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
Analysis of low altitude atmospheric turbulence data measured in flight
NASA Technical Reports Server (NTRS)
Ganzer, V. M.; Joppa, R. G.; Vanderwees, G.
1977-01-01
All three components of turbulence were measured simultaneously in flight at each wing tip of a Beech D-18 aircraft. The flights were conducted at low altitude, 30.5-61.0 meters (100-200 ft), over water in the presence of wind-driven turbulence. Statistical properties of the flight-measured turbulence were compared with Gaussian and non-Gaussian turbulence models. Spatial characteristics of the turbulence were analyzed using the data from flights perpendicular and parallel to the wind. The probability density distributions of the vertical gusts show distinctly non-Gaussian characteristics, while the distributions of the longitudinal and lateral gusts are generally Gaussian. In the inertial subrange the power spectra at some points agree better with the Dryden spectrum, while at other points the von Karman spectrum is a better approximation. In the low frequency range the data show peaks or dips in the power spectral density. The cross spectra between vertical gusts in the direction of the mean wind were compared with a matched non-Gaussian model. The real component of the cross spectrum is in general close to the non-Gaussian model; the imaginary component, however, indicated a larger phase shift between these two gust components than was found in previous research.
NASA Astrophysics Data System (ADS)
Edie, R.; Robertson, A.; Murphy, S. M.; Soltis, J.; Field, R. A.; Zimmerle, D.; Bell, C.
2017-12-01
Other Test Method 33a (OTM-33a) is an EPA-developed near-source measurement technique that uses a Gaussian plume inversion to calculate the flux of a point source 20 to 200 meters away. In 2014, the University of Wyoming mobile laboratory, equipped with a Picarro methane analyzer and an Ionicon Proton Transfer Reaction Time of Flight Mass Spectrometer, measured methane and BTEX fluxes from oil and gas operations in the Upper Green River Basin (UGRB), Wyoming. In this study, OTM-33a BTEX flux measurements are compared to BTEX emissions reported by operators in the Wyoming Department of Environmental Quality (WY-DEQ) emission inventory. On average, OTM-33a measured BTEX fluxes are almost twice as high as those reported in the emission inventory. To further constrain errors in the OTM-33a method, methane test releases were performed at the Colorado State University Methane Emissions Test and Evaluation Center (METEC) in June of 2017. The METEC facility contains decommissioned oil and gas equipment arranged in realistic well pad layouts, and each piece of equipment has a multitude of possible emission points. A Gaussian fit of the measurement error from these 29 test releases indicates that the median OTM-33a measurement quantified 55% of the metered flow rate. BTEX results from the UGRB campaign and inventory analysis will be presented, along with a discussion of errors associated with the OTM-33a measurement technique. Real-time BTEX and methane mixing ratios at the measurement locations (which show a lack of correlation between VOC and methane sources in 20% of sites sampled) will also be discussed.
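The inversion step rests on the linearity of the Gaussian plume solution in the emission rate Q: a unit-rate forward run fixes the geometry and meteorology, and the measured concentration scales Q directly. The linear-growth dispersion coefficients below are generic illustrative values, not OTM-33a's actual point-source Gaussian procedure:

```python
import numpy as np

def plume_conc(Q, u, x, y, z, h, a=0.08, b=0.06):
    """Gaussian plume concentration for a point source of strength Q at stack
    height h, with ground reflection; sigma_y and sigma_z grow linearly with
    downwind distance x (coefficients a, b are illustrative assumptions)."""
    sy, sz = a * x, b * x
    return (Q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-0.5 * (y / sy)**2)
            * (np.exp(-0.5 * ((z - h) / sz)**2) + np.exp(-0.5 * ((z + h) / sz)**2)))

# forward: a known release produces a "measured" concentration;
# inverse: scale a unit-rate plume to match it, exploiting linearity in Q
Q_true, u = 0.5, 3.0
c_meas = plume_conc(Q_true, u, x=80.0, y=5.0, z=2.0, h=3.0)
c_unit = plume_conc(1.0, u, x=80.0, y=5.0, z=2.0, h=3.0)
Q_hat = c_meas / c_unit
```

The METEC comparison in the abstract is, in effect, a field test of how far Q_hat drifts from Q_true once the real atmosphere departs from the assumed dispersion coefficients and averaging conditions.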
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
Continuous-variable quantum key distribution with Gaussian source noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yujie; Peng Xiang; Yang Jian
2011-05-15
Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.
NASA Astrophysics Data System (ADS)
Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.
2015-08-01
A computational model is developed for retrieving the positions and emission rates of unknown pollution sources under steady-state conditions, starting from measurements of pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested both on pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and on experimental data such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
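A genetic-algorithm retrieval of this kind can be sketched in a few lines. The isotropic Gaussian footprint, the single source, and the GA operator settings below are illustrative assumptions, not the paper's dispersion model or algorithm configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

def footprint(src, receptors, spread=30.0):
    """Toy steady-state dispersion kernel (isotropic Gaussian decay), a
    stand-in for the Gaussian dispersion model used in the paper."""
    d2 = ((receptors - src)**2).sum(axis=1)
    return np.exp(-d2 / (2.0 * spread**2))

receptors = rng.uniform(0.0, 200.0, size=(40, 2))
true_src = np.array([120.0, 80.0])
obs = 5.0 * footprint(true_src, receptors)     # measurements from a 5.0 g/s release

def misfit(pos):
    """Fitness of a candidate location, with the emission rate solved in
    closed form since the concentration is linear in the rate."""
    f = footprint(pos, receptors)
    q = max(f @ obs / (f @ f), 0.0)
    return ((q * f - obs)**2).sum()

# minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and one elite survivor per generation
pop = rng.uniform(0.0, 200.0, size=(60, 2))
for _ in range(120):
    cost = np.array([misfit(p) for p in pop])
    elite = pop[np.argmin(cost)].copy()
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where((cost[i] < cost[j])[:, None], pop[i], pop[j])
    w = rng.uniform(size=(len(pop), 1))
    pop = w * parents + (1.0 - w) * parents[rng.permutation(len(pop))]
    pop += rng.normal(0.0, 3.0, size=pop.shape)
    pop[0] = elite                              # keep the best individual
```

Solving the rate analytically inside the fitness function halves the search dimension; extending the chromosome to several (position, rate) pairs recovers the multi-source setting of the abstract.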
Dynamical heterogeneities of cold 2D Yukawa liquids
NASA Astrophysics Data System (ADS)
Wang, Kang; Huang, Dong; Feng, Yan
2018-06-01
Dynamical heterogeneities of 2D liquid dusty plasmas at different temperatures are investigated systematically using Langevin dynamical simulations. From the simulated trajectories, various heterogeneity measures are calculated, such as the distance matrix, the averaged squared displacement, the non-Gaussian parameter, and the four-point susceptibility. It is found that, for 2D Yukawa liquids, both spatial and temporal heterogeneities in the dynamics are more severe at lower temperatures near the melting point. For various temperatures, the calculated non-Gaussian parameter of 2D Yukawa liquids contains two peaks at different times, indicating the most heterogeneous dynamics; these are attributed to the transition between different types of motion and to the α relaxation time, respectively. In the diffusive regime, the most heterogeneous dynamics of a colder Yukawa liquid occur at later times, as indicated by both the non-Gaussian parameter and the four-point susceptibility.
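For context, the non-Gaussian parameter used above can be sketched as follows; the displacement data are synthetic stand-ins (a Gaussian ensemble versus a slow/fast mixture), not simulation output from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def alpha2_2d(disp):
    """Non-Gaussian parameter alpha_2 for 2D displacement vectors disp (N, 2);
    it is zero when the displacements are Gaussian distributed."""
    r2 = np.sum(disp ** 2, axis=1)
    return np.mean(r2 ** 2) / (2.0 * np.mean(r2) ** 2) - 1.0

# Homogeneous (purely diffusive) dynamics: Gaussian displacements, alpha_2 ~ 0
gauss = rng.normal(size=(100_000, 2))

# Heterogeneous dynamics: coexisting "slow" and "fast" particle populations
hetero = np.concatenate([0.3 * rng.normal(size=(50_000, 2)),
                         2.0 * rng.normal(size=(50_000, 2))])

a_gauss = alpha2_2d(gauss)    # ~ 0
a_hetero = alpha2_2d(hetero)  # clearly positive
```

In a simulation one evaluates alpha_2 as a function of lag time; its peaks mark the times of most heterogeneous dynamics, as described in the abstract.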
Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost
NASA Astrophysics Data System (ADS)
Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.
2017-11-01
A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
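A minimal sketch of the Gaussian-deposition idea, with synthetic particle positions and an invented unit-square grid (not the SPARSE implementation): the cloud's source footprint on the carrier grid is a bivariate Gaussian whose covariance is that of the particle positions, capturing the cloud's deformation to first order:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic particle cloud (positions in a unit-square carrier-gas domain)
pts = rng.multivariate_normal([0.5, 0.5], [[0.010, 0.004],
                                           [0.004, 0.020]], size=2000)

# Instead of depositing every particle (PSIC) or a few macro-particles (CIC),
# represent the whole cloud by one bivariate Gaussian fitted to the positions.
mu = pts.mean(axis=0)
cov = np.cov(pts.T)

# Evaluate the normalized source distribution on the carrier-phase grid
x = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
d = np.stack([X - mu[0], Y - mu[1]], axis=-1)
inv = np.linalg.inv(cov)
src = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, inv, d))
src /= 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
```

The actual model weights this distribution by the particles' momentum and energy exchange terms and, as in the abstract, refines the cloud by using several Gaussians.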
Elegant Gaussian beams for enhanced optical manipulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpmann, Christina, E-mail: c.alpmann@uni-muenster.de; Schöler, Christoph; Denz, Cornelia
2015-06-15
The generation of micro- and nanostructured complex light beams is of increasing importance in photonics and laser applications. In this contribution, we demonstrate the implementation and experimental realization of the relatively unknown but highly versatile class of complex-valued Elegant Hermite- and Laguerre-Gaussian beams. These beams create higher trapping forces than standard Gaussian light fields due to their propagation-changing properties. We demonstrate optical trapping and alignment of complex functional particles as nanocontainers with standard and Elegant Gaussian light beams. Elegant Gaussian beams will inspire manifold applications in optical manipulation, direct laser writing, or microscopy, where the design of the point-spread function is relevant.
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
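For the simplest of these measures, the recurrence rate at embedding dimension 1 can be checked against its closed form for an i.i.d. standard Gaussian series (a special case of the processes treated in the paper):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(4)

def recurrence_rate(x, eps):
    """Fraction of pairs (i, j) with |x_i - x_j| < eps (embedding dimension 1)."""
    d = np.abs(x[:, None] - x[None, :])
    return np.mean(d < eps)

# For an i.i.d. standard Gaussian series, X - Y ~ N(0, 2), so the theoretical
# recurrence rate at radius eps is P(|X - Y| < eps) = erf(eps / 2).
eps = 1.0
theory = erf(eps / 2.0)          # ~ 0.52

x = rng.normal(size=2000)
rec = recurrence_rate(x, eps)    # empirical REC, close to the theoretical value
```

For correlated processes (AR(1), fractional Gaussian noise) the pair differences are no longer i.i.d., which is where the paper's analytical derivations come in.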
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website (http://finance.yahoo.co.jp/). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of several Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might be naturally assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is checked numerically for the empirical Japanese stock data, for instance those around 11 March 2011.
Spiga, D
2018-01-01
X-ray mirrors with high focusing performance are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may cause degradation of the focus sharpness, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has so far been limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism is reviewed in the framework of physical optics, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and the shaping performance is checked by computing the at-wavelength PSF by means of the WISE code.
Using Gaussian windows to explore a multivariate data set
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1991-01-01
In an earlier paper, I recounted an exploratory analysis, using Gaussian windows, of a data set derived from the Infrared Astronomical Satellite. Here, my goals are to develop strategies for finding structural features in a data set in a many-dimensional space, and to find ways to describe the shape of such a data set. After a brief review of Gaussian windows, I describe the current implementation of the method. I give some ways of describing features that we might find in the data, such as clusters and saddle points, and also extended structures such as a 'bar', which is an essentially one-dimensional concentration of data points. I then define a distance function, which I use to determine which data points are 'associated' with a feature. Data points not associated with any feature are called 'outliers'. I then explore the data set, giving the strategies that I used and quantitative descriptions of the features that I found, including clusters, bars, and a saddle point. I tried to use strategies and procedures that could, in principle, be used in any number of dimensions.
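The basic windowing step can be sketched as follows, with synthetic 3D data in place of the satellite catalogue; a Gaussian window reports local weighted statistics that reveal a nearby cluster while suppressing distant structure (the isotropic window and all values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_window_stats(data, center, width):
    """Weighted mean and covariance of the data as seen through an isotropic
    Gaussian window of the given center and width."""
    d2 = np.sum((data - center) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / width ** 2)
    w /= w.sum()
    mean = w @ data
    diff = data - mean
    cov = (w[:, None] * diff).T @ diff
    return mean, cov

# Two well-separated clusters in 3D; a window placed near one of them should
# report only that cluster's local mean and spread, ignoring the other.
data = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.3, size=(500, 3)),
                  rng.normal([5.0, 5.0, 5.0], 0.3, size=(500, 3))])
mean, cov = gaussian_window_stats(data, np.array([0.5, 0.0, 0.0]), width=1.0)
```

Moving the window and inspecting how the local mean and covariance change is, in essence, how clusters, bars and saddle points are located and described.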
Renormalization group fixed points of foliated gravity-matter systems
NASA Astrophysics Data System (ADS)
Biemans, Jorn; Platania, Alessia; Saueressig, Frank
2017-05-01
We employ the Arnowitt-Deser-Misner formalism to study the renormalization group flow of gravity minimally coupled to an arbitrary number of scalar, vector, and Dirac fields. The decomposition of the gravitational degrees of freedom into a lapse function, shift vector, and spatial metric equips spacetime with a preferred (Euclidean) "time"-direction. In this work, we provide a detailed derivation of the renormalization group flow of Newton's constant and the cosmological constant on a flat Friedmann-Robertson-Walker background. Adding matter fields, it is shown that their contribution to the flow is the same as in the covariant formulation and can be captured by two parameters, d_g and d_λ. We classify the resulting fixed point structure as a function of these parameters, finding that the existence of non-Gaussian renormalization group fixed points is rather generic. In particular the matter content of the standard model and its most common extensions gives rise to one non-Gaussian fixed point with real critical exponents suitable for Asymptotic Safety. Moreover, we find non-Gaussian fixed points for any number of scalar matter fields, making the scenario attractive for cosmological model building.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Generally astigmatic Gaussian beam representation and optimization using skew rays
NASA Astrophysics Data System (ADS)
Colbourne, Paul D.
2014-12-01
Methods are presented of using skew rays to optimize a generally astigmatic optical system to obtain the desired Gaussian beam focus and minimize aberrations, and to calculate the propagating generally astigmatic Gaussian beam parameters at any point. The optimization method requires very little computation beyond that of a conventional ray optimization, and requires no explicit calculation of the properties of the propagating Gaussian beam. Unlike previous methods, the calculation of beam parameters does not require matrix calculations or the introduction of non-physical concepts such as imaginary rays.
Study of atmospheric diffusion using LANDSAT
NASA Technical Reports Server (NTRS)
Torsani, J. A.; Viswanadham, Y.
1982-01-01
The parameters of diffusion patterns of atmospheric pollutants under different conditions were investigated for use in the Gaussian model for calculating pollution concentrations. Values for the divergence pattern of the concentration distribution along the Y axis were determined using LANDSAT images. Multispectral scanner images of a point-source plume having known characteristics, wind and temperature data, and cloud cover and solar elevation data provided by LANDSAT, were analyzed using the 1-100 system for image analysis. These measured values are compared with pollution transport as predicted by the Pasquill-Gifford, Juelich, and Hoegstroem atmospheric models.
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
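A minimal sketch of the simultaneous coding-set update on a torus, for an untruncated first-order GMRF with assumed parameters (the paper's convolution-based computation and the acceptance/rejection step for truncation are omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# First-order GMRF on an n x n torus: the full conditional at a site given its
# four neighbours is Gaussian with mean beta * (sum of neighbours) and unit
# conditional variance (assumed values; 4*|beta| < 1 ensures validity).
n, beta = 32, 0.24
z = rng.normal(size=(n, n))
checker = (np.add.outer(np.arange(n), np.arange(n)) % 2 == 0)

def gibbs_sweep(z):
    """One Gibbs sweep updating each coding set (checkerboard colour) at once;
    same-colour sites share no neighbours, so the joint update is valid."""
    for mask in (checker, ~checker):
        nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
              np.roll(z, 1, 1) + np.roll(z, -1, 1))   # torus boundary
        z[mask] = beta * nb[mask] + rng.normal(size=mask.sum())
    return z

for _ in range(300):
    z = gibbs_sweep(z)

# Short-scale (nearest-neighbour) correlation is restored quickly
nn_corr = np.corrcoef(z.ravel(), np.roll(z, 1, 0).ravel())[0, 1]
```

Only two vectorized half-lattice updates are needed per sweep, which is what makes the approach viable on large 2D or 3D lattices.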
NASA Astrophysics Data System (ADS)
Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.
2015-05-01
We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find a good agreement with the standard cosmological model: Σ _8=σ _8(Ω _m/0.27)^{0.64}=0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits as indicated by χ2/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
NASA Astrophysics Data System (ADS)
Baumgart, Marcus; Tortschanoff, Andreas
2013-05-01
A tilt mirror's deflection-angle tracking setup is examined from a theoretical point of view. The proposed setup is based on a simple optical approach and is easily scalable. Thus, the principle is especially of interest for small and fast-oscillating MEMS/MOEMS-based tilt mirrors. An experimentally established optical scheme is used as a starting point for accurate and fast mirror angle-position detection. This approach uses an additional layer, positioned under the MOEMS mirror's backside, consisting of a light source in the center and two photodetectors positioned symmetrically around the center. The mirror's back surface is illuminated by the light source, and the intensity change due to mirror tilting is tracked via the photodiodes. The challenge of this method is to obtain a linear relation between the measured intensity and the current mirror tilt angle even for larger angles. State-of-the-art MOEMS mirrors achieve angles up to ±30°, which exceeds the range of validity of linear angle approximations. The use of an LED, a small laser diode or a VCSEL as a light source is appropriate due to their small size and low cost. Those light sources typically emit light with a Gaussian intensity distribution, which makes an analytical prediction of the expected detector signal quite complicated. In this publication an analytical simulation model is developed to evaluate the influence of the main parameters of this optical mirror tilt-sensor design. A value that is easy and fast to calculate and directly linked to the mirror's tilt angle is the "relative differential intensity" (RDI = (I1 - I2) / (I1 + I2)). Evaluation of its slope and nonlinear error highlights dependencies between the identified parameters for best SNR and linearity. The amount of energy covering the detector area is also taken into account. Design optimization rules are proposed and discussed based on theoretical considerations.
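The RDI response can be sketched with an assumed one-dimensional geometry (the detector separation, spot width and working distance below are invented values, not the paper's design):

```python
import numpy as np

def rdi(theta, sep=1.0, w=1.0, d=5.0):
    """Relative differential intensity RDI = (I1 - I2) / (I1 + I2) for two
    point detectors at +/- sep, illuminated by a Gaussian spot of 1/e width w
    whose centre is displaced by d * tan(2*theta): the reflected beam turns by
    twice the mirror tilt. All geometry values are assumed, illustrative ones."""
    c = d * np.tan(2.0 * theta)
    i1 = np.exp(-((sep - c) ** 2) / w ** 2)
    i2 = np.exp(-((sep + c) ** 2) / w ** 2)
    return (i1 - i2) / (i1 + i2)

# The response is odd in theta, monotonic, and saturates at large tilt; the
# saturation is the nonlinearity the paper's parameter study has to manage.
angles = np.radians(np.linspace(-5.0, 5.0, 11))
response = rdi(angles)
```

For this 1D Gaussian model the ratio reduces to tanh(2·sep·c/w²), which makes the trade-off between slope (sensitivity) and linear range explicit.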
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low-dimension and exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of the principal component analysis that is optimal for Gaussian mixed signals and of the independent component analysis (ICA), optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is the independent subspace analysis (ISA) that looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D), not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars of easier post-processing. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested into three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian non-separable triad. Finally the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model, applied to the Northern Hemispheric winter. 
We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables with regard to the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model, but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
Regular black holes from semi-classical down to Planckian size
NASA Astrophysics Data System (ADS)
Spallucci, Euro; Smailagic, Anais
In this paper, we review various models of curvature-singularity-free black holes (BHs). In the first part of the review, we describe semi-classical solutions of the Einstein equations which, however, contain a “quantum” input through the matter source. We start by reviewing the early model by Bardeen, where the metric is regularized by hand through a short-distance cutoff, which is justified in terms of nonlinear electrodynamical effects. This toy model is useful for pointing out the common features shared by all regular semi-classical black holes. Then, we solve the Einstein equations with a Gaussian source encoding the quantum spread of an elementary particle. We identify the a priori arbitrary Gaussian width with the Compton wavelength of the quantum particle. This Compton-Gauss model leads to an estimate of the terminal density that a gravitationally collapsed object can achieve. We identify this density with the Planck density, and reformulate the Gaussian model assuming this as its peak density. All these models are physically reliable as long as the BH mass is large with respect to the Planck mass. In the truly Planckian regime, the semi-classical approximation breaks down and a fully quantum BH description is needed. In the last part of this paper, we propose a nongeometrical quantum model of Planckian BHs implementing the Holographic Principle and realizing the “classicalization” scenario recently introduced by Dvali and collaborators. The classical relation between the mass and radius of the BH emerges only in the classical limit, far away from the Planck scale.
Hermite-cosine-Gaussian laser beam and its propagation characteristics in turbulent atmosphere.
Eyyuboğlu, Halil Tanyer
2005-08-01
Hermite-cosine-Gaussian (HcosG) laser beams are studied. The source plane intensity of the HcosG beam is introduced and its dependence on the source parameters is examined. By application of the Fresnel diffraction integral, the average receiver intensity of HcosG beam is formulated for the case of propagation in turbulent atmosphere. The average receiver intensity is seen to reduce appropriately to various special cases. When traveling in turbulence, the HcosG beam initially experiences the merging of neighboring beam lobes, and then a TEM-type cosh-Gaussian beam is formed, temporarily leading to a plain cosh-Gaussian beam. Eventually a pure Gaussian beam results. The numerical evaluation of the normalized beam size along the propagation axis at selected mode indices indicates that relative spreading of higher-order HcosG beam modes is less than that of the lower-order counterparts. Consequently, it is possible at some propagation distances to capture more power by using higher-mode-indexed HcosG beams.
Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing; Evans, Philip G.; Grice, Warren P.
2018-01-01
In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: For each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given that the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
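The correlation that replaces the active modulation can be illustrated with a one-quadrature toy simulation (the mean photon number and shot-noise-unit convention are assumed for the example; attenuation, excess noise and the conjugate quadrature are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
n_th = 20.0   # assumed mean photon number of the thermal source

# One quadrature of a single-mode thermal state is a zero-mean Gaussian with
# variance (2*n_th + 1)/2 in shot-noise units.
x_in = rng.normal(0.0, np.sqrt((2.0 * n_th + 1.0) / 2.0), n)

# 50/50 beam splitter: the two outputs mix the input with vacuum (variance 1/2)
vac = rng.normal(0.0, np.sqrt(0.5), n)
x_alice = (x_in + vac) / np.sqrt(2.0)   # measured locally by Alice
x_bob = (x_in - vac) / np.sqrt(2.0)     # sent to Bob (attenuation omitted)

# Alice's local homodyne outcome is strongly correlated with the transmitted
# mode, which is what replaces the active Gaussian modulation.
r = np.corrcoef(x_alice, x_bob)[0, 1]   # -> n_th / (n_th + 1) in theory
```

The correlation approaches 1 as the thermal state gets stronger, which is why the scheme tolerates high detector noise on Alice's side when the source is bright enough.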
NASA Astrophysics Data System (ADS)
Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua
2017-03-01
A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, it remains a problem to use the photon-subtraction operation to improve transmission in practical quantum networks, where the entangled source is located with a third party, which may be controlled by a malicious eavesdropper, instead of with one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only can lengthen the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) from applying photon subtraction can well increase the secure transmission distance in both direct and reverse reconciliations of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, which is a specific attack by an untrusted entangled source in the framework of ESIM.
NASA Astrophysics Data System (ADS)
Delpueyo, D.; Balandraud, X.; Grédiac, M.
2013-09-01
The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues that are closely related here, because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term, because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields as accurately as possible. The influence of both the dimension and the level of a localised heat source is discussed. The results obtained are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from the measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
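In one dimension, the key step, estimating a second derivative by convolving the noisy data with the second derivative of a Gaussian, can be sketched on a synthetic profile (all parameter values here are illustrative, not the paper's optimised ones):

```python
import numpy as np

def d2_gaussian_kernel(sigma, radius):
    """Second derivative of a normalized Gaussian, sampled in grid units."""
    u = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-u ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return g * (u ** 2 - sigma ** 2) / sigma ** 4

# Noisy synthetic 1D "temperature" profile with a known second derivative
rng = np.random.default_rng(7)
npts = 256
x = np.linspace(0.0, 2.0 * np.pi, npts)
dx = x[1] - x[0]
T = np.sin(3.0 * x) + 0.01 * rng.normal(size=npts)  # clean part: d2/dx2 = -9 sin(3x)

# Convolving with the second-derivative-of-Gaussian kernel estimates the
# (slightly smoothed) second derivative directly; dividing by dx**2 converts
# from grid units to physical units. This is far less noise-sensitive than
# taking finite differences of the raw data.
k = d2_gaussian_kernel(sigma=5.0, radius=20)
lap = np.convolve(T, k, mode='same') / dx ** 2
```

The filter width trades bias (smoothing attenuates the true Laplacian) against variance (noise amplification), which is exactly the optimisation the paper carries out, in 2D, for its heat source reconstructions.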
Microlensing as a possible probe of event-horizon structure in quasars
NASA Astrophysics Data System (ADS)
Tomozeiu, Mihai; Mohammed, Irshad; Rabold, Manuel; Saha, Prasenjit; Wambsganss, Joachim
2018-04-01
In quasars which are lensed by galaxies, the point-like images sometimes show sharp and uncorrelated brightness variations (microlensing). These brightness changes are associated with the innermost region of the quasar passing through a complicated pattern of caustics produced by the stars in the lensing galaxy. In this paper, we study whether the universal properties of optical caustics could enable extraction of shape information about the central engine of quasars. We present a toy model with a crescent-shaped source crossing a fold caustic. The silhouette of a black hole over an accretion disc tends to produce roughly crescent sources. When a crescent-shaped source crosses a fold caustic, the resulting light curve is noticeably different from the case of a circular luminosity profile or Gaussian source. With good enough monitoring data, the crescent parameters, apart from one degeneracy, can be recovered.
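The fold-crossing light curve can be sketched as a convolution of a 1D source profile with the inverse-square-root fold magnification; the "crescent" below is a crude Gaussian-minus-Gaussian stand-in, not the paper's crescent model:

```python
import numpy as np

# Fold-caustic magnification: mu(u) ~ 1/sqrt(u) behind the fold (u > 0), zero
# in front. The microlensing light curve is the convolution of this with the
# source brightness integrated along the direction parallel to the fold.
x = np.linspace(-3.0, 3.0, 301)
du = x[1] - x[0]

def light_curve(profile):
    u = (np.arange(len(profile)) + 0.5) * du   # offset avoids the u = 0 singularity
    return np.convolve(profile, 1.0 / np.sqrt(u)) * du

gaussian = np.exp(-x ** 2)                                         # circular source
crescent = gaussian * (1.0 - 0.8 * np.exp(-(x - 0.5) ** 2 / 0.1))  # crude crescent

lc_gauss = light_curve(gaussian)
lc_cres = light_curve(crescent)

# The normalized caustic-crossing light curves differ noticeably in shape
shape_diff = np.max(np.abs(lc_gauss / lc_gauss.max() - lc_cres / lc_cres.max()))
```

This shape difference is what, with good enough monitoring data, allows the crescent parameters to be recovered up to the degeneracy noted in the abstract.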
Microlensing as a Possible Probe of Event-Horizon Structure in Quasars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomozeiu, Mihai; Mohammed, Irshad; Rabold, Manuel
In quasars which are lensed by galaxies, the point-like images sometimes show sharp and uncorrelated brightness variations (microlensing). These brightness changes are associated with the innermost region of the quasar passing through a complicated pattern of caustics produced by the stars in the lensing galaxy. In this paper, we study whether the universal properties of optical caustics could enable extraction of shape information about the central engine of quasars. We present a toy model with a crescent-shaped source crossing a fold caustic. The silhouette of a black hole over an accretion disk tends to produce roughly crescent sources. When amore » crescent-shaped source crosses a fold caustic, the resulting light curve is noticeably different from the case of a circular luminosity profile or Gaussian source. With good enough monitoring data, the crescent parameters, apart from one degeneracy, can be recovered.« less
2017-12-08
Photoacoustic effect generated by moving optical sources: Motion in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Wenyu; Diebold, Gerald J.
2016-03-28
Although the photoacoustic effect is typically generated by pulsed or amplitude-modulated optical beams, it is clear from examination of the wave equation for pressure that motion of an optical source in space will result in the production of sound as well. Here, the properties of the photoacoustic effect generated by moving sources in one dimension are investigated. The cases of a moving Gaussian beam, an oscillating delta function source, and an accelerating Gaussian optical source are reported. The salient feature of one-dimensional sources in the linear acoustic limit is that the amplitude of the acoustic wave increases in time without bound.
Report on 3- and 4-point correlation statistics in the COBE DMR anisotropy maps
NASA Technical Reports Server (NTRS)
Hinshaw, Gary (Principal Investigator); Gorski, Krzysztof M.; Banday, Anthony J.; Bennett, Charles L.
1996-01-01
As part of the work performed under NASA contract # NAS5-32648, we have computed the 3-point and 4-point correlation functions of the COBE-DMR 2-year and 4-year anisotropy maps. The motivation for this study was to search for evidence of non-Gaussian statistical fluctuations in the temperature maps: skewness or asymmetry in the case of the 3-point function, kurtosis in the case of the 4-point function. Such behavior would have very significant implications for our understanding of the processes of galaxy formation, because our current models of galaxy formation predict that non-Gaussian features should not be present in the DMR maps. The results of our work showed that the 3-point correlation function is consistent with zero and that the 4-point function is not a very sensitive probe of non-Gaussian behavior in the COBE-DMR data. Our computation and analysis of 3-point correlations in the 2-year DMR maps was published in the Astrophysical Journal Letters, volume 446, page L67, 1995. Our computation and analysis of 3-point correlations in the 4-year DMR maps will be published, together with some additional tests, in the June 10, 1996 issue of the Astrophysical Journal Letters. Copies of both of these papers are attached as an appendix to this report.
NASA Astrophysics Data System (ADS)
Ajitanand, N. N.; Phenix Collaboration
2014-11-01
Two-pion interferometry measurements in d+Au and Au+Au collisions at √(s_NN) = 200 GeV are used to extract and compare the Gaussian source radii R_out, R_side and R_long, which characterize the space-time extent of the emission sources. The comparisons, which are performed as a function of collision centrality and the mean transverse momentum for pion pairs, indicate strikingly similar patterns for the d+Au and Au+Au systems. They also indicate a linear dependence of R_side on the initial transverse geometric size R̄, as well as a smaller freeze-out size for the d+Au system. These patterns point to the important role of final-state re-scattering effects in the reaction dynamics of d+Au collisions.
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.;
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point source contamination using an independent component analysis technique (fastica), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which fastica, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine GBT-WiggleZ cross-correlation measurements similar to those obtained by the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping datasets.
Vector spherical quasi-Gaussian vortex beams
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2014-02-01
Model equations for describing and efficiently computing the radiation profiles of tightly spherically focused higher-order electromagnetic beams of vortex nature are derived stemming from a vectorial analysis with the complex-source-point method. This solution, termed a high-order quasi-Gaussian (qG) vortex beam, exactly satisfies the vector Helmholtz and Maxwell's equations. It is characterized by a nonzero integer degree and order (n,m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and an azimuthal phase dependency in the form of a complex exponential corresponding to a vortex beam. An attractive feature of the high-order solution is the rigorous description of strongly focused (or strongly divergent) vortex wave fields without the need for higher-order corrections or numerically intensive methods. Closed-form expressions and computational results illustrate the analysis and some properties of the high-order qG vortex beams based on the axial and transverse polarization schemes of the vector potentials with emphasis on the beam waist.
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill-equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
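The sparsity-promoting idea described above can be illustrated with a minimal sketch (not the authors' code): iterative soft-thresholding (ISTA) for an L1-penalized least-squares problem, applied to a toy linear inverse problem with two "point sources". The forward operator `K`, the regularization weight `lam`, and all dimensions are illustrative assumptions.

```python
import numpy as np

def ista(K, y, lam, n_iter=2000):
    """Sparse inversion via iterative soft-thresholding (ISTA),
    minimizing 0.5 * ||K x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(K, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = x - K.T @ (K @ x - y) / L       # gradient descent step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# toy scenario: a dense forward operator observing two "point sources"
rng = np.random.default_rng(0)
K = rng.normal(size=(40, 100))              # hypothetical forward operator
x_true = np.zeros(100)
x_true[[10, 57]] = [3.0, -2.0]              # two localized sources
y = K @ x_true                              # noise-free synthetic observations
x_hat = ista(K, y, lam=0.5)                 # sparse estimate
```

A Gaussian-prior (ridge) solution would smear the two spikes across many cells; the L1 penalty concentrates them, mirroring the point-source advantage reported above.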
Three-Point Correlations in the COBE DMR 2 Year Anisotropy Maps
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Banday, A. J.; Bennett, C. L.; Gorski, K. M.; Kogut, A.
1995-01-01
We compute the three-point temperature correlation function of the COBE Differential Microwave Radiometer (DMR) 2 year sky maps to search for evidence of non-Gaussian temperature fluctuations. We detect three-point correlations in our sky with a substantially higher signal-to-noise ratio than from the first-year data. However, the magnitude of the signal is consistent with the level of cosmic variance expected from Gaussian fluctuations, even when the low-order multipole moments, up to l = 9, are filtered from the data. These results do not strongly constrain most existing models of structure formation, but the absence of intrinsic three-point correlations on large angular scales is an important consistency test for such models.
Chialvo, Ariel A.; Vlcek, Lukas
2014-11-01
We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force, and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.; Akiyama, T. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Data sets for corn, soybeans, winter wheat, and spring wheat were used to evaluate the following schemes for crop identification: (1) per point Gaussian maximum likelihood classifier; (2) per point sum of normal densities classifier; (3) per point linear classifier; (4) per point Gaussian maximum likelihood decision tree classifier; and (5) texture-sensitive per field Gaussian maximum likelihood classifier. Test site location and classifier both had significant effects on classification accuracy of small grains; classifiers did not differ significantly in overall accuracy, with the majority of the difference among classifiers being attributed to training method rather than to the classification algorithm applied. The complexity of use and computer costs for the classifiers varied significantly. A linear classification rule which assigns each pixel to the class whose mean is closest in Euclidean distance was the easiest for the analyst and cost the least per classification.
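The linear rule singled out above, assigning each pixel to the class whose mean is closest in Euclidean distance, can be sketched in a few lines. The 4-band spectral responses and class statistics below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

class NearestMeanClassifier:
    """Linear rule: assign each pixel to the class whose
    training-set mean is closest in Euclidean distance."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every pixel to every class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# hypothetical 4-band spectral responses for two crop classes
rng = np.random.default_rng(1)
corn = rng.normal([30, 25, 60, 40], 3.0, size=(50, 4))
wheat = rng.normal([45, 40, 30, 20], 3.0, size=(50, 4))
X = np.vstack([corn, wheat])
y = np.repeat([0, 1], 50)
clf = NearestMeanClassifier().fit(X, y)
acc = float((clf.predict(X) == y).mean())
```

Unlike the Gaussian maximum likelihood classifiers, this rule needs no covariance estimates, which is why it was the cheapest and easiest to use.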
The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts
NASA Astrophysics Data System (ADS)
Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel
2015-01-01
Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating, for each peak on the maps, the likelihood that it is a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point-source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing, at every position, a Gaussian function with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile).
Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in the number density of galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA. The 250, 350, 500 μm, and total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129
Regression Models for the Analysis of Longitudinal Gaussian Data from Multiple Sources
O’Brien, Liam M.; Fitzmaurice, Garrett M.
2006-01-01
We present a regression model for the joint analysis of longitudinal multiple source Gaussian data. Longitudinal multiple source data arise when repeated measurements are taken from two or more sources, and each source provides a measure of the same underlying variable and on the same scale. This type of data generally produces a relatively large number of observations per subject; thus estimation of an unstructured covariance matrix often may not be possible. We consider two methods by which parsimonious models for the covariance can be obtained for longitudinal multiple source data. The methods are illustrated with an example of multiple informant data arising from a longitudinal interventional trial in psychiatry. PMID:15726666
Lensing of the CMB: non-Gaussian aspects.
Zaldarriaga, M
2001-06-01
We compute the small angle limit of the three- and four-point function of the cosmic microwave background (CMB) temperature induced by the gravitational lensing effect by the large-scale structure of the universe. We relate the non-Gaussian aspects presented in this paper with those in our previous studies of the lensing effects. We interpret the statistics proposed in previous work in terms of different configurations of the four-point function and show how they relate to the statistic that maximizes the S/N.
Gaussian process based independent analysis for temporal source separation in fMRI.
Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole
2017-05-15
Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up for analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clear classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals by failing to incorporate their temporal nature. fMRI source signals, biological stimuli and non-stimuli-related artifacts are all smooth over a time-scale compatible with the sampling time (TR). We therefore propose Gaussian process ICA (GPICA), which facilitates temporal dependency by the use of Gaussian process source priors. On two fMRI data sets with different sampling frequency, we show that the GPICA-inferred temporal components and associated spatial maps allow for a more definite interpretation than standard temporal ICA methods. The temporal structures of the sources are controlled by the covariance of the Gaussian process, specified by a kernel function with an interpretable and controllable temporal length scale parameter. We propose a hierarchical model specification, considering both instantaneous and convolutive mixing, and we infer source spatial maps, temporal patterns and temporal length scale parameters by Markov Chain Monte Carlo. A companion implementation made as a plug-in for SPM can be downloaded from https://github.com/dittehald/GPICA. Copyright © 2017 Elsevier Inc. All rights reserved.
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To this end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. That approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; no multi-dimensional numerical integration or excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution. Incorporating only a single evidence at a time accounts for the deficiencies of the linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. The media left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
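The Gaussian process regression machinery invoked above can be sketched compactly. The squared-exponential kernel matches the abstract, but the 1-D test function, noise level, and hyperparameters below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def sq_exp(x1, x2, sigma=1.0, ell=1.0):
    """Squared-exponential covariance kernel on 1-D inputs."""
    return sigma**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

def gp_posterior(x_obs, y_obs, x_new, noise=0.05):
    """Posterior mean and covariance of a zero-mean GP given noisy data."""
    K = sq_exp(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    Ks = sq_exp(x_new, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = sq_exp(x_new, x_new) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# noisy samples of a smooth 1-D profile (a stand-in for a velocity model)
x_obs = np.linspace(0.0, 5.0, 20)
rng = np.random.default_rng(2)
y_obs = np.sin(x_obs) + 0.05 * rng.normal(size=20)
mean, cov = gp_posterior(x_obs, y_obs, np.array([2.5]))
```

The posterior mean plays the role of the most probable model, and the posterior covariance quantifies the uncertainty, exactly as described in the abstract.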
Kota, V K B; Chavda, N D; Sahu, R
2006-04-01
Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate t distribution, the theory extends from the Gaussian regime down to the BW regime, up to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
The statistics of peaks of Gaussian random fields. [cosmological density fluctuations
NASA Technical Reports Server (NTRS)
Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.
1986-01-01
A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
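A discrete 1-D analogue of these peak-density results is easy to check numerically: for an iid Gaussian sequence, the probability that an interior point is a local maximum is exactly 1/3, since each of three exchangeable values is equally likely to be the largest. This toy check is ours, not part of the paper, where the density of peaks of smooth correlated fields instead depends on the spectral moments.

```python
import numpy as np

# For an iid Gaussian sequence, an interior point is a local maximum
# iff it exceeds both neighbours; by exchangeability each of the three
# values is equally likely to be largest, so the peak density is 1/3.
rng = np.random.default_rng(3)
x = rng.normal(size=200_000)
peaks = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
density = peaks.mean()   # Monte Carlo estimate, close to 1/3
```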
Topological transformation of fractional optical vortex beams using computer generated holograms
NASA Astrophysics Data System (ADS)
Maji, Satyajit; Brundavanam, Maruthi M.
2018-04-01
Optical vortex beams with fractional topological charges (TCs) are generated by the diffraction of a Gaussian beam using computer generated holograms embedded with mixed screw-edge dislocations. When the input Gaussian beam has a finite wave-front curvature, the generated fractional vortex beams show distinct topological transformations in comparison to the integer charge optical vortices. The topological transformations at different fractional TCs are investigated through the birth and evolution of the points of phase singularity, the azimuthal momentum transformation, occurrence of critical points in the transverse momentum and the vorticity around the singular points. This study is helpful to achieve better control in optical micro-manipulation applications.
NASA Astrophysics Data System (ADS)
Luo, Chun-Ling; Zhuo, Ling-Qing
2017-01-01
Imaging through atmospheric turbulence is a topic with a long history, and grand challenges still exist in the remote sensing and astronomical observation fields. In this letter, we propose a simple scheme to improve the resolution of imaging through turbulence based on the computational ghost imaging (CGI) and computational ghost diffraction (CGD) setup via laser beam shaping techniques. A unified theory of CGI and CGD through turbulence with a multi-Gaussian shaped incoherent source is developed, and numerical examples are given to show clearly the effects of the system parameters on CGI and CGD. Our results show that the atmospheric effect on the CGI and CGD system is closely related to the propagation distance between the source and the object. In addition, by properly increasing the beam order of the multi-Gaussian source, we can improve the resolution of CGI and CGD through turbulence relative to the commonly used Gaussian source. Therefore our results may find applications in remote sensing and astronomical observation.
Spatial Structure in the Infrared Spectra of Three Evolved Stars
NASA Astrophysics Data System (ADS)
Sloan, G. C.; Tandy, P. C.; Pirger, B. E.; Hodge, T. M.
1993-05-01
We have spatially resolved three evolved sources using GLADYS, a long-slit 10-micron spectrometer, at the Wyoming Infrared Observatory. These observations, made in 1993 March, were the first for GLADYS after a complete replacement of the detector drive electronics, ADCs, and hardware co-adder. We studied each source in a north/south and an east/west slit orientation. For each set of observations, we fit a Gaussian to the spatial profile at each wavelength to create a spatiogram, or plot of the width of the spectrum as a function of wavelength. In both slit orientations, the spatiogram of alpha Orionis is widest at 10 microns, where the contribution from the silicate dust in the circumstellar shell is strongest. The FWHM at 10 microns is 2.0 arcsec, while our point-source comparison has a FWHM of 1.6 arcsec. These results are very similar to those presented for a N/S slit by Grasdalen, Sloan, and LeVan (1992, ApJ, 384, L25). IRC+10216 is also resolved in both slit orientations, having a FWHM of 1.9 arcsec at 11 microns, compared with 1.5 arcsec for a point source. No spectral structure is apparent in the spatiograms, indicating that there is little change in the spectral character of the emission across the source. AFGL 2688 (the Cygnus Egg) is clearly resolved in the N/S slit orientation, where its FWHM at 11 microns is 2.2 arcsec, but its spatiogram in the E/W slit orientation is barely distinguishable from that of a point source.
Pregger, Thomas; Friedrich, Rainer
2009-02-01
Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity, and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information in Europe, based on German industrial data. A bottom-up calculation of effective emission heights, applying equations used in Gaussian dispersion models, shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
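One common bottom-up recipe of the kind described, though not necessarily the authors' exact scheme, combines the physical stack height with the Briggs final plume rise used in ISC-type Gaussian models. All stack parameters below are illustrative assumptions.

```python
import math

def buoyancy_flux(v_s, d, T_s, T_a, g=9.81):
    """Briggs buoyancy flux F [m^4/s^3] for a hot stack plume.
    v_s: exit velocity [m/s], d: stack diameter [m],
    T_s: flue gas temperature [K], T_a: ambient temperature [K]."""
    return g * v_s * d**2 * (T_s - T_a) / (4.0 * T_s)

def effective_height(h_stack, v_s, d, T_s, T_a, u):
    """Effective emission height = stack height + final Briggs plume
    rise (neutral/unstable formulas as used in ISC-type models)."""
    F = buoyancy_flux(v_s, d, T_s, T_a)
    if F >= 55.0:
        dh = 38.71 * F**0.6 / u
    else:
        dh = 21.425 * F**0.75 / u
    return h_stack + dh

# illustrative power-plant stack: 100 m tall, 5 m exit diameter,
# 15 m/s exit velocity, 420 K flue gas, 288 K ambient, 5 m/s wind
h_eff = effective_height(100.0, 15.0, 5.0, 420.0, 288.0, 5.0)
```

For a buoyant stack like this, the plume rise can more than triple the physical stack height, which is why default assumptions matter so much for modelled concentrations.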
Classical electromagnetic fields from quantum sources in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Holliday, Robert; McCarty, Ryan; Peroutka, Balthazar; Tuchin, Kirill
2017-01-01
Electromagnetic fields are generated in high energy nuclear collisions by spectator valence protons. These fields are traditionally computed by integrating the Maxwell equations with point sources. One might expect that such an approach is valid at distances much larger than the proton size and thus such a classical approach should work well for almost the entire interaction region in the case of heavy nuclei. We argue that, in fact, the contrary is true: due to the quantum diffusion of the proton wave function, the classical approximation breaks down at distances of the order of the system size. We compute the electromagnetic field created by a charged particle described initially as a Gaussian wave packet of width 1 fm and evolving in vacuum according to the Klein-Gordon equation. We completely neglect the medium effects. We show that the dynamics, magnitude and even sign of the electromagnetic field created by classical and quantum sources are different.
Simulation of time series by distorted Gaussian processes
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1977-01-01
A distorted stationary Gaussian process can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
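The distortion idea can be sketched as follows: generate a correlated Gaussian series (an AR(1) process here, as an illustrative stand-in for the paper's method) and apply a memoryless monotone transform so the output has a prescribed non-Gaussian marginal, a unit-mean exponential in this example.

```python
import numpy as np
from math import erf, sqrt

def ar1(n, phi=0.9, seed=4):
    """Stationary Gaussian AR(1) series with unit marginal variance."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):
        z[t] = phi * z[t - 1] + sqrt(1.0 - phi**2) * rng.normal()
    return z

def distort_to_exponential(z):
    """Memoryless distortion: Gaussian -> uniform (probability integral
    transform) -> unit-mean exponential marginal; temporal correlation
    is inherited from the Gaussian source series."""
    u = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    return -np.log(1.0 - u)

x = distort_to_exponential(ar1(50_000))
```

Because the distortion is monotone and memoryless, the imitation keeps the rank correlation structure of the Gaussian source while matching the target marginal distribution.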
Photoacoustic Effect Generated from an Expanding Spherical Source
NASA Astrophysics Data System (ADS)
Bai, Wenyu; Diebold, Gerald J.
2018-02-01
Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.
NASA Astrophysics Data System (ADS)
Tarigan, A. P. M.; Suryati, I.; Gusrianti, D.
2018-03-01
The purpose of this study is to model the spatial distribution of transportation-induced carbon monoxide (CO) from a street, i.e. Jl. Singamangaraja in Medan City, using the Gaussian line source method with GIS. The observed traffic volume on Jl. Singamangaraja is 7,591 units/hour in the morning and 7,433 units/hour in the afternoon. The corresponding emission rate is 49,171.7 µg/m.s in the morning and 46,943.1 µg/m.s in the afternoon. Based on the Gaussian line source method, the highest CO concentration is found at the roadside, i.e. 20,340 µg/Nm3 in the morning and 18,340 µg/Nm3 in the afternoon, which are in fair agreement with in situ measurements. Using GIS, the CO spatial distribution can be visualized to delineate the affected area.
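For orientation, a Gaussian line source calculation of the kind used above can be sketched with the crosswind-integrated infinite-line formula; the power-law σ_z parameterisation and its coefficients a and b below are placeholder assumptions, not the study's actual dispersion curves:

```python
import math

def line_source_concentration(q, u, x, z=0.0, H=0.5, a=0.08, b=0.9):
    """Concentration downwind of an infinite crosswind line source
    (Gaussian plume, crosswind-integrated form, with ground reflection).
    q : emission rate per unit length [ug/(m.s)]
    u : wind speed [m/s]; x : downwind distance [m]
    sigma_z = a * x**b is a placeholder stability parameterisation."""
    sigma_z = a * x**b
    coeff = q / (math.sqrt(2.0 * math.pi) * sigma_z * u)
    # image term accounts for total reflection at the ground
    return coeff * (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))

# concentration falls off with distance from the roadside
c_near = line_source_concentration(q=49171.7, u=2.0, x=10.0)
c_far = line_source_concentration(q=49171.7, u=2.0, x=200.0)
```

The roadside maximum reported in the abstract corresponds to this near-source limit.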
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
NASA Astrophysics Data System (ADS)
Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.
2012-03-01
Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect their presence. In this paper, we propose a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to non-uniform heating; the latter affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology combines local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; we use a Gaussian window because thermal behavior is well modeled by smooth Gaussian contours. The Gaussian scale is used to analyze details in the image through multi-resolution analysis, mitigating low contrast and non-uniform heating and avoiding manual selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The proposed methodology performs as well as or better than other dedicated algorithms reported in the state of the art.
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
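A point-by-point Gaussian maximum-likelihood classifier of the kind evaluated above can be sketched as follows; this is a generic illustration (per-class multivariate Gaussians with full covariances estimated from training statistics), with synthetic data standing in for the Image-100 spectral channels:

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Estimate the mean and covariance of each class (training statistics)."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def classify(x, stats):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in stats.items():
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (logdet + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

# two synthetic spectral classes (e.g. wheat vs. non-wheat pixels)
rng = np.random.default_rng(1)
Xa = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
Xb = rng.normal([3.0, 3.0], 0.5, size=(200, 2))
X = np.vstack([Xa, Xb])
y = np.array([0] * 200 + [1] * 200)
stats = fit_gaussian_classes(X, y)
```

Splitting a heterogeneous class into homogeneous spectral subclasses, as the abstract recommends, amounts to fitting several such Gaussians per crop type.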
User's guide for RAM. Volume II. Data preparation and listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D.B.; Novak, J.H.
1978-11-01
The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated: using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
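The Gaussian steady-state point-source model at the core of RAM can be sketched as below. This is a simplified illustration: real RAM dispersion curves depend on stability class and mixing height, whereas the linear σ parameterisation here is a placeholder assumption:

```python
import math

def plume_concentration(Q, u, x, y, z, H, ay=0.22, az=0.12):
    """Steady-state Gaussian plume for a point source with ground
    reflection. Q [g/s], wind speed u [m/s], receptor at (x, y, z) [m],
    effective stack height H [m]. sigma_y = ay*x and sigma_z = az*x are
    placeholder dispersion curves (real models use stability classes)."""
    sy, sz = ay * x, az * x
    crosswind = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-(z - H)**2 / (2 * sz * sz))
                + math.exp(-(z + H)**2 / (2 * sz * sz)))
    return Q / (2 * math.pi * u * sy * sz) * crosswind * vertical

# ground-level centreline concentration 500 m downwind of a 50 m stack
c = plume_concentration(Q=100.0, u=4.0, x=500.0, y=0.0, z=0.0, H=50.0)
```

Hourly concentrations are obtained by evaluating this with each hour's wind vector and stability, then averaging over the desired period.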
Effect of central obscuration on the LDR point spread function
NASA Technical Reports Server (NTRS)
Vanzyl, Jakob J.
1988-01-01
It is well known that Gaussian apodization of an aperture reduces the sidelobe levels of its point spread function (PSF). In the limit where the standard deviation of the Gaussian function is much smaller than the diameter of the aperture, the sidelobes completely disappear. However, when Gaussian apodization is applied to the Large Deployable Reflector (LDR) array consisting of 84 hexagonal panels, it is found that the sidelobe level only decreases by about 2.5 dB. The reason for this is explained. The PSF is shown for an array consisting of 91 uniformly illuminated hexagonal apertures; this array is identical to the LDR array, except that the central hole in the LDR array is filled with seven additional panels. For comparison, the PSF of the uniformly illuminated LDR array is shown. Notice that it is already evident that the sidelobe structure of the LDR array is different from that of the full array of 91 panels. The PSFs of the same two arrays are shown, but with the illumination apodized with a Gaussian function to have 20 dB tapering at the edges of the arrays. While the sidelobes of the full array have decreased dramatically, those of the LDR array changed in structure, but stayed at almost the same level. This result is not completely surprising, since the Gaussian apodization tends to emphasize the contributions from the central portion of the array; exactly where the hole in the LDR array is located. The two most important conclusions are: the size of the central hole should be minimized, and a simple Gaussian apodization scheme to suppress the sidelobes in the PSF should not be used. A more suitable apodization scheme would be a Gaussian annular ring.
Cyber-Physical Trade-Offs in Distributed Detection Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.
2010-01-01
We consider a network of sensors that measure the scalar intensity due to the background, or to a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. At increased communication and computation cost, we show that a more complex fusion algorithm based on raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
Design and implementation of an optical Gaussian noise generator
NASA Astrophysics Data System (ADS)
Zão, Leonardo; Loss, Gustavo; Coelho, Rosângela
2009-08-01
A design of a fast and accurate optical Gaussian noise generator is proposed and demonstrated. The noise sample generation is based on the Box-Muller algorithm. The functions were implemented on a high-speed Altera Stratix EP1S25 field-programmable gate array (FPGA) development kit, enabling the generation of 150 million 16-bit noise samples per second. The Gaussian noise generator required only 7.4% of the FPGA logic elements, 1.2% of the RAM memory, 0.04% of the ROM memory, and a laser source. The optical pulses were generated by a laser source externally modulated by the data bit samples using the frequency-shift keying technique. The accuracy of the noise samples was evaluated for different sequence sizes and confidence intervals. The noise sample pattern was validated by the Bhattacharyya distance (Bd) and the autocorrelation function. The results show that the proposed optical Gaussian noise generator is very promising for evaluating the performance of optical communication channels with very low bit-error-rate values.
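The Box-Muller algorithm underlying the generator maps pairs of uniform samples to Gaussian samples. The FPGA implementation is fixed-point; this floating-point sketch is for illustration only:

```python
import numpy as np

def box_muller(n, seed=0):
    """Generate n standard Gaussian samples from uniform samples via
    the Box-Muller transform."""
    rng = np.random.default_rng(seed)
    u1 = rng.random(n)
    u2 = rng.random(n)
    # tiny offset guards against log(0) when u1 == 0
    r = np.sqrt(-2.0 * np.log(u1 + 1e-300))
    theta = 2.0 * np.pi * u2
    return r * np.cos(theta)  # r*sin(theta) gives a second independent sample

z = box_muller(100000)
```

In hardware, the log, square-root and cosine evaluations are what consume the ROM tables mentioned in the abstract.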
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. 
We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
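A Box-Cox Gaussianization of a skewed one-point distribution can be sketched as follows. This is a self-contained illustration with a mock lognormal field standing in for the convergence, and a grid-search maximum-likelihood estimate of the Box-Cox parameter as a simplified stand-in for the paper's optimization:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam, or log(x) when lam == 0."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox parameter under Gaussianity."""
    y = boxcox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

rng = np.random.default_rng(0)
kappa = rng.lognormal(sigma=0.5, size=20000)  # skewed mock "convergence"

# grid search for the lambda that best Gaussianizes the one-point PDF;
# for lognormal data the optimum is the log transform (lambda near 0)
grid = np.linspace(-1.0, 1.0, 201)
lam_opt = grid[np.argmax([boxcox_loglik(kappa, lm) for lm in grid])]
```

The offset logarithmic transformation discussed in the abstract is the λ = 0 member of this family with a shifted argument.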
Laser Beam and Resonator Calculations on Desktop Computers.
NASA Astrophysics Data System (ADS)
Doumont, Jean-Luc
There is a continuing interest in the design and calculation of laser resonators and optical beam propagation. In particular, recently, interest has increased in developing concepts such as one-sided unstable resonators, supergaussian reflectivity profiles, diode laser modes, beam quality concepts, mode competition, excess noise factors, and nonlinear Kerr lenses. To meet these calculation needs, I developed a general-purpose software package named PARAXIA™, aimed at providing optical scientists and engineers with a set of powerful design and analysis tools that provide rapid and accurate results and are extremely easy to use. PARAXIA can handle separable paraxial optical systems in cartesian or cylindrical coordinates, including complex-valued and misaligned ray matrices, with full diffraction effects between apertures. It includes the following programs: ABCD provides complex-valued ray-matrix and gaussian-mode analyses for arbitrary paraxial resonators and optical systems, including astigmatism and misalignment in each element. This program required that I generalize the theory of gaussian beam propagation to the case of an off-axis gaussian beam propagating through a misaligned, complex-valued ray matrix. FRESNEL uses FFT and FHT methods to propagate an arbitrary wavefront through an arbitrary paraxial optical system using Huygens' integral in rectangular or radial coordinates. The wavefront can be multiplied by an arbitrary mirror profile and/or saturable gain sheet on each successive propagation through the system. I used FRESNEL to design a one-sided negative-branch unstable resonator for a free-electron laser, and to show how a variable internal aperture influences the mode competition and beam quality in a stable cavity. VSOURCE implements the virtual source analysis to calculate eigenvalues and eigenmodes for unstable resonators with both circular and rectangular hard-edged mirrors (including misaligned rectangular systems).
I used VSOURCE to show the validity of the virtual source approach (by comparing its results to those of FRESNEL), to study the properties of hard-edged unstable resonators, and to obtain numerical values of the excess noise factors in such resonators. VRM carries out mode calculations for gaussian variable-reflectivity-mirror lasers. It implements complicated analytical results that I derived to point out the large numerical value of the excess noise factor in geometrically unstable resonators.
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot
2014-03-01
The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as `committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
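The committee step can be sketched independently of any network training: each MDN outputs Gaussian mixture parameters, and the committee density is their average. The member parameters below are hypothetical placeholders, not trained values:

```python
import numpy as np

def mixture_pdf(x, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture density (an MDN's output layer
    parameterises exactly these weights, means and widths)."""
    comp = np.exp(-0.5 * ((x[:, None] - means) / sigmas)**2)
    comp /= np.sqrt(2.0 * np.pi) * sigmas
    return comp @ weights

# hypothetical outputs of three MDNs for one source parameter (e.g. depth, km)
members = [
    (np.array([0.7, 0.3]), np.array([10.0, 14.0]), np.array([1.0, 2.0])),
    (np.array([0.5, 0.5]), np.array([10.5, 13.0]), np.array([1.2, 1.5])),
    (np.array([0.8, 0.2]), np.array([9.5, 15.0]), np.array([0.9, 2.5])),
]
x = np.linspace(0.0, 25.0, 2501)
# committee posterior: equal-weight average of the member densities
committee = np.mean([mixture_pdf(x, w, m, s) for w, m, s in members], axis=0)
```

Averaging proper densities yields another proper density, which is why the committee remains a valid approximation to p(m|d).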
Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1999-01-01
Profiles of optically thick, non-Gaussian emission lines convoluted with Gaussian instrumental profiles are constructed, and are deconvoluted on the usual Gaussian basis to examine the resulting departure from accuracy in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r ≡ Doppler width/instrumental width and on the optical thickness τ₀. An approximating expression is obtained for this factor, applicable in the range of at least 0 ≤ τ₀ ≤ 10, which can provide estimates of the true linewidth and optical thickness.
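For the purely Gaussian (optically thin) case, the usual deconvolution, widths subtracting in quadrature, is exact; this is the baseline that the correction factor modifies for optically thick lines. A numerical check of the Gaussian baseline (illustrative only):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled profile (grid resolution)."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]

x = np.linspace(-50.0, 50.0, 8001)
sigma_true, sigma_instr = 3.0, 2.0
line = np.exp(-x**2 / (2 * sigma_true**2))    # intrinsic Gaussian line
instr = np.exp(-x**2 / (2 * sigma_instr**2))  # Gaussian instrumental profile
observed = np.convolve(line, instr, mode="same")

# Gaussian deconvolution: FWHM_true^2 = FWHM_obs^2 - FWHM_instr^2;
# for optically thick (flat-topped) lines this underestimates the width
w_obs = fwhm(x, observed)
w_instr = fwhm(x, instr)
w_deconv = np.sqrt(w_obs**2 - w_instr**2)
```

Replacing `line` with a saturated (flat-topped) profile would reproduce the underestimate quantified by the paper's correction factor.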
Quantifying the non-Gaussianity in the EoR 21-cm signal through bispectrum
NASA Astrophysics Data System (ADS)
Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh; Watkinson, Catherine A.; Bharadwaj, Somnath; Mellema, Garrelt
2018-05-01
The epoch of reionization (EoR) 21-cm signal is expected to be highly non-Gaussian in nature, and this non-Gaussianity is also expected to evolve with the progressing state of reionization. The signal will therefore be correlated between different Fourier modes (k), a correlation that the power spectrum cannot capture. We use a higher order estimator, the bispectrum, to quantify this evolving non-Gaussianity. We study the bispectrum using an ensemble of simulated 21-cm signals and a large variety of k triangles. We observe two competing sources driving the non-Gaussianity in the signal: fluctuations in the neutral fraction (x_HI) field and fluctuations in the matter density field. We find that the non-Gaussian contribution from these two sources varies, depending on the stage of reionization and on which k modes are being studied. We show that the sign of the bispectrum works as a unique marker to identify which of these two components is driving the non-Gaussianity. We propose that the sign change in the bispectrum, when plotted as a function of the triangle configuration cos θ at a certain stage of the EoR, can be used as a confirmative test for the detection of the 21-cm signal. We also propose a new consolidated way to visualize the signal evolution (with evolving x̄_HI or redshift) through the trajectories of the signal in the plane of power spectrum and equilateral bispectrum, i.e. P(k)-B(k, k, k) space.
Effect of polarization on the evolution of electromagnetic hollow Gaussian Schell-model beam
NASA Astrophysics Data System (ADS)
Long, Xuewen; Lu, Keqing; Zhang, Yuhong; Guo, Jianbang; Li, Kehao
2011-02-01
Based on the theory of coherence, an analytical propagation formula for partially polarized and partially coherent hollow Gaussian Schell-model beams (HGSMBs) passing through a paraxial optical system is derived. Furthermore, we show that the degree of polarization of the source may affect the evolution of HGSMBs and that a tunable dark region may exist. For the two special cases of fully coherent beams and of partially coherent beams with δxx = δyy, the normalized intensity distributions are independent of the polarization of the source.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
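The classic tracer release baseline against which the combined approach is compared can be sketched in a few lines: with collocated sources and identical transport, the target emission rate is the known tracer release rate scaled by the ratio of plume-integrated concentrations. The synthetic transects below are illustrative, not the campaign's data:

```python
import numpy as np

def tracer_release_estimate(c_target, c_tracer, q_tracer):
    """Classic tracer release method: assuming collocated sources and
    identical transport, scale the known tracer release rate by the
    ratio of plume-integrated concentrations."""
    return q_tracer * c_target.sum() / c_tracer.sum()

# synthetic crosswind transects (Gaussian plume cross-sections)
y = np.linspace(-100.0, 100.0, 401)
tracer_plume = np.exp(-y**2 / (2 * 20.0**2))         # acetylene, q = 0.5 g/s
methane_plume = 3.0 * np.exp(-y**2 / (2 * 20.0**2))  # same transport, 3x rate
q_est = tracer_release_estimate(methane_plume, tracer_plume, q_tracer=0.5)
```

When the sources are not collocated, the transport no longer cancels in the ratio, which is exactly the regime where the statistical inversion above outperforms this baseline.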
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
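Local non-Gaussian fields of the kind analyzed above can be mocked by transforming Gaussian samples; the following one-point sketch (illustrative only, and concerning the transformed fields rather than the derived velocity distributions) confirms the positive excess kurtosis of the chi-squared and lognormal transformations:

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal(200000)  # underlying Gaussian field samples

# local nonlinear transformations of the Gaussian field (means removed)
chi2_field = g**2 - 1.0
lognormal_field = np.exp(g) - np.exp(0.5)

def kurtosis(x):
    """Excess kurtosis; zero for a Gaussian."""
    z = (x - x.mean()) / x.std()
    return float((z**4).mean() - 3.0)
```

Both transformed fields are strongly leptokurtic, while the underlying Gaussian samples have excess kurtosis consistent with zero.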
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
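The core mechanism, a Gaussian process posterior whose variance quantifies interpolation uncertainty, can be sketched generically; this is a standard GP regression illustration with a squared-exponential kernel, not the paper's registration-specific model:

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """Gaussian-process interpolation with a squared-exponential kernel.
    Returns the posterior mean and variance at the test points; the
    variance is the interpolation uncertainty."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)
# one test point between samples, one far from all samples
mean, var = gp_interpolate(x_train, y_train, np.array([1.5, 10.0]))
```

Resampled points close to the base grid get low posterior variance, while points far from it revert to the prior variance, which is exactly the spatially varying uncertainty the similarity measure integrates over.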
Anisotropic non-gaussianity from rotational symmetry breaking excited initial states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashoorioon, Amjad; Casadio, Roberto; Dipartimento di Fisica e Astronomia, Alma Mater Università di Bologna, via Irnerio 46, 40126 Bologna
2016-12-01
If the initial quantum state of the primordial perturbations broke rotational invariance, that would be seen as a statistical anisotropy in the angular correlations of the cosmic microwave background radiation (CMBR) temperature fluctuations. This can be described by a general parameterisation of the initial conditions that takes into account the possible direction-dependence of both the amplitude and the phase of particle creation during inflation. The leading effect in the CMBR two-point function is typically a quadrupole modulation, whose coefficient is analytically constrained here to be |B|≲0.06. The CMBR three-point function then acquires enhanced non-gaussianity, especially for the local configurations. In the large occupation number limit, a distinctive prediction is a modulation of the non-gaussianity around a mean value depending on the angle that short and long wavelength modes make with the preferred direction. The maximal variations with respect to the mean value occur for the configurations which are coplanar with the preferred direction and the amplitude of the non-gaussianity increases (decreases) for the short wavelength modes aligned with (perpendicular to) the preferred direction. For a high scale model of inflation with maximally pumped up isotropic occupation and ϵ≃0.01 the difference between these two configurations is about 0.27, which could be detectable in the future. For purely anisotropic particle creation, the non-Gaussianity can be larger and its anisotropic feature very sharp. The non-gaussianity can then reach f_NL ∼ 30 in the preferred direction while disappearing from the correlations in the orthogonal plane.
New more accurate calculations of the ground state potential energy surface of H₃⁺.
Pavanello, Michele; Tung, Wei-Cheng; Leonarski, Filip; Adamowicz, Ludwik
2009-02-21
Explicitly correlated Gaussian functions with floating centers have been employed to recalculate the ground state potential energy surface (PES) of the H₃⁺ ion with much higher accuracy than previously achieved. The nonlinear parameters of the Gaussians (i.e., the exponents and the centers) have been variationally optimized with a procedure employing the analytical gradient of the energy with respect to these parameters. The basis sets for calculating new PES points were guessed from the points already calculated. This allowed us to considerably speed up the calculations and achieve very high accuracy of the results.
Fourier plane modeling of the jet in the galaxy M81
NASA Astrophysics Data System (ADS)
Ramessur, Arvind; Bietenholz, Michael F.; Leeuw, Lerothodi L.; Bartel, Norbert
2015-03-01
The nearby spiral galaxy M81 has a low-luminosity Active Galactic Nucleus in its center with a core and a one-sided curved jet, dubbed M81*, that is barely resolved with VLBI. To derive basic parameters such as the length of the jet, its orientation and curvature, the usual method of model-fitting with point sources and elliptical Gaussians may not always be the most appropriate one. We are developing Fourier-plane models for such sources, in particular an asymmetric triangle model to fit the extensive set of VLBI data of M81* in the u-v plane. This method may have an advantage over conventional ones in extracting information close to the resolution limit to provide us with a more comprehensive picture of the structure and evolution of the jet. We report on preliminary results.
Addendum to foundations of multidimensional wave field signal theory: Gaussian source function
NASA Astrophysics Data System (ADS)
Baddour, Natalie
2018-02-01
Many important physical phenomena are described by wave or diffusion-wave type equations. Recent work has shown that a transform domain signal description from linear system theory can give meaningful insight to multi-dimensional wave fields. In N. Baddour [AIP Adv. 1, 022120 (2011)], certain results were derived that are mathematically useful for the inversion of multi-dimensional Fourier transforms, but more importantly provide useful insight into how source functions are related to the resulting wave field. In this short addendum to that work, it is shown that these results can be applied with a Gaussian source function, which is often useful for modelling various physical phenomena.
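The analytical convenience of a Gaussian source stems from its Fourier transform being another Gaussian, which keeps the transform-domain wave field expressions closed-form. A numerical verification sketch (the grid and the unitary-in-x convention below are illustrative choices):

```python
import numpy as np

# A unit-area Gaussian of width a has continuous Fourier transform
# exp(-a^2 k^2 / 2) under the convention F(k) = integral f(x) e^{-ikx} dx.
a = 1.5
x = np.linspace(-40.0, 40.0, 4096, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * a**2)) / (a * np.sqrt(2.0 * np.pi))

k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
# ifftshift aligns x = 0 with index 0 so the DFT approximates the FT
F = np.fft.fft(np.fft.ifftshift(f)) * dx
analytic = np.exp(-a**2 * k**2 / 2.0)
err = float(np.max(np.abs(F.real - analytic)))
```

The narrower the source in space, the broader its spectrum, which is the trade-off that governs how source width shapes the resulting wave field.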
NON-GAUSSIANITIES IN THE LOCAL CURVATURE OF THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.
Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ≈1% level when testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level, and the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical datasets. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in the empirical datasets and to determine judiciously whether there is a need for data trimming and at which points it should be done.
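For readers unfamiliar with the distribution itself, a minimal sketch in plain Python (an illustration, not the ExGUtils API; all parameter values are invented) shows the ex-Gaussian density and a method-of-moments fit exploiting the facts that the mean is μ + τ, the variance is σ² + τ², and the third central moment is 2τ³:

```python
import math
import random

def exgauss_pdf(x, mu, sigma, tau):
    """ex-Gaussian density: Normal(mu, sigma) convolved with Exp(mean tau)."""
    z = (x - mu) / sigma - sigma / tau
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (1.0 / tau) * math.exp((mu - x) / tau + sigma ** 2 / (2 * tau ** 2)) * cdf

def fit_moments(sample):
    """Method-of-moments fit: m3 = 2*tau^3, var = sigma^2 + tau^2, mean = mu + tau."""
    n = len(sample)
    m = sum(sample) / n
    m2 = sum((x - m) ** 2 for x in sample) / n
    m3 = sum((x - m) ** 3 for x in sample) / n
    tau = max(m3 / 2.0, 1e-12) ** (1.0 / 3.0)
    sigma = math.sqrt(max(m2 - tau ** 2, 1e-12))
    return m - tau, sigma, tau

# synthetic "reaction times" (ms): Gaussian part + exponential tail
random.seed(42)
data = [random.gauss(300.0, 50.0) + random.expovariate(1.0 / 100.0)
        for _ in range(200_000)]
mu_hat, sigma_hat, tau_hat = fit_moments(data)
```

A maximum-likelihood fit (as discussed in the abstract) would instead maximize the summed log of `exgauss_pdf` over the sample; the moment fit above is just the cheapest starting point.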
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both the spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with the small-scale ionospheric electric field variability than Gaussian models do.
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2018-05-01
We examine mode coupling in vortex beams. Mode coupling, also known as crosstalk, takes place due to the turbulent characteristics of the atmospheric communication medium. In this way, the transmitted intrinsic mode of the vortex beam leaks power to other extrinsic modes, preventing the correct detection of the transmitted symbol, which is usually encoded in the mode index or the orbital angular momentum state of the vortex beam. Here we investigate the normalized power mode coupling ratios of several types of vortex beams, namely, the Gaussian vortex beam, Bessel Gaussian beam, hypergeometric Gaussian beam and Laguerre Gaussian beam. It is found that smaller mode numbers lead to less mode coupling. The same is partially observed for increasing source sizes. Comparing the vortex beams amongst themselves, the hypergeometric Gaussian beam retains the most power in the intrinsic mode during propagation, but only at the lowest mode index of unity. At higher mode indices this advantage passes to the Gaussian vortex beam.
Activation rates for nonlinear stochastic flows driven by non-Gaussian noise
NASA Astrophysics Data System (ADS)
van den Broeck, C.; Hänggi, P.
1984-11-01
Activation rates are calculated for stochastic bistable flows driven by asymmetric dichotomic Markov noise (a two-state Markov process). This noise contains as limits both a particular type of non-Gaussian white shot noise and white Gaussian noise. Apart from investigating the role of colored noise on the escape rates, one can thus also study the influence of the non-Gaussian nature of the noise on these rates. The rate for white shot noise differs in leading order (Arrhenius factor) from the corresponding rate for white Gaussian noise of equal strength. In evaluating the rates we demonstrate the advantage of using transport theory over a mean first-passage time approach for cases with generally non-white and non-Gaussian noise sources. For white shot noise with exponentially distributed weights we succeed in evaluating the mean first-passage time of the corresponding integro-differential master-equation dynamics. The rate is shown to coincide in the weak noise limit with the inverse mean first-passage time.
Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367
NASA Technical Reports Server (NTRS)
Grebenev, S. A.; Forman, W.; Jones, C.; Murray, S.
1995-01-01
We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the source extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. Accounting for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. Therefore, we assumed that each source could be described as an isotropic two-dimensional Gaussian and used the wavelet amplitudes, at different scales, to determine the equivalent Gaussian Full Width Half-Maximum (FWHM) (and its uncertainty) appropriate for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 arcmin), 16 of which are apparently associated with cluster galaxies and two with serendipitous background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image. We detect 28 sources in the HRI image, of which nine are extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al.
Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities lying in the range (3 - 30) x 10(exp 40) ergs/s and gas masses of approximately (1 - 30) x 10(exp 9) solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain. The luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.
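The scale-based size estimate described above can be illustrated in one dimension: with an L2-normalized Mexican-hat (Ricker) wavelet, the response to a Gaussian source of width σ_s peaks analytically at scale a = √5·σ_s, so the best-responding scale recovers the source FWHM. This is a hedged 1-D sketch of the principle, not the authors' 2-D pipeline; all numbers are illustrative:

```python
import math

def mexican_hat(x, a):
    """Mexican-hat wavelet of scale a, up to an overall constant."""
    return (1.0 - (x / a) ** 2) * math.exp(-x * x / (2 * a * a))

def response(signal, a, lo=-40.0, hi=40.0, n=4000):
    """L2-normalized wavelet coefficient at the origin (trapezoid rule)."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * signal(x) * mexican_hat(x, a)
    return s * h / math.sqrt(a)

sigma_src = 2.0
blob = lambda x: math.exp(-x * x / (2 * sigma_src ** 2))   # Gaussian "source"
scales = [1.0 + 0.1 * k for k in range(90)]                # trial scales 1.0 .. 9.9
best = max(scales, key=lambda a: response(blob, a))
# analytically, the response maximum sits at a = sqrt(5) * sigma_src ~ 4.47,
# so the source FWHM is recovered as 2.3548 * best / sqrt(5)
fwhm_est = 2.3548 * best / math.sqrt(5.0)
```

The closed-form check: the response at scale a is proportional to a^(5/2)/(σ_s² + a²)^(3/2), which is maximized at a = √5·σ_s.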
Renyi entropy measures of heart rate Gaussianity.
Lake, Douglas E
2006-01-01
Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
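The closed form H_q = ln(σ√(2π)) + ln q / (2(q−1)) for a Gaussian, and the use of quadratic (q = 2) Renyi entropy as a Gaussianity gauge, can be checked numerically. This is a sketch, not the paper's rate estimator; the Laplace comparison (same variance, lower quadratic entropy) is illustrative:

```python
import math

def renyi_q_gaussian(sigma, q):
    """Closed-form differential Renyi entropy of N(0, sigma^2) for order q."""
    return math.log(sigma * math.sqrt(2 * math.pi)) + math.log(q) / (2 * (q - 1))

def renyi_q_numeric(pdf, q, lo=-40.0, hi=40.0, n=100_000):
    """H_q = ln( integral of pdf^q ) / (1 - q), via the trapezoid rule."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        s += w * pdf(lo + i * h) ** q
    return math.log(s * h) / (1.0 - q)

gauss = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2 * math.pi)
b = 1.0 / math.sqrt(2.0)                       # Laplace scale giving unit variance
laplace = lambda x: math.exp(-abs(x) / b) / (2 * b)

h2_gauss = renyi_q_numeric(gauss, 2.0)         # should equal 0.5*ln(4*pi)
h2_laplace = renyi_q_numeric(laplace, 2.0)     # closed form: ln(4b) = 1.5*ln(2)
```

At equal variance the Gaussian has the larger quadratic entropy, so the gap H2(Gaussian) − H2(data) is a simple non-negative departure-from-Gaussianity score in this sketch.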
NASA Astrophysics Data System (ADS)
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; Lemon, Cameron A.; Auger, Matthew W.; Banerji, Manda; Hung, Johnathan M.; Koposov, Sergey E.; Lidman, Christopher E.; Reed, Sophie L.; Allam, Sahar; Benoit-Lévy, Aurélien; Bertin, Emmanuel; Brooks, David; Buckley-Geer, Elizabeth; Carnero Rosell, Aurelio; Carrasco Kind, Matias; Carretero, Jorge; Cunha, Carlos E.; da Costa, Luiz N.; Desai, Shantanu; Diehl, H. Thomas; Dietrich, Jörg P.; Evrard, August E.; Finley, David A.; Flaugher, Brenna; Fosalba, Pablo; Frieman, Josh; Gerdes, David W.; Goldstein, Daniel A.; Gruen, Daniel; Gruendl, Robert A.; Gutierrez, Gaston; Honscheid, Klaus; James, David J.; Kuehn, Kyler; Kuropatkin, Nikolay; Lima, Marcos; Lin, Huan; Maia, Marcio A. G.; Marshall, Jennifer L.; Martini, Paul; Melchior, Peter; Miquel, Ramon; Ogando, Ricardo; Plazas Malagón, Andrés; Reil, Kevin; Romer, Kathy; Sanchez, Eusebio; Santiago, Basilio; Scarpine, Vic; Sevilla-Noarbe, Ignacio; Soares-Santos, Marcelle; Sobreira, Flavia; Suchyta, Eric; Tarle, Gregory; Thomas, Daniel; Tucker, Douglas L.; Walker, Alistair R.
2017-03-01
We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift zs = 2.74 and image separation of 2.9 arcsec lensed by a foreground zl = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with IAB = 18.61 and IAB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find the Einstein radius θE ˜ 1.47 arcsec, enclosed mass Menc ˜ 4 × 1011 M⊙ and a time delay of ˜52 d. The relatively wide separation, month scale time delay duration and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
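The generative classification idea behind the GMM selection can be sketched with a single Gaussian component per class (the paper uses multi-component GMMs fitted to real DES/VHS/WISE colours; the class loci, variances and prior weights below are invented for illustration):

```python
import math

def gauss2_logpdf(x, y, mu, var):
    """Log-density of an axis-aligned 2-D Gaussian (diagonal covariance)."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (p - m) ** 2 / (2 * v)
               for p, m, v in zip((x, y), mu, var))

# hypothetical colour-space classes: 'quasar' and 'star' loci (invented numbers)
classes = {
    "quasar": {"mu": (0.2, 0.6), "var": (0.04, 0.09), "weight": 0.1},
    "star":   {"mu": (1.0, 0.1), "var": (0.09, 0.04), "weight": 0.9},
}

def posterior(x, y):
    """Bayes posterior over classes for one object's two colours."""
    logs = {k: math.log(c["weight"]) + gauss2_logpdf(x, y, c["mu"], c["var"])
            for k, c in classes.items()}
    mx = max(logs.values())
    norm = sum(math.exp(v - mx) for v in logs.values())
    return {k: math.exp(v - mx) / norm for k, v in logs.items()}

p = posterior(0.25, 0.55)        # an object lying near the invented quasar locus
```

A real pipeline would replace each class density with a fitted mixture of several Gaussians and rank candidates by the quasar posterior.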
Optimization of the transition path of the hardening head using genetic algorithms
NASA Astrophysics Data System (ADS)
Wróbel, Joanna; Kulawik, Adam
2016-06-01
An automated method for choosing the transition path of the hardening head in the heat treatment of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of effective stresses and the yield point, depending on the phase composition. The path of the hardening tool, and also the area of the heat-affected zone, is determined on the basis of the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena for the hardening process is implemented in order to verify the presented method. The finite element method (FEM) is used for solving the heat transfer equation and obtaining the required temperature fields. The moving heat source is modeled with a Gaussian distribution, and water cooling is also included. A macroscopic model based on the analysis of the CCT and CHT diagrams of the medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used for solving the equilibrium equations, yielding the stress field. The thermal and structural strains are taken into account in the constitutive relations.
Development and application of a reactive plume-in-grid model: evaluation over Greater Paris
NASA Astrophysics Data System (ADS)
Korsakissok, I.; Mallet, V.
2010-09-01
Emissions from major point sources are badly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a bad representation of the vertical diffusion as well as an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations on measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. 
The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
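The Gaussian puff kernel at the core of such subgrid treatments can be sketched as follows. This is a toy illustration with ground reflection; the σ growth laws, emission rate, stack height and receptor location are all assumed values, not Polyphemus code:

```python
import math

def puff_concentration(q_mass, dx, dy, z, h_eff, sx, sy, sz):
    """Contribution of one Gaussian puff (with ground reflection) at a receptor."""
    norm = q_mass / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    horiz = math.exp(-dx * dx / (2 * sx * sx)) * math.exp(-dy * dy / (2 * sy * sy))
    vert = (math.exp(-(z - h_eff) ** 2 / (2 * sz * sz))
            + math.exp(-(z + h_eff) ** 2 / (2 * sz * sz)))   # image source at -h
    return norm * horiz * vert

# illustrative run: puffs released every 60 s, advected by a 5 m/s wind,
# with power-law sigma growth in travel time (assumed coefficients)
u, dt, release_rate = 5.0, 60.0, 2.0          # m/s, s, g/s
receptor = (2000.0, 50.0, 0.0)                # x, y, z in metres
total = 0.0
for k in range(1, 61):                        # puffs aged 1 to 60 minutes
    t = k * dt
    x_puff = u * t                            # straight-line advection
    sx = sy = 0.08 * t ** 0.9                 # hypothetical dispersion laws
    sz = 0.06 * t ** 0.85
    total += puff_concentration(release_rate * dt,
                                receptor[0] - x_puff, receptor[1],
                                receptor[2], 50.0, sx, sy, sz)
```

Summing this kernel over all live puffs at every time step is what replaces the instantly diluted Eulerian grid-cell emission near the stack.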
An application of information theory to stochastic classical gravitational fields
NASA Astrophysics Data System (ADS)
Angulo, J.; Angulo, J. C.; Angulo, J. M.
2018-06-01
The objective of this study lies in the incorporation of concepts developed in information theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and the appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that information theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.
Covariances and spectra of the kinematics and dynamics of nonlinear waves
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.
1985-01-01
Using the Stokes waves as a model of nonlinear waves and considering the linear component as a narrow-band Gaussian process, the covariances and spectra of velocity and acceleration components and pressure for points in the vicinity of still water level were derived taking into consideration the effects of free surface fluctuations. The results are compared with those obtained earlier using linear Gaussian waves.
NASA Astrophysics Data System (ADS)
Ueno, Tetsuro; Hino, Hideitsu; Hashimoto, Ai; Takeichi, Yasuo; Sawada, Masahiro; Ono, Kanta
2018-01-01
Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency. We examined X-ray magnetic circular dichroism (XMCD) spectroscopy for the applicability of a machine learning technique to spectroscopy. An XMCD spectrum was predicted by Gaussian process modelling with learning of an experimental spectrum using a limited number of observed data points. Adaptive sampling of data points with maximum variance of the predicted spectrum successfully reduced the total data points for the evaluation of magnetic moments while providing the required accuracy. The present method reduces the time and cost for XMCD spectroscopy and has potential applicability to various spectroscopies.
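The adaptive max-variance loop can be sketched with a small pure-Python Gaussian-process regressor. This is an illustration of the idea, not the authors' code; the stand-in "spectrum", RBF length-scale, jitter and grid are all invented:

```python
import math

def rbf(a, b, ell=0.05):
    return math.exp(-(a - b) ** 2 / (2 * ell * ell))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, grid, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP with RBF kernel."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    kinv = [solve(K, [1.0 if i == r else 0.0 for i in range(n)]) for r in range(n)]
    means, variances = [], []
    for g in grid:
        ks = [rbf(g, x) for x in xs]
        means.append(sum(k * a for k, a in zip(ks, alpha)))
        quad = sum(ks[i] * kinv[i][j] * ks[j] for i in range(n) for j in range(n))
        variances.append(max(1.0 - quad, 0.0))
    return means, variances

# stand-in "spectrum": two Gaussian peaks (invented; not real XMCD data)
spectrum = lambda e: (math.exp(-(e - 0.3) ** 2 / 0.005)
                      - 0.6 * math.exp(-(e - 0.7) ** 2 / 0.004))
grid = [i / 200.0 for i in range(201)]
xs = [0.0, 0.5, 1.0]                 # three initial measurements
ys = [spectrum(x) for x in xs]
for _ in range(20):                  # measure next where the model is least certain
    _, var = gp_posterior(xs, ys, grid)
    nxt = grid[max(range(len(grid)), key=lambda i: var[i])]
    xs.append(nxt)
    ys.append(spectrum(nxt))
mean, _ = gp_posterior(xs, ys, grid)
mse = sum((m - spectrum(g)) ** 2 for m, g in zip(mean, grid)) / len(grid)
```

Each loop iteration plays the role of one beamline measurement, so the stopping criterion (here a fixed budget) is where the claimed time savings come from.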
Dynamical Crossovers in Prethermal Critical States.
Chiocchetta, Alessio; Gambassi, Andrea; Diehl, Sebastian; Marino, Jamir
2017-03-31
We study the prethermal dynamics of an interacting quantum field theory with an N-component order parameter and O(N) symmetry, suddenly quenched in the vicinity of a dynamical critical point. Depending on the initial conditions, the evolution of the order parameter, and of the response and correlation functions, can exhibit a temporal crossover between universal dynamical scaling regimes governed, respectively, by a quantum and a classical prethermal fixed point, as well as a crossover from a Gaussian to a non-Gaussian prethermal dynamical scaling. Together with a recent experiment, this suggests that quenches may be used in order to explore the rich variety of dynamical critical points occurring in the nonequilibrium dynamics of a quantum many-body system. We illustrate this fact by using a combination of renormalization group techniques and a nonperturbative large-N limit.
Comment on "Universal relation between skewness and kurtosis in complex dynamics"
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2015-12-01
In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems, and they identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic datasets, and we show that in fact it is not a universal relation and originates only from the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data points is increased, and this indicates that the kurtosis will converge to a finite single value if all moments of the distribution up to the fourth are finite. The converged kurtosis value for finite fourth-moment distributions and the number of data points needed to reach this value depend on the deviation of the original distribution from the Gaussian case.
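The saturation claim for finite-fourth-moment distributions is easy to reproduce. The sketch below uses Laplace variates (true kurtosis K = 6, finite fourth moment) rather than q-Gaussians, purely for convenience of sampling; the small-sample estimate scatters widely while the large-sample estimate settles near 6:

```python
import random

def kurtosis(xs):
    """Sample kurtosis m4 / m2^2 (Gaussian value is 3)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 * m2)

random.seed(7)
# a Laplace variate is the difference of two independent unit exponentials
laplace = [random.expovariate(1.0) - random.expovariate(1.0)
           for _ in range(200_000)]
normal = [random.gauss(0.0, 1.0) for _ in range(200_000)]

k_laplace = [kurtosis(laplace[:n]) for n in (1_000, 10_000, 200_000)]
k_normal = kurtosis(normal)
```

With an infinite fourth moment (e.g. q-Gaussians at large enough q) the same estimator would instead keep drifting upward with sample size, which is the regime the Comment distinguishes.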
Non-gaussianity versus nonlinearity of cosmological perturbations.
Verde, L
2001-06-01
Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics lead to different predictions for the statistical properties of the primordial fluctuations, which can be divided into two classes: gaussian and non-gaussian. Convincing evidence against or for gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the gaussian distribution in several different ways. Even if perturbations start off gaussian, nonlinear gravitational evolution can introduce non-gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxy distribution (bias) is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as the third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-gaussianity, even in an initially gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle the above different effects, and the observational datasets we require to do so in practice.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, are brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been used to predict the spatial concentration distributions of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
On the application of Rice's exceedance statistics to atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Chen, W. Y.
1972-01-01
Discrepancies produced by the application of Rice's exceedance statistics to atmospheric turbulence are examined. First- and second-order densities from several data sources have been measured for this purpose. Particular care was taken to select segments of turbulence with stationary mean and variance over the entire segment. Results show that even for a stationary segment of turbulence, the process is still highly non-Gaussian, in spite of the Gaussian appearance of its first-order distribution. The data also indicate strongly non-Gaussian second-order distributions. It is therefore concluded that even stationary atmospheric turbulence with a normal first-order distribution cannot be considered a Gaussian process, and consequently the application of Rice's exceedance statistics should be approached with caution.
Modeling tidal exchange and dispersion in Boston Harbor
Signell, Richard P.; Butman, Bradford
1992-01-01
Tidal dispersion and the horizontal exchange of water between Boston Harbor and the surrounding ocean are examined with a high-resolution (200 m) depth-averaged numerical model. The strongly varying bathymetry and coastline geometry of the harbor generate complex spatial patterns in the modeled tidal currents which are verified by shipboard acoustic Doppler surveys. Lagrangian exchange experiments demonstrate that tidal currents rapidly exchange and mix material near the inlets of the harbor due to asymmetry in the ebb/flood response. This tidal mixing zone extends roughly a tidal excursion from the inlets and plays an important role in the overall flushing of the harbor. Because the tides can only efficiently mix material in this limited region, however, harbor flushing must be considered a two step process: rapid exchange in the tidal mixing zone, followed by flushing of the tidal mixing zone by nontidal residual currents. Estimates of embayment flushing based on tidal calculations alone therefore can significantly overestimate the flushing time that would be expected under typical environmental conditions. Particle-release simulations from point sources also demonstrate that while the tides efficiently exchange material in the vicinity of the inlets, the exact nature of dispersion from point sources is extremely sensitive to the timing and location of the release, and the distribution of particles is streaky and patchlike. This suggests that high-resolution modeling of dispersion from point sources in these regions must be performed explicitly and cannot be parameterized as a plume with Gaussian-spreading in a larger scale flow field.
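The "Gaussian-spreading" parameterization that the authors argue breaks down near the inlets can be stated concretely: a random-walk particle cloud in a homogeneous flow spreads with variance 2Kt. A minimal sketch with an assumed eddy diffusivity (illustrating the baseline, not the Boston Harbor model):

```python
import math
import random

random.seed(5)
K = 10.0                              # assumed horizontal eddy diffusivity, m^2/s
dt, steps, n = 60.0, 720, 2000        # 12 h of 1-minute steps, 2000 particles
step_sd = math.sqrt(2.0 * K * dt)     # random-walk step consistent with K
xs = [0.0] * n
for _ in range(steps):
    xs = [x + random.gauss(0.0, step_sd) for x in xs]
var = sum(x * x for x in xs) / n
theory = 2.0 * K * steps * dt         # Fickian prediction: var(t) = 2*K*t
```

Near an inlet the effective diffusivity is neither uniform nor time-independent, which is why the abstract's streaky, release-dependent patches cannot be reproduced by any single (K, t) Gaussian of this kind.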
On the cause of the non-Gaussian distribution of residuals in geomagnetism
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2017-12-01
To describe errors in the data, Gaussian distributions naturally come to mind. In many practical instances, indeed, Gaussian distributions are appropriate. In the broad field of geomagnetism, however, it has repeatedly been noted that residuals between data and models often display much sharper distributions, sometimes better described by a Laplace distribution. In the present study, we make the case that such non-Gaussian behaviors are very likely the result of what is known as a mixture of distributions in the statistical literature. Mixtures arise as soon as the data do not follow a common distribution or are not properly normalized, the resulting global distribution being a mix of the various distributions followed by subsets of the data, or even individual data points. We provide examples of the way such mixtures can lead to distributions that are much sharper than Gaussian distributions and discuss the reasons why such mixtures are likely the cause of the non-Gaussian distributions observed in geomagnetism. We also show that when properly selecting sub-datasets based on geophysical criteria, statistical mixture can sometimes be avoided and much more Gaussian behaviors recovered. We conclude with some general recommendations and point out that although statistical mixture always tends to sharpen the resulting distribution, it does not necessarily lead to a Laplacian distribution. This needs to be taken into account when dealing with such non-Gaussian distributions.
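The sharpening effect of a scale mixture is easy to demonstrate: mixing two zero-mean Gaussian error populations with different variances produces a leptokurtic (kurtosis > 3) residual distribution even though every individual datum is Gaussian. The mixing fractions and scales below are invented:

```python
import random

def kurtosis(xs):
    """Sample kurtosis m4 / m2^2 (3 for a single Gaussian population)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 * m2)

random.seed(11)
n = 100_000
# assumed mix: 80% of residuals have sigma = 1, 20% have sigma = 3
mixture = [random.gauss(0.0, 1.0 if random.random() < 0.8 else 3.0)
           for _ in range(n)]
single = [random.gauss(0.0, 1.0) for _ in range(n)]
k_mix, k_single = kurtosis(mixture), kurtosis(single)
# closed form for this mix: 3*(0.8 + 0.2*81)/(0.8 + 0.2*9)^2 ~ 7.54
```

The closed-form value follows from m2 = 0.8·1 + 0.2·9 and m4 = 3·(0.8·1 + 0.2·81); note the mixture is peaked and heavy-tailed without being Laplacian, which is the abstract's closing caveat.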
Polarization singularity indices in Gaussian laser beams
NASA Astrophysics Data System (ADS)
Freund, Isaac
2002-01-01
Two types of point singularities in the polarization of a paraxial Gaussian laser beam are discussed in detail: V-points, which are vector point singularities where the direction of the electric vector of a linearly polarized field becomes undefined, and C-points, which are elliptic point singularities where the ellipse orientations of elliptically polarized fields become undefined. Conventionally, V-points are characterized by the conserved integer-valued Poincaré-Hopf index η, with generic value η=±1, while C-points are characterized by the conserved half-integer singularity index IC, with generic value IC=±1/2. Simple algorithms are given for generating V-points with arbitrary positive or negative integer indices, including zero, at arbitrary locations, and C-points with arbitrary positive or negative half-integer or integer indices, including zero, at arbitrary locations. Algorithms are also given for generating continuous lines of these singularities in the plane, V-lines and C-lines. V-points and C-points may be transformed one into another. A topological index based on directly measurable Stokes parameters is used to discuss this transformation. The evolution under propagation of V-points and C-points initially embedded in the beam waist is studied, as is the evolution of V-dipoles and C-dipoles.
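The two indices can be measured numerically in the standard way: the winding of a suitable complex field around a small loop gives η for a V-point, and half the winding of S1 + iS2 gives IC for a C-point. The example fields below are hypothetical illustrations, not taken from the paper:

```python
import cmath
import math

def winding(field, radius=0.1, steps=2000):
    """Accumulated phase of a complex field around a small circle, over 2*pi."""
    total, prev = 0.0, None
    for k in range(steps + 1):
        t = 2 * math.pi * k / steps
        z = field(radius * math.cos(t), radius * math.sin(t))
        ph = cmath.phase(z)
        if prev is not None:
            d = ph - prev
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ph
    return total / (2 * math.pi)

# V-point: radially polarized field E = (x, y); Poincare-Hopf index eta = +1
eta = winding(lambda x, y: complex(x, y))

# C-point: Ex = 1, Ey = i + (x + iy) is circular at the origin;
# IC is half the winding of the Stokes field S1 + i*S2
def stokes12(x, y):
    ex, ey = 1.0 + 0j, 1j + complex(x, y)
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2.0 * (ex.conjugate() * ey).real
    return complex(s1, s2)

ic = winding(stokes12) / 2.0         # generic lemon/star value +1/2
```

Higher-index singularities of the kind the algorithms in the paper generate would simply return larger integer (or half-integer) winding values from the same measurement.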
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques to obtain a visible image of the fission yield process. In particular, fission yield can be calculated using two calculation approaches, namely the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model. Hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central and a right central point. These three points have different Gaussian distribution parameters, such as mean (μCN, μL, μR) and standard deviation (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by changing σL and σR randomly.
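The trapping iteration described above can be sketched in one dimension. The means, widths, particle count, and the re-centering rule below are hypothetical simplifications of ours, not the paper's procedure; the sketch only shows the overlay-count-iterate loop:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gaussian parameters for the compound nucleus (CN) and the
# two post-scission centres (left, right); not taken from the paper.
mu_cn, sig_cn = 0.0, 1.0
mu_l, sig_l = -1.0, 0.8
mu_r, sig_r = +1.0, 0.8

particles = rng.normal(mu_cn, sig_cn, 10_000)  # particle positions

n_l = n_r = -1
for _ in range(100):  # iterate until the trapped counts settle
    # A particle is "trapped" by whichever centre it is closer to
    # (distances scaled by each centre's width).
    d_l = np.abs(particles - mu_l) / sig_l
    d_r = np.abs(particles - mu_r) / sig_r
    new_l = int(np.sum(d_l < d_r))
    new_r = particles.size - new_l
    if (new_l, new_r) == (n_l, n_r):
        break  # (NL, NR) became constant
    n_l, n_r = new_l, new_r
    # Re-centre each fragment on the mean of its trapped particles.
    mu_l = particles[d_l < d_r].mean()
    mu_r = particles[d_l >= d_r].mean()

print(n_l, n_r)
```

The outer randomization over σL and σR mentioned in the abstract would simply wrap this loop.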
NASA Astrophysics Data System (ADS)
Kim, Y.; Seigneur, C.; Duclaux, O.
2014-04-01
Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosol (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. 
The investigation of the criteria for injecting plumes into the host model (fixed travel time and/or puff size) shows that a size-based criterion is recommended to treat the formation of secondary aerosols (sulfate, nitrate, and ammonium), in particular, farther downwind of the sources (beyond about 15 km). The impacts of PinG modeling are less significant in a simulation with a coarse grid size (10 km) than with a fine grid size (2 km), because the concentrations of the species emitted from the PinG sources are relatively less important compared to background concentrations when injected into the host model with a coarser grid size.
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying data are statistically independent, it usually ignores sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H
2018-01-01
Analytical expressions for the transmission cross-coefficients for x-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.
Probing noise in flux qubits via macroscopic resonant tunneling.
Harris, R; Johnson, M W; Han, S; Berkley, A J; Johansson, J; Bunyk, P; Ladizinsky, E; Govorkov, S; Thom, M C; Uchaikin, S; Bumble, B; Fung, A; Kaul, A; Kleinsasser, A; Amin, M H S; Averin, D V
2008-09-12
Macroscopic resonant tunneling between the two lowest lying states of a bistable rf SQUID is used to characterize noise in a flux qubit. Measurements of the incoherent decay rate as a function of flux bias revealed a Gaussian-shaped profile that is not peaked at the resonance point but is shifted to a bias at which the initial well is higher than the target well. The rms amplitude of the noise, which is proportional to the dephasing rate 1/tauphi, was observed to be weakly dependent on temperature below 70 mK. Analysis of these results indicates that the dominant source of low energy flux noise in this device is a quantum mechanical environment in thermal equilibrium.
Spatiotemporal dynamics of Gaussian laser pulse in a multi ions plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafari Milani, M. R., E-mail: mrj.milani@gmail.com
Spatiotemporal evolutions of a Gaussian laser pulse propagating through a plasma with multiple charged ions are studied, taking into account the ponderomotive nonlinearity. Coupled differential equations for the beam width and pulse length parameters are established and numerically solved using the paraxial ray approximation. In one-dimensional geometry, effects of laser and plasma parameters such as laser intensity, plasma density, and temperature on the longitudinal pulse compression and the laser intensity distribution are analyzed for plasmas with singly and doubly charged ions. The results demonstrate that self-compression occurs in a laser intensity range with a turning-point intensity at which the self-compression process has its strongest extent. The results also show that the multiply ionized ions have different effects on the pulse compression above and below the turning-point intensity. Finally, three-dimensional geometry is used to analyze the simultaneous evolution of both self-focusing and self-compression of a Gaussian laser pulse in such plasmas.
An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-01-01
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
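The kurtosis-based decision between the normal Gaussian model and the double-peak model can be sketched as follows. The RSS values, the zero-kurtosis threshold, and the tiny two-component EM fitter are illustrative assumptions of ours, not the paper's implementation; the key observation is that a well-separated double-peak histogram has negative excess kurtosis:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: zero for a Gaussian, negative for a flat/bimodal shape."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

def fit_two_peak_gaussian(x, iters=200):
    """Tiny 1-D EM for a two-component Gaussian mixture (double-peak model)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        n = r.sum(axis=0)
        w = n / x.size
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

rng = np.random.default_rng(0)
# Synthetic RSS samples (dBm) at one reference point, with two peaks
rss = np.concatenate([rng.normal(-75, 2, 2000), rng.normal(-60, 2, 2000)])

# Kurtosis test: a clearly negative value selects the double-peak model
if kurtosis(rss) < 0:
    w, mu, sig = fit_two_peak_gaussian(rss)
    print(sorted(mu))  # two peaks, near -75 and -60
```

With a single-mode sample the test would fall through to the ordinary single-Gaussian fit.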
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and a Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
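The ROAD statistic that ROADGI builds on can be sketched directly: for each pixel, sum the m smallest absolute differences to its 8 neighbours (m = 4 is a common choice; the tiny test image below is synthetic). Impulse-corrupted pixels differ from all of their neighbours, so their ROAD value is large:

```python
import numpy as np

def road(img, m=4):
    """Rank-Ordered Absolute Differences: per pixel, the sum of the m
    smallest absolute differences to its 8 neighbours."""
    pad = np.pad(img.astype(float), 1, mode="reflect")
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            # Neighbour at offset (dy, dx) for every pixel at once
            shifted = pad[1 + dy : pad.shape[0] - 1 + dy,
                          1 + dx : pad.shape[1] - 1 + dx]
            diffs.append(np.abs(img - shifted))
    diffs = np.sort(np.stack(diffs), axis=0)  # rank-order the 8 differences
    return diffs[:m].sum(axis=0)

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0              # an impulse ("salt") pixel
scores = road(img)
print(scores[2, 2], scores[0, 0])  # impulse pixel scores far higher
```

Thresholding `scores` then flags impulse-contaminated pixels before the weighted filtering step.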
Rational-operator-based depth-from-defocus approach to scene reconstruction.
Li, Ang; Staunton, Richard; Tjahjadi, Tardi
2013-09-01
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
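The full ECM calibration is beyond a short sketch, but the core idea, iteratively down-weighting outliers under a Student's t noise model, can be illustrated on a scalar location estimate. The degrees of freedom nu, the sample sizes, and the outlier values below are arbitrary choices for illustration:

```python
import numpy as np

def t_location(x, nu=3.0, iters=50):
    """EM estimate of a location parameter under Student's-t noise:
    alternate latent-precision weights (E-step) with a weighted mean
    and scale update (M-step). Outliers receive small weights, so they
    barely pull the estimate."""
    mu, s2 = np.median(x), np.var(x)
    for _ in range(iters):
        w = (nu + 1) / (nu + (x - mu) ** 2 / s2)   # E-step weights
        mu = np.sum(w * x) / np.sum(w)             # M-step: location
        s2 = np.sum(w * (x - mu) ** 2) / x.size    # M-step: scale
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5.0, 1.0, 1000), np.full(50, 50.0)])  # 5% outliers
print(np.mean(x), t_location(x))  # plain mean is dragged up; t estimate stays near 5
```

In the calibration setting the weighted mean is replaced by a Levenberg-Marquardt solve of the weighted least-squares problem, but the outlier down-weighting mechanism is the same.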
DC and analog/RF performance optimisation of source pocket dual work function TFET
NASA Astrophysics Data System (ADS)
Raad, Bhagwan Ram; Sharma, Dheeraj; Kondekar, Pravin; Nigam, Kaushal; Baronia, Sagar
2017-12-01
We present a systematic study of a source pocket tunnel field-effect transistor (SP TFET) with a dual work function of a single gate material, using uniform and Gaussian doping profiles in the drain region, for ultra-low-power, high-frequency, high-speed applications. For this, an n+ doped region is created near the source/channel junction to decrease the depletion width, resulting in an improvement of the ON-state current. Further, the dual work function of the double gate is used to enhance the device performance in terms of DC and analog/RF parameters. To improve the high-frequency performance of the device, a Gaussian doping profile is considered in the drain region with different characteristic lengths, which decreases the gate-to-drain capacitance and leads to a drastic improvement in analog/RF figures of merit. Furthermore, the optimisation is performed with different concentrations for uniform and Gaussian drain doping profiles and for various sectional lengths of the lower work function of the gate electrode. Finally, the effect of temperature variation on the device performance is demonstrated.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
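The partition points used by the algorithm are the intersections of weighted Gaussian components, which in one dimension reduce to the roots of a quadratic in the gray level. A sketch, with hypothetical component parameters rather than values fitted to a real image histogram:

```python
import numpy as np

def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    """Gray levels where two weighted Gaussian components have equal
    density, from log(w1*N(mu1,s1)) = log(w2*N(mu2,s2)), a quadratic in x."""
    a = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    b = mu1 / s1**2 - mu2 / s2**2
    c = (mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2)
         + np.log(w1 / s1) - np.log(w2 / s2))
    if abs(a) < 1e-12:               # equal variances: a single crossing
        return [-c / b]
    disc = b**2 - 4 * a * c
    return sorted((-b + sign * np.sqrt(disc)) / (2 * a) for sign in (1, -1))

# Two hypothetical components of an image gray-level model
pts = gaussian_intersections(0.5, 60, 10, 0.5, 180, 25)
print(pts)  # the crossing between the two modes partitions the gray levels
```

The crossing that lies between the two means (here near gray level 96) is the boundary of the input gray-level intervals; with unequal variances a second, far-tail root also appears and is discarded.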
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
Gaussian windows: A tool for exploring multivariate data
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1990-01-01
Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
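The windowed statistics described above are straightforward to compute; in this sketch the data, window center, and window covariance are arbitrary choices:

```python
import numpy as np

def gaussian_window_stats(data, center, cov_win):
    """Weighted sample mean and covariance of `data` (n x d) seen through
    a Gaussian window with the given center and window covariance."""
    diff = data - center
    inv = np.linalg.inv(cov_win)
    # Multivariate Gaussian weight for each data point
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff))
    w /= w.sum()
    mean = w @ data                      # weighted sample mean
    centered = data - mean
    cov = (w[:, None] * centered).T @ centered  # weighted sample covariance
    return mean, cov

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 3))        # a 3-D point cloud
mean, cov = gaussian_window_stats(data, np.zeros(3), 4.0 * np.eye(3))
print(mean.round(2))  # near the window centre for this symmetric cloud
```

An eigendecomposition of `cov` (as in principal components) then describes the local shape of the density inside the window, and moving the window around maps out peaks, valleys, and saddle points.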
Gaussian black holes in Rastall gravity
NASA Astrophysics Data System (ADS)
Spallucci, Euro; Smailagic, Anais
In this short note we present the solution of Rastall gravity equations sourced by a Gaussian matter distribution. We find that the black hole metric shares all the common features of other regular, General Relativity BH solutions discussed in the literature: there is no curvature singularity and the Hawking radiation leaves a remnant at zero temperature in the form of a massive ordinary particle.
NASA Astrophysics Data System (ADS)
Aghandeh, Hadi; Sedigh Ziabari, Seyed Ali
2017-11-01
This study investigates a junctionless tunnel field-effect transistor with a dual material gate and a heterostructure channel/source interface (DMG-H-JLTFET). We find that using the heterostructure interface improves device behavior by reducing the tunneling barrier width at the channel/source interface. Simultaneously, the dual material gate structure decreases the ambipolar current by increasing the tunneling barrier width at the drain/channel interface. The performance of the device is analyzed based on the energy band diagram in the on, off, and ambipolar states. Numerical simulations demonstrate improvements in ION, IOFF, ION/IOFF, subthreshold slope (SS), transconductance and cut-off frequency, and suppressed ambipolar behavior. Next, the workfunction optimization of the dual material gate is studied. It is found that if appropriate workfunctions are selected for the tunnel and auxiliary gates, the JLTFET exhibits considerably improved performance. We then study the influence of a Gaussian doping distribution at the drain and the channel on the ambipolar performance of the device and find that a Gaussian doping profile and a dual material gate structure remarkably reduce the ambipolar current. The Gaussian-doped DMG-H-JLTFET also exhibits enhanced IOFF, ION/IOFF, and SS and a low threshold voltage without degrading ION.
Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.
2018-07-01
Geostatistical analysis has been introduced over half a century ago to allow quantifying seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc ≥ 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
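The Monte Carlo Pc computation described above can be sketched in a few lines. The miss vector, covariance, and hard-body radius below are hypothetical, and this sketch samples the Cartesian covariance directly rather than using equinoctial samples as the study does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relative geometry at TCA: miss vector, combined position
# covariance (km^2), and combined hard-body radius (km).
miss = np.array([0.5, 0.2, 0.0])          # km
cov = np.diag([0.2, 0.1, 0.05]) ** 2      # km^2
radius = 0.05                             # km

# Draw relative-position samples and count those inside the hard-body sphere
n = 1_000_000
samples = rng.multivariate_normal(miss, cov, size=n)
pc = np.mean(np.linalg.norm(samples, axis=1) < radius)
print(pc)  # Monte Carlo estimate of the probability of collision
```

For these numbers the estimate comes out near 2e-4, i.e. in the "higher risk" regime the study focuses on; a non-Gaussian propagation would replace the single multivariate-normal draw with samples propagated through the full dynamics.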
Spin Hall effect originated from fractal surface
NASA Astrophysics Data System (ADS)
Hajzadeh, I.; Mohseni, S. M.; Movahed, S. M. S.; Jafari, G. R.
2018-05-01
The spin Hall effect (SHE) has shown promising impact in the field of spintronics and magnonics from fundamental and practical points of view. This effect originates from several mechanisms of spin scattering based on spin–orbit coupling (SOC) and can also be manipulated through the surface roughness. Here, the effect of correlated surface roughness on the SHE in metallic thin films with small SOC is investigated theoretically. Toward this, the self-affine fractal surface in the framework of the Born approximation is exploited. The surface roughness is described by the k-correlation model and is characterized by the roughness exponent H, the in-plane correlation length ξ and the rms roughness amplitude δ. It is found that the spin Hall angle in a metallic thin film increases by two orders of magnitude when H decreases from H = 1 to H = 0. In addition, the source of the SHE for surface roughness with a Gaussian profile distribution function is found to be mainly side-jump scattering, while a non-Gaussian profile suggests that both side-jump and skew scattering are present. Our results address how details of the surface roughness profile can adjust the SHE in non-heavy metals.
Restoration for Noise Removal in Quantum Images
NASA Astrophysics Data System (ADS)
Liu, Kai; Zhang, Yi; Lu, Kai; Wang, Xiaoping
2017-09-01
Quantum computation has become increasingly attractive in the past few decades due to its extraordinary performance. As a result, some studies focusing on image representation and processing via quantum mechanics have been done. However, few of them have considered quantum operations for image restoration. To address this problem, three noise removal algorithms are proposed in this paper based on the novel enhanced quantum representation model, targeting two kinds of noise (salt-and-pepper noise and Gaussian noise). The first algorithm, Q-Mean, is designed to remove salt-and-pepper noise: the noise points are extracted through comparisons with adjacent pixel values, after which the restoration operation is finished by mean filtering. In the second method, Q-Gauss, a special mask is applied to weaken the Gaussian noise pollution. The third algorithm, Q-Adapt, is effective for a source image containing unknown noise: the type of noise can be judged through quantum statistic operations on the color values of the whole image, and then the appropriate noise removal algorithm is used to conduct image restoration. Performance analysis reveals that our methods can offer high restoration quality and achieve significant speedup through the inherent parallelism of quantum computation.
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters, and accepts a set of weighted 3D points corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and can therefore lead to a GMM with a smaller spread than the input. Second, it has a singularity problem: it sometimes stops the iterative procedure because a Gaussian function acquires almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which treats the input atoms or voxels as a set of Gaussian functions; the standard EM algorithm of the GMM was extended to optimize the new model. The new GMM has a radius of gyration identical to that of the input, and does not stop suddenly due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG), which merge neighboring voxels into an anisotropic Gaussian function; this provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM, the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
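The spread-preservation property of the Gaussian-input idea can be seen in a one-component, one-dimensional toy: treating each input point as a Gaussian of known width adds that width's variance to the fitted variance, so the model no longer underestimates the input's spread. This is a sketch of the principle only, not the paper's full EM algorithm; all names and values are illustrative.

```python
import math

# Fit a single Gaussian to weighted 1-D inputs. With `widths` given, each
# input is itself a Gaussian (the "Gaussian-input" case) and its own
# variance is added, preserving the radius of gyration of the input.

def fit_single_gaussian(centers, weights, widths=None):
    wsum = sum(weights)
    mu = sum(w * c for w, c in zip(weights, centers)) / wsum
    var = sum(w * (c - mu) ** 2 for w, c in zip(weights, centers)) / wsum
    if widths is not None:  # Gaussian-input: add each input's own variance
        var += sum(w * s ** 2 for w, s in zip(weights, widths)) / wsum
    return mu, math.sqrt(var)

centers = [-1.0, 1.0]
weights = [1.0, 1.0]
# point-input fit ignores the voxel/atom size...
mu0, sig0 = fit_single_gaussian(centers, weights)
# ...Gaussian-input fit with width 0.5 recovers a larger, correct spread
mu1, sig1 = fit_single_gaussian(centers, weights, widths=[0.5, 0.5])
print(sig0, sig1)  # 1.0 and √1.25 ≈ 1.118
```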
Radius of curvature variations for annular, dark hollow and flat topped beams in turbulence
NASA Astrophysics Data System (ADS)
Eyyuboğlu, H. T.; Baykal, Y. K.; Ji, X. L.
2010-06-01
For propagation in a turbulent atmosphere, the radius of curvature variations of annular, dark hollow and flat topped beams are examined under a single formulation. Our results show that for collimated beams, when examined against propagation length, the dark hollow, flat topped and annular Gaussian beams behave nearly the same as the Gaussian beam, but have larger radius of curvature values. Increased partial coherence and turbulence levels tend to lower the radius of curvature. Larger source sizes, on the other hand, give rise to larger radii of curvature. Dark hollow and flat topped beams have reduced radii of curvature at longer wavelengths, whereas the annular Gaussian beam appears unaffected by wavelength changes; the radius of curvature of the Gaussian beam, meanwhile, rises with increasing wavelength.
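For reference, the free-space (no-turbulence) radius of curvature of the baseline Gaussian beam that the beams above are compared against follows the textbook formula R(z) = z·(1 + (z_R/z)²) with Rayleigh range z_R = πw₀²/λ. The numbers below are illustrative, not taken from the paper.

```python
import math

# Free-space Gaussian-beam wavefront radius of curvature.

def radius_of_curvature(z, w0, wavelength):
    z_r = math.pi * w0 ** 2 / wavelength  # Rayleigh range
    return z * (1.0 + (z_r / z) ** 2)

w0, lam = 1e-2, 1.55e-6            # 1 cm waist, 1.55 µm wavelength
z_r = math.pi * w0 ** 2 / lam      # ≈ 202.7 km for these values
# R is minimal (R = 2·z_R) exactly at z = z_R, and R → z far beyond it
print(radius_of_curvature(z_r, w0, lam) / z_r)              # → 2.0
print(radius_of_curvature(100 * z_r, w0, lam) / (100 * z_r))  # ≈ 1.0001
```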
Gyrator transform of Gaussian beams with phase difference and generation of hollow beam
NASA Astrophysics Data System (ADS)
Xiao, Zhiyu; Xia, Hui; Yu, Tao; Xie, Ding; Xie, Wenke
2018-03-01
The optical expression of Gaussian beams with a phase difference introduced by the gyrator transform (GT) has been obtained. The intensity and phase distributions of the transformed Gaussian beams are analyzed. It is found that a circular hollow vortex beam can be obtained by overlapping two GT Gaussian beams with a π phase difference. The effect of the parameters on the intensity and phase distributions of the hollow vortex beam is discussed. The results show that the shape of the intensity distribution is significantly influenced by the GT angle α and the propagation distance z. The size of the hollow vortex beam can be adjusted by the waist width ω0. Compared with previously reported results, this work shows that the hollow vortex beam can be obtained without any mode conversion of the light source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sen; Luo, Sheng-Nian
Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10–100 times larger than those obtained with monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles under polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectral shape is explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra, represented by two undulator sources of the Advanced Photon Source, are examined as an example with regard to the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.
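The Guinier step of such an analysis can be sketched for the monochromatic ideal case: the sphere scattering intensity I(q) = [3(sin qR − qR·cos qR)/(qR)³]² obeys ln I ≈ −q²R_g²/3 at small q, with R_g = √(3/5)·R, so a line fit to ln I versus q² recovers the particle size. Units are arbitrary and the bandwidth smearing that is the paper's focus is omitted.

```python
import math

# Ideal sphere form-factor intensity (normalized to 1 at q = 0).
def sphere_intensity(q, R):
    x = q * R
    f = 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3
    return f * f

# Guinier fit: least-squares line ln I = a + b·q², then Rg = sqrt(-3b).
def guinier_rg(qs, Is):
    X = [q * q for q in qs]
    Y = [math.log(i) for i in Is]
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / \
        sum((x - xbar) ** 2 for x in X)
    return math.sqrt(-3.0 * b)

R = 50.0                                   # sphere radius (arb. units)
qs = [1e-4 * (k + 1) for k in range(20)]    # well inside the Guinier window
Is = [sphere_intensity(q, R) for q in qs]
rg = guinier_rg(qs, Is)
print(rg, math.sqrt(3.0 / 5.0) * R)         # both ≈ 38.73
```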
Vortex dynamics and Lagrangian statistics in a model for active turbulence.
James, Martin; Wilczek, Michael
2018-02-14
Cellular suspensions such as dense bacterial flows exhibit a turbulence-like phase under certain conditions. We study this phenomenon of "active turbulence" statistically using numerical tools. Following Wensink et al. (Proc. Natl. Acad. Sci. U.S.A. 109, 14308 (2012)), we model active turbulence by means of a generalized Navier-Stokes equation. Two-point velocity statistics of active turbulence are explored in both the Eulerian and the Lagrangian frame, and we characterize their scale-dependent features. Furthermore, we extend this statistical study with measurements of vortex dynamics in this system. Our observations suggest that the large-scale statistics of active turbulence are close to Gaussian with sub-Gaussian tails.
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm that, in practice, achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
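For context, the conventional finite-sampling estimator that the infinite-cell algorithm above improves on can be sketched in two dimensions: throw C random square cells on a point catalogue and histogram the occupation counts to estimate P_N. The measurement error the paper eliminates is exactly the scatter from finite C. Function names and parameters here are ours, not the paper's.

```python
import random

# Finite-sampling counts-in-cells estimate of P_N on the unit box.
def counts_in_cells(points, side, n_cells, box=1.0, rng=None):
    rng = rng or random.Random(0)
    hist = {}
    for _ in range(n_cells):
        x0 = rng.uniform(0.0, box - side)
        y0 = rng.uniform(0.0, box - side)
        n = sum(1 for x, y in points
                if x0 <= x < x0 + side and y0 <= y < y0 + side)
        hist[n] = hist.get(n, 0) + 1
    return {n: c / n_cells for n, c in hist.items()}  # P_N estimate

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(1000)]  # Poisson-like
p = counts_in_cells(pts, side=0.1, n_cells=5000)
mean_n = sum(n * f for n, f in p.items())
print(round(mean_n, 1))  # ≈ 10 (density 1000 × cell area 0.01)
```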
Annular wave packets at Dirac points in graphene and their probability-density oscillation.
Luo, Ji; Valencia, Daniel; Lu, Junqiang
2011-12-14
Wave packets in graphene whose central wave vector is at Dirac points are investigated by numerical calculations. Starting from an initial Gaussian function, these wave packets form into annular peaks that propagate in all directions like ripple-rings on a water surface. At the beginning, electronic probability alternates between the central peak and the ripple-rings and transient oscillation occurs at the center. As time increases, the ripple-rings propagate at the fixed Fermi speed, and their widths remain unchanged. The axial symmetry of the energy dispersion leads to the circular symmetry of the wave packets. The fixed speed and widths, however, are attributed to the linearity of the energy dispersion. Interference between states that, respectively, belong to two branches of the energy dispersion leads to multiple ripple-rings and the probability-density oscillation. In a magnetic field, annular wave packets become confined and no longer propagate to infinity. If the initial Gaussian width differs greatly from the magnetic length, expanding and shrinking ripple-rings form and disappear alternately in a limited spread, and the wave packet resumes the Gaussian form frequently. The probability thus oscillates persistently between the central peak and the ripple-rings. If the initial Gaussian width is close to the magnetic length, the wave packet retains the Gaussian form and its height and width oscillate with a period determined by the first Landau energy. The wave-packet evolution is determined jointly by the initial state and the magnetic field, through the electronic structure of graphene in a magnetic field. © 2011 American Institute of Physics
2013-03-01
[Table excerpt: GPU-targeted actor benchmark results, reported as run time and Giga Floating Point Operations Per Second (GFLOPS) — Cascade Gaussian Filtering: 13 ms, 45.19 GFLOPS (6.3% of GPU peak); Difference of Gaussian: 0.512 ms, 152 GFLOPS. The actors were implemented on an NVIDIA GTX260 GPU, which provides 715 GFLOPS peak.]
Theoretical investigation of gas-surface interactions
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1990-01-01
A Dirac-Hartree-Fock code was developed for polyatomic molecules. The program uses integrals over symmetry-adapted real spherical harmonic Gaussian basis functions generated by a modification of the MOLECULE integrals program. A single Gaussian function is used for the nuclear charge distribution, to ensure proper boundary conditions at the nuclei. The Gaussian primitive functions are chosen to satisfy the kinetic balance condition. However, contracted functions which do not necessarily satisfy this condition may be used. The Fock matrix is constructed in the scalar basis and transformed to a jj-coupled 2-spinor basis before diagonalization. The program was tested against numerical results for atoms with a Gaussian nucleus and diatomic molecules with point nuclei. The energies converge on the numerical values as the basis set size is increased. Full use of molecular symmetry (restricted to D sub 2h and subgroups) is yet to be implemented.
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; ...
2016-11-17
In this paper, we present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with i_AB = 18.61 and i_AB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with the NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission-line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find the Einstein radius θ_E ~ 1.47 arcsec, enclosed mass M_enc ~ 4 × 10^11 M_⊙ and a time delay of ~52 d. Finally, the relatively wide separation, month-scale time delay and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
High Precision Edge Detection Algorithm for Mechanical Parts
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step-edge normal section line Gaussian integral model of the backlight image is constructed, combining the point spread function with the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinates and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is thereby determined by searching the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result of the gear measurement center, indicating that the method is reliable enough to meet the requirement of high-precision measurement.
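The Gaussian-integral view of a blurred step edge, and the mean-point search, can be illustrated in one dimension: a step edge blurred by a Gaussian PSF samples as an erf profile, and the subpixel edge sits where the profile crosses the mean of the two plateau levels. This is a simplified sketch; the paper's 2-D surface interpolation and noise fitting are omitted, and all values are illustrative.

```python
import math

# Gray value at pixel centre i for an erf (Gaussian-integral) step edge.
def sample_edge(edge_pos, sigma, n=16, lo=20.0, hi=220.0):
    return [lo + (hi - lo) * 0.5 *
            (1.0 + math.erf((i - edge_pos) / (sigma * math.sqrt(2.0))))
            for i in range(n)]

# Mean-point search: subpixel crossing of the mid-level, by linear
# interpolation between the two bracketing pixels.
def mean_point_edge(profile):
    mean = 0.5 * (min(profile) + max(profile))
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a <= mean <= b:
            return i + (mean - a) / (b - a)
    return None

prof = sample_edge(edge_pos=7.3, sigma=1.2)
print(mean_point_edge(prof))  # ≈ 7.31 (true edge at 7.3)
```

The small residual comes from interpolating the erf linearly; fitting the full Gaussian-integral model, as the paper does, removes it.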
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and those of the Wiener cascade filter were comparable. PMID:21659018
NASA Astrophysics Data System (ADS)
Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer
2017-06-01
In this paper, we present the design of a proposed optical rangefinder that determines the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian beam profile, and a digital beam profiler/imager. We show that, owing to the nature of existing measurement methodologies, prior attempts to use a simple TFL for target distance estimation mostly deliver "one-shot" distance estimates instead of obtaining a larger dataset, leaving the final estimate vulnerable to a few largely incorrect individual data points. Using a measurement dataset and averaging smooths out errors in individual data points by effectively low-pass filtering unexpectedly odd measurement offsets. We show that a simple setup deploying an LS, a TFL and a beam profiler or imager can deliver an entire measurement dataset, effectively mitigating the accuracy limitations associated with "one-shot" measurement techniques. In our technique, a Gaussian beam from the LS passes through the TFL; tuning the focal length of the TFL alters the spot size of the beam at the beam imager plane. Recording these different spot radii at the plane of the beam profiler for each unique TFL setting provides a measurement dataset from which a significantly improved estimate of the target distance is obtained, as opposed to relying on a single measurement. We show that an iterative least-squares curve fit on the recorded data allows us to estimate distances of remote objects very precisely.
We also show that basic ray-optics approximations yield an initial seed value for the distance estimate, which is then refined into a more precise estimate through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micro-mirror Device (DMD) as the beam imager/profiler, as it delivers an accurate estimate of a Gaussian beam profile. The proposed method, its working and the distance estimation methodology are discussed in detail. As a proof of concept, we back our claims with initial experimental results.
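The multi-measurement idea above can be sketched under the ray-optics seed approximation: a collimated beam of radius w_L focused by a TFL of focal length f has geometric spot radius w(f) ≈ w_L·|1 − d/f| at a screen distance d, so d can be recovered from several (f, spot-radius) pairs by a least-squares scan. All symbols and values here are illustrative assumptions; the paper's full treatment uses the Gaussian-beam propagation model rather than this ray-optics form.

```python
# Geometric spot radius at distance d behind a lens of focal length f.
def spot_radius(d, f, w_lens):
    return w_lens * abs(1.0 - d / f)

# Least-squares scan over candidate distances: the whole (f, w) dataset
# votes, so a single bad sample cannot dominate the estimate.
def estimate_distance(samples, w_lens, d_grid):
    def sse(d):
        return sum((w - spot_radius(d, f, w_lens)) ** 2 for f, w in samples)
    return min(d_grid, key=sse)

true_d, w_lens = 0.82, 2e-3                     # 0.82 m target, 2 mm beam
focals = [0.4 + 0.05 * k for k in range(9)]     # TFL tuning sweep, 0.40-0.80 m
samples = [(f, spot_radius(true_d, f, w_lens)) for f in focals]
d_grid = [0.5 + 0.001 * k for k in range(501)]  # candidate distances
est = estimate_distance(samples, w_lens=w_lens, d_grid=d_grid)
print(round(est, 3))  # → 0.82
```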
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Non-Gaussian microwave background fluctuations from nonlinear gravitational effects
NASA Technical Reports Server (NTRS)
Salopek, D. S.; Kunstatter, G. (Editor)
1991-01-01
Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) satellite makes a detection of the cosmic microwave background temperature anisotropy ΔT_CMB/T_CMB. Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small, so it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(φ1, φ2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in ΔT_CMB/T_CMB maps are suppressed as compared to the Gaussian case. The fluctuations are essentially scale-invariant.
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability which has maximal cross information with it. We derive optimized estimators for three applications to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and a point spread function, as needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts; (ii) inference of a Gaussian signal with unknown spectrum; and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
Towards an Optimal Interest Point Detector for Measurements in Ultrasound Images
NASA Astrophysics Data System (ADS)
Zukal, Martin; Beneš, Radek; Číka, Petr; Říha, Kamil
2013-12-01
This paper focuses on the comparison of different interest point detectors and their utilization for measurements in ultrasound (US) images. Certain medical examinations are based on speckle tracking which strongly relies on features that can be reliably tracked frame to frame. Only significant features (interest points) resistant to noise and brightness changes within US images are suitable for accurate long-lasting tracking. We compare three interest point detectors - Harris-Laplace, Difference of Gaussian (DoG) and Fast Hessian - and identify the most suitable one for use in US images on the basis of an objective criterion. Repeatability rate is assumed to be an objective quality measure for comparison. We have measured repeatability in images corrupted by different types of noise (speckle noise, Gaussian noise) and for changes in brightness. The Harris-Laplace detector outperformed its competitors and seems to be a sound option when choosing a suitable interest point detector for US images. However, it has to be noted that Fast Hessian and DoG detectors achieved better results in terms of processing speed.
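The repeatability rate used above as the objective criterion can be sketched as the fraction of interest points detected in a reference image that reappear, within a tolerance ε, among the points detected in a corrupted or transformed image. This is a common form of the definition; the tolerance and coordinates below are illustrative.

```python
# Repeatability rate: fraction of reference interest points re-detected
# within Euclidean distance eps in the test image.
def repeatability(ref_pts, test_pts, eps=1.5):
    if not ref_pts:
        return 0.0
    matched = 0
    for rx, ry in ref_pts:
        if any((rx - tx) ** 2 + (ry - ty) ** 2 <= eps ** 2
               for tx, ty in test_pts):
            matched += 1
    return matched / len(ref_pts)

ref = [(10, 10), (40, 25), (70, 60), (90, 90)]
noisy = [(10.4, 9.8), (40.9, 25.7), (120, 5)]   # two survive, two are lost
print(repeatability(ref, noisy))  # → 0.5
```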
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All of these methods decompose a single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular conditions depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model that fits real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values for different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective for compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
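A reduced form of the MG-plus-WLS idea can be sketched for the case where the Gaussian centres and widths are held fixed: the model y(t) = Σ aᵢ·exp(−(t−cᵢ)²/2wᵢ²) is then linear in the amplitudes aᵢ, and weighted least squares reduces to small normal equations (2×2 for two components). This is an illustration only; the paper fits all parameters with an adaptive number of components and selects the weights by MCDM, and the centres, widths and weights below are our own.

```python
import math

def gauss(t, c, w):
    return math.exp(-((t - c) ** 2) / (2.0 * w ** 2))

# Weighted least-squares amplitudes for two fixed Gaussian components,
# via the explicit 2x2 normal equations.
def fit_two_gaussians(ts, ys, weights, c1, w1, c2, w2):
    g1 = [gauss(t, c1, w1) for t in ts]
    g2 = [gauss(t, c2, w2) for t in ts]
    s11 = sum(w * a * a for w, a in zip(weights, g1))
    s12 = sum(w * a * b for w, a, b in zip(weights, g1, g2))
    s22 = sum(w * b * b for w, b in zip(weights, g2))
    r1 = sum(w * a * y for w, a, y in zip(weights, g1, ys))
    r2 = sum(w * b * y for w, b, y in zip(weights, g2, ys))
    det = s11 * s22 - s12 * s12
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

ts = [k * 0.01 for k in range(100)]           # one pulse period
true = (1.0, 0.45)                            # e.g. main wave and reflection
ys = [true[0] * gauss(t, 0.25, 0.08) + true[1] * gauss(t, 0.55, 0.12)
      for t in ts]
a1, a2 = fit_two_gaussians(ts, ys, [1.0] * len(ts), 0.25, 0.08, 0.55, 0.12)
print(round(a1, 3), round(a2, 3))  # → 1.0 0.45
```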
NASA Astrophysics Data System (ADS)
Zavalin, Andre; Yang, Junhai; Haase, Andreas; Holle, Armin; Caprioli, Richard
2014-06-01
We have investigated the use of a Gaussian beam laser for MALDI Imaging Mass Spectrometry to provide a precisely defined laser spot of 5 μm diameter on target using a commercial MALDI TOF instrument originally designed to produce a 20 μm diameter laser beam spot at its smallest setting. A Gaussian beam laser was installed in the instrument in combination with an aspheric focusing lens. This ion source produced sharp ion images at 5 μm spatial resolution with signals of high intensity as shown for images from thin tissue sections of mouse brain.
Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C.; Beymer, David; Rangarajan, Anand
2010-01-01
In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions – specifically Mixture of Gaussians – estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes. PMID:20426043
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruixing; Yang, LV; Xu, Kele
Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, because the point spread function (PSF) typically has a Gaussian-like distribution, coherent high-frequency components of the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF, Gaussian shape and donut shape, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with their corresponding priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that yields similar deconvolution results. Calculations of the tight-focusing process using a radially polarized beam show that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the ratio of the full width at half maximum (FWHM) of the donut to that of the Gaussian shape is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut smaller than our critical value is obtained. Conclusion: A donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, with potential practical applications in high-resolution imaging of biological samples.
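As a generic illustration of the deconvolution step (not the authors' code, and in 1-D rather than 2-D), a Wiener filter divides out the PSF's transfer function with a regularizer λ; the signal shape, PSF width, and λ below are arbitrary choices.

```python
import numpy as np

def gaussian_psf(n, sigma):
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return np.roll(k / k.sum(), -(n // 2))      # peak at index 0 for circular convolution

def wiener_deconvolve(blurred, psf, lam=1e-3):
    """FFT-domain Wiener filter: conj(H) / (|H|^2 + lam) regularizes the inversion."""
    H = np.fft.fft(psf)
    return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(blurred) / (np.abs(H)**2 + lam)))

# demo: two narrow features blurred by a wider Gaussian PSF, then restored
n = 256
x = np.arange(n)
signal = np.exp(-(x - 80.0)**2 / 8.0) + np.exp(-(x - 150.0)**2 / 8.0)
psf = gaussian_psf(n, sigma=3.0)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The restored signal recovers frequency content up to where |H|² falls below λ, which is exactly the regime in which a narrower (e.g. donut-derived) effective PSF helps.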
NASA Astrophysics Data System (ADS)
Ehret, G.; Amediek, A.; Wirth, M.; Fix, A.; Kiemle, C.; Quatrevalet, M.
2016-12-01
We report on a new method, and on its first demonstration, to quantify emission rates from strong greenhouse gas (GHG) point sources using airborne Integrated Path Differential Absorption (IPDA) lidar measurements. In order to build trust in the emission rates self-reported by countries, verification against independent monitoring systems is a prerequisite for checking the reported budget. A significant fraction of the total anthropogenic emission of CO2 and CH4 originates from localized strong point sources at large energy production sites or landfills. Neither is monitored with sufficient accuracy by the current observation system. There is a debate whether airborne remote sensing could fill this gap by inferring emission rates from budgeting or from Gaussian plume inversion approaches, whereby measurements of the GHG column abundance beneath the aircraft are used to constrain inverse models. In contrast to passive sensors, the use of an active instrument like CHARM-F for such emission verification measurements is new. CHARM-F is a new airborne IPDA lidar devised for the German research aircraft HALO for the simultaneous measurement of the column-integrated dry-air mixing ratios of CO2 and CH4, commonly denoted as XCO2 and XCH4, respectively. It has successfully been tested in a series of flights over Central Europe to assess its performance under various reflectivity conditions and in strongly varying topography such as the Alps. The analysis of a methane plume measured in the crosswind direction of a coal mine ventilation shaft revealed an instantaneous emission rate of 9.9 ± 1.7 kt CH4 yr-1. We discuss the methodology of our point-source estimation approach and give an outlook on the CoMet field experiment, scheduled in 2017, for the measurement of anthropogenic and natural GHG emissions by a combination of active and passive remote sensing instruments on research aircraft.
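The budgeting (mass-balance) approach mentioned above can be sketched with synthetic numbers: the emission rate is the wind speed times the crosswind integral of the column enhancement. All values below are hypothetical, not CHARM-F data.

```python
import numpy as np

# synthetic column enhancement across the plume (kg of gas per m^2),
# taken to be Gaussian in the crosswind coordinate y (hypothetical values)
u = 5.0                      # wind speed normal to the flight track, m/s
Q_true = 0.3                 # "true" emission rate, kg/s
sigma_y = 400.0              # plume half-width, m
y = np.linspace(-3000.0, 3000.0, 601)
col = Q_true / (np.sqrt(2*np.pi) * sigma_y * u) * np.exp(-y**2 / (2*sigma_y**2))

# mass balance: emission rate = wind speed times crosswind integral of the enhancement
Q_est = u * np.trapz(col, y)            # kg/s
Q_kt_per_yr = Q_est * 3.15576e7 / 1e6   # convert to kt/yr, the unit used in the abstract
```

With a clean Gaussian transect the integral returns the source rate exactly; real transects add noise, background subtraction, and wind uncertainty.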
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as Monte Carlo analysis capability, are included to enable statistical performance evaluations.
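As a minimal sketch of the kind of filtering such a toolkit wraps, here is a plain linear Kalman filter for a 1-D constant-velocity target; the adaptive, desensitized, and sigma-point variants the abstract mentions are beyond this toy, and all tuning values are arbitrary.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=1.0):
    """Linear Kalman filter for a 1-D constant-velocity model; returns position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])   # process noise
    R = np.array([[r]])                       # measurement noise variance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2) * 10.0
    out = []
    for z in zs:
        x = F @ x                                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)                # measurement update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

On noisy position measurements of a steadily moving target, the filtered track should beat the raw measurements in RMS error once the transient has died out.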
NASA Astrophysics Data System (ADS)
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, including radio-frequency identification (RFID), ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning, realizing relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photo diodes and a trilateration technique. By appropriately using the characteristics of receiver movements and the properties of trilateration, estimates of three-dimensional (3-D) coordinates are obtained. A filtering technique is applied to give the algorithm tracking capability, reaching higher accuracy than the raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
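The trilateration step can be sketched as a linearized least-squares fix; the LED positions and receiver location below are hypothetical, and the GM-SPPF tracking layer is omitted.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares position fix: subtracting the first range equation
    removes the quadratic |p|^2 term, leaving a linear system in p."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0]**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# hypothetical ceiling-mounted LED positions (m) and a receiver location
leds = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0], [0.0, 4.0, 3.0], [4.0, 4.0, 2.5]])
truth = np.array([1.2, 2.0, 0.8])
ranges = np.linalg.norm(leds - truth, axis=1)   # ranges inferred from RSS in practice
estimate = trilaterate(leds, ranges)
```

With exact ranges and non-coplanar anchors the fix is exact; RSS-derived ranges are noisy, which is what motivates the filtering stage.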
A Gaussian Approximation Potential for Silicon
NASA Astrophysics Data System (ADS)
Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor
We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
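The underlying Gaussian process regression can be sketched in one dimension with an RBF kernel standing in for SOAP. This is not the GAP code; the kernel, length scale, and data are illustrative only.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * ell**2))

# noisy samples of a 1-D stand-in for a potential energy surface
rng = np.random.default_rng(1)
xt = np.linspace(-2.0, 2.0, 25)
yt = np.sin(2.0 * xt) + 0.01 * rng.standard_normal(25)

K = rbf(xt, xt) + 1e-4 * np.eye(25)     # kernel matrix plus noise/jitter term
alpha = np.linalg.solve(K, yt)          # GP regression weights

xs = np.linspace(-2.0, 2.0, 101)
mean = rbf(xs, xt) @ alpha              # posterior mean prediction
```

The fitted mean interpolates the reference data smoothly, which is the mechanism GAP uses (with SOAP kernels over atomic environments) to reproduce DFT energies.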
Manzhos, Sergei; Carrington, Tucker
2016-12-14
We demonstrate that it is possible to use basis functions that depend on curvilinear internal coordinates to compute vibrational energy levels without deriving a kinetic energy operator (KEO) and without numerically computing coefficients of a KEO. This is done by using a space-fixed KEO and computing KEO matrix elements numerically. Whenever one has an excellent basis, more accurate solutions to the Schrödinger equation can be obtained by computing the KEO, potential, and overlap matrix elements numerically. Using a Gaussian basis and bond coordinates, we compute vibrational energy levels of formaldehyde. We show, for the first time, that it is possible with a Gaussian basis to solve a six-dimensional vibrational Schrödinger equation. For the zero-point energy (ZPE) and the lowest 50 vibrational transitions of H2CO, we obtain a mean absolute error of less than 1 cm-1; with 200 000 collocation points and 40 000 basis functions, most errors are less than 0.4 cm-1.
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Carrington, Tucker
2016-12-01
We demonstrate that it is possible to use basis functions that depend on curvilinear internal coordinates to compute vibrational energy levels without deriving a kinetic energy operator (KEO) and without numerically computing coefficients of a KEO. This is done by using a space-fixed KEO and computing KEO matrix elements numerically. Whenever one has an excellent basis, more accurate solutions to the Schrödinger equation can be obtained by computing the KEO, potential, and overlap matrix elements numerically. Using a Gaussian basis and bond coordinates, we compute vibrational energy levels of formaldehyde. We show, for the first time, that it is possible with a Gaussian basis to solve a six-dimensional vibrational Schrödinger equation. For the zero-point energy (ZPE) and the lowest 50 vibrational transitions of H2CO, we obtain a mean absolute error of less than 1 cm-1; with 200 000 collocation points and 40 000 basis functions, most errors are less than 0.4 cm-1.
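A 1-D toy in the same spirit, with matrix elements computed numerically over a distributed Gaussian basis, is the harmonic oscillator (exact levels E_n = n + 1/2 in units hbar = m = omega = 1). The basis placement and widths below are arbitrary choices, and this is a variational rather than collocation solve.

```python
import numpy as np
from scipy.linalg import eigh

# 1-D harmonic oscillator; distributed Gaussian basis with numerical matrix elements
x = np.linspace(-8.0, 8.0, 4001)
centers = np.linspace(-3.5, 3.5, 11)
w = 0.7
G = np.exp(-(x[None, :] - centers[:, None])**2 / (2.0 * w**2))   # basis on the grid

V = 0.5 * x**2                                                   # potential
S = np.trapz(G[:, None, :] * G[None, :, :], x, axis=2)           # overlap matrix
# kinetic term uses the analytic second derivative of each Gaussian
G2 = ((x[None, :] - centers[:, None])**2 / w**4 - 1.0 / w**2) * G
H = np.trapz(G[:, None, :] * (-0.5 * G2[None, :, :] + V * G[None, :, :]), x, axis=2)

E = eigh(H, S, eigvals_only=True)   # generalized eigenproblem H c = E S c
```

Eleven Gaussians already reproduce the lowest levels to high accuracy; the paper's achievement is scaling this kind of construction to six dimensions with collocation.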
Analysis of non-Gaussian laser mode guidance and evolution in leaky plasma channels
NASA Astrophysics Data System (ADS)
Djordjevic, Blagoje; Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim
2016-10-01
The evolution and propagation of a non-Gaussian laser pulse under varying circumstances, including a typical matched parabolic channel as well as leaky channels, are investigated. It has previously been shown that matched guiding of a Gaussian pulse can be achieved using parabolic plasma channels. In the low-power regime, it can be shown directly that multi-mode pulses exhibit significant transverse beating, and the interaction between different modes can adversely affect the laser pulse as it propagates through the primary channel. Given this adverse behavior of non-Gaussian pulses in traditional guiding designs, we examine the use of leaky channels to filter out higher-order modes as a means of optimizing laser conditions. Realistic plasma channel profiles are considered. Higher-order mode content is lost through the leaky channel, while the fundamental mode remains well guided. This is demonstrated using both numerical simulations and the source-dependent Laguerre-Gaussian modal expansion. In conclusion, an idealized plasma lens based on leaky channels is found to filter out the higher-order modes and leave a near-Gaussian profile before the pulse enters the primary channel.
Coarse Point Cloud Registration by Egi Matching of Voxel Clusters
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo
2016-06-01
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often several scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two tests presented resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
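The PCA dimensionality features used to label voxels can be sketched from the covariance eigenvalues λ1 ≥ λ2 ≥ λ3; the feature definitions below are one common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def dimensionality(points):
    """PCA eigenvalue features of a 3-D point set: (linearity, planarity, scattering)."""
    c = points - points.mean(axis=0)
    lam = np.sort(np.linalg.eigvalsh(np.cov(c.T)))[::-1]   # lam1 >= lam2 >= lam3
    l1, l2, l3 = lam
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# synthetic voxel contents: a thin planar slab and a roughly linear structure
rng = np.random.default_rng(0)
plane = rng.uniform(-1.0, 1.0, (500, 3)) * np.array([1.0, 1.0, 0.01])
line = np.outer(rng.uniform(-1.0, 1.0, 500), np.array([1.0, 0.2, 0.1]))
```

Voxels with similar dominant features (linear, planar, scattered) would then be clustered before building the EGI descriptors.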
An X-Ray Counterpart of HESS J1427-608 Discovered with Suzaku
NASA Astrophysics Data System (ADS)
Fujinaga, Takahisa; Mori, Koji; Bamba, Aya; Kimura, Shoichi; Dotani, Tadayasu; Ozaki, Masanobu; Matsuta, Keiko; Pülhofer, Gerd; Uchiyama, Hideki; Hiraga, Junko S.; Matsumoto, Hironori; Terada, Yukikatsu
2013-06-01
We report on the discovery of an X-ray counterpart of the unidentified very-high-energy gamma-ray source HESS J1427-608. In the sky field coincident with HESS J1427-608, an extended source was found in the 2-8 keV band and was designated Suzaku J1427-6051. Its X-ray radial profile has an extension of σ = 0.9' ± 0.1' if approximated by a Gaussian. The spectrum was well fitted by an absorbed power law with NH = (1.1 ± 0.3) × 10^23 cm^-2, Γ = 3.1 (+0.6, -0.5), and an unabsorbed flux FX = 9 (+4, -2) × 10^-13 erg s^-1 cm^-2 in the 2-10 keV band. Using XMM-Newton archive data, we found seven point sources in the Suzaku source region. However, because their total flux and absorbing column densities are more than an order of magnitude lower than those of Suzaku J1427-6051, we consider them unrelated to the Suzaku source. Thus, Suzaku J1427-6051 is considered to be a truly diffuse source and an X-ray counterpart of HESS J1427-608. The possible nature of HESS J1427-608 is discussed based on the observational properties.
Non-Gaussian and Multivariate Noise Models for Signal Detection.
1982-09-01
Some of the basic results of asymptotic theory are presented, both to make the notation clear and to give background. ... densities are considered within a detection framework. The discussions include specific examples and also some general methods of density generation. ... The class of densities generated by a memoryless, nonlinear transformation of a correlated Gaussian source is discussed in some detail. A member of this class has the
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
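The Kaniadakis deformation itself is compact: exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0 and decays as a power law in the tails, which is what produces the fat-tailed behavior. A sketch:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: reduces to exp(x) as kappa -> 0 and has
    power-law tails, decaying roughly like |2*kappa*x|**(-1/kappa) for x -> -inf."""
    x = np.asarray(x, dtype=float)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)
```

A noise density built from exp_κ(-x²/2) therefore assigns far more probability to extreme moves than the Gaussian it deforms.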
NASA Astrophysics Data System (ADS)
Loye, A.; Jaboyedoff, M.; Pedrazzini, A.
2009-10-01
The availability of high-resolution Digital Elevation Models (DEMs) at a regional scale enables the analysis of topography with a high level of detail. Hence, a DEM-based geomorphometric approach becomes more accurate for detecting potential rockfall sources. Potential rockfall source areas are identified according to the slope angle distribution deduced from the high-resolution DEM, crossed with other information extracted from geological and topographic maps in GIS format. The slope angle distribution can be decomposed into several Gaussian distributions that can be considered characteristic of morphological units: rock cliffs, steep slopes, footslopes and plains. Terrain is considered a potential rockfall source when its slope angle lies above a threshold, defined as the angle where the Gaussian distribution of the morphological unit "rock cliffs" becomes dominant over that of "steep slopes". In addition to this analysis, the cliff outcrops indicated by the topographic maps were added. These, however, contain "flat areas", so only slope angle values above the mode of the Gaussian distribution of the morphological unit "steep slopes" were considered. An application of this method is presented over the entire Canton of Vaud (3200 km2), Switzerland. The results were compared with rockfall sources observed in the field and by orthophoto analysis in order to validate the method. Finally, the influence of the cell size of the DEM is examined by applying the methodology to six different DEM resolutions.
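The threshold construction can be sketched with two hypothetical Gaussian components; the weights, means, and standard deviations below are invented for illustration, not the Vaud decomposition.

```python
import numpy as np

def gauss(x, mu, sig):
    return np.exp(-(x - mu)**2 / (2.0 * sig**2)) / (np.sqrt(2.0 * np.pi) * sig)

# hypothetical decomposition of a slope-angle histogram (degrees)
angles = np.linspace(0.0, 90.0, 9001)
steep = 0.6 * gauss(angles, 38.0, 8.0)    # "steep slopes" unit (weight, mean, std assumed)
cliff = 0.2 * gauss(angles, 62.0, 7.0)    # "rock cliffs" unit

# threshold = first angle above the steep-slope mean where the cliff component dominates
idx = np.argmax((angles > 38.0) & (cliff > steep))
threshold = angles[idx]
```

Cells whose slope exceeds this crossover angle would be flagged as potential rockfall sources.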
NASA Astrophysics Data System (ADS)
Krings, Thomas; Neininger, Bruno; Gerilowski, Konstantin; Krautwurst, Sven; Buchwitz, Michael; Burrows, John P.; Lindemann, Carsten; Ruhtz, Thomas; Schüttemeyer, Dirk; Bovensmann, Heinrich
2018-02-01
Reliable techniques to infer greenhouse gas emission rates from localised sources require accurate measurement and inversion approaches. In this study airborne remote sensing observations of CO2 by the MAMAP instrument and airborne in situ measurements are used to infer emission estimates of carbon dioxide released from a cluster of coal-fired power plants. The study area is complex due to sources being located in close proximity and overlapping associated carbon dioxide plumes. For the analysis of in situ data, a mass balance approach is described and applied, whereas for the remote sensing observations an inverse Gaussian plume model is used in addition to a mass balance technique. A comparison between methods shows that results for all methods agree within 10 % or better with uncertainties of 10 to 30 % for cases in which in situ measurements were made for the complete vertical plume extent. The computed emissions for individual power plants are in agreement with results derived from emission factors and energy production data for the time of the overflight.
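The inverse Gaussian plume idea can be sketched on synthetic data: fit a Gaussian to the cross-plume column enhancement and read off the source rate from its amplitude, width, and the wind speed. All numbers below are hypothetical, not MAMAP observations.

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic cross-plume column enhancements (kg m^-2), hypothetical values
u, Q_true, sy = 4.0, 0.5, 500.0          # wind speed m/s, source rate kg/s, plume sigma m
y = np.linspace(-3000.0, 3000.0, 121)
rng = np.random.default_rng(3)
obs = (Q_true / (np.sqrt(2*np.pi) * sy * u) * np.exp(-y**2 / (2*sy**2))
       + 2e-6 * rng.standard_normal(y.size))

def plume(y, Q, sy):
    """Crosswind column enhancement of a steady point source of strength Q."""
    return Q / (np.sqrt(2*np.pi) * sy * u) * np.exp(-y**2 / (2*sy**2))

(Q_est, sy_est), _ = curve_fit(plume, y, obs, p0=[0.1, 300.0])
```

The fit recovers the source rate to within a few percent at this signal-to-noise ratio; real inversions must additionally handle overlapping plumes from neighboring stacks, as in the study above.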
NASA Technical Reports Server (NTRS)
Blais, R. N.; Copeland, G. E.; Lerner, T. H.
1975-01-01
A technique is proposed for measuring smoke plumes of large industrial sources observed by satellite using LARSYS. A Gaussian plume model is described, integrated in the vertical, and inverted to yield a form for the lateral diffusion coefficient Ky. Given u, the wind speed, and y1, the horizontal distance of a line of constant brightness from the plume symmetry axis at a distance x1 downstream from a reference point at x = x2, y = 0, the coefficient is Ky = u y1^2 / [2 x1 ln(x2/x1)]. The technique is applied to a plume from a power plant at Chester, Virginia, imaged August 31, 1973 by LANDSAT I. The plume bends slightly to the left 4.3 km from the source, and estimates yield Ky of 28 m^2/s near the source and 19 m^2/s beyond the bend. Maximum ground concentrations are estimated between 32 and 64 μg/m^3. Existing meteorological data would not explain such concentrations.
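Plugging hypothetical values (not the Chester plume data) into the relation Ky = u·y1²/(2·x1·ln(x2/x1)) gives:

```python
import numpy as np

# hypothetical inputs for the inverted Gaussian plume relation
u = 3.0        # wind speed, m/s
y1 = 200.0     # lateral distance of the constant-brightness contour, m
x1 = 2000.0    # downstream distance of that contour point, m
x2 = 4000.0    # downstream distance of the reference point, m

Ky = u * y1**2 / (2.0 * x1 * np.log(x2 / x1))   # lateral diffusion coefficient, m^2/s
```

Dimensionally, (m/s)·m²/m gives m²/s, consistent with a diffusion coefficient.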
Development and application of a reactive plume-in-grid model: evaluation over Greater Paris
NASA Astrophysics Data System (ADS)
Korsakissok, I.; Mallet, V.
2010-02-01
Emissions from major point sources are badly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a bad representation of the vertical diffusion, as well as an incorrect estimate of the chemical reaction rates, are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance is assessed by comparison to observations at measurement stations. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment, with a decrease in RMSE by up to about 17% for SO2 and 7% for NO at measurement stations. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Reactive species are mostly sensitive to the local-scale parameters, such as the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment.
Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.
Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng
2013-11-20
Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.
Gaussian quadrature for multiple orthogonal polynomials
NASA Astrophysics Data System (ADS)
Coussement, Jonathan; van Assche, Walter
2005-06-01
We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.
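For the classical (r = 1) case, the connection between quadrature and the eigenvalue problem of the recurrence matrix is the Golub-Welsch construction, sketched here for Gauss-Legendre; the multiple-orthogonality extension in the paper replaces the tridiagonal Jacobi matrix with a banded Hessenberg matrix.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: nodes are the eigenvalues of the symmetric tridiagonal Jacobi
    matrix built from the three-term recurrence; weights come from the first
    components of the eigenvectors (mu_0 = 2 for the Legendre weight on [-1, 1])."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # off-diagonal recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)      # diagonal is zero for Legendre
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :]**2
    return nodes, weights

x, w = gauss_legendre(5)
integral = np.sum(w * x**8)    # n-point rule is exact for polynomials up to degree 2n-1 = 9
```

The five-point rule integrates x^8 over [-1, 1] exactly (the true value is 2/9).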
Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise
NASA Technical Reports Server (NTRS)
Kvalseth, T. O.
1977-01-01
This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.
The Herschel-ATLAS: magnifications and physical sizes of 500-μm-selected strongly lensed galaxies
NASA Astrophysics Data System (ADS)
Enia, A.; Negrello, M.; Gurwell, M.; Dye, S.; Rodighiero, G.; Massardi, M.; De Zotti, G.; Franceschini, A.; Cooray, A.; van der Werf, P.; Birkinshaw, M.; Michałowski, M. J.; Oteo, I.
2018-04-01
We perform lens modelling and source reconstruction of Sub-millimetre Array (SMA) data for a sample of 12 strongly lensed galaxies selected at 500μm in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). A previous analysis of the same data set used a single Sérsic profile to model the light distribution of each background galaxy. Here we model the source brightness distribution with an adaptive pixel scale scheme, extended to work in the Fourier visibility space of interferometry. We also present new SMA observations for seven other candidate lensed galaxies from the H-ATLAS sample. Our derived lens model parameters are in general consistent with previous findings. However, our estimated magnification factors, ranging from 3 to 10, are lower. The discrepancies are observed in particular where the reconstructed source hints at the presence of multiple knots of emission. We define an effective radius of the reconstructed sources based on the area in the source plane where emission is detected above 5σ. We also fit the reconstructed source surface brightness with an elliptical Gaussian model. We derive a median value r_eff ~ 1.77 kpc and a median Gaussian full width at half-maximum ~1.47 kpc. After correction for magnification, our sources have intrinsic star formation rates (SFR) ~ 900-3500 M⊙ yr-1, resulting in a median SFR surface density ΣSFR ~ 132 M⊙ yr-1 kpc-2 (or ~218 M⊙ yr-1 kpc-2 for the Gaussian fit). This is consistent with that observed for other star-forming galaxies at similar redshifts, and is significantly below the Eddington limit for a radiation pressure regulated starburst.
Anatomy of an experimental two-link flexible manipulator under end-point control
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Cannon, Robert H., Jr.
1990-01-01
The design and experimental implementation of an end-point controller for two-link flexible manipulators are presented. The end-point controller is based on linear quadratic Gaussian (LQG) theory and is shown to exhibit significant improvements in trajectory tracking over a conventional controller design. To understand the behavior of the manipulator structure under end-point control, a strobe sequence illustrating the link deflections during a typical slew maneuver is included.
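The LQR half of an LQG design can be sketched for a double integrator, a crude stand-in for a single rigid mode; this is not the paper's flexible-link model, and the cost weights are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# double integrator: state x = [position, velocity], force input
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # control cost

P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # optimal state feedback u = -K x
closed = A - B @ K                      # closed-loop dynamics
```

For these weights the gain works out analytically to K = [1, √3], and the closed loop is stable. A full LQG controller would pair this gain with a Kalman estimator reconstructing the (flexible) state from end-point measurements.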
CENTAURUS A AS A POINT SOURCE OF ULTRAHIGH ENERGY COSMIC RAYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae, E-mail: hbkim@hanyang.ac.kr
We probe the possibility that Centaurus A (Cen A) is a point source of ultrahigh energy cosmic rays (UHECRs) observed by the Pierre Auger Observatory (PAO), through statistical analysis of the arrival direction distribution. For this purpose, we set up the Cen A dominance model for the UHECR sources, in which Cen A contributes a fraction f_C of all UHECRs with energy above 5.5 × 10^19 eV and the isotropic background contributes the remaining 1 - f_C fraction. The effect of the intergalactic magnetic fields on the bending of the trajectories of Cen A originated UHECRs is parameterized by the Gaussian smearing angle θ_s. For the statistical analysis, we adopted the correlational angular distance distribution (CADD) for the reduction of the arrival direction distribution and the Kuiper test to compare the observed and expected CADDs. We identify an excess of UHECRs in the Cen A direction and fit the CADD of the observed PAO data by varying the two parameters f_C and θ_s of the Cen A dominance model. The best-fit parameter values are f_C ≈ 0.1 (the corresponding Cen A fraction observed at PAO is f_C,PAO ≈ 0.15, that is, about 10 out of 69 UHECRs) and θ_s = 5° with the maximum likelihood L_max = 0.29. This result supports the existence of a point source smeared by the intergalactic magnetic fields in the direction of Cen A. If Cen A is actually the source responsible for the observed excess of UHECRs, the rms deflection angle of the excess UHECRs implies an intergalactic magnetic field of order 10 nG in the vicinity of Cen A.
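The Kuiper test used to compare the observed and expected CADDs is based on the statistic V = D+ + D-, a cousin of Kolmogorov-Smirnov that weights all parts of the distribution (including the tails and, for circular data, all origins) evenly. A sketch of the statistic itself, without the model fitting:

```python
import numpy as np

def kuiper_statistic(sample, cdf):
    """Kuiper's V = D+ + D-: the largest deviations of the empirical CDF above
    and below the model CDF, summed."""
    x = np.sort(np.asarray(sample, float))
    n = x.size
    F = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    return d_plus + d_minus
```

A sample that matches the model gives V near 1/n, while a clustered sample (an excess in one direction, as in the Cen A analysis) inflates V.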
The robustness of truncated Airy beam in PT Gaussian potentials media
NASA Astrophysics Data System (ADS)
Wang, Xianni; Fu, Xiquan; Huang, Xianwei; Yang, Yijun; Bai, Yanfeng
2018-03-01
The robustness of a truncated Airy beam in parity-time (PT) symmetric Gaussian potentials media is numerically investigated. A high-peak-power beam sheds from the Airy beam due to the media modulation, while the Airy wavefront still retains its self-bending and non-diffraction characteristics under the influence of the modulation parameters. Increasing the modulation factor reduces the maximum power of the center beam, while increasing the modulation depth has the opposite effect. However, the parabolic trajectory of the Airy wavefront is not influenced. By exploiting these unique features, the Airy beam can be used as a long-distance transmission source in PT symmetric Gaussian potentials media.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K
2018-02-01
In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging (fMRI) data as a surrogate for null fMRI datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.
MacKenzie, Donald; Spears, Taylor
2014-06-01
Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.
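The mechanics of the one-factor Gaussian copula, the simplest member of the family and assumed here purely for illustration, can be sketched as a latent-variable simulation of a loan pool: correlated Gaussian latents are thresholded to produce correlated defaults with a prescribed marginal probability.

```python
import numpy as np
from scipy.stats import norm

# one-factor Gaussian copula: borrower i defaults when the latent variable
# sqrt(rho)*M + sqrt(1-rho)*Z_i falls below norm.ppf(p)
rng = np.random.default_rng(7)
p, rho = 0.05, 0.3                  # marginal default probability, asset correlation
n_sims, n_loans = 2000, 250
M = rng.standard_normal((n_sims, 1))           # common market factor
Z = rng.standard_normal((n_sims, n_loans))     # idiosyncratic factors
latent = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * Z
losses = (latent < norm.ppf(p)).mean(axis=1)   # default fraction in each scenario
```

The marginal default rate stays at p, but the common factor produces scenarios in which a large fraction of the pool defaults together, the fat-tailed loss behavior that made the model's calibration so consequential in the crisis.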
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
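The inversion loop described above (an L2 waveform misfit with L1 regularization and a diagonal-Hessian preconditioner) follows the general iterative soft-thresholding pattern. A toy numpy sketch under our own simplified diagonal forward operator, not the authors' Gaussian beam operator:

```python
import numpy as np

def lsm_step(m, grad, hess_diag, alpha, lam):
    """One preconditioned gradient step for an L2 misfit with L1 regularization:
    a diagonal-Hessian-scaled update followed by soft-thresholding (ISTA-style)."""
    m_new = m - alpha * grad / hess_diag
    thresh = alpha * lam / hess_diag          # threshold scaled consistently
    return np.sign(m_new) * np.maximum(np.abs(m_new) - thresh, 0.0)

# Toy problem: minimize 0.5*||G m - d||^2 + lam*||m||_1 with a diagonal G
G = np.diag([2.0, 0.5])
d = np.array([4.0, 1.0])
lam, alpha = 0.1, 1.0
hess = np.diag(G.T @ G)                        # diagonal Hessian preconditioner
m = np.zeros(2)
for _ in range(50):
    m = lsm_step(m, G.T @ (G @ m - d), hess, alpha, lam)
print(np.allclose(m, [1.975, 1.6]))            # soft-thresholded LS solution
```

For this separable toy problem the iterate converges to the exact minimizer, each component being the least-squares solution shrunk by its per-component threshold.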
Chen, Sen; Luo, Sheng Nian
2018-03-01
Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10-100 times larger than those using monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectral shape is explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra represented by two undulator sources of the Advanced Photon Source are examined as an example, as regards the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.
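The Guinier approach mentioned above fits ln I(q) against q^2 at small q, using the Guinier law I(q) ≈ I0 exp(-q^2 Rg^2 / 3). A minimal numpy sketch (ours, with synthetic numbers) recovering the radius of gyration:

```python
import numpy as np

def guinier_radius(q, intensity):
    """Estimate the radius of gyration Rg from the Guinier law
    I(q) ~ I0 * exp(-q^2 * Rg^2 / 3), valid in the small-q regime."""
    slope, _ = np.polyfit(q**2, np.log(intensity), 1)
    return np.sqrt(-3.0 * slope)

# Synthetic Guinier-regime curve for a particle with Rg = 20 (arbitrary units)
q = np.linspace(1e-3, 0.05, 100)
I = 1e5 * np.exp(-q**2 * 20.0**2 / 3.0)
print(round(guinier_radius(q, I), 3))  # → 20.0
```

With a broadband spectrum, the measured curve is a flux-weighted average over wavelengths, which is what biases this simple monochromatic estimate.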
Time-optimal thermalization of single-mode Gaussian states
NASA Astrophysics Data System (ADS)
Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio
2014-11-01
We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potentially interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
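The pre-whitening step mentioned above decorrelates the mixed channels so the whitened data have identity covariance before source extraction. A minimal numpy sketch (ours; the mixing matrix is made up for illustration):

```python
import numpy as np

def prewhiten(X):
    """Decorrelate mixed signals X (channels x samples) so that the whitened
    data have identity covariance, as done before ICA source extraction."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(Xc)
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # symmetric whitening matrix
    return W @ Xc

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 5000))            # two independent sources
A = np.array([[1.0, 0.8], [0.3, 1.0]])        # hypothetical mixing matrix
Z = prewhiten(A @ S)
print(np.allclose(np.cov(Z), np.eye(2), atol=1e-8))
```

After whitening, ICA only needs to find a rotation, which is what the higher-order statistics determine.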
Characterizing transient noise in the LIGO detectors
NASA Astrophysics Data System (ADS)
Nuttall, L. K.
2018-05-01
Data from the LIGO detectors typically contain many non-Gaussian noise transients which arise due to instrumental and environmental conditions. These non-Gaussian transients can be an issue for the modelled and unmodelled transient gravitational-wave searches, as they can mask or mimic a true signal. Data quality can change quite rapidly, making it imperative to track and find new sources of transient noise so that data are minimally contaminated. Several examples of transient noise and the tools used to track them are presented. These instances serve to highlight the diverse range of noise sources present at the LIGO detectors during their second observing run. This article is part of a discussion meeting issue `The promises of gravitational-wave astronomy'.
Gaussian quantum steering and its asymmetry in curved spacetime
NASA Astrophysics Data System (ADS)
Wang, Jieci; Cao, Haixin; Jing, Jiliang; Fan, Heng
2016-06-01
We study Gaussian quantum steering and its asymmetry in the background of a Schwarzschild black hole. We present a Gaussian channel description of quantum state evolution under the influence of Hawking radiation. We find that thermal noise introduced by the Hawking effect will destroy the steerability between an inertial observer Alice and an accelerated observer Bob who hovers outside the event horizon, while it generates steerability between Bob and a hypothetical observer anti-Bob inside the event horizon. Unlike entanglement behaviors in curved spacetime, here the steering from Alice to Bob suffers a "sudden death" and the steering from anti-Bob to Bob experiences a "sudden birth" with increasing Hawking temperature. We also find that the Gaussian steering is always asymmetric and that the maximum steering asymmetry cannot exceed ln 2, which means the state never evolves to an extremal-asymmetry state. Furthermore, we obtain the parameter settings that maximize the steering asymmetry and find that (i) s = arccosh[cosh(2r)/(1 - sinh(2r))] is the critical point of the steering asymmetry and (ii) the attainment of maximal steering asymmetry indicates the transition between one-way and two-way steerability for the two-mode Gaussian state under the influence of Hawking radiation.
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Hui
2018-05-01
The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in a subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccurate computed wave fields in strongly inhomogeneous regions of the medium. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to handle the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the accuracy of the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Soumya D.; Nayak, Rajesh K.
The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10^-4 to 10^-3 Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude, the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources in the presence of white Gaussian noise.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of the errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem, and ordinary linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions to the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust to low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
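In the special case where each CP's error covariance is a scalar multiple of the identity, generalized least squares reduces to weighted least squares with per-point weights. A minimal numpy sketch of an affine fit in that spirit (ours; the transform and point counts are made up):

```python
import numpy as np

def wls_affine(src, dst, weights):
    """Weighted least-squares fit of a 2-D affine map dst ~ A @ src + t,
    with per-point weights (e.g. inverse error variances). Illustrative only."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])        # [x y 1] design matrix
    W = np.diag(weights)
    P = np.linalg.solve(M.T @ W @ M, M.T @ W @ dst)
    return P[:2].T, P[2]                          # A (2x2) and t (2,)

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (20, 2))
A_true = np.array([[1.02, 0.01], [-0.02, 0.99]])
t_true = np.array([3.0, -1.5])
dst = src @ A_true.T + t_true                     # noiseless for the check
A_est, t_est = wls_affine(src, dst, np.ones(20))
print(np.allclose(A_est, A_true) and np.allclose(t_est, t_true))
```

With heteroscedastic CPs, the weights would come from per-point localization variances (e.g. photon counts), which is the regime the paper analyzes.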
Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity
McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.
2011-01-01
Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756
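The non-optical blur index defined above is a one-line computation; a sketch using the abstract's definition (the numbers below are made up for illustration):

```python
def non_optical_blur_index(sigma_opt, sigma_int):
    """Index of non-optical blur from the abstract: eta = 1 - sigma_opt/sigma_int.
    eta -> 0 when intrinsic blur is entirely optical; eta -> 1 when it is not."""
    return 1.0 - sigma_opt / sigma_int

# Hypothetical subject: optical blur 0.5 arcmin, equivalent intrinsic blur 2.0 arcmin
print(non_optical_blur_index(0.5, 2.0))  # → 0.75
```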
The effect of unresolved contaminant stars on the cross-matching of photometric catalogues
NASA Astrophysics Data System (ADS)
Wilson, Tom J.; Naylor, Tim
2017-07-01
A fundamental process in astrophysics is the matching of two photometric catalogues. It is crucial that the correct objects be paired, and that their photometry does not suffer from any spurious additional flux. We compare the positions of sources in Wide-field Infrared Survey Explorer (WISE), INT Photometric H α Survey, Two Micron All Sky Survey and AAVSO Photometric All Sky Survey with Gaia Data Release 1 astrometric positions. We find that the separations are described by a combination of a Gaussian distribution, wider than naively assumed based on their quoted uncertainties, and a large wing, which some authors ascribe to proper motions. We show that this is caused by flux contamination from blended stars not treated separately. We provide linear fits between the quoted Gaussian uncertainty and the core fit to the separation distributions. We show that at least one in three of the stars in the faint half of a given catalogue will suffer from flux contamination above the 1 per cent level when the density of catalogue objects per point spread function area is above approximately 0.005. This has important implications for the creation of composite catalogues. It is important for any closest neighbour matches as there will be a given fraction of matches that are flux contaminated, while some matches will be missed due to significant astrometric perturbation by faint contaminants. In the case of probability-based matching, this contamination affects the probability density function of matches as a function of distance. This effect results in up to 50 per cent fewer counterparts being returned as matches, assuming Gaussian astrometric uncertainties for WISE-Gaia matching in crowded Galactic plane regions, compared with a closest neighbour match.
Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.
Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R
2013-11-01
A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.
Theory and generation of conditional, scalable sub-Gaussian random fields
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2016-03-01
Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by jointly analyzing spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one- and two-dimensional synthetic test cases.
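A common way to build a sub-Gaussian variable is to multiply a Gaussian core by a positive random subordinator; the product keeps a symmetric distribution but gains a sharper peak and heavier tails. The sketch below (ours; the lognormal subordinator and its width are one simple choice, not necessarily the GSG specification) checks this via excess kurtosis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
G = rng.standard_normal(n)                        # Gaussian core
U = rng.lognormal(mean=0.0, sigma=0.4, size=n)    # positive subordinator
Y = U * G                                         # sub-Gaussian product

# Excess kurtosis: 0 for a Gaussian, > 0 for the heavy-tailed product.
# Analytically it equals 3*exp(4*sigma_U^2)/exp(2*sigma_U^2) - 3 ~ 2.7 here.
kurt = np.mean(Y**4) / np.mean(Y**2)**2 - 3.0
print(kurt > 0.5)
```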
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat ΛCDM (Λ cold dark matter). Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
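The Box-Cox transform named above maps positive data through (x^λ - 1)/λ (log x at λ = 0). A minimal numpy sketch (ours) showing that λ = 0 exactly Gaussianizes a lognormal sample, checked via sample skewness:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam for lam != 0, log(x) for lam == 0."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

# A lognormal sample is exactly Gaussianized by lam = 0
rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)
z = boxcox(x, 0.0)
skew = np.mean((z - z.mean())**3) / z.std()**3
print(abs(skew) < 0.05)
```

In the paper's setting λ (and shift/scale generalizations) are fitted by maximum likelihood rather than known in advance.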
Ground deposition of liquid droplets released from a point source in the atmospheric surface layer
NASA Astrophysics Data System (ADS)
Panneton, Bernard
1989-01-01
A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets and droplet coalescence is inhibited by the rotational motion of the spray head. The two dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's technique. Models of the crosswind integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al are evaluated. The results indicate that the Wilson et al random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.
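The arcwise distributions above are summarized by a Gaussian standard deviation; a moment-based numpy stand-in for such a fit (ours, with a synthetic deposit pattern, not the modified Pasquill technique itself):

```python
import numpy as np

def arcwise_sigma(theta, deposit):
    """Standard deviation of an arcwise deposit distribution, computed from
    its first two moments (a simple stand-in for a Gaussian fit)."""
    w = deposit / deposit.sum()
    mean = np.sum(w * theta)
    return np.sqrt(np.sum(w * (theta - mean)**2))

theta = np.linspace(-40.0, 40.0, 321)             # degrees across the sampling arc
deposit = np.exp(-0.5 * (theta / 8.0)**2)         # Gaussian pattern, sigma = 8 deg
print(round(arcwise_sigma(theta, deposit), 2))    # → 8.0
```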
NASA Astrophysics Data System (ADS)
Refaeli, Zaharit; Shamir, Yariv; Ofir, Atara; Marcus, Gilad
2018-02-01
We report a simple robust and broadly spectral-adjustable source generating near fully compressed 1053 nm 62 fs pulses directly out of a highly-nonlinear photonic crystal fiber. A dispersion-nonlinearity balance of 800 nm Ti:Sa 20 fs pulses was obtained initially by negative pre-chirping and then launching the pulses into the fibers' normal dispersion regime. Following a self-phase modulation spectral broadening, some energy that leaked below the zero dispersion point formed a soliton whose central wavelength could be tuned by Self-Frequency-Raman-Shift effect. Contrary to a common approach of power, or, fiber-length control over the shift, here we continuously varied the state of polarization, exploiting the Raman and Kerr nonlinearities responsivity for state of polarization. We obtained soliton pulses with central wavelength tuned over 150 nm, spanning from well below 1000 to over 1150 nm, of which we could select stable pulses around the 1 μm vicinity. With linewidth of > 20 nm FWHM Gaussian-like temporal-shape pulses with 62 fs duration and near flat phase structure we confirmed high quality pulse source. We believe such scheme can be used for high energy or high power glass lasers systems, such as Nd or Yb ion-doped amplifiers and systems.
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
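The gaussian higher-order kernels mentioned above (Hermite polynomial times the gaussian density, in the Wand and Schucany construction) can be checked numerically. The fourth-order member is K4(x) = (3 - x^2)/2 · φ(x); a numpy sketch (ours) verifying its defining moment conditions:

```python
import numpy as np

def k4(x):
    """Fourth-order gaussian kernel (Wand-Schucany construction via Hermite
    polynomials): K4(x) = (3 - x^2)/2 * phi(x), phi the standard normal pdf."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return 0.5 * (3.0 - x**2) * phi

# A fourth-order kernel integrates to 1 and has a vanishing second moment
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
m0 = np.sum(k4(x)) * dx
m2 = np.sum(x**2 * k4(x)) * dx
print(abs(m0 - 1.0) < 1e-6, abs(m2) < 1e-6)
```

The asymmetrical Laguerre- and Bessel-based kernels of the letter follow the same moment-cancellation idea with skewed weight functions.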
Chaudret, Robin; Gresh, Nohad; Narth, Christophe; Lagardère, Louis; Darden, Thomas A; Cisneros, G Andrés; Piquemal, Jean-Philip
2014-09-04
We demonstrate as a proof of principle the capabilities of a novel hybrid MM'/MM polarizable force field to integrate short-range quantum effects in molecular mechanics (MM) through the use of Gaussian electrostatics. This leads to a further gain in accuracy in the representation of the first coordination shell of metal ions. It uses advanced electrostatics and couples two point-dipole polarizable force fields, namely, the Gaussian electrostatic model (GEM), a model based on density fitting, which uses fitted electronic densities to evaluate nonbonded interactions, and SIBFA (sum of interactions between fragments ab initio computed), which resorts to distributed multipoles. To understand the benefits of the use of Gaussian electrostatics, we first evaluate the accuracy of GEM, a pure density-based Gaussian electrostatics model, on a test Ca(II)-H2O complex. GEM is shown to further improve the agreement of MM polarization with ab initio reference results. Indeed, GEM introduces nonclassical effects by modeling the short-range quantum behavior of electric fields and therefore enables a straightforward (and selective) inclusion of the sole overlap-dependent exchange-polarization repulsive contribution by means of a Gaussian damping function acting on the GEM fields. The S/G-1 scheme is then introduced. Upon limiting the use of Gaussian electrostatics to metal centers only, it is shown to be able to capture the dominant quantum effects at play in the metal coordination sphere. S/G-1 is able to accurately reproduce ab initio total interaction energies within closed-shell metal complexes, reproducing each individual contribution, including induction, polarization, and charge transfer. Applications of the method are provided for various systems including the HIV-1 NCp7-Zn(II) metalloprotein. S/G-1 is then extended to heavy metal complexes.
Tested on Hg(II) water complexes, S/G-1 is shown to accurately model polarization up to the quadrupolar response level. This opens up the possibility of embodying explicit scalar relativistic effects in molecular mechanics thanks to the direct transferability of ab initio pseudopotentials. Therefore, incorporating a GEM-like electron density for a metal cation enables the introduction of unambiguous short-range quantum effects within any point-dipole-based polarizable force field without the need for extensive parametrization.
Reverse engineering gene regulatory networks from measurement with missing values.
Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong
2016-12-01
Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as a part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm.
This approach produces better inference of the model parameters and hence more accurate prediction of the underlying GRN than the conventional Gaussian approximation (GA) filters, which ignore the missing data points.
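The quadrature idea behind such point-based Gaussian approximation filters can be illustrated with the unscented transform: a small set of sigma points carries a Gaussian belief through a nonlinearity, and the transformed points are re-summarized as a Gaussian. The sketch below is illustrative only; the Hill-type nonlinearity and all numbers are stand-ins, not the paper's gene-regulation model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f
    using the unscented transform (sigma-point approximation)."""
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus scaled square-root columns of cov
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    # Standard UT weights for the mean and covariance estimates
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Propagate each sigma point and re-estimate the Gaussian
    y = np.array([f(s) for s in sigma])
    mean_y = wm @ y
    diff = y - mean_y
    cov_y = (wc[:, None] * diff).T @ diff
    return mean_y, cov_y

# Example: a Hill-type saturating nonlinearity, as often assumed in
# gene-regulation models (purely illustrative here)
f = lambda x: x**2 / (1.0 + x**2)
m, P = unscented_transform(np.array([1.0]), np.array([[0.04]]), f)
```

In a filter, this propagation step would be applied to the state prediction and to the measurement model in turn; the missing-measurement handling in the paper modifies the measurement update, not this core transform.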
Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models
NASA Astrophysics Data System (ADS)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
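A toy illustration of why a Gaussian fit to a correlator can be performed analytically: for a one-dimensional correlator of the assumed form C(q) = 1 + exp(-R²q²), taking the logarithm turns the fit into linear least squares, which has a closed-form solution. The 1-D form and the numbers below are hypothetical simplifications, not the paper's 3-D fitting procedure.

```python
import numpy as np

# Fit an "HBT radius" R to a sampled correlator C(q) = 1 + exp(-R^2 q^2)
# by log-linearizing: log(C - 1) = -R^2 q^2 is linear in q^2, so the
# Gaussian fit reduces to least squares through the origin.
def fit_hbt_radius(q, C):
    y = np.log(C - 1.0)
    x = q**2
    slope = np.sum(x * y) / np.sum(x * x)   # closed-form least squares
    return np.sqrt(-slope)

R_true = 5.0                        # assumed source radius (illustrative units)
q = np.linspace(0.01, 0.2, 50)      # relative-momentum samples (illustrative units)
C = 1.0 + np.exp(-(R_true * q)**2)
R_fit = fit_hbt_radius(q, C)        # recovers R_true on noise-free data
```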
Low-frequency radio constraints on the synchrotron cosmic web
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Gaensler, B. M.; Brown, S.; Lenc, E.; Norris, R. P.
2017-06-01
We present a search for emission from the synchrotron cosmic web by cross-correlating 180-MHz radio images from the Murchison Widefield Array with tracers of large-scale structure (LSS). We use two versions of the radio image covering 21.76° × 21.76° with point sources brighter than 0.05 Jy subtracted, with and without filtering of Galactic emission. As tracers of the LSS, we use the Two Micron All-Sky Survey and the Wide-field InfraRed Explorer redshift catalogues to produce galaxy number density maps. The cross-correlation functions all show peak amplitudes at 0°, decreasing with varying slopes towards zero correlation over a range of 1°. The cross-correlation signals include components from point-source, Galactic, and extragalactic diffuse emission. We use models of the diffuse emission from smoothing the density maps with Gaussians of sizes 1-4 Mpc to find limits on the cosmic web components. From these models, we find surface brightness 99.7 per cent upper limits in the range of 0.09-2.20 mJy beam⁻¹ (average beam size of 2.6 arcmin), corresponding to 0.01-0.30 mJy arcmin⁻². Assuming equipartition between the energy densities of cosmic rays and the magnetic field, the flux density limits translate to magnetic field strength limits of 0.03-1.98 μG, depending heavily on the spectral index. We conclude that for a 3σ detection of 0.1 μG magnetic field strengths via cross-correlations, image depths of sub-mJy to sub-μJy are necessary. We include discussion on the treatment and effect of extragalactic point sources and Galactic emission, and next steps for building on this work.
Illumination system development using design and analysis of computer experiments
NASA Astrophysics Data System (ADS)
Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter
2015-09-01
Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often yield a solution that is influenced by the starting point by converging to a local minimum, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are then used as input to a Gaussian process (GP), which provides a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, which is more likely to yield the optimal parameters. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was 6 times faster than the standard simplex method at an equal uniformity of 97%. The obtained results were successfully validated experimentally using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources, with 10% relative error.
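The GP surrogate step can be sketched in a few lines of NumPy. This is a minimal sketch assuming a radial-basis (RBF) kernel, noise-free merit evaluations, and illustrative design points; the actual DACE surrogate, kernel, and hyperparameter fitting in the paper may differ.

```python
import numpy as np

# Minimal Gaussian-process interpolator (RBF kernel), standing in for the
# DACE surrogate of the merit function at the ray-traced design points.
def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-8):
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length)**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                   # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior variance
    return mean, var

# Toy "merit" values at four design points (illustrative only)
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
m, v = gp_predict(X, y, np.array([1.5]))
```

A global optimizer would then search this cheap surrogate (mean and variance) instead of running a ray-tracing simulation at every candidate configuration.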
Symmetries for Light-Front Quantization of Yukawa Model with Renormalization
NASA Astrophysics Data System (ADS)
Żochowski, Jan; Przeszowski, Jerzy A.
2017-12-01
In this work we discuss the Yukawa model with an extra self-interacting scalar field term in D=1+3 dimensions. We present a method for deriving the light-front commutators and anti-commutators from the Heisenberg equations induced by the kinematical generating operator of the translation P+. These Heisenberg equations are the starting point for obtaining the algebra of (anti-)commutators. Some discrepancies between the existing and the proposed methods of quantization are revealed. The Lorentz and CPT symmetries, together with some features of the quantum theory, were applied to obtain the two-point Wightman function for the free fermions. Moreover, these Wightman functions were computed without referring to the Fock expansion. The Gaussian effective potential for the Yukawa model was found in terms of the Wightman functions. It was regularized by the space-like point-splitting method. The coupling constants within the model were redefined. The optimum mass parameters remained regularization independent. Finally, the Gaussian effective potential was renormalized.
Estimating near-road pollutant dispersion: a model inter-comparison
A model inter-comparison study to assess the abilities of steady-state Gaussian dispersion models to capture near-road pollutant dispersion has been carried out with four models (AERMOD, run with both the area-source and volume-source options to represent roadways, CALINE, versio...
Non-Gaussian limit fluctuations in active swimmer suspensions
NASA Astrophysics Data System (ADS)
Kurihara, Takashi; Aridome, Msato; Ayade, Heev; Zaid, Irwin; Mizuno, Daisuke
2017-03-01
We investigate the hydrodynamic fluctuations in suspensions of swimming microorganisms (Chlamydomonas) by observing probe particles dispersed in the media. Short-term fluctuations of the probe particles were superdiffusive and displayed heavy-tailed non-Gaussian distributions. The analytical theory that explains the observed distribution was derived by summing the power-law-decaying hydrodynamic interactions from spatially distributed field sources (here, swimming microorganisms). The summing procedure, which we refer to as the physical limit operation, is applicable to a variety of physical fluctuations to which the classical central limit theorem does not apply. Extending the analytical formula to compare to experiments in active swimmer suspensions, we show that the non-Gaussian shape of the observed distribution obeys the analytic theory, concomitantly with independently determined parameters such as the strength of force generation and the concentration of Chlamydomonas. The time evolution of the distributions collapsed to a single master curve, except for their extreme tails, for which our theory presents a qualitative explanation. Investigations thereof and the complete agreement with theoretical predictions revealed the broad applicability of the formula to dispersions of active sources of fluctuations.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
The carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
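For context, the Gaussian plume estimate discussed above can be written in a few lines. This is a minimal ground-reflected plume sketch; the power-law coefficients for σy and σz below are generic placeholders for illustration, not the GA-optimised values obtained in the paper.

```python
import math

# Ground-level concentration downwind of a continuous elevated point source,
# Gaussian plume form with ground reflection (image source at -H).
# Q: emission rate, u: wind speed, x/y/z: receptor position, H: stack height.
def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.92, c=0.06, d=0.91):
    sigma_y = a * x**b            # horizontal dispersion parameter (placeholder law)
    sigma_z = c * x**d            # vertical dispersion parameter (placeholder law)
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

c0 = plume_concentration(Q=1.0, u=5.0, x=500.0, y=0.0, z=0.0, H=10.0)
```

The optimisation described in the paper effectively replaces the placeholder (a, b, c, d) laws with values tuned against CFD reference profiles for each atmospheric stability class.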
NASA Astrophysics Data System (ADS)
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
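The artifact mechanism can be shown with a toy numerical experiment (the geometry and lags below are invented for the example): when one noise source is an exact time-lagged repeat of another, the cross-correlation of two receivers develops spurious side peaks offset by the source repeat time, in addition to the physical peak at the inter-receiver lag.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
src = rng.standard_normal(n)
repeat = np.roll(src, 300)      # a second source: exact copy of src, 300 samples later

true_lag = 25                   # inter-receiver travel-time difference (samples)
rec_a = src + repeat
rec_b = np.roll(src, true_lag) + np.roll(repeat, true_lag)

# Circular cross-correlation via FFT: c[k] = sum_t rec_a[t] * rec_b[t+k].
# The physical peak sits at true_lag; the correlated source pair adds
# spurious peaks offset by +/- 300 samples from it.
c = np.fft.irfft(np.conj(np.fft.rfft(rec_a)) * np.fft.rfft(rec_b))
```

In this construction the strongest peak remains the physical one, but the spurious peak at `true_lag + 300` carries comparable energy, which is the kind of coherent artifact the derivation above predicts.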
Broadband superluminescent erbium source with multiwave pumping
NASA Astrophysics Data System (ADS)
Petrov, Andrey B.; Gumenyuk, Regina; Alimbekov, Mikhail S.; Zhelezov, Pavel E.; Kikilich, Nikita E.; Aleynik, Artem S.; Meshkovsky, Igor K.; Golant, Konstantin M.; Chamorovskii, Yuri K.; Odnoblyudov, Maxim; Filippov, Valery
2018-04-01
We demonstrate a superbroad luminescence source based on a pure Er-doped fiber and a two-wavelength pumping scheme. This source can provide over 80 nm of spectral bandwidth with a flat spectral shape close to a Gaussian distribution. The corresponding coherence and decoherence lengths were as small as 7 μm and 85 μm, respectively. The parameters of the Er-doped fiber luminescence source were explored theoretically and experimentally.
Ionospheric scintillation studies
NASA Technical Reports Server (NTRS)
Rino, C. L.; Freemouw, E. J.
1973-01-01
The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wenfang; Du, Jinjin; Wen, Ruijuan
We have investigated the transmission spectra of a Fabry-Perot interferometer (FPI) with squeezed vacuum state injection and non-Gaussian detection, including photon-number-resolving detection and parity detection. In order to show the suitability of the system, parallel studies were made of the performance of two other light sources: a coherent state of light and a Fock state of light, either with classical mean intensity detection or with non-Gaussian detection. This shows that by using the squeezed vacuum state and non-Gaussian detection simultaneously, the resolution of the FPI can go far beyond the cavity standard bandwidth limit based on current techniques. The sensitivity of the scheme has also been explored, and it shows that the minimum detectable sensitivity is better than that of the other schemes.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods and then makes a critical comparison of them. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential, a.k.a. online, Gaussian processes; batch Gaussian processes; and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this use of the CFD surrogate in building the WT surrogate could serve as a "merging," because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model-basis-function concept seems to place more "weight" on the Cp values from the wind tunnel (WT), because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), a product of experts, and a generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single-expert approach.
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) among the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., not as many engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the costs of these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing from the First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.)
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
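The comparison at the heart of this method can be sketched as follows, assuming a simple delay embedding; the signal, embedding dimension, and delay below are illustrative choices. A low-dimensional deterministic signal concentrates its covariance in a few eigenvalues, while an i.i.d. Gaussian process of the same length and dimension spreads them nearly evenly.

```python
import numpy as np

# Delay-embed a 1-D time series into `dim`-dimensional vectors.
def embed(x, dim, delay=1):
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

rng = np.random.default_rng(1)
t = np.arange(2000) * 0.05
signal = np.sin(t)                 # a limit cycle: only 2 significant directions
E = embed(signal, dim=5)
eig_sig = np.sort(np.linalg.eigvalsh(np.cov(E.T)))[::-1]

noise = rng.standard_normal(len(signal))   # reference: i.i.d. Gaussian process
En = embed(noise, dim=5)
eig_noise = np.sort(np.linalg.eigvalsh(np.cov(En.T)))[::-1]
```

The paper goes further by attaching a statistical test to this comparison, giving the probability that the embedded signal's eigenvalue spectrum could have come from the Gaussian reference.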
NASA Astrophysics Data System (ADS)
Smith, David R.; Gowda, Vinay R.; Yurduseven, Okan; Larouche, Stéphane; Lipworth, Guy; Urzhumov, Yaroslav; Reynolds, Matthew S.
2017-01-01
Wireless power transfer (WPT) has been an active topic of research, with a number of WPT schemes implemented in the near-field (coupling) and far-field (radiation) regimes. Here, we consider a beamed WPT scheme based on a dynamically reconfigurable source aperture transferring power to receiving devices within the Fresnel region. In this context, the dynamic aperture resembles a reconfigurable lens capable of focusing power to a well-defined spot, whose dimension can be related to a point spread function. The necessary amplitude and phase distribution of the field imposed over the aperture can be determined in a holographic sense, by interfering a hypothetical point source located at the receiver location with a plane wave at the aperture location. While conventional technologies, such as phased arrays, can achieve the required control over phase and amplitude, they typically do so at a high cost; alternatively, metasurface apertures can achieve dynamic focusing with potentially lower cost. We present an initial tradeoff analysis of the Fresnel region WPT concept assuming a metasurface aperture, relating the key parameters such as spot size, aperture size, wavelength, and focal distance, as well as reviewing system considerations such as the availability of sources and power transfer efficiency. We find that approximate design formulas derived from the Gaussian optics approximation provide useful estimates of system performance, including transfer efficiency and coverage volume. The accuracy of these formulas is confirmed through numerical studies.
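The Gaussian-optics estimates mentioned above reduce to the standard beam-waist relations. A back-of-envelope sketch follows; the frequency, aperture size, and focal distance are illustrative assumptions (not the paper's design values), and the aperture radius is taken as the illuminating beam waist.

```python
import math

# Gaussian-optics estimates for a focusing aperture: focal-spot radius and
# depth-of-focus scale from the beam-waist relations.
wavelength = 3e8 / 10e9            # ~3 cm at an assumed 10 GHz
aperture_radius = 0.5              # m, assumed beam waist at the aperture
focal_distance = 2.0               # m, receiver within the Fresnel region

# Waist at the focus for a beam of waist `aperture_radius` focused at f
spot_radius = wavelength * focal_distance / (math.pi * aperture_radius)
# Rayleigh range of the focused spot: a scale for the depth of focus
rayleigh_range = math.pi * spot_radius**2 / wavelength
```

Such estimates give the point-spread-function scale of the "reconfigurable lens" and show how spot size trades off against wavelength, aperture size, and focal distance.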
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios, as measured by the log-likelihood-ratio cost (Cllr), in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
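The contrast between the two hypotheses can be sketched with a score-based, one-dimensional caricature of the likelihood ratio. The mixture weights, means, and variances below are invented for illustration; the paper's model is multivariate and fitted to data.

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd)**2) / (sd * math.sqrt(2 * math.pi))

# LR = p(x | same source) / p(x | different source), where the
# between-source density is a Gaussian mixture rather than a single kernel.
def likelihood_ratio(x, mu_known, within_sd, gmm):
    # numerator: same-source hypothesis (within-source variation only)
    num = norm_pdf(x, mu_known, within_sd)
    # denominator: x arose from a random source drawn from the
    # between-source GMM, given as (weight, mean, sd) components
    den = sum(w * norm_pdf(x, m, sd) for w, m, sd in gmm)
    return num / den

gmm = [(0.6, 0.0, 2.0), (0.4, 5.0, 1.5)]          # illustrative two-component GMM
lr_close = likelihood_ratio(0.1, mu_known=0.0, within_sd=0.3, gmm=gmm)
lr_far = likelihood_ratio(3.0, mu_known=0.0, within_sd=0.3, gmm=gmm)
```

A measurement near the known source's mean yields LR > 1 (supporting the same-source hypothesis), while a distant one yields LR < 1; the GMM denominator is what the paper refines relative to the kernel-density approach.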
Characterizing transient noise in the LIGO detectors.
Nuttall, L K
2018-05-28
Data from the LIGO detectors typically contain many non-Gaussian noise transients which arise due to instrumental and environmental conditions. These non-Gaussian transients can be an issue for the modelled and unmodelled transient gravitational-wave searches, as they can mask or mimic a true signal. Data quality can change quite rapidly, making it imperative to track and find new sources of transient noise so that data are minimally contaminated. Several examples of transient noise and the tools used to track them are presented. These instances serve to highlight the diverse range of noise sources present at the LIGO detectors during their second observing run. This article is part of a discussion meeting issue 'The promises of gravitational-wave astronomy'. © 2018 The Author(s).
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that point clouds can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points in a point cloud can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points can be labelled as the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
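A minimal sketch of the idea, assuming a one-dimensional height feature and two mixture components; the actual algorithm operates on richer features and adds intensity-based refinement. EM alternates between assigning each point a posterior responsibility (E-step) and re-estimating the component parameters (M-step), after which each point takes the label of its more likely component, with no manually tuned threshold.

```python
import math
import random

# Fit a two-component 1-D Gaussian mixture (e.g., ground vs. object heights)
# by expectation-maximization.
def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]       # crude initialization at the data extremes
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the ratio)
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k])**2) / sd[k]
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            rk = [r[k] for r in resp]
            nk = sum(rk)
            w[k] = nk / len(xs)
            mu[k] = sum(r * x for r, x in zip(rk, xs)) / nk
            var = sum(r * (x - mu[k])**2 for r, x in zip(rk, xs)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    return mu, sd, w

# Synthetic heights: 300 "ground" points near 0.2 m, 100 "object" points near 5 m
random.seed(0)
heights = [random.gauss(0.2, 0.1) for _ in range(300)] \
        + [random.gauss(5.0, 0.8) for _ in range(100)]
mu, sd, w = em_two_gaussians(heights)
```

Labelling then reduces to comparing the two per-point responsibilities computed in the final E-step.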
Ultrasound beam transmission using a discretely orthogonal Gaussian aperture basis
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2018-04-01
Work is reported on development of a computational model for ultrasound beam transmission at an arbitrary geometry transmission interface for generally anisotropic materials. The work addresses problems encountered when the fundamental assumptions of ray theory do not hold, thereby introducing errors into ray-theory-based transmission models. Specifically, problems occur when the asymptotic integral analysis underlying ray theory encounters multiple stationary phase points in close proximity, due to focusing caused by concavity on either the entry surface or a material slowness surface. The approach presented here projects integrands over both the transducer aperture and the entry surface beam footprint onto a Gaussian-derived basis set, thereby distributing the integral over a summation of second-order phase integrals which are amenable to single stationary phase point analysis. Significantly, convergence is assured provided a sufficiently fine distribution of basis functions is used.
Detecting Compartmental non-Gaussian Diffusion with Symmetrized Double-PFG MRI
Paulsen, Jeffrey L.; Özarslan, Evren; Komlosh, Michal E.; Basser, Peter J.; Song, Yi-Qiao
2015-01-01
Diffusion in tissue and porous media is known to be non-Gaussian and has been used for clinical indications of stroke and other tissue pathologies. However, when conventional NMR techniques are applied to biological tissues and other heterogeneous materials, the presence of multiple compartments (pores) with different Gaussian diffusivities will also contribute to the measurement of non-Gaussian behavior. Here we present Symmetrized Double PFG (sd-PFG), which can separate these two contributions to non-Gaussian signal decay as having distinct angular modulation frequencies. In contrast to prior angular d-PFG methods, sd-PFG can unambiguously extract kurtosis as an oscillation from samples with isotropic or uniformly oriented anisotropic pores, and can generally extract a combination of compartmental anisotropy and kurtosis. The method further fixes its sensitivity with respect to the time-dependence of the apparent diffusion coefficient. We experimentally demonstrate the measurement of the fourth moment (kurtosis) of diffusion and find it consistent with theoretical predictions. By enabling the unambiguous identification of contributions of compartmental kurtosis to the signal, sd-PFG has the potential to help identify the underlying micro-structural changes corresponding to current kurtosis based diagnostics and act as a novel source of contrast to better resolve tissue micro-structure. PMID:26434812
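The compartmental ambiguity described above can be seen in a two-line moment calculation: mixing two Gaussian displacement distributions with different variances produces positive excess kurtosis even though each compartment is purely Gaussian. The weights and variances below are arbitrary illustrative values.

```python
import math

# Excess kurtosis of a two-component mixture of zero-mean Gaussians.
# Each Gaussian contributes a fourth moment of 3*var^2, so the mixture's
# kurtosis exceeds the Gaussian value 3 whenever the variances differ.
def mixture_excess_kurtosis(var1, var2, w=0.5):
    m2 = w * var1 + (1 - w) * var2                 # second moment
    m4 = 3 * (w * var1**2 + (1 - w) * var2**2)     # fourth moment
    return m4 / m2**2 - 3.0

k_equal = mixture_excess_kurtosis(1.0, 1.0)   # single diffusivity: Gaussian
k_mixed = mixture_excess_kurtosis(1.0, 4.0)   # two compartments: kurtotic
```

This is exactly why a conventional kurtosis measurement cannot distinguish intra-compartment non-Gaussian diffusion from a mixture of Gaussian compartments, and why sd-PFG's angular modulation is needed to separate the two contributions.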
Turbulent Plume Dispersion over Two-dimensional Idealized Urban Street Canyons
NASA Astrophysics Data System (ADS)
Wong, C. C. C.; Liu, C. H.
2012-04-01
Human activities are the primary pollutant sources which degrade the quality of life in the current era of dense and compact cities. A simple and reasonably accurate pollutant dispersion model is helpful for reducing pollutant concentrations at city or neighborhood scales by refining architectural design or urban planning. The conventional method to estimate the pollutant concentration from point/line sources is the Gaussian plume model with empirical dispersion coefficients. Its accuracy is good for rural areas. However, the dispersion coefficients only account for the atmospheric stability and the streamwise distance, and thus often overlook the roughness of urban surfaces. Large-scale buildings erected in urban areas significantly modify the surface roughness, which in turn affects the pollutant transport in the urban canopy layer (UCL). We hypothesize that the aerodynamic resistance is another factor governing the dispersion coefficient in the UCL. This study is thus conceived to study the effects of urban roughness on pollutant dispersion coefficients and plume behaviors. Large-eddy simulations (LESs) are carried out to examine the plume dispersion from a ground-level pollutant source over idealized 2D street canyons in neutral stratification. Computations with a wide range of aspect ratios (ARs), covering the skimming flow to isolated flow regimes, are conducted. The vertical profiles of pollutant distribution for different values of friction factor are compared, and all reach a self-similar Gaussian shape. Preliminary results show that the pollutant dispersion is closely related to the friction factor. For relatively small roughness, the dispersion coefficients vary linearly with the friction factor until the roughness exceeds a certain level. When the friction factor is large, its effect on the dispersion coefficient is less significant.
Since the linear region covers at least one-third of the full range of friction factor in our empirical analysis, urban roughness is a major factor for dispersion coefficient. The downstream air quality could then be a function of both atmospheric stability and urban roughness.
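The conventional Gaussian plume model that this study takes as its baseline can be sketched as follows. The linear growth coefficients a and b for the dispersion parameters σy and σz are illustrative stand-ins for empirical stability-class curves, not values from the study.

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=20.0, a=0.22, b=0.20):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
    sigma_y = a*x and sigma_z = b*x are illustrative linear fits, not a
    standard stability class.
    """
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    # image term mirrors the source below ground to enforce zero flux there
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical
```

The lateral factor is symmetric in y, and ground-level centerline concentration decays downwind once the plume has mixed to the surface.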
GRBs as standard candles: There is no “circularity problem” (and there never was)
NASA Astrophysics Data System (ADS)
Graziani, Carlo
2011-02-01
Beginning with the 2002 discovery of the "Amati Relation" of GRB spectra, there has been much interest in the possibility that this and other correlations of GRB phenomenology might be used to make GRBs into standard candles. One recurring apparent difficulty with this program has been that some of the primary observational quantities to be fit as "data" - to wit, the isotropic-equivalent prompt energy Eiso and the collimation-corrected "total" prompt energy Eγ - depend for their construction on the very cosmological models that they are supposed to help constrain. This is the so-called "circularity problem" of standard candle GRBs. This paper is intended to point out that the circularity problem is not in fact a problem at all, except to the extent that it amounts to a self-inflicted wound. It arises essentially because of an unfortunate choice of data variables - "source-frame" variables such as Eiso, which are unnecessarily encumbered by cosmological considerations. If, instead, the empirical correlations of GRB phenomenology which are formulated in source-variables are mapped to the primitive observational variables (such as fluence) and compared to the observations in that space, then all taint of circularity disappears. I also indicate here a set of procedures for encoding high-dimensional empirical correlations (such as between Eiso, Epk(src),tjet(src), and T45(src)) in a "Gaussian Tube" smeared model that includes both the correlation and its intrinsic scatter, and how that source-variable model may easily be mapped to the space of primitive observables, to be convolved with the measurement errors and fashioned into a likelihood. I discuss the projections of such Gaussian tubes into sub-spaces, which may be used to incorporate data from GRB events that may lack some element of the data (for example, GRBs without ascertained jet-break times). 
In this way, a large set of inhomogeneously observed GRBs may be assimilated into a single analysis, so long as each possesses at least two correlated data attributes.
Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks.
Chabanon, Morgan; Rangamani, Padmini
2018-03-28
Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, P; Chang Gung University, Taoyuan, Taiwan; Huang, H
Purpose: In this study, we present an effective method to derive the low-dose envelope of the proton in-air spot fluence at beam-axis positions other than the isocenter, reducing the number of measurements required for commissioning. We also present commissioning and validation results of this method for the Eclipse treatment planning system (version 13.0.29) for a Sumitomo dedicated proton line-scanning beam nozzle. Methods: The in-air spot profiles at five beam-axis positions (±200, ±100, and 0 mm) were obtained in trigger mode using an MP3 water tank (PTW-Freiburg) and a pinpoint ionization chamber (model 31014, PTW-Freiburg). The low-dose envelope (below 1% of the center dose) of the spot profile at isocenter was obtained by repeated point measurements to minimize dosimetric uncertainty. The double Gaussian (DG) model was used to fit and obtain the optimal σ1, σ2 and their corresponding weightings through our in-house MATLAB (MathWorks) program. σ1 and σ2 were assumed to expand linearly along the beam axis from a virtual source position calculated by back-projecting the sigmas fitted with the single Gaussian (SG) model. Absolute doses in water were validated using an Advanced Markus chamber at a depth of 2 cm, with pristine Bragg peak (BP) R90 depths ranging from 5 to 32 cm, for 10×10 cm² scanned fields. The field-size factors were verified with square fields from 2 to 20 cm at 2 cm depth and just before the BP depth. Results: The absolute dose outputs were found to be within ±3%. For the field-size factor, the agreement between calculation and measurement was within ±2% at 2 cm and ±3% before the BP, except for field sizes below 2×2 cm². Conclusion: The double Gaussian model was found to be sufficient for characterizing the Sumitomo dedicated proton line-scanning nozzle. With our effective double-Gaussian fitting method, we are able to save significant proton beam time with acceptable output accuracy.
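The double-Gaussian spot parameterization can be sketched with a standard least-squares fit. The weight and sigma values below are illustrative, not the commissioned beam data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(r, w, s1, s2):
    """Radially symmetric double-Gaussian spot profile (unit peak amplitude).
    w: weight of the narrow core; s1, s2: core and halo sigmas (mm)."""
    return (w * np.exp(-r**2 / (2 * s1**2))
            + (1 - w) * np.exp(-r**2 / (2 * s2**2)))

# synthetic "measured" in-air profile with a 1% low-dose halo (illustrative)
r = np.linspace(-30.0, 30.0, 301)
profile = double_gaussian(r, 0.99, 4.0, 12.0)

# recover w, sigma1, sigma2 from the profile
popt, _ = curve_fit(double_gaussian, r, profile, p0=[0.9, 3.0, 10.0])
```

On noiseless data the fit recovers the generating parameters; in practice the halo weight is small, so accurate low-dose tail measurements (as in the abstract) are what constrain s2.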
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set through tensor operations. The proposed algorithm has lower computational and memory complexity than existing algorithms. We also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
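The Cartesian-product speedup can be sketched with the standard Kronecker identity for product kernels on full grids: the Gram matrix factors as K1 ⊗ K2, so the GP linear solve costs O(n1³ + n2³) instead of O((n1·n2)³). The RBF kernel and the eigendecomposition route below are one common realization of such algorithms, not necessarily the authors' exact formulation (which also handles missing points and anisotropy regularization).

```python
import numpy as np

def rbf(x, y, ell):
    """Squared-exponential kernel matrix between 1-D point sets."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def kron_solve(K1, K2, Y, noise):
    """Solve (K1 (x) K2 + noise*I) alpha = vec(Y) using per-factor
    eigendecompositions; alpha is returned reshaped as an n1 x n2 matrix."""
    w1, V1 = np.linalg.eigh(K1)
    w2, V2 = np.linalg.eigh(K2)
    S = V1.T @ Y @ V2                      # rotate into the joint eigenbasis
    S /= w1[:, None] * w2[None, :] + noise # divide by Kronecker eigenvalues
    return V1 @ S @ V2.T                   # rotate back

# grid data: 4 x 3 Cartesian product of 1-D designs (illustrative sizes)
K1 = rbf(np.arange(4.0), np.arange(4.0), 1.5)
K2 = rbf(np.arange(3.0), np.arange(3.0), 1.0)
Y = np.arange(12.0).reshape(4, 3)
alpha = kron_solve(K1, K2, Y, 0.1)
```

The result matches the dense solve `np.linalg.solve(np.kron(K1, K2) + noise*I, Y.flatten())` while never forming the n1·n2 × n1·n2 matrix.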
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Barratt, Craig H.
1990-01-01
Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.
Statistics and topology of the COBE differential microwave radiometer first-year sky maps
NASA Technical Reports Server (NTRS)
Smoot, G. F.; Tenorio, L.; Banday, A. J.; Kogut, A.; Wright, E. L.; Hinshaw, G.; Bennett, C. L.
1994-01-01
We use statistical and topological quantities to test the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) first-year sky maps against the hypothesis that the observed temperature fluctuations reflect Gaussian initial density perturbations with random phases. Recent papers discuss specific quantities as discriminators between Gaussian and non-Gaussian behavior, but the treatment of instrumental noise on the data is largely ignored. The presence of noise in the data biases many statistical quantities in a manner dependent on both the noise properties and the unknown cosmic microwave background temperature field. Appropriate weighting schemes can minimize this effect, but it cannot be completely eliminated. Analytic expressions are presented for these biases, and Monte Carlo simulations are used to assess the best strategy for determining cosmologically interesting information from noisy data. The genus is a robust discriminator that can be used to estimate the power-law quadrupole-normalized amplitude, Q_rms-PS, independently of the two-point correlation function. The genus of the DMR data is consistent with Gaussian initial fluctuations with Q_rms-PS = (15.7 ± 2.2) − (6.6 ± 0.3)(n − 1) μK, where n is the power-law index. Fitting the rms temperature variations at various smoothing angles gives Q_rms-PS = 13.2 ± 2.5 μK and n = 1.7 (+0.3, −0.6). While consistent with Gaussian fluctuations, the first-year data are only sufficient to rule out strongly non-Gaussian distributions of fluctuations.
NASA Astrophysics Data System (ADS)
Vio, R.; Vergès, C.; Andreani, P.
2017-08-01
The matched filter (MF) is one of the most popular and reliable techniques to detect signals of known structure and amplitude smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, the MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, the MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to the noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced called the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
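A minimal sketch of matched filtering in one dimension: correlate the data with the known signal template and locate the peak of the filter output. The template shape, amplitude, and injection position are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# known template: unit-amplitude Gaussian pulse (illustrative shape)
t = np.arange(-10, 11)
template = np.exp(-0.5 * (t / 3.0) ** 2)

# data: template of amplitude 5 injected at sample 120 in unit white noise
n, true_pos = 256, 120
signal = rng.normal(0.0, 1.0, n)
signal[true_pos - 10:true_pos + 11] += 5.0 * template

# matched filter = cross-correlation with the template; the peak of the
# output is the detection statistic and position estimate
mf = np.correlate(signal, template, mode="same")
peak = int(np.argmax(mf))
```

Note that turning the peak height into a significance is exactly where the PFA subtleties discussed in the abstract arise: the peak is a maximum over many correlated samples, not a single Gaussian draw.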
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
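One concrete instance of this construction can be sketched using the standard correspondence between q-Gaussians with 1 < q < 3 and rescaled Student-t laws with ν = (3 − q)/(q − 1) degrees of freedom: a t variate is a function of two independent gamma variables with the same scale parameter (here scale 2, i.e. chi-square variables). This is a special case for illustration, not the paper's full scheme.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

q = 1.5
nu = (3 - q) / (q - 1)   # = 3 degrees of freedom for q = 1.5

n = 20_000
g1 = rng.gamma(shape=0.5, scale=2.0, size=n)      # chi^2 with 1 dof
g2 = rng.gamma(shape=nu / 2, scale=2.0, size=n)   # chi^2 with nu dof
sign = rng.choice([-1.0, 1.0], size=n)

# function of two same-scale gamma variables, distributed as Student-t(nu),
# i.e. a q-Gaussian up to rescaling
x = sign * np.sqrt(nu * g1 / g2)

# sanity check against the Student-t law
_, pvalue = stats.kstest(x, "t", args=(nu,))
```

The shape parameters of the two gamma variables (1/2 and ν/2 here) are what set q, in line with the abstract's statement that the shape index determines the complexity parameter.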
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how to simulate spatiotemporal characteristics of multisite daily precipitation from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for a non-Gaussian flexible tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
Radiation detector spectrum simulator
Wolf, Michael A.; Crowell, John M.
1987-01-01
A small battery operated nuclear spectrum simulator having a noise source that generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
Radiation detector spectrum simulator
Wolf, M.A.; Crowell, J.M.
1985-04-09
A small battery operated nuclear spectrum simulator having a noise source that generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
DOT National Transportation Integrated Search
2000-06-19
The Environmental Protection Agency (EPA) currently recommends the use of CALINE3 or CAL3QHC for modeling the dispersion of carbon monoxide (CO) near roadways. These models treat vehicles as part of a line source such that the emissions are homogeneo...
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
Wind-tunnel Modelling of Dispersion from a Scalar Area Source in Urban-Like Roughness
NASA Astrophysics Data System (ADS)
Pascheke, Frauke; Barlow, Janet F.; Robins, Alan
2008-01-01
A wind-tunnel study was conducted to investigate ventilation of scalars from urban-like geometries at neighbourhood scale by exploring two different geometries: a uniform-height roughness and a non-uniform-height roughness, both with equal plan and frontal densities of λp = λf = 25%. In both configurations a sub-unit of the idealized urban surface was coated with a thin layer of naphthalene to represent area sources. The naphthalene sublimation method was used to measure directly the total area-averaged transport of scalars out of the complex geometries. At the same time, naphthalene vapour concentrations controlled by the turbulent fluxes were detected using a fast Flame Ionisation Detection (FID) technique. This paper describes the novel use of a naphthalene-coated surface as an area source in dispersion studies. Particular emphasis was also given to testing whether the concentration measurements were independent of Reynolds number. For low wind speeds, transfer from the naphthalene surface is determined by a combination of forced and natural convection. Compared with a propane point-source release, a 25% higher free-stream velocity was needed for the naphthalene area source to yield Reynolds-number-independent concentration fields. Ventilation transfer coefficients w_T/U derived from the naphthalene sublimation method showed that, whilst there was enhanced vertical momentum exchange due to obstacle height variability, advection was reduced and dispersion from the source area was not enhanced. Thus, the height variability of a canopy is an important parameter when generalising urban dispersion. Fine-resolution concentration measurements in the canopy showed the effect of height variability on dispersion at street scale. Rapid vertical transport in the wake of individual high-rise obstacles was found to generate elevated point-like sources. A Gaussian plume model was used to analyse differences in the downstream plumes.
Intensified lateral and vertical plume spread and plume dilution with height was found for the non-uniform height roughness.
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more of the believed aspects of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data
NASA Astrophysics Data System (ADS)
Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth
2012-03-01
The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, which is an order of magnitude more than EGRET found [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that makes a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal dependent: on the brightest parts of the image, such as the Galactic plane or the brightest sources, there are many photons per pixel, so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such a signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered as a realization of an inhomogeneous Poisson process.
This statistical noise makes source detection more difficult, so it is highly desirable to have an efficient denoising method for spherical Poisson data. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach includes preprocessing the count data by a variance-stabilizing transform (VST) such as the Anscombe [4] and the Fisz [10] transforms, applied respectively in the spatial [8] or in the wavelet domain [11]. The transform reforms the data so that the noise approximately becomes Gaussian with a constant variance. Standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale variance-stabilizing transform (MS-VST). It combines a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse and, as the noise is not sparse in any transform domain, it is easy to separate it from the signal. When the noise is Gaussian with known variance, it is easy to remove it by hard thresholding in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data.
Wavelets represent more efficiently regular structures and isotropic singularities, whereas ridgelets are designed to represent global lines in an image, and curvelets represent efficiently curvilinear contours. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme. In Ref
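The variance-stabilization idea behind the Anscombe transform mentioned above can be sketched directly: after the transform, Poisson counts behave approximately like Gaussian variables with unit variance, regardless of the underlying intensity. The intensity values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    """Anscombe variance-stabilizing transform: Poisson counts become
    approximately Gaussian with unit variance (accurate for mean counts
    above roughly 4)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# the stabilized standard deviation stays near 1 across a range of intensities
stds = {lam: anscombe(rng.poisson(lam, size=100_000)).std()
        for lam in (10.0, 30.0, 100.0)}
```

Once stabilized, the usual Gaussian wavelet-thresholding machinery applies; the low-count regime (mean counts of a few or less, as in the Fermi maps away from the Galactic plane) is exactly where this approximation breaks down and the MS-VST-style refinements are needed.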
NASA Astrophysics Data System (ADS)
Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.
2018-07-01
Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Borland, M.
Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher-harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.
NASA Astrophysics Data System (ADS)
Cascio, David M.
1988-05-01
States of nature or observed data are often stochastically modelled as Gaussian random variables. At times it is desirable to transmit this information from a source to a destination with minimal distortion. Complicating this objective is the possible presence of an adversary attempting to disrupt this communication. In this report, solutions are provided to a class of minimax and maximin decision problems, which involve the transmission of a Gaussian random variable over a communications channel corrupted by both additive Gaussian noise and probabilistic jamming noise. The jamming noise is termed probabilistic in the sense that with nonzero probability 1-P, the jamming noise is prevented from corrupting the channel. We shall seek to obtain optimal linear encoder-decoder policies which minimize given quadratic distortion measures.
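In the jammer-free baseline of this problem (jamming probability zero), the optimal linear policy is classical: scale the source to the channel power constraint and apply a linear MMSE decoder, giving distortion σ²_S·N/(P + N). The sketch below verifies that closed form by simulation; the power and noise values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

var_s, P, N = 1.0, 4.0, 1.0   # source variance, channel power, noise variance
n = 200_000

s = rng.normal(0.0, np.sqrt(var_s), n)
x = np.sqrt(P / var_s) * s                  # encoder: scale source to power P
y = x + rng.normal(0.0, np.sqrt(N), n)      # additive Gaussian channel noise
s_hat = (np.sqrt(P * var_s) / (P + N)) * y  # linear MMSE decoder E[sy]/E[y^2]

mse = np.mean((s - s_hat) ** 2)
theory = var_s * N / (P + N)                # = 0.2 for these numbers
```

The minimax analysis in the report extends this baseline: with probabilistic jamming, the encoder-decoder pair must hedge against the worst-case jammer strategy rather than simply matching this fixed-noise optimum.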
Relativistic corrections and non-Gaussianity in radio continuum surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maartens, Roy; Zhao, Gong-Bo; Bacon, David
Forthcoming radio continuum surveys will cover large volumes of the observable Universe and will reach to high redshifts, making them potentially powerful probes of dark energy, modified gravity and non-Gaussianity. We consider the continuum surveys with LOFAR, WSRT and ASKAP, and examples of continuum surveys with the SKA. We extend recent work on these surveys by including redshift-space distortions and lensing convergence in the radio source auto-correlation. In addition we compute the general relativistic (GR) corrections to the angular power spectrum. These GR corrections to the standard Newtonian analysis of the power spectrum become significant on scales near and beyond the Hubble scale at each redshift. We find that the GR corrections are at most percent-level in LOFAR, WODAN and EMU surveys, but they can produce O(10%) changes for high enough sensitivity SKA continuum surveys. The signal is however dominated by cosmic variance, and multiple-tracer techniques will be needed to overcome this problem. The GR corrections are suppressed in continuum surveys because of the integration over redshift; we expect that GR corrections will be enhanced for future SKA HI surveys in which the source redshifts will be known. We also provide predictions for the angular power spectra in the case where the primordial perturbations have local non-Gaussianity. We find that non-Gaussianity dominates over GR corrections, and rises above cosmic variance when f_NL ≳ 5 for SKA continuum surveys.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
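The low-count bias described here can be reproduced with a toy fit of a constant-brightness model to Poisson pixels. For a constant model, the minimizers have closed forms: data-weighted χ² (σ² = data) is minimized by the harmonic mean of the counts, while the Poisson maximum-likelihood (Cash-type) statistic is minimized by the plain mean. The mean count of 5 and the clipping of zero counts are illustrative choices, not IMFIT's internals.

```python
import numpy as np

rng = np.random.default_rng(3)

# low-count Poisson "image" pixels with true constant intensity 5
counts = rng.poisson(5.0, size=10_000).astype(float)
d = np.clip(counts, 1.0, None)   # avoid sigma = 0 in the chi^2 weights

# chi^2 with per-pixel "Gaussian" sigma^2 = d: minimizer is the harmonic mean,
# which is biased low because low-count pixels get inflated weight
m_chi2 = len(d) / np.sum(1.0 / d)

# Poisson maximum likelihood (Cash statistic 2*sum(m - d*ln m)):
# minimizer is the unbiased sample mean
m_poisson = counts.mean()
```

By the AM-HM inequality the χ² estimate always sits below the Poisson ML estimate, and at a true intensity of 5 counts the downward bias is large (roughly 20% or more), illustrating why a Poisson-based statistic matters in the low-count regime.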
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1989-01-01
The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas follows immediately from this general observation.
High-precision positioning system of four-quadrant detector based on the database query
NASA Astrophysics Data System (ADS)
Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang
2015-02-01
The fine-pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on an FPGA and a DSP is designed in this paper, which realizes AD sampling, the positioning algorithm, and control of the fast-steering mirror. From the working principle of the QD, we analyze the positioning error of the facular center calculated by the universal algorithm when the facular energy obeys a Gaussian distribution. A database is built by calculation and simulation with MATLAB software, in which the facular center calculated by the universal algorithm is matched to the facular center of the Gaussian beam, and the database is stored in two pieces of E2PROM serving as the external memory of the DSP. The facular center of the Gaussian beam is then queried from the database on the basis of the facular center calculated by the universal algorithm in the DSP. The experimental results show that the positioning accuracy of the high-precision positioning system is much better than the positioning accuracy of the universal algorithm alone.
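A one-dimensional sketch of the database-query correction: for a Gaussian spot, the normalized left/right quadrant difference of the "universal" algorithm responds as erf(x0/(σ√2)), which is nonlinear in the true spot position. A precomputed lookup table inverts this response. The spot sigma, grid range, and unit response scale are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.special import erf

SIGMA = 1.0  # Gaussian spot radius (arbitrary units; illustrative)

def quadrant_estimate(x0):
    """'Universal' QD algorithm for a Gaussian spot centred at (x0, 0):
    the normalized left/right signal difference equals erf(x0/(sigma*sqrt(2)))."""
    return erf(x0 / (SIGMA * np.sqrt(2.0)))

# build the correction database: universal-algorithm output -> true position
grid = np.linspace(-2.0, 2.0, 2001)
table = quadrant_estimate(grid)

def corrected_position(e):
    """Invert the nonlinear QD response by querying the lookup table."""
    return np.interp(e, table, grid)

x_true = 0.8
raw = quadrant_estimate(x_true)    # biased raw estimate (saturates via erf)
x_rec = corrected_position(raw)    # database-corrected position
```

The raw estimate underreads the off-center position because the erf response saturates; the table query recovers the true position to within the grid resolution, mirroring the role of the E2PROM database in the paper's DSP pipeline.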
Neutral Beam Injection System for the SHIP Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdrashitov, G.F.; Abdrashitov, A.G.; Anikeev, A.V.
2005-01-15
The injector ion source is based on an arc-discharge plasma box. The plasma emitter is produced by a 1 kA arc discharge in deuterium. A multipole magnetic field produced with permanent magnets at the periphery of the plasma box is used to increase its efficiency and improve the homogeneity of the plasma emitter. The ion beam is extracted by a four-electrode ion-optical system (IOS). The initial beam diameter is 200 mm. The grids of the IOS have a spherical curvature for geometrical focusing of the beam. The optimal IOS geometry and grid potentials were found by means of numerical simulation to provide precise beam formation. The measured angular divergence of the beam is 0.025 rad, which corresponds to a 4.7 cm Gaussian radius of the beam profile measured at the focal point.
The confining baryonic Y-strings on the lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakry, Ahmed S.; Chen, Xurong; Zhang, Peng-Ming
2016-01-22
In a string picture, the nucleon is conjectured as consisting of a Y-shaped gluonic string ended by constituent quarks. In this proceeding, we summarize our results on revealing the signature of the confining Y-bosonic string in the gluonic profile due to a system of three static quarks on the lattice at finite temperature. The analysis of the action density unveils a background of a filled-Δ distribution. However, we found that these Δ-shaped profiles are comprised of three Y-shaped Gaussian-like flux tubes. The length of the revealed Y-string-like distribution is greatest near the deconfinement point and approaches the geometrical minimum near the end of the QCD plateau. The action density width profile returns good fits to a baryonic string model for the junction fluctuations at large quark source separation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu
2016-05-07
Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H{sub 2} and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.
Perception of local three-dimensional shape.
Phillips, F; Todd, J T
1996-08-01
The authors present a series of 4 experiments designed to test the ability to perceive local shape information. Observers were presented with various smoothly varying 3-dimensional surfaces where they reported shape index and sign of Gaussian curvature at several probe locations. Results show that observers are poor at making judgments based on these local measures, especially when the region surrounding the local point is restricted or manipulated to make it noncoherent. Shape index judgments required at least 2 degrees of context surrounding the probe location, and performance on sign of Gaussian curvature judgments deteriorated as the contextual information was restricted as well.
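The two local measures observers were asked to judge can be written down directly from the principal curvatures (using one common sign convention, with κ1 ≥ κ2 assumed; a reference sketch, not part of the study itself):

```python
import math

def shape_index(k1, k2):
    # Koenderink-style shape index in [-1, 1], with k1 >= k2;
    # atan2 also handles the umbilic case k1 == k2 (|S| = 1)
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

def gaussian_curvature(k1, k2):
    return k1 * k2

print(shape_index(1.0, 0.0))          # 0.5: ridge-like (cylinder)
print(shape_index(1.0, -1.0))         # 0.0: symmetric saddle
print(gaussian_curvature(1.0, -1.0))  # -1.0: hyperbolic (negative-curvature) point
```

The sign of the Gaussian curvature κ1·κ2 distinguishes elliptic from hyperbolic surface points, which is exactly the judgment the experiments probed.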
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
Multiscale registration algorithm for alignment of meshes
NASA Astrophysics Data System (ADS)
Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.
2004-03-01
Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. This algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair a surrogate scan is created by the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points. Using the same rotation and translation values the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated. Otherwise the above process is continued with cubes of smaller and smaller sizes until the algorithm is terminated. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
Sensitivity in MALDI MS with small spot sizes
NASA Astrophysics Data System (ADS)
Yamchuk, Andriy
In MALDI, for laser fluences below the saturation point the ion yield per shot follows a cubic dependence on the irradiated area, leading to a conclusion that smaller spots produce overall less ions and therefore are less viable. However, Qiao et al. showed that by decreasing the laser spot size it is possible to raise the saturation point, and thus increase the ion yield per unit area, also known as sensitivity. Here we explore laser spots below 10 micrometer diameter to determine whether they offer any practical advantage. We show that sensitivity is greater for a flat-top 3--4 micrometer spot than for a 10 micrometer spot. The sensitivity is greater for a Gaussian-like 3--5 micrometer spot than for flat-top 5--25 micrometer spots. We also report for the first time sensitivity versus theoretical fluence profile for a Gaussian-like beam focu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shortis, M.; Johnston, G.
1997-11-01
In a previous paper, the results of photogrammetric measurements of a number of paraboloidal reflecting surfaces were presented. These results showed that photogrammetry can provide three-dimensional surface characterizations of such solar concentrators. The present paper describes the assessment of the quality of these surfaces as a derivation of the photogrammetrically produced surface coordinates. Statistical analysis of the z-coordinate distribution of errors indicates that these generally conform to a univariate Gaussian distribution, while the numerical assessment of the surface normal vectors on these surfaces indicates that the surface normal deviations appear to follow an approximately bivariate Gaussian distribution. Ray tracing of the measured surfaces to predict the expected flux distribution at the focal point of the 400 m{sup 2} dish shows a close correlation with the videographically measured flux distribution at the focal point of the dish.
Gaussian entanglement distribution with gigahertz bandwidth.
Ast, Stefan; Ast, Melanie; Mehmet, Moritz; Schnabel, Roman
2016-11-01
The distribution of entanglement with Gaussian statistics can be used to generate a mathematically proven secure key for quantum cryptography. The distributed secret key rate is limited by the entanglement strength, the entanglement bandwidth, and the bandwidth of the photoelectric detectors. The development of a source for strongly bipartite entangled light with high bandwidth promises an increased measurement speed and a linear boost in the secure data rate. Here, we present the experimental realization of a Gaussian entanglement source with a bandwidth of more than 1.25 GHz. The entanglement spectrum was measured with balanced homodyne detectors and was quantified via the inseparability criterion introduced by Duan and coworkers, with a critical value of 4 below which entanglement is certified. Our measurements yielded an inseparability value of about 1.8 at a frequency of 300 MHz to about 2.8 at 1.2 GHz, extending further to about 3.1 at 1.48 GHz. In the experiment we used two 2.6 mm long monolithic periodically poled potassium titanyl phosphate (KTP) resonators to generate two squeezed fields at the telecommunication wavelength of 1550 nm. Our result proves the possibility of generating and detecting strong continuous-variable entanglement with high speed.
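For an idealized pure two-mode squeezed vacuum (a textbook sketch; a real source like the one above has frequency-dependent squeezing and losses), the Duan combination takes a simple closed form, and a squeezing parameter r ≈ 0.4 already lands near the 1.8 value reported at 300 MHz:

```python
import math

def duan_inseparability(r):
    """Duan et al. combination Var(x1 - x2) + Var(p1 + p2) for a pure
    two-mode squeezed state with squeezing parameter r, in units where
    two vacuum modes give the critical value 4: 4 * exp(-2 r)."""
    return 4.0 * math.exp(-2.0 * r)

print(duan_inseparability(0.0))  # 4.0: vacuum, at the separability boundary
print(duan_inseparability(0.4))  # ~1.80: entanglement certified (< 4)
```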
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
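The doubly stochastic structure is easy to see in simulation (a minimal 1D sketch with assumed hyperparameters, not the paper's fitting problem): first draw a Gaussian-process realization for the log-intensity, then draw Poisson counts given that intensity.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D grid over [0, 1]
n = 200
x = np.linspace(0.0, 1.0, n)

# level 2: Gaussian process for the log-intensity (squared-exponential kernel)
ell, sig2, mean = 0.1, 0.5, np.log(50.0)
K = sig2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))   # jitter for numerical stability
log_lam = mean + L @ rng.standard_normal(n)

# level 1: Poisson counts per cell, intensity integrated over the cell width
counts = rng.poisson(np.exp(log_lam) / n)
print(counts.sum())  # total number of points in one realization
```

It is exactly this latent log_lam layer that Hamiltonian Monte Carlo, INLA, and variational Bayes must integrate over, which is what makes the comparison in the paper nontrivial.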
Large-scale 3D galaxy correlation function and non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele
We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects --- wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f{sub NL} is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
Gaussian impurity moving through a Bose-Einstein superfluid
NASA Astrophysics Data System (ADS)
Pinsker, Florian
2017-09-01
In this paper a finite Gaussian impurity moving through an equilibrium Bose-Einstein condensate at T = 0 is studied. The problem can be described by a Gross-Pitaevskii equation, which is solved perturbatively. The analysis is done for systems of 2 and 3 spatial dimensions. The Bogoliubov equation solutions for the condensate perturbed by a finite impurity are calculated in the co-moving frame. From these solutions the total energy of the perturbed system is determined as a function of the width and the amplitude of the moving Gaussian impurity and its velocity. In addition we derive the drag force the finite-sized impurity approximately experiences as it moves through the superfluid, which proves the existence of a superfluid phase for finite extensions of the impurities below the speed of sound. Finally we find that the force increases with velocity up to an inflection point, beyond which it decreases again, in both 2 and 3 dimensions.
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
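The homotopy strategy described above can be illustrated on a toy scalar root-finding problem (a sketch of the principle only, not the coupled matrix equations of the constrained LQG problem): deform an easy equation into the target one and track the solution by integrating a simple differential equation, with a Newton correction at each step.

```python
def homotopy_sqrt(a, steps=100):
    """Track the root of H(x, t) = x^2 - ((1 - t) + t * a) from the known
    solution x = 1 at t = 0 to the target x = sqrt(a) at t = 1."""
    x, dt = 1.0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        # predictor: dx/dt = -H_t / H_x = (a - 1) / (2 x)
        x += dt * (a - 1.0) / (2.0 * x)
        # corrector: one Newton step on H(x, t + dt) = 0
        target = (1.0 - (t + dt)) + (t + dt) * a
        x -= (x * x - target) / (2.0 * x)
    return x

print(homotopy_sqrt(4.0))  # ~2.0
```

The same predictor-corrector pattern carries over to the compensator problem, with the scalar ODE replaced by a matrix differential equation along the homotopy path.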
Self-Consistent Field Theory of Gaussian Ring Polymers
NASA Astrophysics Data System (ADS)
Kim, Jaeup; Yang, Yong-Biao; Lee, Won Bo
2012-02-01
Ring polymers, being free from chain ends, have fundamental importance in understanding the polymer statics and dynamics which are strongly influenced by the chain end effects. At a glance, their theoretical treatment may not seem particularly difficult, but the absence of chain ends and the topological constraints make the problem non-trivial, which results in limited success in the analytical or semi-analytical formulation of ring polymer theory. Here, I present a self-consistent field theory (SCFT) formalism of Gaussian (topologically unconstrained) ring polymers for the first time. The resulting static properties of homogeneous and inhomogeneous ring polymers are compared with the random phase approximation (RPA) results. The critical point for the ring homopolymer system is exactly the same as in the linear polymer case, χN = 2, since a critical point does not depend on the local structure of the polymers. The critical point for ring diblock copolymer melts is χN ≈ 17.795, approximately 1.7 times that of linear diblock copolymer melts, χN ≈ 10.495. The difference is due to the ring structure constraint.
NASA Astrophysics Data System (ADS)
Lu, Peng; Lin, Wenpeng; Niu, Zheng; Su, Yirong; Wu, Jinshui
2006-10-01
Nitrogen (N) is one of the main factors affecting environmental pollution. In recent years, non-point source pollution and water body eutrophication have become increasing concerns for both scientists and policy-makers. In order to assess the environmental hazard of soil total N pollution, a typical ecological unit was selected as the experimental site. This paper showed that Box-Cox transformation achieved normality in the data set, and dampened the effect of outliers. The best theoretical model of soil total N was a Gaussian model. Spatial variability of soil total N at NE60° and NE150° directions showed that it had a strip anisotropic structure. The ordinary kriging estimate of soil total N concentration was mapped. The spatial distribution pattern of soil total N in the direction of NE150° displayed a strip-shaped structure. Kriging standard deviations (KSD) provided valuable information that will increase the accuracy of total N mapping. The probability kriging method is useful to assess the hazard of N pollution by providing the conditional probability of N concentration exceeding the threshold value, where we found soil total N > 2.0 g/kg. The probability distribution of soil total N will be helpful to conduct hazard assessment, optimal fertilization, and develop management practices to control the non-point sources of N pollution.
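The two model ingredients mentioned above are compact formulas (a sketch with illustrative parameter values, not the paper's fitted ones): the Gaussian semivariogram, and the exceedance probability obtained from a kriging estimate and its standard deviation under a Gaussian error assumption.

```python
import math

def gaussian_variogram(h, nugget, sill, rng_a):
    """Gaussian semivariogram model: nugget + partial sill * (1 - exp(-(h/a)^2))."""
    return nugget + sill * (1.0 - math.exp(-((h / rng_a) ** 2)))

def exceedance_prob(z_hat, ksd, threshold):
    """P(Z > threshold), assuming the kriging error is Gaussian with mean
    z_hat and standard deviation ksd (the kriging standard deviation)."""
    z = (threshold - z_hat) / ksd
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# e.g. an estimated total N of 2.2 g/kg with KSD 0.3 against the 2.0 g/kg threshold
print(exceedance_prob(2.2, 0.3, 2.0))  # > 0.5: likely above the hazard threshold
```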
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin
2009-01-01
We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error, and emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, applying the Kalman filter algorithm before the Green's function inversion reduces the source inversion error standard deviation by 50% to 95%.
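The Green's function inversion step amounts to a linear least-squares problem once the sensitivity of each observation to each source has been tabulated (a schematic with made-up dimensions and a random sensitivity matrix, not the Fourier-Galerkin model):

```python
import numpy as np

rng = np.random.default_rng(2)

n_obs, n_src = 50, 4
G = rng.standard_normal((n_obs, n_src))    # Green's functions: source -> observation
s_true = np.array([1.0, -2.0, 0.5, 3.0])   # true source amplitudes
d = G @ s_true + 0.01 * rng.standard_normal(n_obs)  # observations with small noise

# Green's function inversion: least-squares estimate of the sources
s_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(s_est)  # close to s_true
```

In the paper's setup, d would be either the raw synthetic observations or values extracted from the Kalman-filter analyses; the comparison between the two choices is the point of the study.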
NASA Astrophysics Data System (ADS)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler
2017-06-01
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
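The baseline cross-correlation approach the paper improves upon can be sketched in one dimension (a noiseless toy illustration with a Gaussian template, not the forward-model KLIP pipeline): slide a normalized template across the data and take the correlation peak as the detected position.

```python
import numpy as np

# toy 1D "image" with a faint Gaussian point source injected at a known position
n, sigma = 256, 3.0
x = np.arange(n)
template = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
template /= np.linalg.norm(template)           # unit-norm matched-filter template

true_pos = 100
image = 0.2 * np.exp(-0.5 * ((x - true_pos) / sigma) ** 2)

# matched filter: correlate the image with the shifted template
scores = np.correlate(image, template, mode="same")
print(int(scores.argmax()))  # recovers the injected position, 100
```

The paper's key refinement is that after KLIP subtraction the true signal shape is no longer this clean Gaussian, so the template must be replaced by a forward model of the distorted PSF at each position.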
Detecting compartmental non-Gaussian diffusion with symmetrized double-PFG MRI.
Paulsen, Jeffrey L; Özarslan, Evren; Komlosh, Michal E; Basser, Peter J; Song, Yi-Qiao
2015-11-01
Diffusion in tissue and porous media is known to be non-Gaussian and has been used for clinical indications of stroke and other tissue pathologies. However, when conventional NMR techniques are applied to biological tissues and other heterogeneous materials, the presence of multiple compartments (pores) with different Gaussian diffusivities will also contribute to the measurement of non-Gaussian behavior. Here we present symmetrized double PFG (sd-PFG), which can separate these two contributions to non-Gaussian signal decay as having distinct angular modulation frequencies. In contrast to prior angular d-PFG methods, sd-PFG can unambiguously extract kurtosis as an oscillation from samples with isotropic or uniformly oriented anisotropic pores, and can generally extract a combination of compartmental anisotropy and kurtosis. The method further fixes its sensitivity with respect to the time dependence of the apparent diffusion coefficient. We experimentally demonstrate the measurement of the fourth cumulant (kurtosis) of diffusion and find it consistent with theoretical predictions. By enabling the unambiguous identification of contributions of compartmental kurtosis to the signal, sd-PFG has the potential to help identify the underlying micro-structural changes corresponding to current kurtosis based diagnostics, and act as a novel source of contrast to better resolve tissue micro-structure. Copyright © 2015 John Wiley & Sons, Ltd.
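The confound the abstract highlights, that multiple Gaussian compartments alone already mimic non-Gaussian diffusion, can be checked directly: a scale mixture of zero-mean Gaussians has positive excess kurtosis whenever the compartment variances differ (an illustrative calculation, not the sd-PFG experiment).

```python
def mixture_excess_kurtosis(weights, sigmas):
    """Excess kurtosis of a mixture of zero-mean Gaussians with the given
    weights and standard deviations: 3 * E[sigma^4] / E[sigma^2]^2 - 3."""
    m2 = sum(w * s**2 for w, s in zip(weights, sigmas))
    m4 = sum(3.0 * w * s**4 for w, s in zip(weights, sigmas))
    return m4 / m2**2 - 3.0

print(mixture_excess_kurtosis([0.5, 0.5], [1.0, 3.0]))  # 1.92: looks non-Gaussian
print(mixture_excess_kurtosis([1.0], [2.0]))            # 0.0: single Gaussian compartment
```

Separating this compartmental contribution from genuine within-pore non-Gaussianity is precisely what the angular modulation of sd-PFG is designed to do.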
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
NASA Astrophysics Data System (ADS)
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared, in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
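Of the evaluation metrics listed, the Hausdorff distance is the simplest to state for finite point sets (a small sketch, not the study's MATLAB implementation): the larger of the two directed worst-case nearest-neighbor distances.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x d arrays)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = A + np.array([0.0, 2.0])   # B is A shifted by 2 along y
print(hausdorff(A, A))  # 0.0
print(hausdorff(A, B))  # 2.0
```

Unlike the mean-based RMS, this metric is dominated by the single worst-aligned point, which is why the two can rank registration methods differently.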
Future constraints on angle-dependent non-Gaussianity from large radio surveys
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Shiraishi, Maresuke; Bartolo, Nicola; Bertacca, Daniele; Liguori, Michele; Matarrese, Sabino; Norris, Ray P.; Parkinson, David
2017-03-01
We investigate how well future large-scale radio surveys could measure different shapes of primordial non-Gaussianity; in particular we focus on angle-dependent non-Gaussianity arising from primordial anisotropic sources, whose bispectrum has an angle dependence between the three wavevectors that is characterized by Legendre polynomials PL and expansion coefficients cL. We provide forecasts for measurements of galaxy power spectrum, finding that Large-Scale Structure (LSS) data could allow measurements of primordial non-Gaussianity that would be competitive with, or improve upon, current constraints set by CMB experiments, for all the shapes considered. We argue that the best constraints will come from the possibility to assign redshift information to radio galaxy surveys, and investigate a few possible scenarios for the EMU and SKA surveys. A realistic (futuristic) modeling could provide constraints of fNLloc ≈ 1 (0.5) for the local shape, fNL of O(10) (O(1)) for the orthogonal, equilateral and folded shapes, and cL=1 ≈ 80 (2), cL=2 ≈ 400 (10) for angle-dependent non-Gaussianity, showing that only futuristic galaxy surveys will be able to set strong constraints on these models. Nevertheless, the more futuristic forecasts show the potential of LSS analyses to considerably improve current constraints on non-Gaussianity, and so on models of the primordial Universe. Finally, we find the minimum requirements that would be needed to reach σ(cL=1) = 10, which can be considered as a typical (lower) value predicted by some (inflationary) models.
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
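The claim that quadratic coupling introduces non-Gaussianity only at third order can be made concrete in the static limit (a worked check under the assumption ω = a·x + b·x² with x a standard Gaussian coordinate, not the full Ornstein-Uhlenbeck dynamics): the third central moment has the closed form 6a²b + 8b³.

```python
import numpy as np

def third_central_moment(a, b):
    # omega = a*x + b*x^2, x ~ N(0,1); centered: a*x + b*(x^2 - 1)
    # E[(a x + b (x^2 - 1))^3] = 6 a^2 b + 8 b^3  (odd Gaussian moments vanish)
    return 6.0 * a**2 * b + 8.0 * b**3

# Monte Carlo check of the closed form
rng = np.random.default_rng(3)
x = rng.standard_normal(2_000_000)
a, b = 1.0, 0.2
omega = a * x + b * x**2
mc = np.mean((omega - omega.mean()) ** 3)
print(third_central_moment(a, b), mc)  # both ~1.264
```

A nonzero b thus produces skewness (nonzero third cumulant) even for a perfectly Gaussian bath coordinate, consistent with the paper's finding that the non-Gaussian signatures are confined to third order.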
Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R
2017-05-01
To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit using χ{sub red}{sup 2} and the precision of the fitted relaxation time by means of its confidence interval were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation with lower χ{sub red}{sup 2} by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with the precision of reversible transverse relaxation time measurements increased, also by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
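The model comparison can be illustrated on synthetic data (a sketch with assumed values, not the volunteer measurements): a signal that truly decays as exp(-(t/T2')²) is fitted by both models on the log-signal via linear least squares, and the Gaussian model recovers T2' with essentially zero residual while the linear exponential model leaves systematic structure.

```python
import numpy as np

# synthetic Gaussian decay around a spin echo (arbitrary units)
T2p = 2.0                      # assumed "true" reversible relaxation time, ms
t = np.linspace(0.1, 4.0, 40)  # ms
signal = np.exp(-((t / T2p) ** 2))

# fit both models on the log-signal by least squares through the origin
ln_s = np.log(signal)
slope_gauss = np.sum(ln_s * t**2) / np.sum(t**4)  # ln S = -(t/T)^2: linear in t^2
slope_exp = np.sum(ln_s * t) / np.sum(t**2)       # ln S = -t/T: linear in t

res_gauss = np.sum((ln_s - slope_gauss * t**2) ** 2)
res_exp = np.sum((ln_s - slope_exp * t) ** 2)

T_fit = 1.0 / np.sqrt(-slope_gauss)
print(T_fit, res_gauss < res_exp)  # recovers T2' = 2.0; Gaussian model fits better
```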
A simple method for astigmatic compensation of folded resonator without Brewster window.
Qiao, Wen; Xiaojun, Zhang; Yonggang, Wang; Liqun, Sun; Hanben, Niu
2014-02-10
A folded resonator requires an oblique angle of incidence on the folded curved mirror, which introduces astigmatic distortions that limit the performance of the laser. We present a simple method to compensate the astigmatism of a folded resonator without Brewster windows, for the first time to the best of our knowledge. Based on the theory of the propagation and transformation of Gaussian beams, the method is both effective and reliable. Theoretical results show that the astigmatism of the folded resonator can be compensated completely when the following two conditions are fulfilled. First, when a Gaussian beam with a beam waist of determined size is obliquely incident on an off-axis concave mirror, two new Gaussian beams are formed, one in the tangential plane and one in the sagittal plane. Another off-axis concave mirror is located at the other intersection point of these two new Gaussian beams. Second, adjusting the incident angle of the second concave mirror or its focal length can make the two Gaussian beams coincide in the image plane of the second concave mirror, which compensates the astigmatic aberration completely. A side-pumped continuous-wave (CW) passively mode-locked Nd:YAG laser was taken as an example of an astigmatically compensated folded resonator. The experimental results show good agreement with the theoretical predictions. This method can be used effectively to design astigmatically compensated resonator cavities for high-performance lasers.
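The origin of the astigmatism is the standard split of a mirror's focal length at oblique incidence (the textbook relations, not the paper's specific cavity): the tangential plane sees f·cos θ and the sagittal plane f/cos θ, so the two planes focus at different distances unless θ = 0 or a second element undoes the split.

```python
import math

def astigmatic_focal_lengths(f, theta):
    """Effective focal lengths of a concave mirror of focal length f used at
    angle of incidence theta: tangential f*cos(theta), sagittal f/cos(theta)."""
    return f * math.cos(theta), f / math.cos(theta)

ft, fs = astigmatic_focal_lengths(50.0, math.radians(15.0))
print(ft, fs)  # tangential focus shorter, sagittal focus longer than 50.0
```

The compensation condition in the paper amounts to choosing the second mirror's angle or focal length so that the accumulated tangential and sagittal path differences cancel.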
NASA Astrophysics Data System (ADS)
Olsen, M. K.
2017-02-01
We propose and analyze a pumped and damped Bose-Hubbard dimer as a source of continuous-variable Einstein-Podolsky-Rosen (EPR) steering with non-Gaussian statistics. We use the approximate truncated Wigner and the exact positive-P representations to calculate and compare the predictions for intensities, second-order quantum correlations, and third- and fourth-order cumulants. We find agreement for intensities and the products of inferred quadrature variances, which indicate that states demonstrating the EPR paradox are present. We find clear signals of non-Gaussianity in the quantum states of the modes from both the approximate and exact techniques, with quantitative differences in their predictions. Our proposed experimental configuration is extrapolated from current experimental techniques and adds another apparatus to the current toolbox of quantum atom optics.
Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting
Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan
2013-01-01
In the present paper, we investigate the blast-induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test, in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. The longitudinal, transverse, and vertical ground motion components are analyzed at three monitoring points located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140
NASA Astrophysics Data System (ADS)
Snare, Dustin A.
Recent increases in oil and gas production from unconventional reservoirs have brought with them an increase in methane emissions. Estimating methane emissions from oil and gas production is complex due to differences in equipment designs, maintenance, and variable product composition. Site access to oil and gas production equipment can be difficult and time consuming, making remote assessment of emissions vital to understanding local point source emissions. This work presents measurements of methane leakage made from a new ground-based mobile laboratory and a research aircraft around oil and gas fields in the Upper Green River Basin (UGRB) of Wyoming in 2014. It was recently shown that the application of the Point Source Gaussian (PSG) method, utilizing the atmospheric dispersion tables developed by the US EPA (Appendix B), is an effective way to accurately measure methane flux from a ground-based location downwind of a source without the use of a tracer (Brantley et al., 2014). Aircraft measurements of methane enhancement regions downwind of oil and natural gas production, together with Planetary Boundary Layer observations, are utilized to obtain a flux for the entire UGRB. Methane emissions are compared to volumes of natural gas produced to derive a leakage rate from production operations for individual production sites and for basin-wide production. Ground-based flux estimates yield a leakage rate of 0.14 - 0.78 % (95 % confidence interval) per site with a mass-weighted average (MWA) of 0.20 % for all sites. Aircraft-based flux estimates yield a MWA leakage rate of 0.54 - 0.91 % for the UGRB.
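The Point Source Gaussian back-calculation referred to above inverts the standard ground-level, centerline Gaussian plume relation C = Q / (π u σy σz) for a ground source with full ground reflection. A hedged sketch (in practice σy and σz come from the EPA stability-class dispersion tables; the numbers below are purely illustrative):

```python
import math

def psg_emission_rate(c_excess, u, sigma_y, sigma_z):
    """Invert the ground-level, centerline Gaussian plume relation
    C = Q / (pi * u * sigma_y * sigma_z) to recover the emission
    rate Q from a measured downwind excess concentration.
    Assumes a ground-level source with full ground reflection."""
    return c_excess * math.pi * u * sigma_y * sigma_z

# Illustrative round trip: a 2.0 g/s source, 5 m/s wind,
# sigma_y = 10 m, sigma_z = 6 m at the measurement distance
c = 2.0 / (math.pi * 5.0 * 10.0 * 6.0)     # forward plume concentration
q = psg_emission_rate(c, 5.0, 10.0, 6.0)   # recovers ~2.0 g/s
```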
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the greatest sources of uncertainty. An uncertain tsunami source model risks underestimating the tsunami height, the extent of the inundation zone, and the damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimating the tsunami, but acquiring the observed data takes time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting quickly enough. Rather than waiting for precise tsunami observations, we aim, from a disaster management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which can be obtained immediately after the event is triggered. After an earthquake occurs, JMA's EEW estimates the magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and scaling law, we determine multiple possible tsunami source scenarios and search for the worst one by the superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis of our method consists of the following 2 steps. (1) Searching the worst scenario range by calculating 90 scenarios with various strike and fault-position values. From the maximum tsunami heights of the 90 scenarios, we determine a narrower strike range which causes high tsunami heights in the area of concern.
(2) Calculating 900 scenarios that have different strike, dip, length, width, depth and fault-position values. Note that the strike is limited to the range obtained from the 90-scenario calculation. From the 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and verified to confirm that it can effectively search for the worst tsunami source scenario in real time, to be used as an input to real-time tsunami inundation forecasting.
NASA Astrophysics Data System (ADS)
Bernstein, Leslie R.; Trahiotis, Constantine
2003-06-01
An acoustic pointing task was used to determine whether interaural temporal disparities (ITDs) conveyed by high-frequency “transposed” stimuli would produce larger extents of laterality than ITDs conveyed by bands of high-frequency Gaussian noise. The envelopes of transposed stimuli are designed to provide high-frequency channels with information similar to that conveyed by the waveforms of low-frequency stimuli. Lateralization was measured for low-frequency Gaussian noises, the same noises transposed to 4 kHz, and high-frequency Gaussian bands of noise centered at 4 kHz. Extents of laterality obtained with the transposed stimuli were greater than those obtained with bands of Gaussian noise centered at 4 kHz and, in some cases, were equivalent to those obtained with low-frequency stimuli. In a second experiment, the general effects on lateral position produced by imposed combinations of bandwidth, ITD, and interaural phase disparities (IPDs) on low-frequency stimuli remained when those stimuli were transposed to 4 kHz. Overall, the data were fairly well accounted for by a model that computes the cross-correlation subsequent to known stages of peripheral auditory processing augmented by low-pass filtering of the envelopes within the high-frequency channels of each ear.
Consistency relation and non-Gaussianity in a Galileon inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asadi, Kosar; Nozari, Kourosh, E-mail: k.asadi@stu.umz.ac.ir, E-mail: knozari@umz.ac.ir
2016-12-01
We study a particular Galileon inflation in the light of Planck2015 observational data in order to constrain the model parameter space. We study the spectrum of the primordial modes of the density perturbations by expanding the action up to the second order in perturbations. Then we expand the action up to the third order and find the three-point correlation functions to obtain the amplitude of the non-Gaussianity of the primordial perturbations in this setup. We study the amplitude of the non-Gaussianity in both the equilateral and orthogonal configurations and test the model with recent observational data. Our analysis shows that for some ranges of the non-minimal coupling parameter, the model is consistent with observation, and it is also possible to have large non-Gaussianity which would be observable with future improvements in experiments. Moreover, we obtain the tilt of the tensor power spectrum and test the standard inflationary consistency relation (r = −8n_T) against the latest bounds from the Planck2015 dataset. We find a slight deviation from the standard consistency relation in this setup. Nevertheless, such a deviation seems not to be sufficiently remarkable to be detected confidently.
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method is developed for researching the light distribution properties on the output end of a hollow n-sided polygonal light pipe illuminated by a light source with a Gaussian distribution. Mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is demonstrated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the light source with a Gaussian distribution. The analytical approach will be useful for designing and choosing hollow n-sided polygonal light pipes, especially for high-power laser beam homogenization techniques.
Study of the intensity noise and intensity modulation in a hybrid soliton pulsed source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dogru, Nuran; Oziazisi, M Sadetin
2005-10-31
The relative intensity noise (RIN) and small-signal intensity modulation (IM) of a hybrid soliton pulsed source (HSPS) with a linearly chirped Gaussian apodised fibre Bragg grating (FBG) are considered in the electric-field approximation. The HSPS is described by solving the dynamic coupled-mode equations. It is shown that consideration of the carrier density noise in the HSPS, in addition to the spontaneous noise, is necessary to accurately analyse noise in the mode-locked HSPS. It is also shown that the resonance peak spectral splitting (RPSS) of the IM near the frequency inverse to the round-trip time of light in the external cavity can be eliminated by selecting an appropriate linear chirp rate in the Gaussian apodised FBG. (laser applications and other topics in quantum electronics)
Control of atomic transition rates via laser-light shaping
NASA Astrophysics Data System (ADS)
Jáuregui, R.
2015-04-01
A modular systematic analysis of the feasibility of modifying atomic transition rates by tailoring the electromagnetic field of an external coherent light source is presented. The formalism considers both the center of mass and internal degrees of freedom of the atom, and all properties of the field: frequency, angular spectrum, and polarization. General features of recoil effects for internal forbidden transitions are discussed. A comparative analysis of different structured light sources is explicitly worked out. It includes spherical waves, Gaussian beams, Laguerre-Gaussian beams, and propagation invariant beams with closed analytical expressions. It is shown that increments in the order of magnitude of the transition rates for Gaussian and Laguerre-Gaussian beams, with respect to those obtained in the paraxial limit, require waists of the order of the wavelength, while propagation invariant modes may considerably enhance transition rates under more favorable conditions. For transitions that can be naturally described as modifications of the atomic angular momentum, this enhancement is maximal (within propagation invariant beams) for Bessel modes, Mathieu modes can be used to entangle the internal and center-of-mass involved states, and Weber beams suppress this kind of transition unless they have a significant component of odd modes. However, if a recoil effect of the transition with an adequate symmetry is allowed, the global transition rate (center of mass and internal motion) can also be enhanced using Weber modes. The global analysis presented reinforces the idea that a better control of the transitions between internal atomic states requires both a proper control of the available states of the atomic center of mass, and shaping of the background electromagnetic field.
Indications for a critical point in the phase diagram for hot and dense nuclear matter
NASA Astrophysics Data System (ADS)
Lacey, Roy A.
2016-12-01
Two-pion interferometry measurements are studied for a broad range of collision centralities in Au+Au (√{sNN} = 7.7- 200 GeV) and Pb+Pb (√{sNN} = 2.76 TeV) collisions. They indicate non-monotonic excitation functions for the Gaussian emission source radii difference (Rout -Rside), suggestive of reaction trajectories which spend a fair amount of time near a soft point in the equation of state (EOS) that coincides with the critical end point (CEP). A Finite-Size Scaling (FSS) analysis of these excitation functions, provides further validation tests for the CEP. It also indicates a second order phase transition at the CEP, and the values Tcep ∼ 165 MeV and μBcep ∼ 95 MeV for its location in the (T ,μB)-plane of the phase diagram. The static critical exponents (ν ≈ 0.66 and γ ≈ 1.2) extracted via the same FSS analysis, place this CEP in the 3D Ising model (static) universality class. A Dynamic Finite-Size Scaling analysis of the excitation functions, gives the estimate z ∼ 0.87 for the dynamic critical exponent, suggesting that the associated critical expansion dynamics is dominated by the hydrodynamic sound mode.
Numerical modeling on carbon fiber composite material in Gaussian beam laser based on ANSYS
NASA Astrophysics Data System (ADS)
Luo, Ji-jun; Hou, Su-xia; Xu, Jun; Yang, Wei-jun; Zhao, Yun-fang
2014-02-01
Based on heat transfer theory and the finite element method, a macroscopic ablation model of a surface irradiated by a Gaussian laser beam is built, and the temperature field and the development of thermal ablation are calculated and analyzed using the finite element software ANSYS. The calculation results show that the ablation behavior of the material varies with the irradiation conditions. The laser-irradiated surface is a curved surface rather than a flat one; its lowest point receives the highest power density. The research shows that the higher the laser power density absorbed by the material surface, the faster the irradiated surface regresses.
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
Dynamics and Control of Tethered Antennas/Reflectors in Orbit
1992-02-01
reflector system. The optimal linear quadratic Gaussian (LQG) digital control of the orbiting tethered antenna/reflector system is analyzed. The...flexibility of both the antenna and the tether are included in this high order system model. With eight point actuators optimally positioned together with...able to maintain satisfactory pointing accuracy for low and moderate altitude orbits under the influence of solar pressure. For the higher altitudes a
Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control
Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.
1997-01-01
One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behbahani, Siavosh R.; /SLAC /Stanford U., Phys. Dept. /Boston U.; Dymarsky, Anatoly
2012-06-06
We apply the Effective Field Theory of Inflation to study the case where the continuous shift symmetry of the Goldstone boson {pi} is softly broken to a discrete subgroup. This case includes and generalizes recently proposed String Theory inspired models of Inflation based on Axion Monodromy. The models we study have the property that the 2-point function oscillates as a function of the wavenumber, leading to oscillations in the CMB power spectrum. The non-linear realization of time diffeomorphisms induces some self-interactions for the Goldstone boson that lead to a peculiar non-Gaussianity whose shape oscillates as a function of the wavenumber. We find that in the regime of validity of the effective theory, the oscillatory signal contained in the n-point correlation functions, with n > 2, is smaller than the one contained in the 2-point function, implying that the signature of oscillations, if ever detected, will be easier to find first in the 2-point function, and only then in the higher order correlation functions. Still the signal contained in higher-order correlation functions, that we study here in generality, could be detected at a subleading level, providing a very compelling consistency check for an approximate discrete shift symmetry being realized during inflation.
A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.
Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco
2018-01-01
Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD.
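A complex source area in a model of this kind is typically discretized into elementary ground-level point sources whose Gaussian plume contributions are summed at each receptor. A simplified sketch of that summation (fixed σy and σz for brevity; a real model such as AERMOD grows them with downwind distance and stability class):

```python
import math

def plume_conc(q, x, y, u, sy, sz):
    """Ground-level concentration from one ground-level point source of
    strength q at downwind distance x and crosswind offset y, with
    wind speed u and dispersion parameters sy, sz."""
    if x <= 0.0:
        return 0.0  # receptor is upwind of this sub-source
    return q / (math.pi * u * sy * sz) * math.exp(-0.5 * (y / sy) ** 2)

def area_conc(sub_sources, receptor, u, sy, sz):
    """Sum the contributions of the sub-sources discretizing an area.
    sub_sources: iterable of (x, y, q) tuples; wind blows along +x."""
    xr, yr = receptor
    return sum(plume_conc(q, xr - xs, yr - ys, u, sy, sz)
               for xs, ys, q in sub_sources)
```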
Laser beam shaping for biomedical microscopy techniques
NASA Astrophysics Data System (ADS)
Laskin, Alexander; Kaiser, Peter; Laskin, Vadim; Ostrun, Aleksei
2016-04-01
Uniform illumination of the working field is very important in optical systems for confocal microscopy and various implementations of fluorescence microscopy such as TIR, SSIM, STORM, and PALM, to enhance the performance of these laser-based research techniques. Widely used TEM00 laser sources are characterized by an essentially non-uniform Gaussian intensity profile, which usually leads to a non-uniform intensity distribution in the microscope working field or in the field of the microlens array of a confocal microscope optical system; this non-uniform illumination results in instability of the measuring procedure and reduced precision of quantitative measurements. Transformation of the typical Gaussian distribution of a TEM00 laser to a flat-top (top-hat) profile is therefore a relevant technical task, solved by applying beam-shaping optics. Because of the high demands on optical image quality, the mentioned techniques impose specific requirements on a uniform laser beam: flatness of the phase front and extended depth of field; from this point of view, the microscopy techniques are similar to holography and interferometry. There are various refractive and diffractive beam-shaping approaches used in industrial and scientific laser applications, but only a few of them are capable of fulfilling the optimum beam-quality conditions required in the discussed microscopy techniques. We suggest applying refractive field-mapping beam shapers (πShaper), whose operating principle presumes almost lossless transformation of a Gaussian to a flat-top beam with a flat output wavefront, conservation of beam consistency, a collimated low-divergence output beam, high transmittance, extended depth of field, and negligible wave aberration; an achromatic design provides the capability to work with several lasers of different wavelengths simultaneously.
The main function of a beam shaper is transformation of the laser intensity profile; further transformation to provide the spot size and shape optimal for a particular technique has to be realized by an imaging optical system, which can include microscope objectives and tube lenses. This paper describes the design basics of refractive beam shapers and optical layouts for their application in microscopy systems. Examples of real implementations and experimental results are presented as well.
Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P
2016-03-01
Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model.
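One of the interpolation measures described above, a distance-weighted average of nearest-neighbour measurements, and the held-out mean square error used to compare candidate models, can be sketched as follows (hypothetical helper names, not the study's code):

```python
def idw_predict(train, x, y, k=5, power=2.0):
    """Inverse-distance-weighted average of the k nearest training
    measurements; train is a list of (x, y, dose_rate) tuples."""
    nearest = sorted((((x - tx) ** 2 + (y - ty) ** 2) ** 0.5, v)
                     for tx, ty, v in train)[:k]
    num = den = 0.0
    for d, v in nearest:
        w = 1.0 / (d ** power + 1e-9)  # guard against zero distance
        num += w * v
        den += w
    return num / den

def mse(pred_obs_pairs):
    """Mean square error over (prediction, observation) pairs, the
    criterion used to rank candidate models on a held-out subset."""
    return sum((p - o) ** 2 for p, o in pred_obs_pairs) / len(pred_obs_pairs)
```

In the study's scheme, such interpolation measures become regressors in a linear model fitted on a random 70 % of the data and scored by MSE on the remaining 30 %.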
Nonparametric triple collocation
USDA-ARS?s Scientific Manuscript database
Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...
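The covariance-notation form of the classical linear-Gaussian triple collocation estimator, which this work generalizes, can be sketched as follows (assuming linearly calibrated measurements with mutually uncorrelated errors):

```python
def triple_collocation_error_variances(x, y, z):
    """Classical covariance-notation triple collocation: estimate the
    error variance of each of three independent measurement series of
    the same truth, assuming uncorrelated, additive errors."""
    n = float(len(x))

    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

    q12, q13, q23 = cov(x, y), cov(x, z), cov(y, z)
    return (cov(x, x) - q12 * q13 / q23,
            cov(y, y) - q12 * q23 / q13,
            cov(z, z) - q13 * q23 / q12)
```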
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
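The objective PE minimizes, a sum of KL divergences between the given class posteriors and the posteriors induced by an equal-covariance gaussian mixture in the embedding space, can be sketched as follows (unit variances and equal class priors assumed for brevity):

```python
import math

def induced_posteriors(point, centers):
    """Class posteriors implied by an embedded point under a
    unit-variance gaussian mixture with equal class priors."""
    logits = [-0.5 * sum((p - c) ** 2 for p, c in zip(point, center))
              for center in centers]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pe_objective(points, posteriors, centers):
    """Sum over objects of KL(given posterior || induced posterior),
    the quantity parametric embedding minimizes."""
    total = 0.0
    for point, p in zip(points, posteriors):
        q = induced_posteriors(point, centers)
        total += sum(pi * math.log(pi / qi)
                     for pi, qi in zip(p, q) if pi > 0.0)
    return total
```

An embedding that places an object near the center of its dominant class drives its KL term toward zero.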
Research on modified the estimates of NOx emissions combined the OMI and ground-based DOAS technique
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Li*, Ang; Xie, Pinhua; Hu, Zhaokun; Wu, Fengcheng; Xu, Jin
2017-04-01
A new method to calibrate nitrogen dioxide (NO2) lifetimes and emissions from point sources using satellite measurements, based on mobile passive differential optical absorption spectroscopy (DOAS) and multi-axis differential optical absorption spectroscopy (MAX-DOAS), is described. It uses the Exponentially-Modified Gaussian (EMG) fitting method to correct the line densities along the wind direction by fitting the mobile passive DOAS NO2 vertical column density (VCD). An effective lifetime and emission rate are then determined from the parameters of the fit. The obtained results were compared with results acquired by fitting OMI (Ozone Monitoring Instrument) NO2 with the same fitting method; the NOx emission rates were about 195.8 mol/s and 160.6 mol/s, respectively. The latter is smaller than the former, likely because of the low spatial resolution of the satellite.
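The EMG approach fits the along-wind NO2 line density with a gaussian convolved with a one-sided exponential decay; the fitted e-folding distance x0 and scale a then give the effective lifetime and emission rate. A sketch of the commonly used model form (parameter names are illustrative, not the paper's notation):

```python
import math

def emg(x, a, x0, mu, sigma, b):
    """Exponentially modified gaussian line-density model: a gaussian of
    width sigma centered at mu, convolved with a one-sided exponential
    decay of e-folding distance x0, scaled by a, on a background b."""
    exponent = sigma ** 2 / (2.0 * x0 ** 2) - (x - mu) / x0
    erfc_arg = (sigma / x0 - (x - mu) / sigma) / math.sqrt(2.0)
    return a / (2.0 * x0) * math.exp(exponent) * math.erfc(erfc_arg) + b

def lifetime_and_emission(a, x0, wind_speed):
    """Effective lifetime tau = x0 / w and emission rate E = a / tau
    derived from the fitted EMG parameters and the mean wind speed."""
    tau = x0 / wind_speed
    return tau, a / tau
```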
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue (CSF, white matter, and gray matter) using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points between the materials can be found, and thus the materials can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented, assuming a single Gaussian noise source and a uniform distribution of partial-volume pixels, for both simulated and actual data. Thus far the results have been mixed, with no clear advantage being shown in taking partial volume effects into account. Because the fitting problem is ill-conditioned, it is not yet clear whether these results are due to problems with the model or with the method of solution.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Analysis of optical scheme for medium-range directed energy laser weapon system
NASA Astrophysics Data System (ADS)
Jabczyński, Jan K.; Kaśków, Mateusz; Gorajek, Łukasz; Kopczyński, Krzysztof
2017-10-01
The relations between the range of operation and the aperture of a laser weapon system were investigated, taking into account diffraction and technical limitations such as beam quality, accuracy of point tracking, and technical quality of the optical train. As a result, for medium ranges of 1-2 km, we restricted the analysis to apertures not wider than 150 mm and to an optical system without adaptive optics. To choose the best laser beam shape, the minimization of aperture losses and thermo-optical effects inside the optics, as well as the effective width of the laser beam in the far field, should be taken into account. We have analyzed this problem theoretically for a group of the most interesting profiles, including for reference the two limiting cases of a Gaussian beam and a 'top hat' profile. We have found that the most promising is the SuperGaussian profile of index p = 2, for which the surfaces of the beam shaper elements can be manufactured in an acceptably cost-effective way and the beam quality does not decrease noticeably. Further, we have investigated the thermo-optic effects on the far-field parameters of Gaussian and 'top hat' beams to determine the influence of absorption in optical elements on beam quality degradation. Simplified formulae were derived for beam quality measures (the M2 parameter and the Strehl ratio), which enable estimation of the influence of absorption losses on the degradation of beam quality.
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risk in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the “noise kernel”, an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk; analogously, the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with the MSCI North America, Europe and Pacific total return stock indices.
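A minimal sketch of the noise-kernel split, assuming the ellipsoid is a Mahalanobis-distance ball under the sample covariance. The function names and this particular kernel definition are our illustration, not the paper's exact construction:

```python
import numpy as np

def split_by_noise_kernel(returns, threshold=2.0):
    """Split return vectors into those inside and outside an ellipsoidal
    'noise kernel', defined here by squared Mahalanobis distance under
    the sample covariance; 'threshold' controls the ellipsoid's size."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    dev = returns - mu
    d2 = np.einsum('ij,jk,ik->i', dev, np.linalg.inv(cov), dev)
    inside = d2 <= threshold**2
    return returns[inside], returns[~inside]

def fluctuation_and_drawdown_risk(returns, threshold=2.0):
    """Fluctuation risk from points inside the kernel, drawdown risk from
    points outside it, both measured here as total variance."""
    inside, outside = split_by_noise_kernel(returns, threshold)
    fluct = np.trace(np.cov(inside, rowvar=False))
    draw = np.trace(np.cov(outside, rowvar=False))
    return fluct, draw
```

Raising the threshold enlarges the ellipsoid, so more returns are treated as normal Gaussian fluctuations and fewer contribute to the drawdown risk, mirroring the role of the threshold parameter in the abstract.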
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
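The core of GMM-based point-set registration can be sketched as follows: score a candidate rigid transform by the likelihood of the moving points under an isotropic mixture centered on the fixed points, and keep the transform with the highest score. The exhaustive search over a single 2-D rotation angle (with centroids pre-aligned), and all parameter values, are illustrative simplifications of the gradient-based 3D method with orientation and bifurcation weighting described above.

```python
import numpy as np

def gmm_loglik(moving, fixed, sigma=0.1):
    """Log-likelihood of 'moving' points under an isotropic GMM whose
    components are centered at the 'fixed' points."""
    d2 = np.sum((moving[:, None, :] - fixed[None, :, :])**2, axis=-1)
    w = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2 * len(fixed))
    return np.sum(np.log(w.sum(axis=1) + 1e-300))

def register_rigid_2d(moving, fixed, sigma=0.1, n_angles=360):
    """Exhaustive search over the rotation angle, centroids pre-aligned:
    a crude stand-in for gradient-based GMM registration."""
    mc, fc = moving.mean(0), fixed.mean(0)
    best_ll, best_theta = -np.inf, 0.0
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        cand = (moving - mc) @ R.T + fc
        ll = gmm_loglik(cand, fixed, sigma)
        if ll > best_ll:
            best_ll, best_theta = ll, theta
    return best_theta
```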
Vanishing of local non-Gaussianity in canonical single field inflation
NASA Astrophysics Data System (ADS)
Bravo, Rafael; Mooij, Sander; Palma, Gonzalo A.; Pradenas, Bastián
2018-05-01
We study the production of observable primordial local non-Gaussianity in two opposite regimes of canonical single field inflation: attractor (standard single field slow-roll inflation) and non-attractor (ultra slow-roll inflation). In the attractor regime, the standard derivation of the bispectrum's squeezed limit using co-moving coordinates gives the well known Maldacena consistency relation fNL = 5(1 − ns)/12. On the other hand, in the non-attractor regime, the squeezed limit offers a substantial violation of this relation given by fNL = 5/2. In this work we argue that, independently of whether inflation is attractor or non-attractor, the size of the observable primordial local non-Gaussianity is predicted to be fNLobs = 0 (a result that was already understood to hold in the case of attractor models). To show this, we follow the use of the so-called Conformal Fermi Coordinates (CFC), recently introduced in the literature. These coordinates parametrize the local environment of inertial observers in a perturbed FRW spacetime, allowing one to identify and compute gauge invariant quantities, such as n-point correlation functions. Concretely, we find that during inflation, after all the modes have exited the horizon, the squeezed limit of the 3-point correlation function of curvature perturbations vanishes in the CFC frame, regardless of the inflationary regime. We argue that such a cancellation should persist after inflation ends.
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC.
Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences can result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
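Two ingredients of the spatially varying scheme are easy to sketch: a smooth model of the Gaussian PSF width as a function of radial position, fitted by least squares to widths measured at a few radii, and sampling of the PSF at an arbitrary position. The polynomial form and all numbers are illustrative assumptions, and the EM restoration loop itself is omitted.

```python
import numpy as np

def fit_psf_width_model(radii, sigmas, degree=2):
    """Least-squares polynomial model sigma(r) for the spatially varying
    Gaussian PSF width, a stand-in for the paper's continuous expressions."""
    return np.poly1d(np.polyfit(radii, sigmas, degree))

def gaussian_psf(shape, center, sigma):
    """Sample a normalized isotropic 2-D Gaussian PSF of the given width."""
    y, x = np.indices(shape)
    g = np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2.0 * sigma**2))
    return g / g.sum()
```

With such a model in hand, a correction scheme can generate the system PSF at any radius instead of assuming one directionally uniform kernel, which is the key difference between SV-PVC and SINV-PVC above.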
Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence
NASA Astrophysics Data System (ADS)
Gökçe, Muhsin Caner; Baykal, Yahya
2018-03-01
In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, for the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely, the aperture averaging factor and the average bit-error rate (BER).
Second harmonic sound field after insertion of a biological tissue sample
NASA Astrophysics Data System (ADS)
Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo
2002-01-01
The second harmonic sound field after inserting a biological tissue sample is investigated theoretically and experimentally. The sample is inserted perpendicular to the sound axis, and its acoustical properties differ from those of the surrounding medium (distilled water). By using the superposition of Gaussian beams and the KZK equation in the quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of the self- and cross-interactions of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as the samples and inserted in the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after inserting the sample are measured and compared with theoretical results calculated with 10 terms of the Gaussian beam expansion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Bo; Fox, Rodney O.; Feng, Heng
2017-02-16
An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. These results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.
Optimal focusing conditions of lenses using Gaussian beams
Franco, Juan Manuel; Cywiak, Moisés; Cywiak, David; ...
2016-04-02
By using the analytical equations of the propagation of Gaussian beams in which truncation exhibits negligible consequences, we describe a method that uses the value of the focal length of a focusing lens to classify its focusing performance. In this study, we show that for different distances between a laser and a focusing lens there are different planes where best focusing conditions can be obtained and we demonstrate how the value of the focal length impacts the lens focusing properties. To perform the classification we introduce the term delimiting focal length. As the value of the focal length used in wave propagation theory is nominal and difficult to measure accurately, we describe an experimental approach to calculate its value matching our analytical description. Finally, we describe possible applications of the results for characterizing Gaussian sources, for measuring focal lengths and/or alternatively for characterizing piston-like movements.
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given in an appendix.
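A small numerical companion to the reparameterization above: the zero-crossing radius of the spatial DOG profile has a closed form, and the optimal frequency (zero for low-pass, nonzero for bandpass behavior) can be found by a direct search over the frequency response. The 2-D normalizations and conventions below are the common textbook ones, not necessarily the paper's exact ones.

```python
import numpy as np

def dog_response(f, sigma_c, sigma_s, balance):
    """Frequency response of a 2-D difference-of-Gaussians filter
    (center Gaussian minus balance-weighted surround Gaussian)."""
    return np.exp(-2 * np.pi**2 * sigma_c**2 * f**2) \
         - balance * np.exp(-2 * np.pi**2 * sigma_s**2 * f**2)

def zero_crossing_radius(sigma_c, sigma_s, balance):
    """Radius where the spatial DOG profile changes sign (closed form,
    assuming unit-volume Gaussians and sigma_s > sigma_c)."""
    num = np.log(sigma_s**2 / (balance * sigma_c**2))
    return np.sqrt(2 * sigma_c**2 * sigma_s**2 * num / (sigma_s**2 - sigma_c**2))

def optimal_frequency(sigma_c, sigma_s, balance, f_max=5.0, n=20001):
    """Peak of the frequency response, found numerically; zero indicates
    low-pass behavior, nonzero indicates bandpass behavior."""
    f = np.linspace(0.0, f_max, n)
    return f[np.argmax(dog_response(f, sigma_c, sigma_s, balance))]
```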
Preservation of Gaussian state entanglement in a quantum beat laser by reservoir engineering
NASA Astrophysics Data System (ADS)
Qurban, Misbah; Islam, Rameez ul; Ge, Guo-Qin; Ikram, Manzoor
2018-04-01
Quantum beat lasers have been considered as sources of entangled radiation in continuous variables such as Gaussian states. In order to preserve entanglement and to minimize entanglement degradation due to the system’s interaction with the surrounding environment, we propose to engineer environment modes through insertion of another system in between the laser resonator and the environment. This makes the environment surrounding the two-mode laser a structured reservoir. It not only enhances the entanglement among two modes of the laser but also preserves the entanglement for sufficiently longer times, a stringent requirement for quantum information processing tasks.
Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers
NASA Technical Reports Server (NTRS)
Blech, Richard A.
1989-01-01
The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1, 2 and 3 processor versions of the block tridiagonal solver are presented. The PASCAL source code for these solvers is given in the appendix and may be transportable to other shared-memory parallel processors, provided that the synchronization routines are reproduced on the target system.
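The block Gaussian elimination being parallelized can be illustrated in its sequential form, the block Thomas algorithm; the report's parallel version partitions these forward and backward recurrences across processors. This is a generic sketch, not the report's PASCAL code, and it assumes the diagonal blocks stay well conditioned (no pivoting).

```python
import numpy as np

def solve_block_tridiagonal(A, B, C, d):
    """Block Thomas algorithm: block Gaussian elimination without pivoting.

    B[i] are the diagonal blocks, A[i] the sub-diagonal blocks (A[0] unused),
    C[i] the super-diagonal blocks (C[-1] unused); d holds the right-hand-side
    blocks. Returns the solution as one concatenated vector."""
    n = len(B)
    Cp, dp = [None] * n, [None] * n
    Cp[0] = np.linalg.solve(B[0], C[0])
    dp[0] = np.linalg.solve(B[0], d[0])
    for i in range(1, n):                       # forward elimination
        denom = B[i] - A[i] @ Cp[i - 1]
        if i < n - 1:
            Cp[i] = np.linalg.solve(denom, C[i])
        dp[i] = np.linalg.solve(denom, d[i] - A[i] @ dp[i - 1])
    x = [None] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - Cp[i] @ x[i + 1]
    return np.concatenate(x)
```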
Self-repeating properties of four-petal Gaussian vortex beams in quadratic index medium
NASA Astrophysics Data System (ADS)
Zou, Defeng; Li, Xiaohui; Chai, Tong; Zheng, Hairong
2018-05-01
In this paper, we investigate the propagation properties of four-petal Gaussian vortex (FPGV) beams propagating through a quadratic index medium, obtaining the analytical expression for FPGV beams. The effects of the beam order n, topological charge m and beam waist ω0 are investigated. Results show that a quadratic index medium supports periodic distributions of FPGV beams. A hollow optical wall or an optical central principal maximum surrounded by symmetrical sidelobes will occur at the center of a period. Eventually, they evolve into a four-petal structure, exactly the same as the intensity distribution at the source plane.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
NASA Astrophysics Data System (ADS)
Herbonnet, Ricardo; Buddendiek, Axel; Kuijken, Konrad
2017-03-01
Context. Current optical imaging surveys for cosmology cover large areas of sky. Exploiting the statistical power of these surveys for weak lensing measurements requires shape measurement methods with subpercent systematic errors. Aims: We introduce a new weak lensing shear measurement algorithm, shear nulling after PSF Gaussianisation (SNAPG), designed to avoid the noise biases that affect most other methods. Methods: SNAPG operates on images that have been convolved with a kernel that renders the point spread function (PSF) a circular Gaussian, and uses weighted second moments of the sources. The response of such second moments to a shear of the pre-seeing galaxy image can be predicted analytically, allowing us to construct a shear nulling scheme that finds the shear parameters for which the observed galaxies are consistent with an unsheared, isotropically oriented population of sources. The inverse of this nulling shear is then an estimate of the gravitational lensing shear. Results: We identify the uncertainty of the estimated centre of each galaxy as the source of noise bias, and incorporate an approximate estimate of the centroid covariance into the scheme. We test the method on extensive suites of simulated galaxies of increasing complexity, and find that it is capable of shear measurements with multiplicative bias below 0.5 percent.
Casas, F J; Pascual, J P; de la Fuente, M L; Artal, E; Portilla, J
2010-07-01
This paper describes a comparative nonlinear analysis of low-noise amplifiers (LNAs) under different stimuli for use in astronomical applications. Wide-band Gaussian-noise input signals, together with the high values of gain required, make figures of merit such as the 1 dB compression (1 dBc) point of amplifiers crucial in the design process of radiometric receivers, in order to guarantee linearity in their nominal operation. The typical method to obtain the 1 dBc point is to use single-tone excitation signals to get the nonlinear amplitude-to-amplitude (AM-AM) characteristic but, as shown in the paper, in radiometers the nature of the wide-band Gaussian-noise excitation signals makes the amplifiers exhibit stronger nonlinearity than with single-tone excitation signals. Therefore, in order to analyze the suitability of the LNA's nominal operation, the 1 dBc point has to be obtained using realistic excitation signals. In this work, an analytical study of compression effects in amplifiers due to excitation signals composed of several tones is reported. Moreover, LNA nonlinear characteristics, such as AM-AM, total distortion, and power-to-distortion ratio, have been obtained by simulation and measurement with wide-band Gaussian-noise excitation signals. This kind of signal can be considered a limiting case of a multitone signal when the number of tones is very high. The work is illustrated by means of the extraction of realistic nonlinear characteristics, through simulation and measurement, of a 31 GHz back-end module LNA used in the radiometer of the QUIJOTE (Q U I JOint TEnerife) CMB experiment.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
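The numerical decomposition that the analytical solutions above are designed to replace can be sketched directly: approximate a ground-level line source by a row of Gaussian plume point sources and sum their contributions. The plume formula is the textbook ground-level form; the linear sigma curves and all parameter values are illustrative assumptions, not those of AERMOD or ADMS.

```python
import numpy as np

def plume_point(q, u, dx, dy, sigma_y, sigma_z):
    """Ground-level concentration from a ground-level point source: the
    textbook Gaussian plume with source and receptor both at z = 0.
    dx is the downwind and dy the crosswind receptor offset; sigma_y and
    sigma_z are callables giving dispersion widths versus downwind distance."""
    if dx <= 0.0:
        return 0.0  # receptor upwind of the source: no contribution
    sy, sz = sigma_y(dx), sigma_z(dx)
    return q / (np.pi * u * sy * sz) * np.exp(-dy**2 / (2.0 * sy**2))

def line_source_conc(q_per_m, u, receptor, ends, sigma_y, sigma_z, n=200):
    """Approximate a finite line source (emission rate q_per_m per metre)
    by n point sources; the wind blows along +x."""
    (x0, y0), (x1, y1) = ends
    length = np.hypot(x1 - x0, y1 - y0)
    xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
    rx, ry = receptor
    return sum(plume_point(q_per_m * length / n, u, rx - x, ry - y,
                           sigma_y, sigma_z) for x, y in zip(xs, ys))
```

Evaluating such a sum for every hour of a multi-year meteorological record is exactly the cost that motivates the closed-form solutions in the paper.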
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single-sample PCM encoder.
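The Karhunen-Loeve encoder mentioned above can be sketched with an eigendecomposition of the band covariance: project each block onto the leading eigenvectors and reconstruct from the retained coefficients. Quantization is omitted for brevity; keeping all components makes the transform lossless, which the check below exploits. The function names are ours.

```python
import numpy as np

def klt_encode(blocks, n_keep):
    """Karhunen-Loeve transform coding: project mean-removed blocks onto
    the leading eigenvectors of the sample covariance."""
    mu = blocks.mean(axis=0)
    cov = np.cov(blocks - mu, rowvar=False)
    w, V = np.linalg.eigh(cov)
    basis = V[:, ::-1][:, :n_keep]   # eigenvectors, largest variance first
    coeffs = (blocks - mu) @ basis
    return coeffs, basis, mu

def klt_decode(coeffs, basis, mu):
    """Reconstruct blocks from the retained KLT coefficients."""
    return coeffs @ basis.T + mu
```

The energy compaction this achieves (most variance in the first few coefficients) is what makes the KLT the benchmark against which the Fourier and Hadamard encoders are compared.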
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects of both the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
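The digital branch described above, a predictive quantizer, can be sketched as first-order DPCM: predict each sample from the previous reconstruction, uniformly quantize the prediction error, and track the reconstruction at the encoder. The residual x − recon is what the analog branch of an HDA system would carry by linear encoding. The predictor coefficient and step size below are illustrative.

```python
import numpy as np

def dpcm_encode(x, a, step):
    """First-order predictive (DPCM) quantizer.

    Predicts each sample as a * (previous reconstruction), uniformly
    quantizes the prediction error with the given step, and returns the
    quantizer indices together with the reconstructed signal."""
    idx = np.empty(len(x), dtype=int)
    recon = np.empty(len(x))
    prev = 0.0
    for i, s in enumerate(x):
        pred = a * prev
        idx[i] = int(np.round((s - pred) / step))  # quantized prediction error
        prev = pred + idx[i] * step                # encoder-side reconstruction
        recon[i] = prev
    return idx, recon
```

By construction the reconstruction error is bounded by half the quantizer step, so no error accumulates along the prediction loop; the coding gain comes from the residual having much smaller variance than the source itself.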
Fusion and Gaussian mixture based classifiers for SONAR data
NASA Astrophysics Data System (ADS)
Kotari, Vikas; Chang, KC
2011-06-01
Underwater mines are inexpensive and highly effective weapons. They are difficult to detect and classify. Hence, the detection and classification of underwater mines is essential for the safety of naval vessels, which necessitates highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from one source. Data fusion is known to increase the accuracy of detection and classification. In this paper, we formulate a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for the classification of underwater mines. The emphasis is on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California Irvine (UCI) repository. The performance of both the GMM-based and fusion-based classifiers clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
Bessel-Gauss beams as rigorous solutions of the Helmholtz equation.
April, Alexandre
2011-10-01
The study of the nonparaxial propagation of optical beams has received considerable attention. In particular, the so-called complex-source/sink model can be used to describe strongly focused beams near the beam waist, but this method has not yet been applied to the Bessel-Gauss (BG) beam. In this paper, the complex-source/sink solution for the nonparaxial BG beam is expressed as a superposition of nonparaxial elegant Laguerre-Gaussian beams. This provides a direct way to write the explicit expression for a tightly focused BG beam that is an exact solution of the Helmholtz equation. It reduces correctly to the paraxial BG beam, the nonparaxial Gaussian beam, and the Bessel beam in the appropriate limits. The analytical expression can be used to calculate the field of a BG beam near its waist, and it may be useful in investigating the features of BG beams under tight focusing conditions.
Cardinal and anti-cardinal points, equalities and chromatic dependence.
Evans, Tanya; Harris, William F
2017-05-01
Cardinal points are used for ray tracing through Gaussian systems. Anti-principal and anti-nodal points (which we shall refer to as the anti-cardinal points), along with the six familiar cardinal points, belong to a much larger set of special points. The purpose of this paper is to obtain a set of relationships and resulting equalities among the cardinal and anti-cardinal points and to illustrate them using Pascal's ring. The methodology used relies on Gaussian optics and the transference T. We make use of two equations, obtained via the transference, which give the locations of the six cardinal and four anti-cardinal points with respect to the system. We obtain equalities among the cardinal and anti-cardinal points. We utilise Pascal's ring to illustrate which points depend on frequency and their displacement with change in frequency. Pascal described a memory schema in the shape of a hexagon for remembering equalities among the points and illustrating shifts in these points when an aspect of the system changes. We modify and extend Pascal's ring to include the anti-cardinal points. We make use of Pascal's ring extended to illustrate which points are dependent on the frequency of light and the direction of shift of the equalities with change in frequency. For the reduced eye the principal and nodal points are independent of frequency, but the focal points and the anti-cardinal points depend on frequency. For Le Grand's four-surface model eye all six cardinal and four anti-cardinal points depend on frequency. This has implications for definitions, particularly of chromatic aberrations of the eye, that make use of cardinal points and that themselves depend on frequency. Pascal's ring and Pascal's ring extended are novel memory schema for remembering the equalities among the cardinal and anti-cardinal points. The rings are useful for illustrating changes among the equalities and direction of shift of points when an aspect of a system changes. 
Care should be taken when defining concepts that rely on cardinal points that depend on frequency. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
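Although the paper works with the transference T (which also handles astigmatism), the locations of cardinal points are already computable from the simple 2×2 ray-transfer matrix of Gaussian optics. The sketch below locates focal and principal points for a system in air, where nodal points coincide with principal points; the sign conventions and the in-air assumption are ours, not necessarily the paper's.

```python
def cardinal_points(M):
    """Cardinal points of a Gaussian system in air from its ray-transfer
    matrix M = [[A, B], [C, D]] mapping (height, angle) input to output.

    Front quantities are measured from the input plane, rear quantities
    from the output plane, positive along the light direction."""
    (A, B), (C, D) = M
    if C == 0:
        raise ValueError("afocal system: focal points at infinity")
    f = -1.0 / C                        # effective focal length
    rear_focal = -A / C                 # parallel input rays cross the axis here
    front_focal = D / C                 # point source here exits collimated
    rear_principal = rear_focal - f
    front_principal = front_focal + f
    return dict(f=f, rear_focal=rear_focal, front_focal=front_focal,
                rear_principal=rear_principal, front_principal=front_principal)
```

Chromatic dependence enters through the wavelength dependence of A, B, C, D: re-evaluating the matrix at each frequency moves the points, which is the shift the extended Pascal ring visualizes.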
NASA Astrophysics Data System (ADS)
Goodwell, Allison E.; Kumar, Praveen
2017-07-01
Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
NASA Astrophysics Data System (ADS)
Cartier, Pierre; DeWitt-Morette, Cecile
2006-11-01
Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.
NASA Astrophysics Data System (ADS)
Cartier, Pierre; DeWitt-Morette, Cecile
2010-06-01
Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.
A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty
NASA Astrophysics Data System (ADS)
Ohmi, Masataro; Mori, Hiroyuki
In this paper, an efficient method based on Gaussian Processes is proposed for short-term load forecasting. Short-term load forecasting plays a key role in smooth power system operation, such as economic load dispatching and unit commitment. Recently, the deregulated and competitive power market has increased the degree of uncertainty; as a result, it is more important to obtain better prediction results to save cost. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with this uncertainty, in addition to more accurate point predictions. The proposed method is based on a Bayesian model in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper employs Gaussian Processes, which combine the Bayesian linear model with a kernel machine to obtain the distribution of the predicted value. The proposed method is applied to real data for daily maximum load forecasting.
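The abstract's central idea — a Bayesian predictor whose output is a distribution, so that upper and lower load bounds come for free from the predictive variance — can be sketched with a minimal Gaussian-process regressor. This is a generic illustration, not the authors' implementation; the RBF kernel, its hyperparameters, and the toy data are assumptions.

```python
import math

def rbf(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential (RBF) kernel, a common default choice."""
    return sf ** 2 * math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xstar, noise=1e-2):
    """Posterior mean and variance of a GP at xstar, given data (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kstar = [rbf(x, xstar) for x in xs]
    alpha = solve(K, ys)                      # K^{-1} y
    mean = sum(k * a for k, a in zip(kstar, alpha))
    v = solve(K, kstar)                       # K^{-1} k*
    var = rbf(xstar, xstar) - sum(k * vi for k, vi in zip(kstar, v))
    return mean, var
```

An operator-style interval then follows directly: with `m, v = gp_predict(loads_x, loads_y, tomorrow)`, an approximate 95% band is `m ± 1.96 * sqrt(v)` — exactly the kind of upper and lower bound the abstract motivates.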
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness value in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.
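The histogram-partitioning step rests on a small piece of analysis: the intersection points of two weighted Gaussian components satisfy a quadratic obtained by equating their log-densities. The sketch below is a generic derivation of that step, not the authors' code; all parameter values are illustrative.

```python
import math

def pdf(x, w, m, s):
    """Weighted univariate Gaussian density w * N(x; m, s)."""
    return w / (s * math.sqrt(2 * math.pi)) * math.exp(-(x - m) ** 2 / (2 * s * s))

def gaussian_intersections(w1, m1, s1, w2, m2, s2):
    """Solve w1*N(x;m1,s1) == w2*N(x;m2,s2) by equating log-densities.

    log w - log s - (x-m)^2 / (2 s^2) equal for both components gives
    a x^2 + b x + c = 0 with the coefficients below.
    """
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + math.log(w1 / w2) + math.log(s2 / s1))
    if abs(a) < 1e-15:                 # equal variances: equation is linear
        return [] if abs(b) < 1e-15 else [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```

For equal weights and variances the single intersection falls at the midpoint of the means, which matches the intuition of cutting the histogram between two modes.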
NASA Astrophysics Data System (ADS)
Gaztanaga, Enrique; Fosalba, Pablo
1998-12-01
In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants xi_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where sigma<~1.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O.; Gelfand, Alan E.
2018-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online. PMID:29720777
Generating scale-invariant perturbations from rapidly-evolving equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khoury, Justin; Steinhardt, Paul J.
2011-06-15
Recently, we introduced an ekpyrotic model based on a single, canonical scalar field that generates nearly scale-invariant curvature fluctuations through a purely ''adiabatic mechanism'' in which the background evolution is a dynamical attractor. Despite the starkly different physical mechanism for generating fluctuations, the two-point function is identical to inflation. In this paper, we further explore this concept, focusing in particular on issues of non-Gaussianity and quantum corrections. We find that the degeneracy with inflation is broken at three-point level: for the simplest case of an exponential potential, the three-point amplitude is strongly scale dependent, resulting in a breakdown of perturbation theory on small scales. However, we show that the perturbative breakdown can be circumvented--and all issues raised in Linde et al. (arXiv:0912.0944) can be addressed--by altering the potential such that power is suppressed on small scales. The resulting range of nearly scale-invariant, Gaussian modes can be as much as 12 e-folds, enough to span the scales probed by microwave background and large-scale structure observations. On smaller scales, the spectrum is not scale invariant but is observationally acceptable.
Term Cancellations in Computing Floating-Point Gröbner Bases
NASA Astrophysics Data System (ADS)
Sasaki, Tateaki; Kako, Fujio
We discuss the term cancellation which makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method reveals the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.
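The abstract's remedy — reduce the matrix of coefficient vectors by Gaussian elimination and watch for rows that collapse to (nearly) zero — can be illustrated generically. The sketch below flags approximate linear dependence via vanishing rows after elimination; it illustrates the idea only and is not the authors' algorithm, and the tolerance is an assumed parameter.

```python
def reduce_and_flag(rows, tol=1e-9):
    """Row-reduce a matrix of coefficient vectors; return (reduced, dependent).

    A row whose entries all fall below `tol` after elimination signals an
    approximate linear dependence among the input vectors -- the kind of
    massive term cancellation that destabilizes floating-point computations.
    """
    M = [r[:] for r in rows]
    nrows, ncols = len(M), len(M[0])
    pivot_row = 0
    for col in range(ncols):
        if pivot_row >= nrows:
            break
        # partial pivoting: bring the largest entry in this column up
        p = max(range(pivot_row, nrows), key=lambda r: abs(M[r][col]))
        if abs(M[p][col]) < tol:
            continue
        M[pivot_row], M[p] = M[p], M[pivot_row]
        for r in range(pivot_row + 1, nrows):
            f = M[r][col] / M[pivot_row][col]
            for k in range(col, ncols):
                M[r][k] -= f * M[pivot_row][k]
        pivot_row += 1
    dependent = [i for i, row in enumerate(M)
                 if all(abs(v) < tol for v in row)]
    return M, dependent
```

Feeding it two rows that differ only by a tiny floating-point perturbation of a scalar multiple exposes the near-dependence that exact rational arithmetic would report as an exact zero row.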
Topological features of vector vortex beams perturbed with uniformly polarized light
D’Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo
2017-01-01
Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams. PMID:28079134
Anomalous polymer collapse winding angle distributions
NASA Astrophysics Data System (ADS)
Narros, A.; Owczarek, A. L.; Prellberg, T.
2018-03-01
In two dimensions polymer collapse has been shown to be complex with multiple low temperature states and multi-critical points. Recently, strong numerical evidence has been provided for a long-standing prediction of universal scaling of winding angle distributions, where simulations of interacting self-avoiding walks show that the winding angle distribution for N-step walks is compatible with the theoretical prediction of a Gaussian with a variance growing asymptotically as C log N. Here we extend this work by considering interacting self-avoiding trails which are believed to be a model representative of some of the more complex behaviour. We provide robust evidence that, while the high temperature swollen state of this model has a winding angle distribution that is also Gaussian, this breaks down at the polymer collapse point and at low temperatures. Moreover, we provide some evidence that the distributions are well modelled by stretched/compressed exponentials, in contradistinction to the behaviour found in interacting self-avoiding walks. Dedicated to Professor Stu Whittington on the occasion of his 75th birthday.
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated by a Gaussian mixture model, which strengthens the relation between the GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among the disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results in our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and performs excellently on aerial images.
Topological features of vector vortex beams perturbed with uniformly polarized light
NASA Astrophysics Data System (ADS)
D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo
2017-01-01
Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.
Topological features of vector vortex beams perturbed with uniformly polarized light.
D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo
2017-01-12
Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell's equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.
NASA Astrophysics Data System (ADS)
Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.
1995-05-01
Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM) generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
NASA Astrophysics Data System (ADS)
Campos, Carmina del Rio; Horche, Paloma R.; Martin-Minguez, Alfredo
2011-03-01
Because the metro network market is very cost-sensitive, directly modulated schemes appear attractive. In this paper a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of an Optical Communication System Design Software; a detailed study of the modulated current shape (exponential, sine and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion when compared with conventional Gaussian shape pulses. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient chirp dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber couple.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
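The Nyström approximation being sampled for can be sketched in a few lines: pick m landmark points, form the cross-kernel matrix C and the landmark kernel matrix W, and reconstruct the full kernel matrix as C W⁻¹ Cᵀ. The code below is a generic illustration with a Gaussian kernel on scalars; the landmark choice and the gamma parameter are assumptions (the paper's contribution is to choose landmarks as kernel k-means centers).

```python
import math

def gauss_kernel(x, y, gamma=0.5):
    return math.exp(-gamma * (x - y) ** 2)

def inverse(A):
    """Invert a small matrix by Gauss-Jordan elimination with pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

def nystrom(points, landmarks, gamma=0.5):
    """Nystrom approximation K ~= C W^{-1} C^T of the Gaussian kernel matrix."""
    C = [[gauss_kernel(x, z, gamma) for z in landmarks] for x in points]
    W = [[gauss_kernel(z1, z2, gamma) for z2 in landmarks] for z1 in landmarks]
    Winv = inverse(W)
    m = len(landmarks)
    return [[sum(C[i][a] * Winv[a][b] * C[j][b]
                 for a in range(m) for b in range(m))
             for j in range(len(points))] for i in range(len(points))]
```

When the landmarks coincide with the full point set the reconstruction is exact; with fewer landmarks the Frobenius-norm error grows, and the paper's bound ties that error to the k-means cost of the landmarks in kernel space.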
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos; Koutalonis, Ioannis; Moisidi, Margarita; Chatzopoulos, Georgios
2018-05-01
In this work we study, in terms of Tsallis statistical mechanics, the properties of microtremor fluctuations in two church bell towers, which are monuments of cultural heritage, in the city of Chania (Crete, Greece). We show that fluctuations of ambient vibration recordings in the church bell towers follow a q-Gaussian distribution. The behavior of the Tsallis q parameter with the height of the measuring point within the tower, and the amplification factors at those points as extracted from horizontal-to-vertical spectral ratios (HVSR), are presented and discussed. Since q decreases as the amplification factor increases, we suggest q as a vulnerability index: as q decreases, approaching unity, the structural system becomes more vulnerable. This approach suggests that introducing ideas from Tsallis statistics could be useful in characterizing extremely complex processes such as those governing the estimation of seismic vulnerability, for which a multidisciplinary approach is required.
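The q-Gaussian that the fluctuations are reported to follow generalizes the ordinary Gaussian through the Tsallis q-exponential: for q → 1 it recovers exp(-βx²), while for q > 1 it develops heavier tails. A minimal unnormalized sketch, with β and the sample values chosen purely for illustration:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp(x) for q=1, else [1 + (1-q)x]^(1/(1-q))."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # outside the support the q-exponential is defined to be zero
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian: the q-exponential of -beta * x^2."""
    return q_exp(-beta * x * x, q)
```

For example, three standard deviations out, the q = 2 curve sits orders of magnitude above the ordinary Gaussian, which is why q > 1 is the natural fit for recordings with heavy-tailed fluctuations.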
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
Atmospheric aerosol composition and source apportionments to aerosol in southern Taiwan
NASA Astrophysics Data System (ADS)
Tsai, Ying I.; Chen, Chien-Lung
In this study, the chemical characteristics of winter aerosol at four sites in southern Taiwan were determined and the Gaussian Trajectory transfer coefficient model (GTx) was then used to identify the major air pollutant sources affecting the study sites. Aerosols were found to be acidic at all four sites. The most important constituents of the particulate matter (PM) by mass were SO₄²⁻, organic carbon (OC), NO₃⁻, elemental carbon (EC) and NH₄⁺, with SO₄²⁻, NO₃⁻, and NH₄⁺ together constituting 86.0-87.9% of the total PM2.5 soluble inorganic salts and 68.9-78.3% of the total PM2.5-10 soluble inorganic salts, showing that secondary photochemical components such as these were the major contributors to the aerosol water-soluble ions. The coastal site, Linyuan (LY), had the highest PM mass percentage of sea salts, higher in the coarse fraction, and higher sea salts during daytime than during nighttime, indicating that the prevailing daytime sea breeze brought with it more sea-salt aerosol. Other than sea salts, crustal matter, and EC in PM2.5 at Jenwu (JW) and in PM2.5-10 at LY, all aerosol components were higher during nighttime, due to relatively low nighttime mixing heights limiting vertical and horizontal dispersion. At JW, a site with heavy traffic loadings, the OC/EC ratio in the nighttime fine and coarse fractions of approximately 2.2 was higher than during daytime, indicating that in addition to primary organic aerosol (POA), secondary organic aerosol (SOA) also contributed to the nighttime PM2.5. This was also true of the nighttime coarse fraction at LY. The GTx produced correlation coefficients (r) for simulated and observed daily concentrations of PM10 at the four sites (receptors) in the range 0.45-0.59 and biases from -6% to -20%.
Source apportionment indicated that point sources were the largest PM10 source at JW, LY and Daliao (DL), while at Meinung (MN), a suburban site with less local PM10, SOx and NOx emissions, the upwind boundary concentration was the major PM10 source, followed by point sources and the top boundary concentration.
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
Circularly symmetric cusped random beams in free space and atmospheric turbulence.
Wang, Fei; Korotkova, Olga
2017-03-06
A class of random stationary, scalar sources producing cusped average intensity profiles (i.e. profiles with concave curvature) in the far field is introduced by modeling the source degree of coherence as a Fractional Multi-Gaussian-correlated Schell-Model (FMGSM) function with rotational symmetry. The average intensity (spectral density) generated by such sources is investigated on propagation in free space and isotropic and homogeneous atmospheric turbulence. It is found that the FMGSM beam can retain the cusped shape on propagation at least in weak or moderate turbulence regimes; however, strong turbulence completely suppresses the cusped intensity profile. Under the same atmospheric conditions the spectral density of the FMGSM beam at the receiver is found to be much higher than that of the conventional Gaussian Schell-model (GSM) beam within the narrow central area, implying that for relatively small collecting apertures the power-in-bucket of the FMGSM beam is higher than that of the GSM beam. Our results are of importance to energy delivery, Free-Space Optical communications and imaging in the atmosphere.
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
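The Gaussianization step described above can be sketched in a few lines: a Box-Cox transform is applied to skewed posterior samples, after which a Gaussian (Fisher-matrix-style) description of the parameter becomes appropriate. The one-parameter log-normal example and the function names are illustrative assumptions of ours, not the paper's pipeline.

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox transform of positive data; lam = 0 reduces to the log transform.
    x = np.asarray(x, dtype=float)
    return np.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam

def skewness(x):
    # Sample skewness: near zero for Gaussian data.
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean() ** 1.5

# Hypothetical skewed one-parameter posterior: log-normal samples, for which
# the choice lam = 0 Gaussianizes exactly.
rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=0.5, size=20000)
gaussianized = box_cox(skewed, lam=0.0)

# The transformed samples are near-Gaussian, so a Fisher-matrix (Gaussian)
# approximation in the transformed parameter is now justified.
print(skewness(skewed), skewness(gaussianized))
```

In the paper's setting the Box-Cox exponent itself is fitted from an initial likelihood evaluation; here it is fixed by construction of the toy data.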
Numerical modeling of laser assisted tape winding process
NASA Astrophysics Data System (ADS)
Zaami, Amin; Baran, Ismet; Akkerman, Remko
2017-10-01
Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-sea water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been a source of great interest, since the temperature at the nip-point plays a key role in mechanical interface performance. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, numerical modeling of the optical behavior of laser radiation on circular surfaces is investigated based on a ray-tracing and non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface as well as the effect of process parameters on the temperature distribution are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which have not been considered in the literature to date.
The abundance of Galactic planets from OGLE-III 2002 microlensing data
NASA Astrophysics Data System (ADS)
Snodgrass, Colin; Horne, Keith; Tsapras, Yiannis
2004-07-01
From the 389 OGLE-III 2002 observations of Galactic bulge microlensing events, we select 321 that are well described by a point-source point-lens light-curve model. From this sample we identify one event, 2002-BLG-055, that we regard as a strong planetary lensing candidate, and another, 2002-BLG-140, that is a possible candidate. If each of the 321 lens stars has one planet with a mass ratio q = m/M = 10^(-3) and orbit radius a = R_E, the Einstein ring radius, analysis of detection efficiencies indicates that 14 planets should have been detectable with Δχ^2 > 25. Assuming our candidate is due to planetary lensing, then the abundance of planets with q = 10^(-3) and a = R_E is n_p ~ n/14 = 7 per cent. Conversion to physical units (Jupiter masses, M_Jup, and astronomical units, au) gives the abundance of `cool Jupiters' (m ~ M_Jup, a ~ 4 au) per lens star as n_p ~ n/5.5 = 18 per cent. The detection probability scales roughly with q and (Δχ^2)^(-1/2), and drops off from a peak at a ~ 4 au like a Gaussian with a dispersion of 0.4 dex.
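The quoted abundance figures follow from simple arithmetic on the detection efficiencies; a quick check, taking n as the single strong candidate:

```python
# Worked check of the abundance arithmetic quoted above, with n = 1
# (the single strong candidate, 2002-BLG-055).
n_candidates = 1
expected_detections = 14       # planets detectable at q = 10^-3, a = R_E
expected_cool_jupiters = 5.5   # rescaled to cool Jupiters (m ~ M_Jup, a ~ 4 au)

abundance = n_candidates / expected_detections
abundance_cool_jupiters = n_candidates / expected_cool_jupiters

print(round(100 * abundance))                # 7 per cent
print(round(100 * abundance_cool_jupiters))  # 18 per cent
```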
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that the heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, a nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual-subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
Self-Sustained Ultrafast Pulsation in Coupled VCSELs
NASA Technical Reports Server (NTRS)
Ning, Cun-Zheng
2001-01-01
High-frequency, narrow-band self-pulsating operation is demonstrated in two coupled vertical-cavity surface-emitting lasers (VCSELs). The coupled VCSELs provide an ideal high-repetition-rate (over 40 GHz), sinusoidally modulated laser source with Gaussian-like near- and far-field profiles. We also show that the frequency of the modulation can be tuned by the inter-VCSEL separation or by the DC-bias level.
The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments
NASA Astrophysics Data System (ADS)
Chen, Fajing; Jiao, Meiyan; Chen, Jing
2013-04-01
Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. Bayes' theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature.
These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
Mapping of all polarization-singularity C-point morphologies
NASA Astrophysics Data System (ADS)
Galvez, E. J.; Rojec, B. L.; Beach, K.
2014-02-01
We present theoretical descriptions and measurements of optical beams carrying isolated polarization-singularity C-points. Our analysis covers all types of C-points, including asymmetric lemons, stars and monstars. They are formed by the superposition of a circularly polarized mode carrying an optical vortex and a fundamental Gaussian mode in the opposite state of polarization. The type of C-point can be controlled experimentally by varying two parameters controlling the asymmetry of the optical vortex. This was implemented via a superposition of modes with singly charged optical vortices of opposite sign, and varying the relative amplitude and phase. The results are in excellent agreement with the predictions.
A portable high power microwave source with permanent magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wei; Zhang, Jun; Li, Zhi-qiang
A high power microwave source with permanent magnets is proposed in this paper. The source is 330 mm long, has a maximum diameter of 350 mm, and weighs 50 kg in total, including 25 kg of permanent magnets. In the experiment, 1 GW of microwave power with a Gaussian radiation pattern and 24% microwave power generation efficiency is obtained at a pulse duration of 75 ns. The operating frequency of the source is 2.32 GHz. Such a compact, lightweight, and highly stable source can be used in portable repetitive high power microwave generation systems.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
Gaussian-modulated coherent-state measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei
2014-04-01
Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to a third party and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with a source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
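A minimal numerical illustration of this comparison, using a randomly shifted golden-ratio (Kronecker) sequence as a stand-in ensemble of quasi-random point sets (our choice of sequence and integrand, not the paper's):

```python
import numpy as np

def mc_error(n, rng):
    # Plain Monte Carlo estimate of the integral of x^2 over [0, 1]
    # (exact value 1/3), minus the exact value.
    x = rng.random(n)
    return (x**2).mean() - 1.0 / 3.0

def qmc_error(n, shift):
    # Randomly shifted golden-ratio (Kronecker) sequence: one member of an
    # ensemble of quasi-random point sets.
    x = (shift + np.arange(n) * 0.6180339887498949) % 1.0
    return (x**2).mean() - 1.0 / 3.0

rng = np.random.default_rng(1)
n, reps = 65536, 100
mc = np.array([mc_error(n, rng) for _ in range(reps)])
qmc = np.array([qmc_error(n, rng.random()) for _ in range(reps)])

# The quasi-random ensemble's error spread sits far below the ~n^(-1/2)
# spread of normal stochastic sampling at the same number of points.
print(mc.std(), qmc.std())
```

The error distribution of the shifted sequence is also generally non-Gaussian, which is the regime the paper's ensemble analysis addresses.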
Block Iterative Methods for Elliptic and Parabolic Difference Equations.
1981-09-01
Parter, S. V.; Steuerwalt, M. (Computer Sciences Department, University of Wisconsin-Madison; Computer Sciences Technical Report 447). ...suggests that iterative algorithms that solve for several points at once will converge more rapidly than point algorithms. The Gaussian elimination algorithm is seen in this light to converge in one step. Frankel [14], Young [34], Arms, Gates, and Zondek [1], and Varga [32], using the algebraic structure
Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso
2017-03-15
Improving the effectiveness of spatial shape feature classification from 3D lidar data is highly relevant because it is widely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, where points in each non-overlapping voxel of a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhoods.
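The voxel-based neighborhood idea can be sketched as follows: points in each non-overlapping voxel share one support region, and the PCA eigenvalues of that region yield scatter, tubular and planar descriptors. The descriptor formulas and names below are common eigenvalue features assumed for illustration, not necessarily the paper's five exact definitions.

```python
import numpy as np

def voxel_shape_features(points, voxel_size):
    """PCA eigenvalue shape descriptors (scatter, tubular, planar) for the
    points falling in each non-overlapping voxel of a regular grid."""
    keys = np.floor(points / voxel_size).astype(int)
    feats = {}
    for key in set(map(tuple, keys)):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < 3:
            continue  # too few points to define a support region
        # Eigenvalues of the covariance, sorted l1 >= l2 >= l3 >= 0.
        ev = np.linalg.eigvalsh(np.cov(pts, rowvar=False))[::-1]
        l1, l2, l3 = np.maximum(ev, 0.0)
        feats[key] = (l3 / l1, (l1 - l2) / l1, (l2 - l3) / l1)
    return feats

# A thin synthetic "wall": its planar support region should score high planarity.
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 1, 500),
                        rng.uniform(0, 1, 500),
                        0.5 + rng.normal(0, 0.01, 500)])
scatter, tubular, planar = voxel_shape_features(wall, voxel_size=1.0)[(0, 0, 0)]
print(scatter, tubular, planar)
```

Feature vectors like these, one per voxel, would then feed the NN, SVM, GP or GMM classifiers compared in the paper.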
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: a contrast mechanism, visual attention and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to a number of drawbacks. A novel method that combines the three mechanisms of the HVS is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target, named the attention point, in order to further enhance the dim small target. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
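A compact sketch of the first two stages described above (contrast enhancement by DOG filtering, then a Gaussian attention window). The image, kernel sizes and attention point are invented for illustration, and the PID tracking stage is omitted.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def dog_filter(img, sigma1=1.0, sigma2=2.0, size=11):
    # Difference of Gaussians: a zero-sum band-pass kernel that suppresses
    # smooth backgrounds while enhancing point-like targets.
    dog = gaussian_kernel(size, sigma1) - gaussian_kernel(size, sigma2)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + size, j:j + size] * dog).sum()
    return out

def attention_window(shape, center, sigma):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0])**2 + (xx - center[1])**2) / (2.0 * sigma**2))

# Dim point target on a bright smooth background gradient.
h, w = 40, 40
img = np.linspace(50.0, 80.0, w)[None, :].repeat(h, axis=0)
img[20, 25] += 5.0  # dim target, far weaker than the background level

response = dog_filter(img)
salience = response * attention_window(img.shape, (20, 25), sigma=5.0)
peak = np.unravel_index(np.argmax(salience), salience.shape)
print(peak)  # the salience peak recovers the target pixel (20, 25)
```

In the full method the attention point fed to `attention_window` would come from the PID prediction of the previous frames rather than being fixed.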
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2016-08-01
In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field's components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n ,m ) , respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a weighting (real) factor 0 ≤α ≤1 that describes the transition of the beam from a purely vortex (α =0 ) to a nonvortex (α =1 ) type. An attractive feature for this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis with particular emphasis on the polarization states of the vector potentials forming the beams and the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should some conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted in addition to a negative retrograde propagation effect. Moreover, rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques. 
The results are particularly useful in applications involving the design of strongly focused optical laser tweezers, tractor beams, optical spanners, arbitrary scattering, radiation force, angular momentum, and torque in particle manipulation, and other related topics.
Propagation of Gaussian wave packets in complex media and application to fracture characterization
NASA Astrophysics Data System (ADS)
Ding, Yinshuai; Zheng, Yingcai; Zhou, Hua-Wei; Howell, Michael; Hu, Hao; Zhang, Yu
2017-08-01
Knowledge of the subsurface fracture networks is critical in probing the tectonic stress states and the flow of fluids in reservoirs containing fractures. We propose to characterize fractures using scattered seismic data, based on the theory of local plane-wave multiple scattering in a fractured medium. We construct a localized directional wave packet using point sources on the surface and propagate it toward the targeted subsurface fractures. The wave packet behaves as a local plane wave when interacting with the fractures. The interaction produces multiple scattering of the wave packet that eventually travels up to the surface receivers. The propagation direction and amplitude of the multiply scattered wave can be used to characterize fracture density, orientation and compliance. Two key aspects in this characterization process are the spatial localization and directionality of the wave packet. Here we first show the physical behaviour of a new localized wave, known as the Gaussian Wave Packet (GWP), by examining its analytical solution originally formulated for a homogeneous medium. We then use a numerical finite-difference time-domain (FDTD) method to study its propagation behaviour in heterogeneous media. We find that a GWP can remain localized and directional in space even over a large propagation distance in heterogeneous media. We then propose a method to decompose the recorded seismic wavefield into GWPs based on the reverse-time concept. This method enables us to create virtual recorded seismic data from field shot gathers, as if the source were an incident GWP. Finally, we demonstrate the feasibility of using GWPs for fracture characterization using three numerical examples. For a medium containing fractures, we can reliably invert for the local parameters of multiple fracture sets. Differing from conventional seismic imaging such as migration methods, our fracture characterization method is less sensitive to errors in the background velocity model. 
For a layered medium containing fractures, our method can correctly recover the fracture density even with an inaccurate velocity model.
Primordial non-Gaussianity and reionization
NASA Astrophysics Data System (ADS)
Lidz, Adam; Baxter, Eric J.; Adshead, Peter; Dodelson, Scott
2013-07-01
The statistical properties of the primordial perturbations contain clues about their origins. Although the Planck collaboration has recently obtained tight constraints on primordial non-Gaussianity from cosmic microwave background measurements, it is still worthwhile to mine upcoming data sets in an effort to place independent or competitive limits. The ionized bubbles that formed at redshift z˜6-20 during the epoch of reionization were seeded by primordial overdensities, and so the statistics of the ionization field at high redshift are related to the statistics of the primordial field. Here we model the effect of primordial non-Gaussianity on the reionization field. The epoch and duration of reionization are affected, as are the sizes of the ionized bubbles, but these changes are degenerate with variations in the properties of the ionizing sources and the surrounding intergalactic medium. A more promising signature is the power spectrum of the spatial fluctuations in the ionization field, which may be probed by upcoming 21 cm surveys. This has the expected 1/k2 dependence on large scales, characteristic of a biased tracer of the matter field. We project how well upcoming 21 cm observations will be able to disentangle this signal from foreground contamination. Although foreground cleaning inevitably removes the large-scale modes most impacted by primordial non-Gaussianity, we find that primordial non-Gaussianity can be separated from foreground contamination for a narrow range of length scales. In principle, futuristic redshifted 21 cm surveys may allow constraints competitive with Planck.
Quantum state engineering of light with continuous-wave optical parametric oscillators.
Morin, Olivier; Liu, Jianli; Huang, Kun; Barbosa, Felippe; Fabre, Claude; Laurat, Julien
2014-05-30
Engineering non-classical states of the electromagnetic field is a central quest in quantum optics(1,2). Beyond their fundamental significance, such states are indeed the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems(3). We focus here on the use of a continuous-wave optical parametric oscillator(3,4). This system is based on a non-linear χ(2) crystal inserted inside an optical cavity and is now well known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum, depending on the crystal phase matching. Squeezed vacuum is a Gaussian state, as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states(5). Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables achievement of a high fidelity with the targeted state and generation of the state in a well-controlled spatiotemporal mode.
All-semiconductor high-speed akinetic swept-source for OCT
NASA Astrophysics Data System (ADS)
Minneman, Michael P.; Ensher, Jason; Crawford, Michael; Derickson, Dennis
2011-12-01
A novel swept-wavelength laser for optical coherence tomography (OCT) using a monolithic semiconductor device with no moving parts is presented. The laser is a Vernier-Tuned Distributed Bragg Reflector (VT-DBR) structure exhibiting a single longitudinal mode. All-electronic wavelength tuning is achieved at a 200 kHz sweep repetition rate, with 20 mW output power, over 100 nm sweep width and coherence length longer than 40 mm. OCT point-spread functions with 45-55 dB dynamic range are demonstrated; lasers at 1550 nm, and now 1310 nm, have been developed. Because the laser's long-term tuning stability allows for electronic sample trigger generation at equal k-space intervals (electronic k-clock), the laser does not need an external optical k-clock for measurement interferometer sampling. The non-resonant, all-electronic tuning allows for continuously adjustable sweep repetition rates from mHz to hundreds of kHz. Repetition rate duty cycles are continuously adjustable from single-trigger sweeps to over 99% duty cycle. The source includes a monolithically integrated power leveling feature allowing flat or Gaussian power vs. wavelength profiles. Laser fabrication is based on reliable semiconductor wafer-scale processes, leading to low and rapidly decreasing cost of manufacture.
Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.
2012-01-01
To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list-mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variation (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) an ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. 
For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction.
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. 
However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
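The automatic-relevance-determination idea behind the feature selection above can be sketched with a tiny GP: one squared-exponential length-scale per descriptor, with irrelevant descriptors flagged by long fitted length-scales. Everything here (the synthetic data, the grid search standing in for gradient-based evidence maximization, and all names) is an illustrative assumption of ours, not the paper's MatLab implementation.

```python
import numpy as np

def ard_kernel(X1, X2, lengthscales):
    # Squared-exponential kernel with one length-scale per descriptor (ARD).
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * (d**2).sum(axis=-1))

def log_marginal_likelihood(X, y, lengthscales, noise=0.1):
    K = ard_kernel(X, X, lengthscales) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

# Synthetic "permeability" data: the response depends on descriptor 0 only;
# descriptor 1 is irrelevant noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 2))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=60)

# Maximize the evidence over a small grid of per-descriptor length-scales.
grid = [0.5, 1.0, 2.0, 8.0, 32.0]
best, best_ll = None, -np.inf
for l0 in grid:
    for l1 in grid:
        ll = log_marginal_likelihood(X, y, np.array([l0, l1]))
        if ll > best_ll:
            best, best_ll = (l0, l1), ll

print(best)  # expect a short scale for descriptor 0, a long one for descriptor 1
```

A ranking of descriptors by inverse fitted length-scale is the GP analogue of the feature relevance the study reports for log P, melting point and hydrogen bond donors.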
Lee, Jaebeom; Lee, Young-Joo
2018-01-01
Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various factors influencing vertical deflection, such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
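The workflow above (a GP with a composite kernel, then a predictive mean plus a 95% interval) can be sketched in a few lines of numpy. This is a generic illustration, not the authors' bridge model: the two-kernel sum, the hyperparameter values and the synthetic "deflection" signal are all assumptions made for the sketch.

```python
import numpy as np

def rbf(xa, xb, var, length):
    """Squared-exponential kernel matrix between two 1D point sets."""
    d = xa[:, None] - xb[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.05):
    # Sum of two RBF kernels: a slow trend plus a short-range term
    # (kernel choices and hyperparameters are illustrative, not the paper's).
    def k(a, b):
        return rbf(a, b, 1.0, 2.0) + rbf(a, b, 0.3, 0.3)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = k(x_test, x_train)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(x_test, x_test)) - np.sum(v**2, axis=0) + noise**2
    sd = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd  # mean and 95% interval

x = np.linspace(0.0, 10.0, 40)
y = 0.5 * np.sin(x) + 0.1 * np.sin(5.0 * x)   # stand-in deflection signal
m, lo, hi = gp_predict(x, y, np.linspace(0.0, 10.0, 11))
```

In a real application the kernel hyperparameters would be fitted by maximizing the marginal likelihood on training data, as the abstract describes.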
Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo
2018-05-09
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca
Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray sources based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.
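The double-Gaussian focal-spot model mentioned above can be sketched directly: the spot intensity is a weighted sum of a narrow core and a broader halo. The weights and widths below are illustrative assumptions, not values fitted to any real tube.

```python
import numpy as np

def double_gaussian_spot(r, w_core, sigma_core, sigma_halo):
    """Radial intensity of a 2D double-Gaussian focal-spot model:
    a narrow core plus a broad halo, with weights w_core and (1 - w_core);
    each component Gaussian is normalized to unit total intensity."""
    g1 = np.exp(-0.5 * (r / sigma_core) ** 2) / (2 * np.pi * sigma_core**2)
    g2 = np.exp(-0.5 * (r / sigma_halo) ** 2) / (2 * np.pi * sigma_halo**2)
    return w_core * g1 + (1 - w_core) * g2

# Illustrative parameters in micrometres (not fitted to any real source):
r = np.linspace(0.0, 50.0, 501)
spot = double_gaussian_spot(r, w_core=0.7, sigma_core=4.0, sigma_halo=15.0)

# Total intensity via radial (trapezoidal) integration of I(r) * 2*pi*r dr,
# which should be close to 1 for a normalized spot:
f = spot * 2.0 * np.pi * r
total = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

In a simulation code this profile would be convolved with the ideal phase-contrast image to model source-size blurring.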
Regulator dependence of fixed points in quantum Einstein gravity with R^2 truncation
NASA Astrophysics Data System (ADS)
Nagy, S.; Fazekas, B.; Peli, Z.; Sailer, K.; Steib, I.
2018-03-01
We performed a functional renormalization group analysis for the quantum Einstein gravity including a quadratic term in the curvature. The ultraviolet non-Gaussian fixed point and its critical exponent for the correlation length are identified for different forms of regulators in the case of dimension 3. We searched for the optimized regulator for which the physical quantities show the least regulator-parameter dependence. It is shown that the Litim regulator satisfies this condition. The infrared fixed point has also been investigated; the exponent is found to be insensitive to the third coupling introduced by the R^2 term.
NASA Astrophysics Data System (ADS)
Peng, Juan; Zhang, Li; Zhang, Kecheng; Ma, Junxian
2018-07-01
Based on the Rytov approximation theory, the transmission model of an orbital angular momentum (OAM)-carrying partially coherent Bessel-Gaussian (BG) beam propagating in weak anisotropic turbulence is established. The corresponding analytical expression of channel capacity is presented. Influences of anisotropic turbulence parameters and beam parameters on the channel capacity of OAM-based free-space optical (FSO) communication systems are discussed in detail. The results indicate that the channel capacity increases with almost all of the parameters, the exception being transmission distance. Raising the values of some parameters, such as wavelength, propagation altitude and non-Kolmogorov power spectrum index, would markedly improve the channel capacity. In addition, we evaluate the channel capacity of Laguerre-Gaussian (LG) beams and partially coherent BG beams in anisotropic turbulence. The comparison indicates that partially coherent BG beams are the better light-source candidates for mitigating the influences of anisotropic turbulence on the channel capacity of OAM-based FSO communication systems.
Cosmic microwave background trispectrum and primordial magnetic field limits.
Trivedi, Pranjal; Seshadri, T R; Subramanian, Kandaswamy
2012-06-08
Primordial magnetic fields will generate non-Gaussian signals in the cosmic microwave background (CMB) as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. We compute a new measure of magnetic non-Gaussianity, the CMB trispectrum, on large angular scales, sourced via the Sachs-Wolfe effect. The trispectra induced by magnetic energy density and by magnetic scalar anisotropic stress are found to have typical magnitudes of approximately a few times 10^(-29) and 10^(-19), respectively. Observational limits on CMB non-Gaussianity from WMAP data allow us to conservatively set upper limits of a nG, and plausibly sub-nG, on the present value of the primordial cosmic magnetic field. This represents the tightest limit so far on the strength of primordial magnetic fields, on Mpc scales, and is better than limits from the CMB bispectrum and all modes in the CMB power spectrum. Thus, the CMB trispectrum is a new and more sensitive probe of primordial magnetic fields on large scales.
Computer Analysis of Air Pollution from Highways, Streets, and Complex Interchanges
DOT National Transportation Integrated Search
1974-03-01
A detailed computer analysis of air quality for a complex highway interchange was prepared, using an in-house version of the Environmental Protection Agency's Gaussian Highway Line Source Model. This analysis showed that the levels of air pollution n...
Ignition of Cellulosic Paper at Low Radiant Fluxes
NASA Technical Reports Server (NTRS)
White, K. Alan
1996-01-01
The ignition of cellulosic paper by low level thermal radiation is investigated. Past work on radiative ignition of paper is briefly reviewed. No experimental study has been reported for radiative ignition of paper at irradiances below 10 Watts/sq.cm. An experimental study of radiative ignition of paper at these low irradiances is reported. Experimental parameters investigated and discussed include radiant power levels incident on the sample, the method of applying the radiation (focussed vs. diffuse Gaussian source), the presence and relative position of a separate pilot ignition source, and the effects of natural convection (buoyancy) on the ignition process in a normal gravity environment. It is observed that the incident radiative flux (in W/sq.cm) has the greatest influence on ignition time. For a given flux level, a focussed Gaussian source is found to be advantageous compared with a more diffuse, lower-amplitude thermal source. The precise positioning of a pilot igniter relative to gravity and to the fuel sample affects the ignition process, but these effects are not fully understood. Ignition was more readily achieved and sustained with a horizontal fuel sample, indicating that buoyancy plays a role in the ignition process of cellulosic paper. Smoldering combustion of doped paper samples was briefly investigated, and results are discussed.
Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong
2014-01-01
The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 µg/mL (10 pM) at 2 mGy dose with the mono-energetic source, which is an order of magnitude improvement over the unoptimized configuration (10^2 pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging.
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, `drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position (`leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from source, and the non-Gaussian shape of close range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
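The forward Gaussian plume model at the heart of the localization step above can be sketched compactly. The linear growth of the dispersion widths with downwind distance is a deliberate simplification (real applications use stability-dependent Pasquill-Gifford curves), and all parameter values here are illustrative.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, a=0.08, b=0.06, H=2.0):
    """Steady-state Gaussian plume concentration (g/m^3) at downwind
    distance x (m), crosswind offset y (m) and height z (m), for a point
    source with emission rate Q (g/s), wind speed u (m/s) and release
    height H (m). Dispersion widths grow linearly with x as a
    simplification of the usual stability-class curves."""
    sig_y = a * x
    sig_z = b * x
    c = Q / (2.0 * np.pi * u * sig_y * sig_z)
    c *= np.exp(-0.5 * (y / sig_y) ** 2)
    # Ground reflection: add an image source at height -H.
    c *= np.exp(-0.5 * ((z - H) / sig_z) ** 2) + np.exp(-0.5 * ((z + H) / sig_z) ** 2)
    return c

# Centreline concentration decays downwind, which is what an iterative
# fit of measured concentrations against this model exploits:
c_near = gaussian_plume(50.0, 0.0, 2.0, Q=1.0, u=3.0)
c_far = gaussian_plume(500.0, 0.0, 2.0, Q=1.0, u=3.0)
```

Leak localization would then adjust the assumed source position (and Q) to minimize the misfit between modelled and measured concentrations, which is why wind-direction accuracy dominates the error budget.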
Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong; Xing, Lei
2014-05-01
NASA Astrophysics Data System (ADS)
Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.
2006-11-01
The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) J.-P. Piquemal et al. [J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene water dimers, are presented. 
All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15 kcal/mol error for Coulomb and exchange, respectively. Timing results for single Coulomb energy-force calculations for (H2O)n, n = 64, 128, 256, 512, and 1024, in periodic boundary conditions with PME and FFP at two different rms force tolerances are also presented. For the small and intermediate auxiliaries, PME shows faster times than FFP at both accuracies and the advantage of PME widens at higher accuracy, while for the largest auxiliary, the opposite occurs.
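A small, standard building block related to the density-fitting scheme above: the Coulomb interaction between two normalized s-type Gaussian charge distributions has a closed form that follows from the Gaussian product theorem. This is textbook material, not the GEM code itself; units are atomic units and all parameter values are illustrative.

```python
import math

def s_gaussian_coulomb(q1, q2, a1, a2, r):
    """Coulomb energy (atomic units) between two normalized s-type Gaussian
    charge distributions rho_i(x) = q_i * (a_i/pi)^(3/2) * exp(-a_i x^2),
    whose centres are separated by distance r. The Gaussian product theorem
    gives the closed form q1*q2*erf(sqrt(a1*a2/(a1+a2))*r)/r."""
    mu = math.sqrt(a1 * a2 / (a1 + a2))
    return q1 * q2 * math.erf(mu * r) / r

# At large separation the distributions interact like point charges (~ -1/10),
# while at short range the interaction is softened and stays finite:
e_far = s_gaussian_coulomb(1.0, -1.0, 0.5, 0.5, 10.0)
e_near = s_gaussian_coulomb(1.0, -1.0, 0.5, 0.5, 0.5)
```

Higher angular momentum auxiliaries, as in the paper, require the McMurchie-Davidson recursion rather than this single closed form.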
Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.
2007-01-01
Li, Ye; Yu, Lin; Zhang, Yixin
2017-05-29
Applying the angular spectrum theory, we derive the expression of a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish the model of the received probability density of orbital angular momentum (OAM) modes of the beam propagating through an anisotropic turbulent ocean. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that, under the same conditions, anisotropic oceanic turbulence affects the received probability of signal OAM modes less than isotropic oceanic turbulence does, and that salinity fluctuation affects the received probability of the signal OAM modes more strongly than temperature fluctuation. For strong dissipation of kinetic energy per unit mass of fluid and a weak dissipation rate of temperature variance, the effects of turbulence on the received probability of signal OAM modes can be reduced by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting the degradation caused by oceanic turbulence.
Mitri, F G; Fellah, Z E A
2014-01-01
The present analysis investigates the (axial) acoustic radiation force induced by a quasi-Gaussian beam centered on an elastic and a viscoelastic (polymer-type) sphere in a nonviscous fluid. The quasi-Gaussian beam is an exact solution of the source-free Helmholtz wave equation and is characterized by an arbitrary waist w₀ and a diffraction convergence length known as the Rayleigh range z_R. Examples are found where the radiation force unexpectedly approaches closely to zero at some of the elastic sphere's resonance frequencies for kw₀≤1 (where this range is of particular interest in describing strongly focused or divergent beams), which may produce particle immobilization along the axial direction. Moreover, the (quasi)vanishing behavior of the radiation force is found to be correlated with conditions giving extinction of the backscattering by the quasi-Gaussian beam. Furthermore, the mechanism for the quasi-zero force is studied theoretically by analyzing the contributions of the kinetic, potential and momentum flux energy densities and their density functions. It is found that all the components vanish simultaneously at the selected ka values for the nulls. However, for a viscoelastic sphere, acoustic absorption degrades the quasi-zero radiation force.
Conformal geodesics in spherically symmetric vacuum spacetimes with cosmological constant
NASA Astrophysics Data System (ADS)
García-Parrado Gómez-Lobo, A.; Gasperín, E.; Valiente Kroon, J. A.
2018-02-01
An analysis of conformal geodesics in the Schwarzschild–de Sitter and Schwarzschild–anti-de Sitter families of spacetimes is given. For both families of spacetimes we show that initial data on a spacelike hypersurface can be given such that the congruence of conformal geodesics arising from this data cover the whole maximal extension of canonical conformal representations of the spacetimes without forming caustic points. For the Schwarzschild–de Sitter family, the resulting congruence can be used to obtain global conformal Gaussian systems of coordinates of the conformal representation. In the case of the Schwarzschild–anti-de Sitter family, the natural parameter of the curves only covers a restricted time span so that these global conformal Gaussian systems do not exist.
Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution
Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry
2014-01-01
One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy.
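The Gaussian-fitting idea above can be illustrated with a moment-based estimator, which recovers a spot's sub-pixel centre and width from image intensities. For clean, well-sampled spots this is a good stand-in for (and the usual initialisation of) a full least-squares Gaussian fit; the image size and spot parameters below are arbitrary.

```python
import numpy as np

def gaussian_spot(shape, x0, y0, sigma, amp=1.0):
    """Synthetic symmetric 2D Gaussian spot on a pixel grid."""
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

def localize_spot(img):
    """Estimate sub-pixel spot centre and width from intensity moments.
    For a symmetric 2D Gaussian, E[(x-xc)^2 + (y-yc)^2] = 2*sigma^2."""
    y, x = np.indices(img.shape)
    w = img / img.sum()
    xc = float((w * x).sum())
    yc = float((w * y).sum())
    sigma = float(np.sqrt((w * ((x - xc)**2 + (y - yc)**2)).sum() / 2.0))
    return xc, yc, sigma

img = gaussian_spot((33, 33), x0=15.3, y0=17.8, sigma=2.5)
xc, yc, sig = localize_spot(img)
```

With photon noise or background present, an iterative least-squares or maximum-likelihood Gaussian fit would replace the plain moments, but the quantities being estimated are the same.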
Off-axis points encoding/decoding with orbital angular momentum spectrum
Chu, Jiaqi; Chu, Daping; Smithwitck, Quinn
2017-01-01
Encoding/decoding off-axis points with discrete orbital angular momentum (OAM) modes is investigated. On-axis Laguerre-Gaussian (LG) beams are expanded into off-axis OAM spectra, with which off-axis points are encoded. The influence of the mode and the displacement of the LG beam on the spread of the OAM spectrum is analysed. The results show that not only the conventional on-axis point, but also off-axis points, can be encoded and decoded with OAM of light. This is confirmed experimentally. The analytical result here provides a solid foundation to use OAM modes to encode two-dimensional high density information for multiplexing and to analyse the effect of misalignment in practical OAM applications.
Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion
NASA Astrophysics Data System (ADS)
Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.
2017-03-01
Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
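The marginal-likelihood comparison above can be reduced to a toy calculation: for a constant-flux object observed at n epochs with Gaussian noise, and a Gaussian prior on the flux, the Bayes factor against the pure-noise hypothesis is analytic. This sketch ignores the positional (direction-matching) factor the paper also includes, and the noise and prior scales are assumptions.

```python
import math

def log_bayes_factor(fluxes, sigma=1.0, tau=2.0):
    """Log Bayes factor comparing 'one constant-flux object' (flux prior
    N(0, tau^2)) against 'no object' for per-epoch flux estimates with
    i.i.d. Gaussian noise sigma. Marginalizing the flux analytically gives
    0.5*S^2*tau^2/(sigma^2*(sigma^2 + n*tau^2)) - 0.5*ln(1 + n*tau^2/sigma^2)
    with S the sum of the fluxes, so evidence accumulates like stacking."""
    n = len(fluxes)
    s = sum(fluxes)
    return (0.5 * s**2 * tau**2 / (sigma**2 * (sigma**2 + n * tau**2))
            - 0.5 * math.log(1.0 + n * tau**2 / sigma**2))

# Five ~1-sigma measurements, none individually convincing, add up,
# while scattered noise-like values are penalized by the Occam term:
lbf_source = log_bayes_factor([1.0, 0.9, 1.1, 0.8, 1.2])
lbf_noise = log_bayes_factor([0.3, -0.5, 0.1, -0.2, 0.4])
```

The dependence on the flux sum S is what gives the catalog-fusion approach its stacking-like sensitivity.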
NASA Astrophysics Data System (ADS)
Wang, W. B.; Gozali, Richard; Nguyen, Thien An; Alfano, R. R.
2015-03-01
Light scattering and transmission of optical Laguerre-Gaussian (LG) vortex beams with different orbital angular momentum (OAM) states in turbid scattering media were investigated in comparison with a Gaussian (G) beam. The scattering media used in the experiments consist of various sizes and concentrations of latex beads in water solutions. The LG beams were generated using a spatial light modulator in reflection mode. The ballistic transmissions of LG and G beams were measured at different ratios of sample thickness (z) to scattering mean free path (ls) of the turbid media, z/ls. The results show that in the ballistic region, where z/ls is small, the LG and G beams show no significant difference, while in the diffusive region, where z/ls is large, LG beams show higher transmission than the G beam. In the diffusive region, LG beams with higher orbital angular momentum values L show higher transmission than beams with lower L values. The transition points from ballistic to diffusive regions for different scattering media were studied and determined.
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of the existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice.
Gaussian process based intelligent sampling for measuring nano-structure surfaces
NASA Astrophysics Data System (ADS)
Sun, L. J.; Ren, M. J.; Yin, Y. H.
2016-09-01
Nanotechnology is the science and engineering that manipulate matters at nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and the quality control of the nanotechnology. However, many measuring instruments, for instance scanning probe microscopy, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore some intelligent sampling strategies are required to improve the scanning efficiency for measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimation of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large area structured surfaces.
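The selection rule described above (sample next where the GP posterior says the surface is most likely outside the tolerance zone) can be sketched in 1D. The kernel, its unit prior variance, the length scale and the tolerance value are all illustrative assumptions, not the paper's settings.

```python
import numpy as np
from math import erf, sqrt

def rbf(a, b, length=1.0):
    """Unit-variance squared-exponential kernel matrix (1D inputs)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def next_sample(x_obs, y_obs, candidates, tol, noise=1e-3):
    """Return the candidate most likely to lie outside the +/- tol
    tolerance zone under the GP posterior, and that probability."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(candidates, x_obs)
    sol = np.linalg.solve(K, Ks.T)               # K^{-1} k(obs, candidates)
    mean = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.einsum('ij,ji->i', Ks, sol)   # posterior variance
    sd = np.sqrt(np.maximum(var, 1e-12))
    cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    p_out = np.array([cdf((m - tol) / s) + cdf((-tol - m) / s)
                      for m, s in zip(mean, sd)])
    i = int(np.argmax(p_out))
    return float(candidates[i]), float(p_out[i])

x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([0.0, 0.1, 0.0, 0.8])   # measured deviation; excursion near x = 4
cand = np.linspace(0.0, 5.0, 51)
x_next, p = next_sample(x, y, cand, tol=0.5)
```

Each newly measured point would be appended to `x_obs`/`y_obs` and the selection repeated, which is the iterative update the abstract describes.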
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
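The idea of Gaussianizing a weakly non-Gaussian field can be illustrated at its simplest, one-point level. The sketch below is not the perturbative multivariate construction of the paper; it applies a logarithmic transform to a toy lognormal density field and checks that the skewness is removed:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
# toy weakly non-Gaussian "density" field: exponentiate a Gaussian (lognormal model)
g = rng.normal(0.0, 0.5, 100_000)
delta = np.exp(g - 0.125) - 1.0        # overdensity with mean ~0
transformed = np.log1p(delta)          # one-point Gaussianizing map

# the raw field is strongly skewed; the transformed field is (exactly) Gaussian here
print(skew(delta), skew(transformed))
```

For a lognormal field the log map is the exact one-point Gaussianization; the paper's transforms generalize this to non-local corrections that also Gaussianize the multivariate distribution at a given perturbative order.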
Yu, Jen-Shiang K; Yu, Chin-Hui
2002-01-01
One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks from SpecFP2000 show trends similar to the GAUSSIAN 98 results.
Hawkley, Gavin
2014-12-01
Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
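A minimal sketch of fitting Lorentzian-based dispersion coefficients to a lateral concentration profile, with synthetic measurements standing in for the paper's air-sample data (all parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(y, c0, y0, gamma):
    """Lorentzian concentration profile across the lateral (y) axis."""
    return c0 * gamma**2 / ((y - y0)**2 + gamma**2)

rng = np.random.default_rng(2)
y = np.linspace(-50.0, 50.0, 41)                    # lateral receptor positions (m)
true = lorentzian(y, c0=1.0, y0=5.0, gamma=12.0)    # assumed underlying profile
obs = true + rng.normal(0, 0.01, y.size)            # noisy air-sample measurements
(c0, y0, gamma), _ = curve_fit(lorentzian, y, obs, p0=[0.5, 0.0, 10.0])
print(round(y0, 1), round(abs(gamma), 1))           # recovered offset and half-width
```

The fitted half-width gamma plays the role of the lateral dispersion coefficient, and the heavy Lorentzian tails are what make the resulting near-field concentration estimate conservative relative to a Gaussian profile.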
A routinely applied atmospheric dispersion model was modified to evaluate alternative modeling techniques which allowed for more detailed source data, onsite meteorological data, and several dispersion methodologies. These were evaluated with hourly SO2 concentrations measured at...
Software Applications on the Peregrine System | High-Performance Computing
programming and optimization. Gaussian: chemistry program for calculating molecular electronic structure. Materials science: open-source classical molecular dynamics program designed for massively parallel systems. Q-Chem: ab initio quantum chemistry package for predicting molecular structures
Freeze-out dynamics via charged kaon femtoscopy in √sNN = 200 GeV central Au + Au collisions
NASA Astrophysics Data System (ADS)
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Alekseev, I.; Alford, J.; Anson, C. D.; Aparin, A.; Arkhipkin, D.; Aschenauer, E.; Averichev, G. S.; Balewski, J.; Banerjee, A.; Barnovska, Z.; Beavis, D. R.; Bellwied, R.; Betancourt, M. J.; Betts, R. R.; Bhasin, A.; Bhati, A. K.; Bhattarai; Bichsel, H.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Borowski, W.; Bouchet, J.; Brandin, A. V.; Brovko, S. G.; Bruna, E.; Bültmann, S.; Bunzarov, I.; Burton, T. P.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Cebra, D.; Cendejas, R.; Cervantes, M. C.; Chaloupka, P.; Chang, Z.; Chattopadhyay, S.; Chen, H. F.; Chen, J. H.; Chen, J. Y.; Chen, L.; Cheng, J.; Cherney, M.; Chikanian, A.; Christie, W.; Chung, P.; Chwastowski, J.; Codrington, M. J. M.; Corliss, R.; Cramer, J. G.; Crawford, H. J.; Cui, X.; Das, S.; Davila Leyva, A.; De Silva, L. C.; Debbe, R. R.; Dedovich, T. G.; Deng, J.; Derradi de Souza, R.; Dhamija, S.; di Ruzza, B.; Didenko, L.; Dilks; Ding, F.; Dion, A.; Djawotho, P.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, C. M.; Dunkelberger, L. E.; Dunlop, J. C.; Efimov, L. G.; Elnimr, M.; Engelage, J.; Engle, K. S.; Eppley, G.; Eun, L.; Evdokimov, O.; Fatemi, R.; Fazio, S.; Fedorisin, J.; Fersch, R. G.; Filip, P.; Finch, E.; Fisyak, Y.; Flores, C. E.; Gagliardi, C. A.; Gangadharan, D. R.; Garand, D.; Geurts, F.; Gibson, A.; Gliske, S.; Grebenyuk, O. G.; Grosnick, D.; Guo, Y.; Gupta, A.; Gupta, S.; Guryn, W.; Haag, B.; Hajkova, O.; Hamed, A.; Han, L.-X.; Haque, R.; Harris, J. W.; Hays-Wehle, J. P.; Heppelmann, S.; Hirsch, A.; Hoffmann, G. W.; Hofman, D. J.; Horvat, S.; Huang, B.; Huang, H. Z.; Huck, P.; Humanic, T. J.; Igo, G.; Jacobs, W. W.; Jena, C.; Judd, E. G.; Kabana, S.; Kang, K.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Kesich, A.; Kikola, D. P.; Kiryluk, J.; Kisel, I.; Kisiel, A.; Koetke, D. 
D.; Kollegger, T.; Konzer, J.; Koralt, I.; Korsch, W.; Kotchenda, L.; Kravtsov, P.; Krueger, K.; Kulakov, I.; Kumar, L.; Kycia, R. A.; Lamont, M. A. C.; Landgraf, J. M.; Landry, K. D.; LaPointe, S.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Leight, W.; LeVine, M. J.; Li, C.; Li, W.; Li, X.; Li, X.; Li, Y.; Li, Z. M.; Lima, L. M.; Lisa, M. A.; Liu, F.; Ljubicic, T.; Llope, W. J.; Longacre, R. S.; Luo, X.; Ma, G. L.; Ma, Y. G.; Madagodagettige Don, D. M. M. D.; Mahapatra, D. P.; Majka, R.; Margetis, S.; Markert, C.; Masui, H.; Matis, H. S.; McDonald, D.; McShane, T. S.; Mioduszewski, S.; Mitrovski, M. K.; Mohammed, Y.; Mohanty, B.; Mondal, M. M.; Munhoz, M. G.; Mustafa, M. K.; Naglis, M.; Nandi, B. K.; Nasim, Md.; Nayak, T. K.; Nelson, J. M.; Nogach, L. V.; Novak, J.; Odyniec, G.; Ogawa, A.; Oh, K.; Ohlson, A.; Okorokov, V.; Oldag, E. W.; Oliveira, R. A. N.; Olson, D.; Pachr, M.; Page, B. S.; Pal, S. K.; Pan, Y. X.; Pandit, Y.; Panebratsev, Y.; Pawlak, T.; Pawlik, B.; Pei, H.; Perkins, C.; Peryt, W.; Pile, P.; Planinic, M.; Pluta, J.; Plyku, D.; Poljak, N.; Porter, J.; Poskanzer, A. M.; Powell, C. B.; Pruneau, C.; Pruthi, N. K.; Przybycien, M.; Pujahari, P. R.; Putschke, J.; Qiu, H.; Ramachandran, S.; Raniwala, R.; Raniwala, S.; Ray, R. L.; Riley, C. K.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Ross, J. F.; Roy, A.; Ruan, L.; Rusnak, J.; Sahoo, N. R.; Sahu, P. K.; Sakrejda, I.; Salur, S.; Sandacz, A.; Sandweiss, J.; Sangaline, E.; Sarkar, A.; Schambach, J.; Scharenberg, R. P.; Schmah, A. M.; Schmidke, B.; Schmitz, N.; Schuster, T. R.; Seger, J.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shao, M.; Sharma, B.; Sharma, M.; Shen, W. Q.; Shi, S. S.; Shou, Q. Y.; Sichtermann, E. P.; Singaraju, R. N.; Skoby, M. J.; Smirnov, D.; Smirnov, N.; Solanki, D.; Sorensen, P.; deSouza, U. G.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Stevens, J. R.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Suaide, A. A. P.; Suarez, M. 
C.; Sumbera, M.; Sun, X. M.; Sun, Y.; Sun, Z.; Surrow, B.; Svirida, D. N.; Symons, T. J. M.; Szanto de Toledo, A.; Takahashi, J.; Tang, A. H.; Tang, Z.; Tarini, L. H.; Tarnowsky, T.; Thomas, J. H.; Timmins, A. R.; Tlusty, D.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Trzeciak, B. A.; Tsai, O. D.; Turnau, J.; Ullrich, T.; Underwood, D. G.; Van Buren, G.; van Nieuwenhuizen, G.; Vanfossen, J. A., Jr.; Varma, R.; Vasconcelos, G. M. S.; Vertesi, R.; Videbæk, F.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wada, M.; Walker, M.; Wang, F.; Wang, G.; Wang, H.; Wang, J. S.; Wang, Q.; Wang, X. L.; Wang, Y.; Webb, G.; Webb, J. C.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y. F.; Xiao, Z.; Xie, W.; Xin, K.; Xu, H.; Xu, N.; Xu, Q. H.; Xu, W.; Xu, Y.; Xu, Z.; Yan; Yang, C.; Yang, Y.; Yang, Y.; Yepes, P.; Yi, L.; Yip, K.; Yoo, I.-K.; Zawisza, Y.; Zbroszczyk, H.; Zha, W.; Zhang, J. B.; Zhang, S.; Zhang, X. P.; Zhang, Y.; Zhang, Z. P.; Zhao, F.; Zhao, J.; Zhong, C.; Zhu, X.; Zhu, Y. H.; Zoulkarneeva, Y.; Zyzak, M.
2013-09-01
We present measurements of three-dimensional correlation functions of like-sign, low-transverse-momentum kaon pairs from √sNN = 200 GeV Au + Au collisions. A Cartesian surface-spherical harmonic decomposition technique was used to extract the kaon source function. The latter was found to have a three-dimensional Gaussian shape and can be adequately reproduced by Therminator event-generator simulations with resonance contributions taken into account. Compared to the pion source function, the kaon source function is generally narrower and does not have the long tail along the pair transverse momentum direction. The kaon Gaussian radii display a monotonic decrease with increasing transverse mass mT over the interval 0.55 ≤ mT ≤ 1.15 GeV/c². While the kaon radii are adequately described by mT-scaling in the outward and sideward directions, in the longitudinal direction the lowest-mT value exceeds the expectation from a pure hydrodynamical model prediction.
NASA Astrophysics Data System (ADS)
Watson, C.; Devine, Kathryn; Quintanar, N.; Candelaria, T.
2016-02-01
We survey 44 young stellar objects located near the edges of mid-IR-identified bubbles in CS (1-0) using the Green Bank Telescope. We detect emission in 18 sources, indicating young protostars that are good candidates for being triggered by the expansion of the bubble. We calculate CS column densities and abundances. Three sources show evidence of infall through non-Gaussian line-shapes. Two of these sources are associated with dark clouds and are promising candidates for further exploration of potential triggered star formation. We obtained on-the-fly maps in CS (1-0) of three sources, showing evidence of significant interactions between the sources and the surrounding environment.
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate is selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with average relative size differences of 5% and -5% for the LoG and template-based methods, respectively.
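A minimal sketch of the multi-scale LoG idea, using SciPy's `gaussian_laplace` on a synthetic blob. The σ²-scale-normalization is a standard choice; the pruning and sorting steps of the full method are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_responses(image, sigmas):
    """Scale-normalized LoG responses; bright blobs give strong positive values here."""
    return np.stack([-sigma**2 * gaussian_laplace(image, sigma) for sigma in sigmas])

# synthetic "nodule": a Gaussian blob of scale ~4 pixels centered at (row 34, col 30)
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
image = np.exp(-((xx - 30)**2 + (yy - 34)**2) / (2 * 4.0**2))

sigmas = [2.0, 3.0, 4.0, 5.0, 6.0]
responses = log_responses(image, sigmas)
s, r, c = np.unravel_index(np.argmax(responses), responses.shape)
print(sigmas[s], (r, c))   # strongest response at the blob's scale and center
```

The scale at which the normalized response peaks estimates the nodule size, and the peak location estimates its center, independently of where a nearby seed point was placed.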
Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction
NASA Astrophysics Data System (ADS)
Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying
2018-06-01
Four-state continuous-variable quantum key distribution (CVQKD) is a discretely modulated CVQKD protocol that generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature \hat{x} or \hat{p} itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of CVQKD in point-to-point quantum communication, since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but can also be easily implemented with existing technologies. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance. Furthermore, by taking the finite-size effect into account we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
Hessian eigenvalue distribution in a random Gaussian landscape
NASA Astrophysics Data System (ADS)
Yamada, Masaki; Vilenkin, Alexander
2018-03-01
The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, in the leading order of 1/N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient for the small eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.
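The role of the Hessian spectrum can be illustrated with a toy Monte Carlo: sample random symmetric (GOE-like) matrices and inspect their eigenvalues, whose bulk follows the Wigner semicircle on [-2, 2]. This is an unconditioned toy, not the distribution at potential minima studied in the paper:

```python
import numpy as np

def goe_hessian(n, rng):
    """Random symmetric (GOE-like) matrix as a toy landscape Hessian."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / np.sqrt(2 * n)     # off-diagonal variance 1/n

rng = np.random.default_rng(3)
eigs = np.concatenate([np.linalg.eigvalsh(goe_hessian(200, rng)) for _ in range(50)])
# the bulk of the spectrum lies within the Wigner semicircle support [-2, 2]
print(eigs.min(), eigs.max())
```

Conditioning on a minimum (all eigenvalues positive) reshapes this distribution near its lower edge, which is exactly the small-eigenvalue regime where the paper's sub-leading corrections and Dyson Brownian motion approach matter.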
Generalized Gaussian wave packet dynamics: Integrable and chaotic systems.
Pal, Harinder; Vyas, Manan; Tomsovic, Steven
2016-01-01
The ultimate semiclassical wave packet propagation technique is a complex, time-dependent Wentzel-Kramers-Brillouin method known as generalized Gaussian wave packet dynamics (GGWPD). It requires overcoming many technical difficulties in order to be carried out fully in practice. In its place roughly twenty years ago, linearized wave packet dynamics was generalized to methods that include sets of off-center, real trajectories for both classically integrable and chaotic dynamical systems that completely capture the dynamical transport. The connections between those methods and GGWPD are developed in a way that enables a far more practical implementation of GGWPD. The generally complex saddle-point trajectories at its foundation are found using a multidimensional Newton-Raphson root search method that begins with the set of off-center, real trajectories. This is possible because there is a one-to-one correspondence. The neighboring trajectories associated with each off-center, real trajectory form a path that crosses a unique saddle; there are exceptions that are straightforward to identify. The method is applied to the kicked rotor to demonstrate the accuracy improvement as a function of ℏ that comes with using the saddle-point trajectories.
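The Newton-Raphson root search at the heart of the method can be illustrated in its simplest one-variable complex form; the function below is a generic toy, not the GGWPD saddle-point condition itself:

```python
import numpy as np

def newton_raphson(f, df, z0, tol=1e-12, max_iter=50):
    """Complex-plane Newton-Raphson: refines a starting guess toward a root,
    analogous to refining a real trajectory toward a complex saddle trajectory."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# toy "saddle condition": f(z) = z**2 + 1 has complex roots +/- i
root = newton_raphson(lambda z: z**2 + 1, lambda z: 2 * z, 0.5 + 0.5j)
print(root)   # converges to 1j from a nearby starting guess
```

The one-to-one correspondence noted in the abstract is what makes this practical: each off-center real trajectory supplies a starting guess inside the basin of attraction of exactly one complex saddle.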
NASA Astrophysics Data System (ADS)
Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo
Prior studies have shown that low-symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high-symmetry particles, because convenient tracking points (TPs) are usually inconsistent with the center of hydrodynamic stress (CoH), where the translational and rotational motions are decoupled. In this paper we study the effects of translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arms. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for fixed initial orientation change from an elliptical to a crescent shape, and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observed that at long times the PDFs revert to Gaussian. This crescent shape of the 2D PDF provides a clear physical picture of the non-zero mean displacements observed in boomerang particles.
2D Affine and Projective Shape Analysis.
Bryner, Darshan; Klassen, Eric; Huiling Le; Srivastava, Anuj
2014-05-01
Current techniques for shape analysis tend to seek invariance to similarity transformations (rotation, translation, and scale), but certain imaging situations require invariance to larger groups, such as affine or projective groups. Here we present a general Riemannian framework for shape analysis of planar objects where metrics and related quantities are invariant to affine and projective groups. Highlighting two possibilities for representing object boundaries, ordered points (or landmarks) and parameterized curves, we study different combinations of these representations (points and curves) and transformations (affine and projective). Specifically, we provide solutions to three out of four situations and develop algorithms for computing geodesics and intrinsic sample statistics, leading up to Gaussian-type statistical models, and classifying test shapes using such models learned from training data. In the case of parameterized curves, we also achieve the desired goal of invariance to re-parameterizations. The geodesics are constructed by particularizing the path-straightening algorithm to the geometries of the current manifolds and are used, in turn, to compute shape statistics and Gaussian-type shape models. We demonstrate these ideas using a number of examples from shape and activity recognition.
Focal ratio degradation: a new perspective
NASA Astrophysics Data System (ADS)
Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss
2008-07-01
We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
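A sketch of the Gaussian-versus-Voigt comparison using SciPy's `voigt_profile`, with a synthetic noiseless profile standing in for measured FRD data (the width parameters are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, amp, sigma, gamma):
    """Voigt: Gaussian (modal diffusion) convolved with Lorentzian (scattering)."""
    return amp * voigt_profile(x, sigma, gamma)

def gauss(x, amp, sigma):
    return amp * np.exp(-x**2 / (2 * sigma**2))

x = np.linspace(-10, 10, 201)
# synthetic far-field FRD profile: a Voigt function with assumed widths
data = voigt(x, 1.0, 1.5, 0.8)

pv, _ = curve_fit(voigt, x, data, p0=[1.0, 1.0, 0.5])
pg, _ = curve_fit(gauss, x, data, p0=[0.2, 2.0])
res_v = np.sum((data - voigt(x, *pv))**2)
res_g = np.sum((data - gauss(x, *pg))**2)
print(res_v < res_g)   # the Voigt fit captures the scattering wings
```

The Gaussian fit systematically misses the Lorentzian wings, which is the energy displaced by FRD that the Voigt model lets one quantify.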
Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time
NASA Astrophysics Data System (ADS)
Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef
2018-04-01
Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
Lifting primordial non-Gaussianity above the noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com
2016-08-01
Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near-future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet, our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while, for local PNG, our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
Lyu, Siwei; Simoncelli, Eero P.
2011-01-01
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
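A minimal sketch of radial Gaussianization: map the empirical radii of elliptically symmetric data onto the chi distribution that the radii of a standard Gaussian would follow. The rank-based radial CDF estimate is an implementation convenience, not the authors' exact estimator:

```python
import numpy as np
from scipy.stats import chi, kurtosis

def radial_gaussianize(x):
    """Rescale each vector so the radii follow the chi law of a standard Gaussian."""
    r = np.linalg.norm(x, axis=1)
    ranks = (np.argsort(np.argsort(r)) + 0.5) / len(r)   # empirical radial CDF
    r_new = chi(df=x.shape[1]).ppf(ranks)                # target Gaussian radii
    return x * (r_new / r)[:, None]

rng = np.random.default_rng(4)
d, n = 4, 50_000
# elliptical, heavy-tailed source: Gaussian vectors with a shared random scale
# (this is a multivariate Student-t with 5 degrees of freedom)
g = rng.normal(size=(n, d))
scale = np.sqrt(5 / rng.chisquare(5, size=n))
x = g * scale[:, None]

y = radial_gaussianize(x)
print(kurtosis(x[:, 0]), kurtosis(y[:, 0]))   # heavy-tailed excess kurtosis -> ~0
```

No linear transform can undo the shared radial scale that couples the components, but this nonlinear radial remapping removes the dependency, which is the paper's central point.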
Focus on quantum Einstein gravity
NASA Astrophysics Data System (ADS)
Ambjorn, Jan; Reuter, Martin; Saueressig, Frank
2012-09-01
The gravitational asymptotic safety program summarizes the attempts to construct a consistent and predictive quantum theory of gravity within Wilson's generalized framework of renormalization. Its key ingredient is a non-Gaussian fixed point of the renormalization group flow which controls the behavior of the theory at trans-Planckian energies and renders gravity safe from unphysical divergences. Provided that the fixed point comes with a finite number of ultraviolet-attractive (relevant) directions, this construction gives rise to a consistent quantum field theory which is as predictive as an ordinary, perturbatively renormalizable one. This opens up the exciting possibility of establishing quantum Einstein gravity as a fundamental theory of gravity, without introducing supersymmetry or extra dimensions, and solely based on quantization techniques that are known to work well for the other fundamental forces of nature. While the idea of gravity being asymptotically safe was proposed by Steven Weinberg more than 30 years ago [1], the technical tools for investigating this scenario only emerged during the last decade. Here a key role is played by the exact functional renormalization group equation for gravity, which allows the construction of non-perturbative approximate solutions for the RG-flow of the gravitational couplings. Most remarkably, all solutions constructed to date exhibit a suitable non-Gaussian fixed point, lending strong support to the asymptotic safety conjecture. Moreover, the functional renormalization group also provides indications that the central idea of a non-Gaussian fixed point providing a safe ultraviolet completion also carries over to more realistic scenarios where gravity is coupled to a suitable matter sector like the standard model. 
These theoretical successes also triggered a wealth of studies focusing on the consequences of asymptotic safety in a wide range of phenomenological applications covering the physics of black holes, early time cosmology and the big bang, as well as TeV-scale gravity models testable at the Large Hadron Collider. On different grounds, Monte-Carlo studies of the gravitational partition function based on the discrete causal dynamical triangulations approach provide an a priori independent avenue towards unveiling the non-perturbative features of gravity. As a highlight, detailed simulations established that the phase diagram underlying causal dynamical triangulations contains a phase where the triangulations naturally give rise to four-dimensional, macroscopic universes. Moreover, there are indications for a second-order phase transition that naturally forms the discrete analog of the non-Gaussian fixed point seen in the continuum computations. Thus there is a good chance that the discrete and continuum computations will converge to the same fundamental physics. This focus issue collects a series of papers that outline the current frontiers of the gravitational asymptotic safety program. We hope that readers get an impression of the depth and variety of this research area as well as our excitement about the new and ongoing developments. References [1] Weinberg S 1979 General Relativity, an Einstein Centenary Survey ed S W Hawking and W Israel (Cambridge: Cambridge University Press)
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method for estimating the motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in the vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
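The EGI construction in the first steps can be sketched as a histogram of surface normals over orientation bins; the binning scheme below is an illustrative choice, and the segment matching and motion-estimation steps are omitted:

```python
import numpy as np

def extended_gaussian_image(normals, n_bins=8):
    """Histogram unit normals over (azimuth, inclination) bins: a crude EGI."""
    az = np.arctan2(normals[:, 1], normals[:, 0])      # azimuth in [-pi, pi]
    inc = np.arccos(np.clip(normals[:, 2], -1, 1))     # inclination in [0, pi]
    hist, _, _ = np.histogram2d(az, inc, bins=n_bins,
                                range=[[-np.pi, np.pi], [0, np.pi]])
    return hist / len(normals)

# toy point cloud on a plane z = const: every normal points along +z
normals = np.tile([0.0, 0.0, 1.0], (1000, 1))
egi = extended_gaussian_image(normals)
print(egi.sum(), egi.max())   # all mass lands in a single orientation bin
```

Because the EGI depends only on surface orientation, comparing EGIs (or distributions within EGI segments) of two successive point clouds isolates the rotational part of the motion before the translation is recovered.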
The semantic Stroop effect: An ex-Gaussian analysis.
White, Darcy; Risko, Evan F; Besner, Derek
2016-10-01
Previous analyses of the standard Stroop effect (which typically uses color words that form part of the response set) have documented effects on mean reaction times in hundreds of experiments in the literature. Less well known is the fact that ex-Gaussian analyses reveal that such effects are seen in (a) the mean of the normal distribution (mu), as well as in (b) the standard deviation of the normal distribution (sigma) and (c) the tail (tau). No ex-Gaussian analysis exists in the literature with respect to the semantically based Stroop effect (which contrasts incongruent color-associated words with, e.g., neutral controls). In the present experiments, we investigated whether the semantically based Stroop effect is also seen in the three ex-Gaussian parameters. Replicating previous reports, color naming was slower when the color was carried by an irrelevant (but incongruent) color-associated word (e.g., sky, tomato) than when the control items consisted of neutral words (e.g., keg, palace) in each of four experiments. An ex-Gaussian analysis revealed that this semantically based Stroop effect was restricted to the arithmetic mean and mu; no semantic Stroop effect was observed in tau. These data are consistent with the views (1) that there is a clear difference in the source of the semantic Stroop effect, as compared to the standard Stroop effect (evidenced by the presence vs. absence of an effect on tau), and (2) that interference associated with response competition on incongruent trials in tau is absent in the semantic Stroop effect.
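Fitting the three ex-Gaussian parameters (mu, sigma, tau) to reaction times can be sketched with SciPy's `exponnorm`, which parameterizes the ex-Gaussian via K = tau/sigma; the RT values below are synthetic and illustrative:

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(5)
mu, sigma, tau = 600.0, 50.0, 150.0        # ms; plausible RT-scale values
# ex-Gaussian RTs: a Gaussian component plus an independent exponential tail
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

K, loc, scale = exponnorm.fit(rts)         # maximum-likelihood fit
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(round(mu_hat), round(sigma_hat), round(tau_hat))
```

A semantic Stroop effect confined to mu would show up as a shift in `loc` between conditions with no change in the fitted tail parameter tau, which is the signature the experiments report.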
NASA Astrophysics Data System (ADS)
Colomb, Warren; Sarkar, Susanta K.
2015-06-01
We would like to thank all the commentators for their constructive comments on our paper. The commentators agree that a proper analysis of noisy single-molecule data is important for extracting meaningful and accurate information about the system. We concur with their views; indeed, motivating an accurate analysis of experimental data is precisely the point of our paper. After a model of the system of interest is constructed based on the experimental single-molecule data, it is very helpful to simulate the model to generate theoretical single-molecule data and analyze them in exactly the same way. In our experience, such a self-consistent approach involving experiments, simulations, and analyses often forces us to revise our model and make experimentally testable predictions. In light of comments from the commentators with different expertise, we would also like to point out that a single model should be able to connect different experimental techniques, because the underlying science does not depend on the experimental techniques used. Wohland [1] has made a strong case for fluorescence correlation spectroscopy (FCS) as an important experimental technique to bridge single-molecule and ensemble experiments. FCS is a very powerful technique that can measure ensemble parameters with single-molecule sensitivity. Therefore, it is logical to simulate any proposed model, predict both single-molecule data and FCS data, and confirm with experimental data. Fitting the diffraction-limited point spread function (PSF) of an isolated fluorescent marker to localize a labeled biomolecule is a critical step in many single-molecule tracking experiments. Flyvbjerg et al. [2] have rigorously pointed out some important drawbacks of the prevalent practice of fitting the diffraction-limited PSF with a 2D Gaussian. As we try to achieve more accurate and precise localization of biomolecules, we need to consider subtle points as mentioned by Flyvbjerg et al.
Shepherd [3] has mentioned specific examples of PSFs that have been used for localization and has rightly noted the importance of detector noise in single-molecule localization. Meroz [4] has pointed out more clearly that the signal itself can be noisy and that it is necessary to distinguish the noise of interest from the background noise. Krapf [5] has pointed out different origins of fluctuations in biomolecular systems and commented on their possible Gaussian and non-Gaussian nature. The importance of noise, along with the possibility that the noise itself can be the signal of interest, was discussed in our paper [6]; however, Meroz [4] and Krapf [5] have provided specific examples that guide readers more concretely. Sachs et al. [7] have discussed kinetic analysis in the presence of indistinguishable states and have pointed to the free software for general kinetic analysis that originated from their research.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability density functions, it seems hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration lies between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variable range is distributed equally over the part where the variables are allowed, as proposed by Shimada et al. 1998. A problem with this method, however, is that the probability that e.g. the sea-ice concentration is exactly zero is itself zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
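The delta-at-the-bound construction can be illustrated for a single scalar variable: the Gaussian mass below the bound becomes a point mass at the bound, so the probability that, e.g., a concentration is exactly zero is no longer zero. A minimal sketch (the mean, spread, and bound values here are assumed for illustration only):

```python
import math

def truncated_gaussian_with_delta(mu, sigma, lower=0.0):
    """Split N(mu, sigma^2) at a lower bound: the mass below the bound
    becomes a delta spike at the bound; the rest remains a truncated
    Gaussian density on [lower, inf)."""
    z = (lower - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Gaussian CDF at the bound
    delta_mass = cdf          # finite P(x == lower), e.g. P(concentration = 0)
    density_mass = 1.0 - cdf  # mass left in the truncated Gaussian part
    return delta_mass, density_mass

# Example: analysis mean 0.2, spread 0.3, non-negativity constraint at 0
d, g = truncated_gaussian_with_delta(mu=0.2, sigma=0.3, lower=0.0)
```

Under equal redistribution (Shimada et al. 1998) the spike mass `d` would instead be spread over the allowed range, so P(x = 0) would remain zero, which is exactly the drawback the abstract points out.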
A well-balanced scheme for Ten-Moment Gaussian closure equations with source term
NASA Astrophysics Data System (ADS)
Meena, Asha Kumari; Kumar, Harish
2018-02-01
In this article, we consider the Ten-Moment equations with source term, which occurs in many applications related to plasma flows. We present a well-balanced second-order finite volume scheme. The scheme is well-balanced for general equation of state, provided we can write the hydrostatic solution as a function of the space variables. This is achieved by combining hydrostatic reconstruction with contact preserving, consistent numerical flux, and appropriate source discretization. Several numerical experiments are presented to demonstrate the well-balanced property and resulting accuracy of the proposed scheme.
Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso
2017-01-01
Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood. PMID:28294963
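The eigenvalue-based shape features described above are commonly derived from the per-voxel covariance matrix; a minimal sketch of one plausible construction (the voxel size and the linearity/planarity/sphericity definitions are illustrative assumptions, not necessarily the paper's exact five feature vectors):

```python
import numpy as np

def pca_shape_features(points):
    """Eigenvalue-based shape features for one voxel's points.
    Returns (linearity, planarity, sphericity) from the sorted
    eigenvalues l1 >= l2 >= l3 of the 3x3 covariance matrix."""
    cov = np.cov(points.T)                       # 3x3 covariance of the support region
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1] # eigenvalues, descending
    l1, l2, l3 = np.maximum(lam, 1e-12)          # guard against degenerate zeros
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

def voxelize(points, size):
    """Group points into non-overlapping voxels of a regular grid."""
    keys = np.floor(points / size).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in voxels.items()}

# A flat slab of points should score high on planarity, low on sphericity
rng = np.random.default_rng(0)
planar = rng.normal(size=(200, 3)) * [1.0, 1.0, 0.01]
lin, pla, sph = pca_shape_features(planar)
```

Scatter, tubular, and planar supports are then separated by which of the three ratios dominates, and the per-voxel feature vector is what the NN/SVM/GP/GMM classifiers consume.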
Chandra's Ultimate Angular Resolution: Studies of the HRC-I Point Spread Function
NASA Astrophysics Data System (ADS)
Juda, Michael; Karovska, M.
2010-03-01
The Chandra High Resolution Camera (HRC) should provide an ideal imaging match to the High-Resolution Mirror Assembly (HRMA). The laboratory-measured intrinsic resolution of the HRC is 20 microns FWHM. HRC event positions are determined via a centroiding method rather than by using discrete pixels. This event position reconstruction method and any non-ideal performance of the detector electronics can introduce distortions in event locations that, when combined with spacecraft dither, produce artifacts in source images. We compare ray-traces of the HRMA response to "on-axis" observations of AR Lac and Capella as they move through their dither patterns to images produced from filtered event lists to characterize the effective intrinsic PSF of the HRC-I. A two-dimensional Gaussian, which is often used to represent the detector response, is NOT a good representation of the intrinsic PSF of the HRC-I; the actual PSF has a sharper peak and additional structure which will be discussed. This work was supported under NASA contract NAS8-03060.
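The 2D Gaussian model that the authors find inadequate for the HRC-I is nonetheless the conventional detector-response fit; a minimal sketch of that practice on a synthetic spot (all image parameters are assumed for illustration), using scipy.optimize.curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian: the conventional, if imperfect, PSF model."""
    x, y = xy
    return offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

# Synthetic "detector image": a Gaussian spot plus flat background and noise
yy, xx = np.mgrid[0:32, 0:32].astype(float)
truth = dict(amp=100.0, x0=15.3, y0=16.7, sigma=2.0, offset=5.0)
img = gauss2d((xx, yy), **truth)
rng = np.random.default_rng(1)
img = img + rng.normal(0.0, 1.0, img.shape)  # additive detector noise

p0 = (80.0, 16.0, 16.0, 3.0, 0.0)  # rough initial guess
popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), img.ravel(), p0=p0)
# popt[1], popt[2] recover the spot centroid to sub-pixel accuracy
```

For a PSF with a sharper peak and extra structure, as reported here for the HRC-I, this fit converges but systematically mis-states the wings, which is the point of comparing against the ray-traced HRMA response.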
Destabilization of confined granular packings due to fluid flow
NASA Astrophysics Data System (ADS)
Monloubou, Martin; Sandnes, Bjørnar
2016-04-01
Fluid flow through granular materials can cause fluidization when fluid drag exceeds the frictional stress within the packing. Fluid driven failure of granular packings is observed in both natural and engineered settings, e.g. soil liquefaction and flowback of proppants during hydraulic fracturing operations. We study experimentally the destabilization and flow of an unconsolidated granular packing subjected to a point source fluid withdrawal using a model system consisting of a vertical Hele-Shaw cell containing a water-grain mixture. The fluid is withdrawn from the cell at a constant rate, and the emerging flow patterns are imaged in time-lapse mode. Using Particle Image Velocimetry (PIV), we show that the granular flow gets localized in a narrow channel down the center of the cell, and adopts a Gaussian velocity profile similar to those observed in dry grain flows in silos. We investigate the effects of the experimental parameters (flow rate, grain size, grain shape, fluid viscosity) on the packing destabilization, and identify the physical mechanisms responsible for the observed complex flow behaviour.
CALIPSO Detection of an Asian Tropopause Aerosol Layer
NASA Technical Reports Server (NTRS)
Vernier, J.-P.; Thomason, L. W.; Kar, J.
2011-01-01
The first four years of CALIPSO lidar measurements have revealed the existence of an aerosol layer at the tropopause level associated with the Asian monsoon season in June, July and August. This Asian Tropopause Aerosol Layer (ATAL) extends geographically from the Eastern Mediterranean (down to North Africa) to Western China (down to Thailand), and vertically from 13 to 18 km. The scattering ratio inferred from CALIPSO shows values between 1.10 and 1.15 on average, with an associated depolarization ratio of less than 5%. The Gaussian distribution of the points indicates that the mean value is statistically driven by an enhancement of the background aerosol level and not by episodic events such as a volcanic eruption or cloud contamination. Further satellite observations of aerosols and gases, as well as field campaigns, are urgently needed to characterize this layer, which is likely to be a significant source of non-volcanic aerosols for the global upper troposphere, with a potential impact on its radiative and chemical balance.
Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yingchuan (College of Mathematics and Physics, University of South China, Hengyang 421001); Kuang, Leman
2011-05-15
In this paper, we propose a theoretical scheme of many-ghost imaging in terms of Nth-order correlated thermal light. We obtain the Gaussian thin lens equations in the many-ghost imaging protocol. We show that it is possible to produce N-1 ghost images of an object at different places in a nonlocal fashion by means of a higher order correlated imaging process with an Nth-order correlated thermal source and correlation measurements. We investigate the visibility of the ghost images in the scheme and obtain the upper bounds of the visibility for the Nth-order correlated thermal-light ghost imaging. It is found that the visibility of the ghost images can be dramatically enhanced when the order of correlation becomes larger. It is pointed out that the many-ghost imaging phenomenon is an observable physical effect induced by higher order coherence or higher order correlations of optical fields.
Testing Inflation with Large Scale Structure: Connecting Hopes with Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Marcello; Baldauf, T.; Bond, J. Richard
2014-12-15
The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure, however, is where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc (f_NL^eq), natural target levels of sensitivity are Δf_NL^{loc,eq} ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.
Super-resolution structured illumination in optically thick specimens without fluorescent tagging
NASA Astrophysics Data System (ADS)
Hoffman, Zachary R.; DiMarzio, Charles A.
2017-11-01
This research extends the work of Hoffman et al. to provide both sectioning and super-resolution using random patterns within thick specimens. Two methods of processing structured illumination in reflectance have been developed without the need for a priori knowledge of either the optical system or the modulation patterns. We explore the use of two deconvolution algorithms that assume either Gaussian or sparse priors. This paper shows that while both methods accomplish their intended objective, the sparse-priors method provides superior resolution and contrast on all tested targets, with anywhere from ~1.6× to ~2× resolution enhancement. The methods developed here can reasonably be implemented without a priori knowledge of the patterns or point spread function. Further, all experiments are run using an incoherent light source, unknown random modulation patterns, and without the use of fluorescent tagging. These additional modifications are challenging, but the generality of these methods makes them prime candidates for clinical application, providing super-resolved noninvasive sectioning in vivo.
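Of the two priors, the Gaussian one admits a closed-form solution: Wiener deconvolution in the Fourier domain (sparse priors require iterative solvers such as ISTA). A minimal 1D sketch with a synthetic scene and an assumed Gaussian PSF:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Wiener deconvolution: the closed-form MAP estimate under a
    Gaussian prior on the image."""
    H = np.fft.fft(psf, n=blurred.size)
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft(W * G))

# 1D toy example: two close point reflectors blurred by a Gaussian PSF
x = np.arange(128, dtype=float)
scene = np.zeros(128)
scene[60] = 1.0
scene[68] = 1.0
psf = np.exp(-0.5 * ((x - 8) / 3.0)**2)
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf, 128)))
restored = wiener_deconvolve(blurred, psf)
```

The restored signal separates the two reflectors that the blur merged; the `snr` term regularizes frequencies where the PSF transfers little energy, which is where a sparse prior can do better.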
Modelling plume dispersion pattern from a point source using spatial auto-correlational analysis
NASA Astrophysics Data System (ADS)
Ujoh, F.; Kwabe, D.
2014-02-01
The main objective of the study is to estimate the rate and model the pattern of plume rise from Dangote Cement Plc. A handheld Garmin GPS was employed to collect coordinates at one-kilometre intervals from the centre of the factory out to 10 kilometres. The plume rate was estimated using the Gaussian model, while Kriging in ArcGIS was adopted for modelling the pattern of plume dispersion over a 10-kilometre radius around the factory. An ANOVA test was applied for statistical analysis of the plume coefficients. The results indicate that plume dispersion is generally high, with the highest values recorded for atmospheric stability classes A and B and the lowest for classes F and E. The variograms derived from the Kriging reveal that the pattern of plume dispersion is outwardly radial and omni-directional. With the exception of 3 stability sub-classes (DH, EH and FH) out of a total of 12, the 24-hour average of particulate matter (PM10 and PM2.5) within the study area is drastically higher (highest value 21392.3) than the safety limit of 150 µg/m³ to 230 µg/m³ prescribed by the 2006 WHO guidelines. This indicates the presence of respirable and non-respirable pollutants that create poor ambient air quality. The study concludes that geospatial technology can be adopted for modelling the dispersion of pollutants from a point source, and recommends ameliorative measures to reduce the rate of plume emission at the factory.
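The Gaussian plume estimate used above has a standard closed form for a continuous point source with ground reflection; a minimal sketch (emission rate, wind speed, and dispersion coefficients are assumed values; in practice the sigmas come from the stability class and downwind distance):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) for a continuous point source:
    Q emission rate (g/s), u wind speed (m/s), y crosswind offset (m),
    z receptor height (m), H effective stack height (m). The dispersion
    coefficients sigma_y, sigma_z (m) encode downwind distance and
    stability class and are supplied externally."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline ground-level concentration with assumed sigmas for ~1 km downwind
c = gaussian_plume(Q=100.0, u=4.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=160.0, sigma_z=110.0)
```

Evaluating this at each GPS-surveyed receptor gives the point estimates that the Kriging step then interpolates into a dispersion surface.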
NASA Astrophysics Data System (ADS)
Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei
2018-06-01
For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
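The GP fits compared here reduce, at prediction time, to a kernel solve against the training energies; a minimal 1D sketch on a toy Morse potential (the potential, point counts, and kernel length are illustrative assumptions, not the paper's formaldehyde PES):

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=0.5, noise=1e-8):
    """Plain GP regression with a squared-exponential kernel, the kind of
    fit compared against neural networks for potential energy surfaces."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))  # small jitter for stability
    alpha = np.linalg.solve(K, y)         # weights from training energies
    return k(Xs, X) @ alpha               # posterior mean at test points

# Toy 1D "PES": a Morse potential sampled at a handful of points
def morse(r, De=0.2, a=1.5, re=1.4):
    return De * (1.0 - np.exp(-a * (r - re)))**2

X = np.linspace(1.0, 3.0, 12)     # training geometries
Xs = np.linspace(1.0, 3.0, 200)   # evaluation grid
pred = gp_fit_predict(X, morse(X), Xs)
rmse = np.sqrt(np.mean((pred - morse(Xs))**2))
```

The cubic-in-points cost of the solve is why GP comparisons are reported at fixed point-set sizes (625/1250/2500 in the abstract), and the smooth kernel is one reason GPs do well from few points.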
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data.
The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
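The likelihood step of the framework (Gaussian-kernel weights computed from forward-model misfits, then a weighted posterior estimate) can be sketched on a toy scalar problem; all numbers here are synthetic, and the real framework operates on full permittivity fields via SGSim and a curved-ray forward model:

```python
import numpy as np

def kernel_posterior(samples, misfits, bandwidth):
    """Gaussian-kernel likelihood weights from forward-model misfits,
    followed by a weighted posterior mean and variance."""
    w = np.exp(-0.5 * (misfits / bandwidth)**2)  # Gaussian kernel on misfit
    w /= w.sum()                                 # normalize to a posterior pmf
    mean = np.sum(w * samples)
    var = np.sum(w * (samples - mean)**2)
    return mean, var

# Toy inversion: true permittivity 5.0, forward model is identity plus noise
rng = np.random.default_rng(2)
prior_samples = rng.uniform(3.0, 8.0, 2000)          # stand-in for QMC prior samples
observed = 5.0
misfits = np.abs(prior_samples - observed) + rng.normal(0.0, 0.05, 2000)
mean, var = kernel_posterior(prior_samples, misfits, bandwidth=0.3)
```

In the time-lapse setting, this posterior would be fed back as the memory-function prior for the next dataset rather than restarting from the MRE prior.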