Bayesian analysis of anisotropic cosmologies: Bianchi VIIh and WMAP
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Josset, T.; Feeney, S. M.; Peiris, H. V.; Lasenby, A. N.
2013-12-01
We perform a definitive analysis of Bianchi VIIh cosmologies with Wilkinson Microwave Anisotropy Probe (WMAP) observations of the cosmic microwave background (CMB) temperature anisotropies. Bayesian analysis techniques are developed to study anisotropic cosmologies using full-sky and partial-sky masked CMB temperature data. We apply these techniques to analyse the full-sky internal linear combination (ILC) map and a partial-sky masked W-band map of WMAP 9 yr observations. In addition to the physically motivated Bianchi VIIh model, we examine phenomenological models considered in previous studies, in which the Bianchi VIIh parameters are decoupled from the standard cosmological parameters. In the two phenomenological models considered, Bayes factors of 1.7 and 1.1 units of log-evidence favouring a Bianchi component are found in full-sky ILC data. The corresponding best-fitting Bianchi maps recovered are similar for both phenomenological models and are very close to those found in previous studies using earlier WMAP data releases. However, no evidence for a phenomenological Bianchi component is found in the partial-sky W-band data. In the physical Bianchi VIIh model, we find no evidence for a Bianchi component: WMAP data thus do not favour Bianchi VIIh cosmologies over the standard Λ cold dark matter (ΛCDM) cosmology. It is not possible to discount Bianchi VIIh cosmologies in favour of ΛCDM completely, but we are able to constrain the vorticity of physical Bianchi VIIh cosmologies at (ω/H)₀ < 8.6 × 10⁻¹⁰ with 95 per cent confidence.
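As a quick illustration of the model-selection language used here, a difference in log-evidence maps to a Bayes factor by exponentiation. A minimal sketch; the 1.7 is the log-evidence difference quoted above, everything else is illustrative:

```python
import numpy as np

def bayes_factor(ln_evidence_m1, ln_evidence_m2):
    """Return the Bayes factor B12 = Z1/Z2 from two log-evidences."""
    return np.exp(ln_evidence_m1 - ln_evidence_m2)

delta_lnZ = 1.7                      # log-evidence difference from the abstract
print(bayes_factor(delta_lnZ, 0.0))  # ~5.5: odds of roughly 5:1 for the Bianchi component
```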
Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.
2006-01-01
A viewgraph presentation is given reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to the WMAP 1 year data, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.
Five-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Bayesian Estimation of Cosmic Microwave Background Polarization Maps
NASA Astrophysics Data System (ADS)
Dunkley, J.; Spergel, D. N.; Komatsu, E.; Hinshaw, G.; Larson, D.; Nolta, M. R.; Odegard, N.; Page, L.; Bennett, C. L.; Gold, B.; Hill, R. S.; Jarosik, N.; Weiland, J. L.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.
2009-08-01
We describe a sampling method to estimate the polarized cosmic microwave background (CMB) signal from observed maps of the sky. We use a Metropolis-within-Gibbs algorithm to estimate the polarized CMB map, containing Q and U Stokes parameters at each pixel, and its covariance matrix. These can be used as inputs for cosmological analyses. The polarized sky signal is parameterized as the sum of three components: CMB, synchrotron emission, and thermal dust emission. The polarized Galactic components are modeled with spatially varying power-law spectral indices for the synchrotron, and a fixed power law for the dust, and their component maps are estimated as by-products. We apply the method to simulated low-resolution maps with pixels of side 7.2 deg, using diagonal and full noise realizations drawn from the WMAP noise matrices. The CMB maps are recovered with goodness of fit consistent with errors. Computing the likelihood of the E-mode power in the maps as a function of optical depth to reionization, τ, for fixed temperature anisotropy power, we recover τ = 0.091 ± 0.019 for a simulation with input τ = 0.1, and mean τ = 0.098 averaged over 10 simulations. A "null" simulation with no polarized CMB signal has maximum likelihood consistent with τ = 0. The method is applied to the five-year WMAP data, using the K, Ka, Q, and V channels. We find τ = 0.090 ± 0.019, compared to τ = 0.086 ± 0.016 from the template-cleaned maps used in the primary WMAP analysis. The synchrotron spectral index, β, averaged over high signal-to-noise pixels with standard deviation σ(β) < 0.25, but excluding ~6% of the sky masked in the Galactic plane, is -3.03 ± 0.04. This estimate does not vary significantly with Galactic latitude, although it includes an informative prior. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
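A toy sketch of the Metropolis-within-Gibbs idea described here, applied to a one-parameter-pair analogue: an amplitude with a conjugate Gaussian conditional (Gibbs step) and a spectral index sampled by Metropolis. All names and numbers are illustrative, not the WMAP pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a power-law foreground across four channels plus white noise.
nu = np.array([23., 33., 41., 61.])          # channel frequencies in GHz
truth_a, truth_beta = 30.0, -3.0
sigma_n = 1.0
d = truth_a * (nu / 23.0) ** truth_beta + rng.normal(0, sigma_n, nu.size)

def loglike(a, beta):
    model = a * (nu / 23.0) ** beta
    return -0.5 * np.sum((d - model) ** 2) / sigma_n ** 2

a, beta = 25.0, -2.5
chain = []
for _ in range(5000):
    # Gibbs step: the conditional for 'a' given 'beta' is Gaussian (linear model)
    t = (nu / 23.0) ** beta
    var = sigma_n ** 2 / np.sum(t ** 2)
    mean = np.sum(d * t) / np.sum(t ** 2)
    a = rng.normal(mean, np.sqrt(var))
    # Metropolis step: propose a new 'beta' and accept or reject
    prop = beta + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < loglike(a, prop) - loglike(a, beta):
        beta = prop
    chain.append((a, beta))
```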
Information gains from cosmological probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandis, S.; Seehars, S.; Refregier, A.
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated by information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing, as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions, which include curvature and the dark energy equation-of-state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and H₀ measurements (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8σ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
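For concreteness, here is a minimal sketch of the relative-entropy computation for the special case of Gaussian prior and posterior, converted from nats to bits. The function and numbers are illustrative; the paper's analysis is not restricted to the Gaussian case:

```python
import numpy as np

# Relative entropy D(p||q) between Gaussians p = N(mu_p, C_p) (posterior)
# and q = N(mu_q, C_q) (prior), expressed in bits.
def relative_entropy_bits(mu_p, C_p, mu_q, C_q):
    k = len(mu_p)
    Cq_inv = np.linalg.inv(C_q)
    dmu = np.asarray(mu_p) - np.asarray(mu_q)
    nats = 0.5 * (np.trace(Cq_inv @ np.asarray(C_p)) + dmu @ Cq_inv @ dmu - k
                  + np.log(np.linalg.det(C_q) / np.linalg.det(C_p)))
    return nats / np.log(2.0)

# A tightened 1-D constraint with a shifted central value: both the precision
# gain and the 'surprise' term contribute bits.
print(relative_entropy_bits([0.30], [[0.02**2]], [0.32], [[0.05**2]]))
```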
Joint Bayesian Component Separation and CMB Power Spectrum Estimation
NASA Technical Reports Server (NTRS)
Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.
2008-01-01
We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.
Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Planets and Celestial Calibration Sources
NASA Astrophysics Data System (ADS)
Weiland, J. L.; Odegard, N.; Hill, R. S.; Wollack, E.; Hinshaw, G.; Greason, M. R.; Jarosik, N.; Page, L.; Bennett, C. L.; Dunkley, J.; Gold, B.; Halpern, M.; Kogut, A.; Komatsu, E.; Larson, D.; Limon, M.; Meyer, S. S.; Nolta, M. R.; Smith, K. M.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.
2011-02-01
We present WMAP seven-year observations of bright sources which are often used as calibrators at microwave frequencies. Ten objects are studied in five frequency bands (23-94 GHz): the outer planets (Mars, Jupiter, Saturn, Uranus, and Neptune) and five fixed celestial sources (Cas A, Tau A, Cyg A, 3C274, and 3C58). The seven-year analysis of Jupiter provides temperatures which are within 1σ of the previously published WMAP five-year values, with slightly tighter constraints on variability with orbital phase (0.2% ± 0.4%), and limits (but no detections) on linear polarization. Observed temperatures for both Mars and Saturn vary significantly with viewing geometry. Scaling factors are provided which, when multiplied by the Wright Mars thermal model predictions at 350 μm, reproduce WMAP seasonally averaged observations of Mars within ~2%. An empirical model is described which fits brightness variations of Saturn due to geometrical effects and can be used to predict the WMAP observations to within 3%. Seven-year mean temperatures for Uranus and Neptune are also tabulated. Uncertainties in Uranus temperatures are 3%-4% in the 41, 61, and 94 GHz bands; the smallest uncertainty for Neptune is 8% for the 94 GHz band. Intriguingly, the spectrum of Uranus appears to show a dip at ~30 GHz of unidentified origin, although the feature is not of high statistical significance. Flux densities for the five selected fixed celestial sources are derived from the seven-year WMAP sky maps and are tabulated for Stokes I, Q, and U, along with polarization fraction and position angle. Fractional uncertainties for the Stokes I fluxes are typically 1% to 3%. Source variability over the seven-year baseline is also estimated. Significant secular decrease is seen for Cas A and Tau A: our results are consistent with a frequency-independent decrease of about 0.53% per year for Cas A and 0.22% per year for Tau A. We present WMAP polarization data with uncertainties of a few percent for Tau A. Where appropriate, WMAP results are compared against previous findings in the literature. With an absolute calibration uncertainty of 0.2%, WMAP data are a valuable asset for calibration work. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Host, Ole; Lahav, Ofer; Abdalla, Filipe B.
We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.
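A hedged sketch of how such a Bayesian upper bound arises, assuming a Gaussian likelihood in m_β² and a prior uniform in m_β ≥ 0; the error value below is illustrative, not the KATRIN systematics model:

```python
import numpy as np

sigma_m2 = 0.018            # assumed 1-sigma error on m_beta^2 in eV^2 (illustrative)
m2_hat = 0.0                # null result: measured m_beta^2 = 0

m = np.linspace(0.0, 1.0, 20001)                        # grid in m_beta [eV]
post = np.exp(-0.5 * (m**2 - m2_hat)**2 / sigma_m2**2)  # flat prior in m_beta >= 0
post /= np.trapz(post, m)                               # normalize the posterior

cdf = np.cumsum(post) * (m[1] - m[0])
print("90%% credible upper bound: %.3f eV" % m[np.searchsorted(cdf, 0.9)])
```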
Confronting quasi-exponential inflation with WMAP seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in
2012-04-01
We confront quasi-exponential models of inflation with the WMAP seven-year data set using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and confront the resulting predictions with WMAP seven-year data using the publicly available code CAMB. The observable parameters are found to fare extremely well with the WMAP seven-year data. We also obtain a tensor-to-scalar amplitude ratio which may be detectable by PLANCK.
Statistical isotropy violation in WMAP CMB maps resulting from non-circular beams
NASA Astrophysics Data System (ADS)
Das, Santanu; Mitra, Sanjit; Rotti, Aditya; Pant, Nidhi; Souradeep, Tarun
2016-06-01
Statistical isotropy (SI) of cosmic microwave background (CMB) fluctuations is a key observational test to validate the cosmological principle underlying the standard model of cosmology. While a detection of SI violation would have immense cosmological ramifications, it is important to recognise its possible origin in systematic effects of observations. The WMAP seven year (WMAP-7) release claimed significant deviation from SI in the bipolar spherical harmonic (BipoSH) coefficients A^{20}_{ℓℓ} and A^{20}_{ℓ ℓ+2}. Here we present the first explicit reproduction of the measurements reported in WMAP-7, confirming that beam systematics alone can completely account for the measured SI violation. The possibility of such a systematic origin was alluded to in the WMAP-7 paper itself and by other authors, but not demonstrated explicitly enough to account for it accurately. We simulate CMB maps using the actual WMAP non-circular beams and scanning strategy. Our estimated BipoSH spectra from these maps match the WMAP-7 results very well. It is also evident that only a very careful and adequately detailed modelling, as carried out here, can conclusively establish that the entire signal arises from the non-circular beam effect. This is important since cosmic SI violation signals are expected to be subtle, and dismissing a large SI violation signal as an observational artefact based on simplistic plausibility arguments runs the serious risk of "throwing the baby out with the bathwater".
INCREASING EVIDENCE FOR HEMISPHERICAL POWER ASYMMETRY IN THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoftuft, J.; Eriksen, H. K.; Hansen, F. K.
Motivated by the recent results of Hansen et al. concerning a noticeable hemispherical power asymmetry in the Wilkinson Microwave Anisotropy Probe (WMAP) data on small angular scales, we revisit the dipole-modulated signal model introduced by Gordon et al. This model assumes that the true cosmic microwave background signal consists of a Gaussian isotropic random field modulated by a dipole, and is characterized by an overall modulation amplitude, A, and a preferred direction, p̂. Previous analyses of this model have been restricted to very low resolution (i.e., 3.6° pixels, a smoothing scale of 9° FWHM, and ℓ ≲ 40) due to computational cost. In this paper, we double the angular resolution (i.e., 1.8° pixels and 4.5° FWHM smoothing scale), and compute the full corresponding posterior distribution for the five-year WMAP data. The results from our analysis are the following: the best-fit modulation amplitude for ℓ ≤ 64 and the ILC data with the WMAP KQ85 sky cut is A = 0.072 ± 0.022, nonzero at 3.3σ, and the preferred direction points toward Galactic coordinates (l, b) = (224°, -22°) ± 24°. The corresponding results for ℓ ≲ 40 from earlier analyses were A = 0.11 ± 0.04 and (l, b) = (225°, -27°). The statistical significance of a nonzero amplitude thus increases from 2.8σ to 3.3σ when increasing ℓ_max from 40 to 64, and all results are consistent to within 1σ. Similarly, the Bayesian log-evidence difference with respect to the isotropic model increases from Δln E = 1.8 to Δln E = 2.6, ranking as 'strong evidence' on the Jeffreys' scale. The raw best-fit log-likelihood difference increases from Δln L = 6.1 to Δln L = 7.3. Similar, and often slightly stronger, results are found for other data combinations. Thus, we find that the evidence for a dipole power distribution in the WMAP data increases with ℓ in the five-year WMAP data set, in agreement with the reports of Hansen et al.
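For concreteness, a minimal simulation of the dipole-modulated model, T(n̂) = T_iso(n̂)[1 + A p̂·n̂], using the best-fit amplitude and direction quoted above; healpy is assumed, and the input power spectrum is a placeholder, not the best-fit ΛCDM spectrum:

```python
import numpy as np
import healpy as hp

nside, lmax = 64, 128
cl = 1.0 / (np.arange(lmax + 1) + 1.0) ** 2       # placeholder spectrum
cl[:2] = 0.0                                      # drop monopole and dipole

t_iso = hp.synfast(cl, nside)                     # isotropic Gaussian field
A = 0.072                                         # best-fit amplitude from the abstract
p = hp.ang2vec(np.radians(90 - (-22)), np.radians(224))  # (l, b) = (224, -22)

n = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))
t_mod = t_iso * (1.0 + A * p @ n)                 # dipole-modulated map
```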
Wilkinson Microwave Anisotropy Probe (WMAP) Attitude Estimation Filter Comparison
NASA Technical Reports Server (NTRS)
Harman, Richard R.
2005-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft was launched in June of 2001. The sensor complement of WMAP consists of two Autonomous Star Trackers (ASTs), two Fine Sun Sensors (FSSs), and a gyro package which contains redundancy about one of the WMAP body axes. The onboard attitude estimation filter consists of an extended Kalman filter (EKF) solving for attitude and gyro bias errors which are then resolved into a spacecraft attitude quaternion and gyro bias. A pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion, rate, and gyro bias. In this paper, the performance of the two filters is compared for the two major control modes of WMAP: inertial mode and observation mode.
Application of Bayesian model averaging to measurements of the primordial power spectrum
NASA Astrophysics Data System (ADS)
Parkinson, David; Liddle, Andrew R.
2010-11-01
Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
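A minimal sketch of the model-averaging step, assuming posterior samples and log-evidences are available for each model (e.g. from CosmoNest or MultiNest); all numbers and model names below are illustrative:

```python
import numpy as np

lnZ = {"HZ": -3851.2, "tilted": -3849.9}          # illustrative log-evidences
samples = {"HZ": np.full(5000, 1.0),              # n_s fixed to 1 in Harrison-Zel'dovich
           "tilted": np.random.default_rng(1).normal(0.96, 0.013, 5000)}

lnZmax = max(lnZ.values())
w = {m: np.exp(lnZ[m] - lnZmax) for m in lnZ}     # relative model weights
norm = sum(w.values())

# Model-averaged samples: draw from each model in proportion to its weight.
counts = {m: int(round(5000 * w[m] / norm)) for m in lnZ}
averaged = np.concatenate([np.random.default_rng(2).choice(samples[m], counts[m])
                           for m in lnZ])
print({m: w[m] / norm for m in lnZ})
```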
Results from the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Komatsu, E.; Bennett, C. L.
2015-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) mapped the distribution of temperature and polarization over the entire sky in five microwave frequency bands. These full-sky maps were used to obtain measurements of temperature and polarization anisotropy of the cosmic microwave background with unprecedented accuracy and precision. The analysis of two-point correlation functions of temperature and polarization data gives determinations of the fundamental cosmological parameters such as the age and composition of the universe, as well as the key parameters describing the physics of inflation, which is further constrained by three-point correlation functions. WMAP observations alone reduced the flat Λ cold dark matter (ΛCDM) cosmological model six-parameter volume by a factor of >68,000 compared with pre-WMAP measurements. The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino background, flatness of the spatial geometry of the universe, a deviation from a scale-invariant spectrum of initial scalar fluctuations, and that the current universe is undergoing an accelerated expansion. The WMAP observations provide the strongest ever support for inflation; namely, the structures we see in the universe originate from quantum fluctuations generated during inflation.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and Parameters from the WMAP Data
NASA Technical Reports Server (NTRS)
Dunkley, J.; Komatsu, E.; Nolta, M. R.; Spergel, D. N.; Larson, D.; Hinshaw, G.; Page, L.; Bennett, C. L.; Gold, B.; Jarosik, N.;
2008-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001, has mapped out the Cosmic Microwave Background with unprecedented accuracy over the whole sky. Its observations have led to the establishment of a simple concordance cosmological model for the contents and evolution of the universe, consistent with virtually all other astronomical measurements. The WMAP first-year and three-year data have allowed us to place strong constraints on the parameters describing the ΛCDM model: a flat universe filled with baryons, cold dark matter, neutrinos, and a cosmological constant, with initial fluctuations described by nearly scale-invariant power-law fluctuations, as well as placing limits on extensions to this simple model (Spergel et al. 2003, 2007). With all-sky measurements of the polarization anisotropy (Kogut et al. 2003; Page et al. 2007), two orders of magnitude smaller than the intensity fluctuations, WMAP has not only given us an additional picture of the universe as it transitioned from ionized to neutral at redshift z ≈ 1100, but also an observation of the later reionization of the universe by the first stars. In this paper we present cosmological constraints from WMAP alone, for both the ΛCDM model and a set of possible extensions. We also consider the consistency of WMAP constraints with other recent astronomical observations. This is one of seven five-year WMAP papers. Hinshaw et al. (2008) describe the data processing and basic results. Hill et al. (2008) present new beam models and window functions, Gold et al. (2008) describe the emission from Galactic foregrounds, and Wright et al. (2008) the emission from extra-Galactic point sources. The angular power spectra are described in Nolta et al. (2008), and Komatsu et al. (2008) present and interpret cosmological constraints based on combining WMAP with other data. WMAP observations are used to produce full-sky maps of the CMB in five frequency bands centered at 23, 33, 41, 61, and 94 GHz (Hinshaw et al. 2008). With five years of data, we are now able to place better limits on the ΛCDM model, as well as to move beyond it to test the composition of the universe, details of reionization, sub-dominant components, characteristics of inflation, and primordial fluctuations. We have more than doubled the amount of polarized data used for cosmological analysis, allowing a better measure of the large-scale E-mode signal (Nolta et al. 2008). To this end we describe an alternative way to remove Galactic foregrounds from low-resolution polarization maps in which Galactic emission is marginalized over, providing a cross-check of our results. With longer integration we also better probe the second and third acoustic peaks in the temperature angular power spectrum, and have many more year-to-year difference maps available for cross-checking systematic effects (Hinshaw et al. 2008).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
2015-05-01
A number of studies of WMAP and Planck data have claimed a power deficiency at low multipoles (especially the quadrupole) in the CMB power spectrum. Anomalies in the orientations of the low multipoles have also been claimed. There is a possibility that the power deficiency at low multipoles is not of primordial origin and is only an observational artifact arising from the scan procedure adopted by the WMAP or Planck satellites. Therefore, it is important to investigate all the observational artifacts that can mimic it. The CMB dipole, which is much larger than the quadrupole, can leak into the higher multipoles due to the non-symmetric beam shapes of WMAP or Planck. We observe that a non-negligible amount of power from the dipole can be transferred to the quadrupole and the higher multipoles due to the non-symmetric beam shapes, contaminating the observed measurements. The orientation of the quadrupole generated by this power transfer is surprisingly close to that of the quadrupole observed in the WMAP and Planck maps. However, our analysis shows that the orientation of the quadrupole cannot be explained by dipole power leakage alone. In this paper we calculate the amount of quadrupole power leakage for different WMAP bands. For Planck we present the results in terms of upper limits on the asymmetric beam parameters that can lead to a significant amount of power leakage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roshi, D. Anish; Plunkett, Adele; Rosero, Viviana
2012-04-10
Murray and Rahman used the Wilkinson Microwave Anisotropy Probe (WMAP) free-free foreground emission map to identify diffuse ionized regions (DIRs) in the Galaxy. It has been found that the 18 most luminous WMAP sources produce more than half of the total ionizing luminosity of the Galaxy. We observed radio recombination lines (RRLs) toward the luminous WMAP source G49.75-0.45 with the Green Bank Telescope near 1.4 GHz. A hydrogen RRL is detected toward the source but no helium line is detected, implying that n(He⁺)/n(H⁺) < 0.024. This limit puts a severe constraint on the ionizing spectrum. The total ionizing luminosity of G49 (3.05 × 10⁵¹ s⁻¹) is ~2.8 times the luminosity of all radio H II regions within this DIR, and this is generally the case for other WMAP sources. Murray and Rahman propose that the additional ionization is due to massive clusters (~7.5 × 10³ M_⊙ for G49) embedded in the WMAP sources. Such clusters should produce enough photons with energy ≥24.6 eV to fully ionize helium in the DIR. Our observations rule out a simple model with G49 ionized by a massive cluster. We also considered 'leaky' H II region models for the ionization of the DIR, suggested by Lockman and Anantharamaiah, but these models also cannot explain our observations. We estimate that the helium-ionizing photons need to be attenuated by a factor ≳10 to explain the observations. If selective absorption of He-ionizing photons by dust is causing this additional attenuation, then the ratio of dust absorption cross sections for He and H ionizing photons should be ≳6.
First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Beam Profiles and Window Functions
NASA Astrophysics Data System (ADS)
Page, L.; Barnes, C.; Hinshaw, G.; Spergel, D. N.; Weiland, J. L.; Wollack, E.; Bennett, C. L.; Halpern, M.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wright, E. L.
2003-09-01
Knowledge of the beam profiles is of critical importance for interpreting data from cosmic microwave background experiments. In this paper, we present the characterization of the in-flight optical response of the WMAP satellite. The main-beam intensities have been mapped to ≤-30 dB of their peak values by observing Jupiter with the satellite in the same observing mode as for CMB observations. The beam patterns closely follow the prelaunch expectations. The full width at half-maximum is a function of frequency and ranges from 0.82° at 23 GHz to 0.21° at 94 GHz; however, the beams are not Gaussian. We present (a) the beam patterns for all 10 differential radiometers, showing that the patterns are substantially independent of polarization in all but the 23 GHz channel; (b) the effective symmetrized beam patterns that result from WMAP's compound spin observing pattern; (c) the effective window functions for all radiometers and the formalism for propagating the window function uncertainty; and (d) the conversion factor from point-source flux to antenna temperature. A summary of the systematic uncertainties, which currently dominate our knowledge of the beams, is also presented. The constancy of Jupiter's temperature within a frequency band is an essential check of the optical system. The tests enable us to report a calibration of Jupiter to 1%-3% accuracy relative to the CMB dipole. WMAP is the result of a partnership between Princeton University and the NASA Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
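The quoted FWHM values translate into harmonic-space window functions; a sketch under the Gaussian-beam approximation (which, as stressed above, the real WMAP beams do not satisfy, so this is only the standard first approximation):

```python
import numpy as np

# Gaussian-beam window b_l = exp(-l(l+1) sigma^2 / 2), sigma = FWHM / sqrt(8 ln 2).
def gaussian_beam_window(fwhm_deg, lmax):
    sigma = np.radians(fwhm_deg) / np.sqrt(8.0 * np.log(2.0))
    ell = np.arange(lmax + 1)
    return np.exp(-0.5 * ell * (ell + 1) * sigma**2)

bl_K = gaussian_beam_window(0.82, 1000)   # 23 GHz
bl_W = gaussian_beam_window(0.21, 1000)   # 94 GHz
print(bl_K[200], bl_W[200])               # the K-band beam suppresses l=200 far more
```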
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
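A minimal PSO sketch on a toy likelihood surface, illustrating the velocity/position update that the paper applies to CMB parameter estimation; all constants and the target surface are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_loglike(theta):                    # toy chi-square surface
    return np.sum((theta - np.array([0.3, 0.7])) ** 2, axis=-1)

n_part, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration constants
x = rng.uniform(0, 1, (n_part, 2))         # particle positions
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), neg_loglike(x)  # personal bests
gbest = pbest[np.argmin(pbest_f)]          # global best

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_part, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = neg_loglike(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest)   # converges toward (0.3, 0.7)
```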
The Atacama Cosmology Telescope: Calibration with the Wilkinson Microwave Anisotropy Probe Using Cross-Correlations
NASA Technical Reports Server (NTRS)
Hajian, Amir; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John William; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, John R.; Brown, Ben;
2011-01-01
We present a new calibration method based on cross-correlations with the Wilkinson Microwave Anisotropy Probe (WMAP) and apply it to data from the Atacama Cosmology Telescope (ACT). ACT's observing strategy and mapmaking procedure allow an unbiased reconstruction of the modes in the maps over a wide range of multipoles. By directly matching the ACT maps to WMAP observations in the multipole range 400 < ℓ < 1000, we determine the absolute calibration with an uncertainty of 2% in temperature. The precise measurement of the calibration error directly impacts the uncertainties in the cosmological parameters estimated from the ACT power spectra. We also present a combined map based on ACT and WMAP data that has a high signal-to-noise ratio over a wide range of multipoles.
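A sketch of the estimator underlying this kind of calibration: a single gain factor g matching one experiment's spectra to another over the calibration band, here as a least-squares fit to stand-in spectra (not ACT or WMAP data):

```python
import numpy as np

ell = np.arange(400, 1001)
clWW = 1e4 / ell**2                       # placeholder WMAP auto-spectrum
g_true = 1.02
clAW = g_true * clWW                      # placeholder ACT x WMAP cross-spectrum

# Least-squares gain over the band: g = sum(C^{AxW} C^{WxW}) / sum((C^{WxW})^2)
g_hat = np.sum(clAW * clWW) / np.sum(clWW**2)
print(g_hat)                              # recovers 1.02
```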
NASA Astrophysics Data System (ADS)
Verschuur, Gerrit L.
2014-06-01
The archive of IRIS, PLANCK, and WMAP data available at the IRSA website of IPAC allows the apparent associations between galactic neutral hydrogen (HI) features and small-scale structure in WMAP and PLANCK data to be closely examined. In addition, new HI observations made with the Green Bank Telescope are used to perform a statistical test of putative associations. It is concluded that attention should be paid to the possibility that some of the small-scale structure found in WMAP and PLANCK data harbors the signature of a previously unrecognized source of high-frequency continuum emission in the Galaxy.
First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Signal Contamination from Sidelobe Pickup
NASA Astrophysics Data System (ADS)
Barnes, C.; Hill, R. S.; Hinshaw, G.; Page, L.; Bennett, C. L.; Halpern, M.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.
2003-09-01
Since the Galactic center is ~1000 times brighter than fluctuations in the cosmic microwave background (CMB), CMB experiments must carefully account for stray Galactic pickup. We present the level of contamination due to sidelobes for the first-year CMB maps produced by the Wilkinson Microwave Anisotropy Probe (WMAP) observatory. For each radiometer, full 4π sr antenna gain patterns are determined from a combination of numerical prediction and ground-based and space-based measurements. These patterns are convolved with the WMAP first-year sky maps and observatory scan pattern to generate the expected sidelobe signal contamination, for both intensity and polarized microwave sky maps. When the main beams are outside of the Galactic plane, we find rms values for the expected sidelobe pickup of 15, 2.1, 2.0, 0.3, and 0.5 μK for the K, Ka, Q, V, and W bands, respectively. Except at the K band, the rms polarized contamination is ≪1 μK. Angular power spectra of the Galactic pickup are presented. WMAP is the result of a partnership between Princeton University and the NASA Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verschuur, G. L.; Schmelz, J. T., E-mail: gverschu@naic.edu
Small-scale features observed by the Wilkinson Microwave Anisotropy Probe (WMAP) and PLANCK in the frequency range of 22-90 GHz show a nearly flat spectrum, which meets expectations that they originate in the early universe. However, free-free emission from electrons in small angular scale galactic sources that suffer beam dilution very closely mimics the observed spectrum in this frequency range. Fitting such a model to the PLANCK and WMAP data shows that the angular size required to fit the data is comparable to the angular width of associated HI filaments found in the Galactic Arecibo L-Band Feed Array HI survey data. Also, the temperature of the electrons is found to be in the range of 100-300 K. The phenomenon revealed by these data may contribute to a more precise characterization of the foreground masks required to interpret the cosmological aspect of PLANCK and WMAP data.
The Undiscovered World: Cosmology from WMAP
NASA Technical Reports Server (NTRS)
Bennett, Charles
2004-01-01
The first findings from a year of WMAP satellite operations provide a detailed full sky map of the cosmic microwave background radiation. The observed temperature anisotropy, combined with the associated polarization information, encodes a wealth of cosmological information. The results have implications for the history, content, and evolution of the universe, and its large scale properties. These and other aspects of the mission will be discussed.
Lack of large-angle TT correlations persists in WMAP and Planck
NASA Astrophysics Data System (ADS)
Copi, Craig J.; Huterer, Dragan; Schwarz, Dominik J.; Starkman, Glenn D.
2015-08-01
The lack of large-angle correlations in the observed microwave background temperature fluctuations persists in the final-year maps from the Wilkinson Microwave Anisotropy Probe (WMAP) and the first cosmological data release from Planck. We find a statistically robust and significant result: p-values for the missing correlations lie below 0.24 per cent (i.e. evidence at more than 3σ) for foreground-cleaned maps, in complete agreement with previous analyses based upon earlier WMAP data. A cut-sky analysis of the Planck HFI 100 GHz frequency band, the 'cleanest CMB channel' of this instrument, returns a p-value as small as 0.03 per cent, based on the conservative mask defined by WMAP. These findings are in stark contrast to expectations from the inflationary Lambda cold dark matter model and still lack a convincing explanation. If this lack of large-angle correlations is a true feature of our Universe, and not just a statistical fluke, then the cosmological dipole must be considerably smaller than that predicted in the best-fitting model.
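The statistic usually behind such statements is S_{1/2}, the integral of the squared two-point angular correlation function over angles larger than 60°; a sketch with a placeholder spectrum (not the WMAP or Planck C_ℓ):

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# C(theta) = sum_l (2l+1)/(4*pi) C_l P_l(cos theta);
# S_{1/2} integrates C(theta)^2 over cos(theta) in [-1, 1/2].
lmax = 100
ell = np.arange(lmax + 1)
cl = np.zeros(lmax + 1)
cl[2:] = 1000.0 / (ell[2:] * (ell[2:] + 1))   # placeholder, flat in l(l+1)C_l

coeffs = (2 * ell + 1) / (4 * np.pi) * cl     # Legendre coefficients of C(theta)

def S_half(coeffs, npts=200):
    x, w = leggauss(npts)                     # Gauss-Legendre nodes on [-1, 1]
    xm = 0.75 * x - 0.25                      # map nodes to [-1, 1/2]
    return 0.75 * np.sum(w * legval(xm, coeffs) ** 2)

print(S_half(coeffs))
```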
First observational tests of eternal inflation.
Feeney, Stephen M; Johnson, Matthew C; Mortlock, Daniel J; Peiris, Hiranya V
2011-08-12
The eternal inflation scenario predicts that our observable Universe resides inside a single bubble embedded in a vast inflating multiverse. We present the first observational tests of eternal inflation, performing a search for cosmological signatures of collisions with other bubble universes in cosmic microwave background data from the WMAP satellite. We conclude that the WMAP 7-year data do not warrant augmenting the Λ cold dark matter model with bubble collisions, constraining the average number of detectable bubble collisions on the full sky to N_s < 1.6 at 68% C.L. Data from the Planck satellite can be used to more definitively test the bubble-collision hypothesis.
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe's largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (~100 Gpc³) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Final Maps and Results
NASA Technical Reports Server (NTRS)
Bennett, C. L.; Larson, D.; Weiland, J. L.; Jarosik, N.; Hinshaw, G.; Odegard, N.; Smith, K. M.; Hill, R. S.; Gold, B.; Halpern, M.;
2013-01-01
We present the final nine-year maps and basic results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission. The full nine-year analysis of the time-ordered data provides updated characterizations and calibrations of the experiment. We also provide new nine-year full-sky temperature maps that were processed to reduce the asymmetry of the effective beams. Temperature and polarization sky maps are examined to separate cosmic microwave background (CMB) anisotropy from foreground emission, and both types of signals are analyzed in detail. We provide new point source catalogs as well as new diffuse and point source foreground masks. An updated template-removal process is used for cosmological analysis; new foreground fits are performed, and new foreground-reduced maps are presented. We now implement an optimal C⁻¹ weighting to compute the temperature angular power spectrum. The WMAP mission has resulted in a highly constrained ΛCDM cosmological model with precise and accurate parameters in agreement with a host of other cosmological measurements. When WMAP data are combined with finer scale CMB, baryon acoustic oscillation, and Hubble constant measurements, we find that big bang nucleosynthesis is well supported and there is no compelling evidence for a non-standard number of neutrino species (N_eff = 3.84 ± 0.40). The model fit also implies that the age of the universe is t₀ = 13.772 ± 0.059 Gyr, and the fit Hubble constant is H₀ = 69.32 ± 0.80 km/s/Mpc. Inflation is also supported: the fluctuations are adiabatic, with Gaussian random phases; the detection of a deviation of the scalar spectral index from unity, reported earlier by the WMAP team, now has high statistical significance (n_s = 0.9608 ± 0.0080); and the universe is close to flat/Euclidean (Ω_k = -0.0027 +0.0039/-0.0038). Overall, the WMAP mission has resulted in a reduction of the cosmological parameter volume by a factor of 68,000 for the standard six-parameter ΛCDM model, based on CMB data alone. For a model including tensors, the allowed seven-parameter volume has been reduced by a factor of 117,000. Other cosmological observations are in accord with the CMB predictions, and the combined data reduce the cosmological parameter volume even further. With no significant anomalies and an adequate goodness of fit, the inflationary flat ΛCDM model and its precise and accurate parameters rooted in WMAP data stand as the standard model of cosmology.
Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Larson, D.; Komatsu, E.; Spergel, D. N.; Bennett, C. L.; Dunkley, J.; Nolta, M. R.; Halpern, M.; Hill, R. S.; Odegard, N.;
2013-01-01
We present cosmological parameter constraints based on the final nine-year Wilkinson Microwave Anisotropy Probe (WMAP) data, in conjunction with a number of additional cosmological data sets. The WMAP data alone, and in combination, continue to be remarkably well fit by a six-parameter ΛCDM model. When WMAP data are combined with measurements of the high-ℓ cosmic microwave background anisotropy, the baryon acoustic oscillation scale, and the Hubble constant, the matter and energy densities Ω_b h², Ω_c h², and Ω_Λ are each determined to a precision of ~1.5%. The amplitude of the primordial spectrum is measured to within 3%, and there is now evidence for a tilt in the primordial spectrum at the 5σ level, confirming the first detection of tilt based on the five-year WMAP data. At the end of the WMAP mission, the nine-year data decrease the allowable volume of the six-dimensional ΛCDM parameter space by a factor of 68,000 relative to pre-WMAP measurements. We investigate a number of data combinations and show that their ΛCDM parameter fits are consistent. New limits on deviations from the six-parameter model are presented, for example: the fractional contribution of tensor modes is limited to r < 0.13 (95% CL); the spatial curvature parameter is limited to Ω_k = -0.0027 +0.0039/-0.0038; the summed mass of neutrinos is limited to Σm_ν < 0.44 eV (95% CL); and the number of relativistic species is found to lie within N_eff = 3.84 ± 0.40, when the full data are analyzed. The joint constraint on N_eff and the primordial helium abundance, Y_He, agrees with the prediction of standard big bang nucleosynthesis. We compare recent Planck measurements of the Sunyaev-Zel'dovich effect with our seven-year measurements, and show their mutual agreement. Our analysis of the polarization pattern around temperature extrema is updated. This confirms a fundamental prediction of the standard cosmological model and provides a striking illustration of acoustic oscillations and adiabatic initial conditions in the early universe.
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters evaluated with the full WMAP likelihood public package. The differences are not only due to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values for the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index n_S.
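A sketch of the pixel-based Gaussian likelihood at the heart of such an estimator, evaluated stably via a Cholesky factorization; the covariance and data below are stand-ins, not the WMAP low-resolution products:

```python
import numpy as np

# -2 ln L = r^T C^{-1} r + ln det C + n ln(2*pi), with C the pixel covariance.
def neg2_loglike(data, covariance):
    n = data.size
    L = np.linalg.cholesky(covariance)      # C = L L^T
    z = np.linalg.solve(L, data)            # z = L^{-1} r, so z.z = r^T C^{-1} r
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return z @ z + logdet + n * np.log(2.0 * np.pi)

rng = np.random.default_rng(0)
C = np.eye(100) + 0.1                        # toy signal+noise pixel covariance
d = rng.multivariate_normal(np.zeros(100), C)
print(neg2_loglike(d, C))
```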
Bayesian Analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey
2007-01-01
There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background! Experiments designed to map the microwave sky are returning a flood of data (time streams of instrument response as a beam is swept over the sky) at several different frequencies (from 30 to 900 GHz), all with different resolutions and noise properties. The resulting analysis challenge is to estimate, and quantify our uncertainty in, the spatial power spectrum of the cosmic microwave background given the complexities of "missing data", foreground emission, and complicated instrumental noise. A Bayesian formulation of this problem allows consistent treatment of many complexities, including complicated instrumental noise and foregrounds, and can be numerically implemented with Gibbs sampling. Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for low-resolution analysis (as demonstrated on WMAP 1 and 3 year temperature and polarization data). Development continues for Planck; the goal is to exploit the unique capabilities of Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to total uncertainty in cosmological parameters.
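A one-dimensional toy analogue of the Gibbs scheme described here, alternating draws of a signal given its variance and of the variance given the signal; the pair (s, C) stands in for the (map, power spectrum) pair, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, sigma_n = 500, 1.0
true_C = 4.0
d = rng.normal(0, np.sqrt(true_C), npix) + rng.normal(0, sigma_n, npix)

C = 1.0
for step in range(2000):
    # P(s | C, d): Gaussian with a Wiener-filter mean, pixel by pixel
    var_s = 1.0 / (1.0 / C + 1.0 / sigma_n**2)
    s = rng.normal(var_s * d / sigma_n**2, np.sqrt(var_s))
    # P(C | s): inverse-gamma draw (Jeffreys prior on the signal variance)
    C = np.sum(s**2) / rng.chisquare(npix)

print(C)   # samples scatter around the true signal variance of 4
```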
Primordial power spectrum: a complete analysis with the WMAP nine-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in
2013-07-01
We have further improved the error-sensitive Richardson-Lucy deconvolution algorithm, making it applicable directly to the un-binned measured angular power spectrum of Cosmic Microwave Background observations to reconstruct the form of the primordial power spectrum. This improvement makes the application of the method significantly more straightforward by removing some intermediate stages of analysis, allowing a reconstruction of the primordial spectrum with higher efficiency and precision and with lower computational expense. Applying the modified algorithm, we fit the WMAP 9 year data using the optimized reconstructed form of the primordial spectrum, with an improvement of more than 300 in χ²_eff with respect to the best-fit power law. This is clearly beyond the reach of other alternative approaches and reflects the efficiency of the proposed method in the reconstruction process, allowing us to look for any possible feature in the primordial spectrum projected in the CMB data. Though the proposed method allows us to look at various possibilities for the form of the primordial spectrum, all having good fits to the data, proper error analysis is needed to test for consistency with theoretical models since, along with possible physical artefacts, most of the features in the reconstructed spectrum might arise from fitting noise in the CMB data. The reconstructed error band for the form of the primordial spectrum, using many bootstrapped realizations based on the WMAP 9 year data, shows proper consistency of the power-law form of the primordial spectrum with the WMAP 9 data at all wavenumbers. Including WMAP polarization data in the analysis has not improved our results much due to its low quality, but we expect Planck data will allow us to make a full analysis of CMB observations in both temperature and polarization, separately and in combination.
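A minimal Richardson-Lucy iteration for a generic positive linear model d = G p, which is the structural core of the reconstruction described above; the kernel and spectrum below are toys, and the paper's error-sensitive modifications are not included:

```python
import numpy as np

def richardson_lucy(d, G, n_iter=500):
    p = np.ones(G.shape[1])                   # positive initial guess
    norm = G.sum(axis=0)                      # G^T 1
    for _ in range(n_iter):
        predicted = G @ p
        p = p * (G.T @ (d / predicted)) / norm   # multiplicative RL update
    return p

k = np.linspace(0, 1, 80)
G = np.exp(-0.5 * ((k[:, None] - k[None, :]) / 0.05) ** 2)   # toy smoothing kernel
p_true = 1.0 + 0.2 * np.sin(12 * k)                          # 'featureful' spectrum
d = G @ p_true
print(np.max(np.abs(richardson_lucy(d, G) - p_true)))        # residual after 500 iterations
```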
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-ℓ" analysis (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by inclusion of foreground model parameters in Gibbs sampling, and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-ℓ) regime. D) Future items to be included in the Bayesian framework include: 1) integration with a hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (e.g., beam uncertainties, noise estimation, calibration errors, and others).
Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-derived Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, D.; Bennett, C. L.; Gold, B.
2011-02-01
The WMAP mission has produced sky maps from seven years of observations at L2. We present the angular power spectra derived from the seven-year maps and discuss the cosmological conclusions that can be inferred from WMAP data alone. With the seven-year data, the temperature (TT) spectrum measurement has a signal-to-noise ratio per multipole that exceeds unity for ℓ < 919; and in band powers of width Δℓ = 10, the signal-to-noise ratio exceeds unity up to ℓ = 1060. The third acoustic peak in the TT spectrum is now well measured by WMAP. In the context of a flat ΛCDM model, this improvement allows us to place tighter constraints on the matter density from WMAP data alone, Ω_m h² = 0.1334 +0.0056/-0.0055, and on the epoch of matter-radiation equality, z_eq = 3196 +134/-133. The temperature-polarization (TE) spectrum is detected in the seven-year data with a significance of 20σ, compared to 13σ with the five-year data. We now detect the second dip in the TE spectrum near ℓ ~ 450 with high confidence. The TB and EB spectra remain consistent with zero, thus demonstrating low systematic errors and foreground residuals in the data. The low-ℓ EE spectrum, a measure of the optical depth due to reionization, is detected at 5.5σ significance when averaged over ℓ = 2-7: ℓ(ℓ + 1)C_ℓ^EE/(2π) = 0.074 +0.034/-0.025 μK² (68% CL). We now detect the high-ℓ, 24 ≤ ℓ ≤ 800, EE spectrum at over 8σ. The BB spectrum, an important probe of gravitational waves from inflation, remains consistent with zero; when averaged over ℓ = 2-7, ℓ(ℓ + 1)C_ℓ^BB/(2π) < 0.055 μK² (95% CL). The upper limit on tensor modes from polarization data alone is a factor of two lower with the seven-year data than it was using the five-year data. The data remain consistent with the simple ΛCDM model: the best-fit TT spectrum has an effective χ² of 1227 for 1170 degrees of freedom, with a probability to exceed of 9.6%. The allowable volume in the six-dimensional space of ΛCDM parameters has been reduced by a factor of 1.5 relative to the five-year volume, while the ΛCDM model that allows for tensor modes and a running scalar spectral index has a factor of three lower volume when fit to the seven-year data. We test the parameter recovery process for bias and find that the scalar spectral index, n_s, is biased high, but only by 0.09σ, while the remaining parameters are biased by <0.15σ. The improvement in the third peak measurement leads to tighter lower limits from WMAP on the number of relativistic degrees of freedom (e.g., neutrinos) in the early universe: N_eff > 2.7 (95% CL). Also, using WMAP data alone, the primordial helium mass fraction is found to be Y_He = 0.28 +0.14/-0.15, and with data from higher-resolution cosmic microwave background experiments included, we now establish the existence of pre-stellar helium at >3σ. These new WMAP measurements provide important tests of big bang cosmology.
CORRELATION ANALYSIS BETWEEN TIBET AS-γ TeV COSMIC RAY AND WMAP NINE-YEAR DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Qian-Qing; Zhang, Shuang-Nan, E-mail: zhangsn@ihep.ac.cn
2015-08-01
The WMAP team subtracted template-based foreground models to produce foreground-reduced maps, and masked point sources and uncertain sky regions directly; however, whether foreground residuals exist in the WMAP foreground-reduced maps is still an open question. Here, we use Pearson correlation coefficient analysis with Tibet AS-γ TeV cosmic ray (CR) data to probe possible foreground residuals in the WMAP nine-year data. The correlation results between the CR data and foreground-containing maps (WMAP foreground-unreduced maps, WMAP template-based, and Maximum Entropy Method foreground models) suggest that: (1) CRs can trace foregrounds in the WMAP data; (2) at least some TeV CRs originate from the Milky Way; (3) foregrounds may be related to the existence of CR anisotropy (loss-cone and tail-in structures); (4) there exist differences among different types of foregrounds in the declination range of <15°. We then generate 10,000 mock cosmic microwave background (CMB) sky maps to describe the cosmic variance, which is used to measure the effect of the fluctuations of all possible CMB maps on the correlations between CR and CMB maps. Finally, we perform correlation analysis between the CR data and the WMAP foreground-reduced maps, and find that: (1) there are significant anticorrelations; and (2) the WMAP foreground-reduced maps are credible. However, the significant anticorrelations may be accidental, and the higher signal-to-noise ratio Planck SMICA map cannot reject the hypothesis of accidental correlations. We therefore can only conclude that foreground residuals exist with ~95% probability.
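A sketch of the mock-based significance test described here: the Pearson coefficient between two maps, compared against its distribution over simulated skies; the maps below are stand-in pixel vectors, not the CR or WMAP data:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 3000
cr_map = rng.normal(size=npix)
cmb_map = -0.05 * cr_map + rng.normal(size=npix)     # built-in weak anticorrelation

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

r_obs = pearson(cr_map, cmb_map)
# Null distribution from mock skies uncorrelated with the CR map
r_mock = np.array([pearson(cr_map, rng.normal(size=npix)) for _ in range(10000)])
p_value = np.mean(r_mock <= r_obs)                   # one-sided test for anticorrelation
print(r_obs, p_value)
```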
The Effect of Systematics on Polarized Spectral Indices
NASA Astrophysics Data System (ADS)
Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.
2013-02-01
We study four particularly bright polarized compact objects (Tau A, Vir A, 3C 273, and For A) in the 7 year Wilkinson Microwave Anisotropy Probe (WMAP) sky maps, with the goal of understanding potential systematics involved in the estimation of foreground spectral indices. First, we estimate the spectral index, the polarization angle, the polarization fraction, and the apparent size and shape of these objects when smoothed to a nominal resolution of 1° FWHM. Second, we compute the spectral index as a function of polarization orientation, α. Because these objects are approximately point sources with constant polarization angle, this function should be constant in the absence of systematics. However, for the K and Ka band WMAP data we find strong index variations for all four sources. For Tau A, we find a spectral index of β = -2.59 ± 0.03 for α = 30°, and β = -2.03 ± 0.01 for α = 50°. On the other hand, the spectral index between the Ka and Q bands is found to be stable. A simple elliptical Gaussian toy model with parameters matching those observed in Tau A reproduces the observed signal, and shows that the spectral index is particularly sensitive to the detector polarization angle. Based on these findings, we first conclude that estimation of spectral indices with the WMAP K band polarization data at 1° scales is not robust. Second, we note that these issues may be of concern for ground-based and sub-orbital experiments that use the WMAP polarization measurements of Tau A for calibration of gain and polarization angles.
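The α-dependence test has a compact algebraic core: project Q and U onto an orientation α, P(α) = Q cos 2α + U sin 2α, and form the index β(α) = ln[P₁(α)/P₂(α)]/ln(ν₁/ν₂) between two bands. A sketch follows, with illustrative Q/U values rather than measured Tau A fluxes:

```python
# Spectral index as a function of polarization orientation alpha (toy values).
import numpy as np

def beta_of_alpha(Q1, U1, Q2, U2, nu1, nu2, alpha):
    """Index from polarized flux projected onto orientation alpha (radians)."""
    p1 = Q1 * np.cos(2 * alpha) + U1 * np.sin(2 * alpha)
    p2 = Q2 * np.cos(2 * alpha) + U2 * np.sin(2 * alpha)
    return np.log(np.abs(p1) / np.abs(p2)) / np.log(nu1 / nu2)

alphas = np.deg2rad(np.arange(10, 90, 20))
# toy source with the same polarization angle in both bands -> flat beta(alpha)
print(beta_of_alpha(1.0, 0.4, 0.45, 0.18, 22.8, 33.0, alphas))
```

For an ideal point source whose Q/U ratio is the same in both bands, β(α) comes out flat; the strong α-dependence reported for the K band is exactly the departure from this behavior that flags a systematic.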
First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Foreground Emission
NASA Astrophysics Data System (ADS)
Bennett, C. L.; Hill, R. S.; Hinshaw, G.; Nolta, M. R.; Odegard, N.; Page, L.; Spergel, D. N.; Weiland, J. L.; Wright, E. L.; Halpern, M.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.
2003-09-01
The WMAP mission has mapped the full sky to determine the geometry, content, and evolution of the universe. Full-sky maps are made in five microwave frequency bands to separate the temperature anisotropy of the cosmic microwave background (CMB) from foreground emission, including diffuse Galactic emission and Galactic and extragalactic point sources. We define masks that excise regions of high foreground emission, so CMB analyses can be carried out with minimal foreground contamination. We also present maps and spectra of the individual emission components, leading to an improved understanding of Galactic astrophysical processes. The effectiveness of template fits to remove foreground emission from the WMAP data is also examined. These efforts result in a CMB map with minimal contamination and a demonstration that the WMAP CMB power spectrum is insensitive to residual foreground emission. We use a maximum entropy method to construct a model of the Galactic emission components. The observed total Galactic emission matches the model to less than 1%, and the individual model components are accurate to a few percent. We find that the Milky Way resembles other normal spiral galaxies between 408 MHz and 23 GHz, with a synchrotron spectral index that is flattest (βs~-2.5) near star-forming regions, especially in the plane, and steepest (βs~-3) in the halo. This is consistent with a picture of relativistic cosmic-ray electron generation in star-forming regions and diffusion and convection within the plane. The significant synchrotron index steepening out of the plane suggests a diffusion process in which the halo electrons are trapped in the Galactic potential long enough to suffer synchrotron and inverse Compton energy losses and hence a spectral steepening. The synchrotron index is steeper in the WMAP bands than in lower frequency radio surveys, with a spectral break near 20 GHz to βs<-3. The modeled thermal dust spectral index is also steep in the WMAP bands, with βd~2.2. Our model is driven to these conclusions by the low level of total foreground contamination at ~60 GHz. Microwave and Hα measurements of the ionized gas agree well with one another at about the expected levels. Spinning dust emission is limited to <~5% of the Ka-band foreground emission, assuming a thermal dust distribution with a cold neutral medium spectrum and a monotonically decreasing synchrotron spectrum. A catalog of 208 point sources is presented. The reliability of the catalog is 98%; i.e., we expect five of the 208 sources to be statistically spurious. The mean spectral index of the point sources is α~0 (β~-2). Derived source counts suggest a contribution to the anisotropy power from unresolved sources of (15.0 ± 1.4) × 10⁻³ μK² sr at Q band and negligible levels at V band and W band. The Sunyaev-Zeldovich effect is shown to be a negligible "contamination" to the maps. WMAP is the result of a partnership between Princeton University and the NASA Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang Bizhu; Zhang Shuangnan; Lieu, Richard
2010-01-01
The spectral variation of the cosmic microwave background (CMB) as observed by WMAP was tested using foreground-reduced WMAP5 data, by producing subtraction maps at 1° angular resolution between the two cosmological bands of V and W, for masked sky areas that avoid the Galactic disk. The resulting V − W map revealed a non-acoustic signal over and above the WMAP5 pixel noise, with two main properties. First, it possesses quadrupole power at the ~1 μK level, which may be attributed to foreground residuals. Second, it also fluctuates at all values of ℓ > 2, especially on the 1° scale (200 ≲ ℓ ≲ 300). The behavior is random and symmetrical about zero temperature with an rms of ~7 μK, or 10% of the maximum CMB anisotropy, which would require a 'cosmic conspiracy' among the foreground components if it is a consequence of their existence. Both anomalies must be properly diagnosed and corrected if 'precision' cosmology is the claim. The second anomaly is, however, more interesting because it opens the question of whether the CMB anisotropy genuinely represents primordial density seeds.
Non-Gaussianities in the Local Curvature of the Five-Year WMAP Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.
Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ~1% level, testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level. Also the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
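The hill/lake/saddle statistic classifies pixels by the signs of the two eigenvalues of the temperature field's Hessian: both negative (hill), both positive (lake), mixed (saddle). A toy flat-sky sketch follows; real analyses differentiate on the sphere, and the field here is a synthetic Gaussian random field, not WMAP data.

```python
# Classify pixels of a smoothed 2-D Gaussian field as hills/lakes/saddles
# from the determinant and trace of the Hessian of second derivatives.
import numpy as np

rng = np.random.default_rng(1)
field = rng.standard_normal((128, 128))
# crude FFT low-pass smoothing, standing in for beam/filter smoothing
f = np.fft.rfft2(field)
ky = np.fft.fftfreq(128)[:, None]
kx = np.fft.rfftfreq(128)[None, :]
f *= np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))
field = np.fft.irfft2(f, s=field.shape)

dy, dx = np.gradient(field)
dyy, dyx = np.gradient(dy)
dxy, dxx = np.gradient(dx)
det = dxx * dyy - dxy * dyx           # product of Hessian eigenvalues
tr = dxx + dyy                        # sum of Hessian eigenvalues

hills = (det > 0) & (tr < 0)          # local maxima
lakes = (det > 0) & (tr > 0)          # local minima
saddles = det < 0
print("hill/lake/saddle fractions:",
      hills.mean(), lakes.mean(), saddles.mean())
```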
Induced CMB quadrupole from pointing offsets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, Adam; Scott, Douglas; Sigurdson, Kris, E-mail: adammoss@phas.ubc.ca, E-mail: dscott@phas.ubc.ca, E-mail: krs@phas.ubc.ca
2011-01-01
Recent claims in the literature have suggested that the WMAP quadrupole is not primordial in origin, and arises from an aliasing of the much larger dipole field because of incorrect satellite pointing. We attempt to reproduce this result and delineate the key physics leading to the effect. We find that, even if real, the induced quadrupole would be smaller than the WMAP value. We discuss reasons why the WMAP data are unlikely to suffer from this particular systematic effect, including the implications for observations of point sources. Given this evidence against the reality of the effect, the similarity between the pointing-offset-induced signal and the actual quadrupole then appears to be quite puzzling. However, we find that the effect arises from a convolution between the gradient of the dipole field and anisotropic coverage of the scan direction at each pixel. There is something of a directional conspiracy here: the dipole signal lies close to the Ecliptic Plane, and its direction, together with the WMAP scan strategy, results in a strong coupling to the Y_{2,−1} component in Ecliptic coordinates. The dominant strength of this component in the measured quadrupole suggests that one should exercise increased caution in interpreting its estimated amplitude. The Planck satellite has a different scan strategy which does not so directly couple the dipole and quadrupole in this way and will soon provide an independent measurement.
A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission
NASA Astrophysics Data System (ADS)
Chen, Xi
The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool to understand the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full-sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full-sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single-frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source-finding technique that utilizes the spectral difference between point sources and the CMB to form internal linear combinations of multifrequency maps that suppress the CMB and better reveal sources. When applied to the WMAP 41, 61, and 94 GHz maps, this technique has not only enabled detection of sources that were previously cataloged by independent methods, but has also allowed the disclosure of new sources. Without the noise contribution from the CMB, this method gains sensitivity rapidly with integration time. The number of detections varies as ∝ t^0.72 in the two-band search and ∝ t^0.70 in the three-band search from one year to five years, in comparison to ∝ t^0.40 for the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is proven to be both complementary to and competitive with all the current source-finding techniques in WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and the emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of the ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure can successfully recover the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.
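The CMB-free combination exploits the fact that, in thermodynamic temperature units, the CMB contributes identically in every band, so any linear combination whose weights sum to zero cancels it while sources with non-CMB spectra survive. A two-band toy sketch, with maps, source fluxes, and spectral index that are illustrative rather than the thesis's values:

```python
# Two-band "CMB-free" combination: weights (+1, -1) cancel the common CMB,
# while flat-spectrum point sources, which scale differently, remain.
import numpy as np

rng = np.random.default_rng(2)
npix = 10000
cmb = 70.0 * rng.standard_normal(npix)          # uK, identical in both bands
src = np.zeros(npix)
src[rng.integers(0, npix, 30)] = 500.0          # toy point sources at 41 GHz
spec = (94.0 / 41.0) ** -2.1                    # assumed source spectrum
m41 = cmb + src + 5.0 * rng.standard_normal(npix)
m94 = cmb + spec * src + 5.0 * rng.standard_normal(npix)

clean = m41 - m94                               # CMB cancels exactly
print("background rms in difference:", np.std(clean[src == 0]).round(1), "uK")
print("peak source response        :", clean.max().round(1), "uK")
```

The background rms of the difference is set by instrument noise alone, which is why such a search responds faster with integration time than a single-band search limited by CMB confusion.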
NASA Technical Reports Server (NTRS)
Weiland, J.L.; Hill, R.S.; Odegard, N.; Larson, D.; Bennett, C.L.; Dunkley, J.; Jarosik, N.; Page, L.; Spergel, D.N.; Halpern, M.;
2008-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale-invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: Showing that the dark matter must be non-baryonic and interact only weakly with atoms and radiation. The WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates. With five years of data and a better determination of our beam response, this measurement has been significantly improved. Precise determination of the density of atoms in the universe. The agreement between the atomic density derived from WMAP and the density inferred from the deuterium abundance is an important test of the standard big bang model. Determination of the acoustic scale at redshift z = 1090. Similarly, the recent measurement of baryon acoustic oscillations (BAO) in the galaxy power spectrum (Eisenstein et al. 2005) has determined the acoustic scale at redshift z ≈ 0.35. When combined, these standard rulers accurately measure the geometry of the universe and the properties of the dark energy. These data require a nearly flat universe dominated by dark energy consistent with a cosmological constant. Precise determination of the Hubble constant, in conjunction with BAO observations. Even when allowing curvature (Ω₀ ≠ 1) and a free dark energy equation of state (w ≠ −1), the acoustic data determine the Hubble constant to within 3%. The measured value is in excellent agreement with independent results from the Hubble Key Project (Freedman et al. 2001), providing yet another important consistency test for the standard model. Significant constraints on the basic properties of the primordial fluctuations. The anti-correlation seen in the temperature/polarization (TE) correlation spectrum on 4° scales implies that the fluctuations are primarily adiabatic and rules out defect models and isocurvature models as the primary source of fluctuations (Peiris et al. 2003).
First Evidence of Running Cosmic Vacuum: Challenging the Concordance Model
NASA Astrophysics Data System (ADS)
Solà, Joan; Gómez-Valent, Adrià; de Cruz Pérez, Javier
2017-02-01
Despite the fact that a rigid Λ-term is a fundamental building block of the concordance ΛCDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density ρ_Λ, together with a dynamical gravitational coupling G or a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the ΛCDM. In this paper, we discuss these “running vacuum models” (RVMs), in which ρ_Λ = ρ_Λ(H) consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such a generic structure is potentially linked to the quantum field theoretical description of the expanding universe. By performing an overall fit to the cosmological observables SN Ia+BAO+H(z)+LSS+BBN+CMB (in which the WMAP9, Planck 2013, and Planck 2015 data are taken into account), we find that the class of RVMs appears significantly more favored than the ΛCDM, namely, at an unprecedented level of ≳4.2σ. Furthermore, the Akaike and Bayesian information criteria confirm that the dynamical RVMs are strongly preferred compared to the conventional rigid Λ-picture of the cosmic evolution.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Foreground Emission
NASA Technical Reports Server (NTRS)
Gold, B.; Bennett, C.L.; Larson, D.; Hill, R.S.; Odegard, N.; Weiland, J.L.; Hinshaw, G.; Kogut, A.; Wollack, E.; Page, L.;
2008-01-01
We present a new estimate of foreground emission in the WMAP data, using a Markov chain Monte Carlo (MCMC) method. The new technique delivers maps of each foreground component for a variety of foreground models, error estimates of the uncertainty of each foreground component, and an overall goodness-of-fit measurement. The resulting foreground maps are in broad agreement with those from previous techniques used both within the collaboration and by other authors. We find that for WMAP data, a simple model with power-law synchrotron, free-free, and thermal dust components fits 90% of the sky with a reduced χ²_ν of 1.14. However, the model does not work well inside the Galactic plane. The addition of either synchrotron steepening or a modified spinning dust model improves the fit. This component may account for up to 14% of the total flux at Ka band (33 GHz). We find no evidence for foreground contamination of the CMB temperature map in the 85% of the sky used for cosmological analysis.
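A per-pixel version of such an MCMC fit can be sketched with a plain Metropolis sampler: a three-component model with free synchrotron amplitude and index, a free-free amplitude (index fixed at −2.14), and a dust amplitude (index fixed here at an illustrative 1.7). Band centers, noise, and priors are simplified stand-ins for the real pipeline, not the paper's configuration.

```python
# Metropolis sampling of a toy three-component foreground model for one pixel.
import numpy as np

bands = np.array([22.8, 33.0, 40.7, 60.8, 93.5])   # WMAP band centers, GHz
nu0 = 22.8

def model(p, nu):
    a_s, beta_s, a_ff, a_d = p
    return (a_s * (nu / nu0) ** beta_s              # synchrotron, free index
            + a_ff * (nu / nu0) ** -2.14            # free-free, fixed index
            + a_d * (nu / nu0) ** 1.7)              # dust, fixed toy index

rng = np.random.default_rng(3)
truth = np.array([100.0, -3.0, 30.0, 2.0])
sigma = 2.0
data = model(truth, bands) + sigma * rng.standard_normal(bands.size)

def chi2(p):
    return np.sum((data - model(p, bands)) ** 2) / sigma**2

p = np.array([80.0, -2.8, 20.0, 1.0])               # starting point
step = np.array([2.0, 0.05, 2.0, 0.2])               # proposal widths
chain, c2 = [], chi2(p)
for _ in range(20000):
    q = p + step * rng.standard_normal(4)
    c2q = chi2(q)
    if np.log(rng.random()) < 0.5 * (c2 - c2q):      # Metropolis accept rule
        p, c2 = q, c2q
    chain.append(p.copy())
posterior = np.array(chain[5000:])                   # drop burn-in
print("posterior mean:", posterior.mean(axis=0).round(2))
```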
The Cosmic Microwave Background Radiation - A Unique Window on the Early Universe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2009-01-01
The cosmic microwave background radiation is the remnant heat from the Big Bang. It provides us with a unique probe of conditions in the early universe, long before any organized structures had yet formed. The anisotropy in the radiation's brightness yields important clues about primordial structure and additionally provides a wealth of information about the physics of the early universe. Within the framework of inflationary dark matter models, observations of the anisotropy on sub-degree angular scales reveal the signatures of acoustic oscillations of the photon-baryon fluid at a redshift of approximately 1100. Data from the first five years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature and polarization anisotropy. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. WMAP, part of NASA's Explorers program, was launched on June 30, 2001. The WMAP satellite was produced in a partnership between the Goddard Space Flight Center and Princeton University. The WMAP team also includes researchers at the Johns Hopkins University; the Canadian Institute of Theoretical Astrophysics; University of Texas; Oxford University; University of Chicago; Brown University; University of British Columbia; and University of California, Los Angeles.
The Cosmic Microwave Background Radiation - A Unique Window on the Early Universe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2008-01-01
The cosmic microwave background radiation is the remnant heat from the Big Bang. It provides us with a unique probe of conditions in the early universe, long before any organized structures had yet formed. The anisotropy in the radiation's brightness yields important clues about primordial structure and additionally provides a wealth of information about the physics of the early universe. Within the framework of inflationary dark matter models, observations of the anisotropy on sub-degree angular scales reveal the signatures of acoustic oscillations of the photon-baryon fluid at a redshift of approximately 1100. Data from the first five years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature and polarization anisotropy. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. WMAP, part of NASA's Explorers program, was launched on June 30, 2001. The WMAP satellite was produced in a partnership between the Goddard Space Flight Center and Princeton University. The WMAP team also includes researchers at Johns Hopkins University; the Canadian Institute of Theoretical Astrophysics; University of Texas; Oxford University; University of Chicago; Brown University; University of British Columbia; and University of California, Los Angeles.
The Cosmic Microwave Background Radiation-A Unique Window on the Early Universe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary
2010-01-01
The cosmic microwave background radiation is the remnant heat from the Big Bang. It provides us with a unique probe of conditions in the early universe, long before any organized structures had yet formed. The anisotropy in the radiation's brightness yields important clues about primordial structure and additionally provides a wealth of information about the physics of the early universe. Within the framework of inflationary dark matter models, observations of the anisotropy on sub-degree angular scales reveal the signatures of acoustic oscillations of the photon-baryon fluid at a redshift of approximately 1100. Data from the first seven years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature and polarization anisotropy. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. WMAP, part of NASA's Explorers program, was launched on June 30, 2001. The WMAP satellite was produced in a partnership between the Goddard Space Flight Center and Princeton University. The WMAP team also includes researchers at the Johns Hopkins University; the Canadian Institute of Theoretical Astrophysics; University of Texas; Oxford University; University of Chicago; Brown University; University of British Columbia; and University of California, Los Angeles.
Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions
NASA Astrophysics Data System (ADS)
Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.
2009-02-01
Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
NASA Astrophysics Data System (ADS)
Skulachev, Dmitrii P.
2010-07-01
A comparison is made of cosmic microwave background anisotropy data obtained from the WMAP satellite in 2001-2006 and from the Relikt-1 satellite in 1983-1984. It is shown that the low-temperature area found by Relikt-1 is the location of the 'coldest spot' of the WMAP radio map. The mutual correlation of the two datasets is estimated and found to be positive for all sky regions surveyed. The conclusion is made that, with 98% probability, the Relikt-1 experiment had detected the same signal that was later identified by WMAP. A discussion is given of whether the Relikt-1 experiment parameters were chosen correctly.
Taking the Measure of the Universe: Cosmology from the WMAP Mission
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2006-01-01
The data from the first three years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission will be discussed. WMAP, part of NASA's Explorers program, was launched on June 30, 2001. The WMAP satellite was produced in a partnership between the Goddard Space Flight Center and Princeton University. The WMAP team also includes researchers at the Johns Hopkins University; the Canadian Institute of Theoretical Astrophysics; University of Texas; Cornell University; University of Chicago; Brown University; University of British Columbia; University of Pennsylvania; and University of California, Los Angeles.
NASA Astrophysics Data System (ADS)
Xu, Lixin
2012-06-01
In this paper, the holographic dark energy model, where the future event horizon is taken as an IR cutoff, is confronted with the currently available cosmic observational data sets, which include type Ia supernovae, baryon acoustic oscillations, and cosmic microwave background radiation from the full information of the WMAP 7-yr data. Via the Markov chain Monte Carlo method, we obtain the value of the model parameter c = 0.696, with 1σ, 2σ, and 3σ ranges of +0.0736/−0.0737, +0.159/−0.132, and +0.264/−0.190, respectively. Therefore, one can conclude that at at least the 3σ level the future Universe will be dominated by phantom-like dark energy. This is not consistent with the positive energy condition; however, this condition must be satisfied to derive the holographic bound. It implies that the current cosmic observational data disfavor the holographic dark energy model.
WMAP - A Portrait of the Early Universe
NASA Technical Reports Server (NTRS)
Wollack, Edward J.
2008-01-01
A host of astrophysical observations suggest that the early Universe was incredibly hot, dense, and homogeneous. A powerful probe of this time is provided by the relic radiation which we refer to today as the Cosmic Microwave Background (CMB). Images produced from this light contain the earliest glimpse of the Universe after the 'Big Bang' and the signature of the evolution of its contents. By exploiting these clues, constraints on the age, mass density, and geometry of the early Universe can be derived. A brief history of the evolution of the microwave radiometer systems and map-making approaches used in advancing these aspects of our understanding of cosmology will be reviewed. In addition, an overview of the results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP) will be presented.
Updated reduced CMB data and constraints on cosmological parameters
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Guo, Zong-Kuan; Tang, Bo
2015-07-01
We obtain the reduced CMB data {l_A, R, z*} from WMAP9, WMAP9+BKP, Planck+WP and Planck+WP+BKP for the ΛCDM and wCDM models with or without spatial curvature. We then use these reduced CMB data in combination with low-redshift observations to put constraints on cosmological parameters. We find that including BKP results in a higher value of the Hubble constant, especially when the equation of state (EOS) of dark energy and the curvature are allowed to vary. For the ΛCDM model with curvature, the estimate of the Hubble constant with Planck+WP+Lensing is inconsistent with the one derived from Planck+WP+BKP at about the 1.2σ confidence level (CL).
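For a flat ΛCDM background, the reduced quantities are R = √Ω_m H₀ r(z*)/c and l_A = π r(z*)/r_s(z*), with r the comoving distance and r_s the sound horizon at decoupling. A sketch follows; z* is fixed at 1090 rather than computed from a fitting formula, and the parameter values are illustrative, so the outputs are indicative only.

```python
# Compute the reduced CMB data {l_A, R, z*} for a toy flat LambdaCDM model.
import numpy as np
from scipy.integrate import quad

c = 299792.458                     # km/s
H0, om, ob = 70.0, 0.30, 0.046
og = 2.47e-5 / (H0 / 100) ** 2     # photon density parameter
orad = og * (1 + 0.2271 * 3.046)   # photons + massless neutrinos
zstar = 1090.0                     # fixed decoupling redshift (assumption)

def H(z):
    return H0 * np.sqrt(om * (1 + z) ** 3 + orad * (1 + z) ** 4
                        + (1 - om - orad))

r_star = quad(lambda z: c / H(z), 0, zstar)[0]          # comoving distance

def cs(z):                          # baryon-loaded sound speed
    Rb = 3 * ob / (4 * og * (1 + z))
    return c / np.sqrt(3 * (1 + Rb))

r_s = quad(lambda z: cs(z) / H(z), zstar, np.inf)[0]    # sound horizon

R = np.sqrt(om) * H0 * r_star / c                        # shift parameter
l_A = np.pi * r_star / r_s                               # acoustic scale
print(f"R = {R:.3f}, l_A = {l_A:.1f}, z* = {zstar}")
```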
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farhang, M.; Bond, J. R.; Netterfield, C. B.
2013-07-01
We use Bayesian estimation on direct T-Q-U cosmic microwave background (CMB) polarization maps to forecast errors on the tensor-to-scalar power ratio r, and hence on primordial gravitational waves, as a function of sky coverage f_sky. This map-based likelihood filters the information in the pixel-pixel space into the optimal combinations needed for r detection for cut skies, providing enhanced information over a first-step linear separation into a combination of E, B, and mixed modes, and ignoring the latter. With current computational power and for typical resolutions appropriate for r detection, the large matrix inversions required are accurate and fast. Our simulations explore two classes of experiments, with differing bolometric detector numbers, sensitivities, and observational strategies. One is motivated by a long duration balloon experiment like Spider, with pixel noise ∝ √f_sky for a specified observing period. This analysis also applies to ground-based array experiments. We find that, in the absence of systematic effects and foregrounds, an experiment with Spider-like noise concentrating on f_sky ≈ 0.02-0.2 could place a 2σ_r ≈ 0.014 boundary (~95% confidence level), which rises to 0.02 with an l-dependent foreground residual left over from an assumed efficient component separation. We contrast this with a Planck-like fixed instrumental noise as f_sky varies, which gives a Galaxy-masked (f_sky = 0.75) 2σ_r ≈ 0.015, rising to ≈0.05 with the foreground residuals. Using as the figure of merit the (marginalized) one-dimensional Shannon entropy of r, taken relative to the first 2003 WMAP CMB-only constraint, gives -2.7 bits from the 2012 WMAP9+ACT+SPT+LSS data, and forecasts of -6 bits from Spider (+ Planck); this compares with up to -11 bits for CMBPol, COrE, and PIXIE post-Planck satellites and -13 bits for a perfectly noiseless cosmic variance limited experiment. We thus confirm the wisdom of the current strategy for r detection of deeply probed patches covering the f_sky minimum-error trough with balloon and ground experiments.
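The figure of merit above is the one-dimensional Shannon entropy of the r posterior, quoted relative to a reference constraint. A minimal numerical sketch, assuming Gaussian toy posteriors whose widths (sigma_ref, sigma_new) are placeholders rather than the paper's likelihoods; for Gaussians the answer reduces to log2(sigma_new/sigma_ref):

```python
# Relative 1-D Shannon entropy (in bits) of two toy posteriors on a grid.
import numpy as np

x = np.linspace(-1, 1, 20001)
dx = x[1] - x[0]

def entropy_bits(p):
    """Differential Shannon entropy of a gridded 1-D density, in bits."""
    p = p / (p.sum() * dx)                 # normalize
    plogp = np.zeros_like(p)
    m = p > 0
    plogp[m] = p[m] * np.log2(p[m])
    return -plogp.sum() * dx

def gauss(s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

sigma_ref, sigma_new = 0.3, 0.007          # toy reference vs forecast widths
dH = entropy_bits(gauss(sigma_new)) - entropy_bits(gauss(sigma_ref))
print(f"relative entropy: {dH:.1f} bits")  # ~= log2(0.007/0.3) ~ -5.4
```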
First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Foreground Emission
NASA Technical Reports Server (NTRS)
Bennett, C. L.; Hill, R. S.; Hinshaw, G.; Nolta, M. R.; Odegard, N.; Page, L.; Spergel, D. N.; Weiland, J. L.; Wright, E. L.; Halpern, M.
2003-01-01
The WMAP mission has mapped the full sky to determine the geometry, content, and evolution of the universe. Full-sky maps are made in five microwave frequency bands to separate the temperature anisotropy of the cosmic microwave background (CMB) from foreground emission, including diffuse Galactic emission and Galactic and extragalactic point sources. We define masks that excise regions of high foreground emission, so CMB analyses can be carried out with minimal foreground contamination. We also present maps and spectra of the individual emission components, leading to an improved understanding of Galactic astrophysical processes. The effectiveness of template fits to remove foreground emission from the WMAP data is also examined. These efforts result in a CMB map with minimal contamination and a demonstration that the WMAP CMB power spectrum is insensitive to residual foreground emission. We use a Maximum Entropy Method to construct a model of the Galactic emission components. The observed total Galactic emission matches the model to less than 1%, and the individual model components are accurate to a few percent. We find that the Milky Way resembles other normal spiral galaxies between 408 MHz and 23 GHz, with a synchrotron spectral index that is flattest (β_s ≈ -2.5) near star-forming regions, especially in the plane, and steepest (β_s ≈ -3) in the halo. This is consistent with a picture of relativistic cosmic-ray electron generation in star-forming regions and diffusion and convection within the plane. The significant synchrotron index steepening out of the plane suggests a diffusion process in which the halo electrons are trapped in the Galactic potential long enough to suffer synchrotron and inverse Compton energy losses and hence a spectral steepening. The synchrotron index is steeper in the WMAP bands than in lower frequency radio surveys, with a spectral break near 20 GHz to β_s < -3. The modeled thermal dust spectral index is also steep in the WMAP bands, with β_d ≈ 2.2. Our model is driven to these conclusions by the low level of total foreground contamination at ≈60 GHz. Microwave and Hα measurements of the ionized gas agree well with one another at about the expected levels. Spinning dust emission is limited to less than 5% of the Ka-band foreground emission. A catalog of 208 point sources is presented. The reliability of the catalog is 98%; i.e., we expect five of the 208 sources to be statistically spurious. The mean spectral index of the point sources is α ≈ 0 (β ≈ -2). Derived source counts suggest a contribution to the anisotropy power from unresolved sources of (15.0 ± 1.4) × 10⁻³ μK² sr at Q band and negligible levels at V band and W band. The Sunyaev-Zeldovich effect is shown to be a negligible "contamination" to the maps.
Comparing Planck and WMAP: Maps, Spectra, and Parameters
NASA Astrophysics Data System (ADS)
Larson, D.; Weiland, J. L.; Hinshaw, G.; Bennett, C. L.
2015-03-01
We examine the consistency of the 9 yr WMAP data and the first-release Planck data. We specifically compare sky maps, power spectra, and the inferred Λ cold dark matter (ΛCDM) cosmological parameters. Residual dipoles are seen in the WMAP and Planck sky map differences, but their amplitudes are consistent within the quoted uncertainties, and they are not large enough to explain the widely noted differences in angular power spectra at higher l. We remove the residual dipoles and use templates to remove residual Galactic foregrounds; after doing so, the residual difference maps exhibit a quadrupole and other large-scale systematic structure. We identify this structure as possibly originating from Planck's beam sidelobe pick-up, but note that it appears to have insignificant cosmological impact. We develop an extension of the internal linear combination technique to find the minimum-variance difference between the WMAP and Planck sky maps; again we find features that plausibly originate in the Planck data. Lacking access to the Planck time-ordered data, we cannot further assess these features. We examine ΛCDM model fits to the angular power spectra and conclude that the ~2.5% difference in the spectra at multipoles greater than l ~ 100 is significant at the 3-5σ level, depending on how beam uncertainties are handled in the data. We revisit the analysis of WMAP's beam data to address the power spectrum differences and conclude that previously derived uncertainties are robust and cannot explain the power spectrum differences. In fact, any remaining WMAP errors are most likely to exacerbate the difference. Finally, we examine the consistency of the ΛCDM parameters inferred from each data set, taking into account the fact that both experiments observe the same sky but cover different multipole ranges, apply different sky masks, and have different noise. We find that, while individual parameter values agree within the uncertainties, the six parameters taken together are discrepant at the ~6σ level, with χ² = 56 for 6 degrees of freedom (probability to exceed, PTE = 3 × 10⁻¹⁰). The nature of this discrepancy is explored: of the six parameters, χ² is best improved by marginalizing over Ω_c h², giving χ² = 5.2 for 5 degrees of freedom. As an exercise, we find that perturbing the WMAP window function by its dominant beam error profile has little effect on Ω_c h², while perturbing the Planck window function by its corresponding error profile has a much greater effect on Ω_c h².
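The parameter-consistency statistic quoted above is a standard χ² of the parameter-difference vector against its covariance, χ² = Δp^T C⁻¹ Δp. A sketch follows; the difference vector and covariance are synthetic stand-ins, and only the final line, the PTE for χ² = 56 with 6 degrees of freedom, ties back to the quoted numbers.

```python
# Chi-square consistency test for a six-parameter difference vector.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C = A @ A.T + 6 * np.eye(6)          # toy covariance of the difference vector
dp = 0.1 * rng.standard_normal(6)    # toy WMAP-minus-Planck parameter shifts
chi2_val = dp @ np.linalg.solve(C, dp)
print("toy chi2:", round(chi2_val, 2))

# PTE for the quoted discrepancy: reproduces ~3e-10
print("PTE for chi2=56, 6 dof:", chi2.sf(56.0, 6))
```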
Possible detection of the M 31 rotation in WMAP data
NASA Astrophysics Data System (ADS)
de Paolis, F.; Gurzadyan, V. G.; Ingrosso, G.; Jetzer, Ph.; Nucita, A. A.; Qadir, A.; Vetrugno, D.; Kashin, A. L.; Khachatryan, H. G.; Mirzoyan, S.
2011-10-01
Data on the cosmic microwave background (CMB) radiation by the Wilkinson Microwave Anisotropy Probe (WMAP) had a profound impact on the understanding of a variety of physical processes in the early phases of the Universe and on the estimation of the cosmological parameters. Here, the 7-year WMAP data are used to trace the disk and the halo of the nearby giant spiral galaxy M 31. We analyzed the temperature excess in three WMAP bands (W, V, and Q) by dividing the region of the sky around M 31 into several concentric circular areas. An asymmetry in the mean microwave temperature in the M 31 disk along the direction of the M 31 rotation is observed with a temperature contrast up to ≃ 130 μK/pixel. We also find a temperature asymmetry in the M 31 halo, which is much weaker than for the disk, up to a galactocentric distance of about 10° (≃ 120 kpc) with a peak temperature contrast of about 40 μK/pixel. We studied the robustness of these possible detections by considering 500 random control fields in the real WMAP maps and simulating 500 sky maps from the best-fitted cosmological parameters. By comparing the obtained temperature contrast profiles with the real ones towards the M 31 galaxy, we find that the temperature asymmetry in the M 31 disk is fairly robust, while the effect in the halo is weaker. Although the confidence level of the signal is not high, if estimated purely statistically, which could be expected due to the weakness of the effect, the geometrical structure of the temperature asymmetry points towards a definite effect modulated by the rotation of the M 31 halo. This result might open a new way to probe these relatively less studied galactic objects using high-accuracy CMB measurements, such as those with the Planck satellite or planned balloon-based experiments, which could prove or disprove our conclusions.
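The annulus statistic can be sketched as follows: compute each pixel's angular distance from a fixed reference direction and average the map in rings. The pixel grid and temperatures below are synthetic, and the M 31 coordinates simply fix the reference direction; a real analysis would use HEALPix pixel centers and the actual WMAP band maps.

```python
# Mean map temperature in concentric annuli around a reference direction.
import numpy as np

def ang_sep(lon1, lat1, lon2, lat2):
    """Angular separation (rad) via the spherical law of cosines."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2)
        + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

rng = np.random.default_rng(5)
npix = 200000
lon = rng.uniform(0, 2 * np.pi, npix)
lat = np.arcsin(rng.uniform(-1, 1, npix))      # uniform points on the sphere
temp = 70.0 * rng.standard_normal(npix)        # uK, mock map

# M 31 direction (Galactic coordinates, radians), used only as a reference
l0, b0 = np.deg2rad(121.17), np.deg2rad(-21.57)
theta = ang_sep(lon, lat, l0, b0)

edges = np.deg2rad(np.arange(0, 11, 1))        # 1-degree annuli out to 10 deg
for lo, hi in zip(edges[:-1], edges[1:]):
    ring = (theta >= lo) & (theta < hi)
    print(f"{np.rad2deg(hi):4.0f} deg: mean T = {temp[ring].mean():6.2f} uK"
          f"  (n={ring.sum()})")
```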
DOE Office of Scientific and Technical Information (OSTI.GOV)
Story, K. T.; Keisler, R.; Benson, B. A.
2013-12-10
We present a measurement of the cosmic microwave background (CMB) temperature power spectrum using data from the recently completed South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. This measurement is made from observations of 2540 deg² of sky with arcminute resolution at 150 GHz, and improves upon previous measurements using the SPT by tripling the sky area. We report CMB temperature anisotropy power over the multipole range 650 < ℓ < 3000. We fit the SPT bandpowers, combined with the 7 yr Wilkinson Microwave Anisotropy Probe (WMAP7) data, with a six-parameter ΛCDM cosmological model and find that the two datasets are consistent and well fit by the model. Adding SPT measurements significantly improves ΛCDM parameter constraints; in particular, the constraint on θ_s tightens by a factor of 2.7. The impact of gravitational lensing is detected at 8.1σ, the most significant detection to date. This sensitivity of the SPT+WMAP7 data to lensing by large-scale structure at low redshifts allows us to constrain the mean curvature of the observable universe with CMB data alone to be Ω_k = −0.003 (+0.014, −0.018). Using the SPT+WMAP7 data, we measure the spectral index of scalar fluctuations to be n_s = 0.9623 ± 0.0097 in the ΛCDM model, a 3.9σ preference for a scale-dependent spectrum with n_s < 1. The SPT measurement of the CMB damping tail helps break the degeneracy that exists between the tensor-to-scalar ratio r and n_s in large-scale CMB measurements, leading to an upper limit of r < 0.18 (95% C.L.) in the ΛCDM+r model. Adding low-redshift measurements of the Hubble constant (H_0) and the baryon acoustic oscillation (BAO) feature to the SPT+WMAP7 data leads to further improvements. The combination of SPT+WMAP7+H_0+BAO constrains n_s = 0.9538 ± 0.0081 in the ΛCDM model, a 5.7σ detection of n_s < 1, and places an upper limit of r < 0.11 (95% C.L.) in the ΛCDM+r model. These new constraints on n_s and r have significant implications for our understanding of inflation, which we discuss in the context of selected single-field inflation models.
The Wilkinson Microwave Anisotropy Probe (WMAP) Source Catalog
NASA Technical Reports Server (NTRS)
Wright, E.L.; Chen, X.; Odegard, N.; Bennett, C.L.; Hill, R.S.; Hinshaw, G.; Jarosik, N.; Komatsu, E.; Nolta, M.R.; Page, L.;
2008-01-01
We present the list of point sources found in the WMAP 5-year maps. The technique used in the first-year and three-year analyses now finds 390 point sources, and the five-year source catalog is complete for regions of the sky away from the Galactic plane to a 2 Jy limit, with SNR greater than 4.7 in all bands in the least covered parts of the sky. The noise at high frequencies is still mainly radiometer noise, but at low frequencies the CMB anisotropy is the largest uncertainty. A separate search of CMB-free V-W maps finds 99 sources, of which all but one can be identified with known radio sources. The sources seen by WMAP are not strongly polarized. Many of the WMAP sources show significant variability from year to year, with more than a 2:1 range between the minimum and maximum fluxes.
Wilkinson Microwave Anisotropy Probe (WMAP) First Year Observations: TE Polarization
NASA Technical Reports Server (NTRS)
Kogut, A.; Spergel, D. N.; Barnes, C.; Bennett, C. L.; Halpern, M.; Hinshaw, G.; Jarosik, N.; Limon, M.; Meyer, S. S.; Page, L.;
2001-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) has mapped the full sky in Stokes I, Q, and U parameters at frequencies 23, 33, 41, 61, and 94 GHz. We detect correlations between the temperature and polarization maps significant at more than 10 standard deviations. The correlations are inconsistent with instrument noise and are significantly larger than the upper limits established for potential systematic errors. The correlations are present in all WMAP frequency bands with similar amplitude from 23 to 94 GHz, and are consistent with a superposition of a CMB signal with a weak foreground. The fitted CMB component is robust against different data combinations and fitting techniques. On small angular scales (θ < 5°), the WMAP data show the temperature-polarization correlation expected from adiabatic perturbations in the temperature power spectrum. The data for l > 20 agree well with the signal predicted solely from the temperature power spectra, with no additional free parameters. We detect excess power on large angular scales (θ > 10°) compared to predictions based on the temperature power spectra alone. The excess power is well described by reionization at redshift 11 < z_r < 30 at 95% confidence, depending on the ionization history. A model-independent fit to the reionization optical depth yields results consistent with the best-fit ΛCDM model, with best-fit value τ = 0.17 ± 0.04 at 68% confidence, including systematic and foreground uncertainties. This value is larger than expected given the detection of a Gunn-Peterson trough in the absorption spectra of distant quasars, and implies that the universe has a complex ionization history: WMAP has detected the signal from an early epoch of reionization.
Cosmic Topology: Studying The Shape And Size Of Our Universe
NASA Astrophysics Data System (ADS)
Yzaguirre, Amelia; Hajian, A.
2010-01-01
The question of the size and the shape of our universe is a very old problem that has received considerable attention over the past few years. The simplest cosmological model predicts that the mean density of the universe is very close to the critical density, admitting a local geometry of the universe that is flat. Current results from different cosmological observations confirm this to percent-level accuracy. General Relativity (being a local theory) only determines local geometry, which allows for the possibility of a multiply connected universe with a zero (or small) curvature. To study the global shape, or topology, of the universe, one can use cosmological observations on large scales. In this project we investigate the possibility of a 'small universe', that is, a compact finite space, by searching for planar symmetries in the CMB anisotropy maps provided by the five-year WMAP observations in two foreground-cleaned maps (the WMAP ILC map and the Tegmark et al. (TOH) map). Our results strongly suggest that the small universe model is not a viable topology for the universe.
Radio Source Contributions to the Microwave Sky
NASA Astrophysics Data System (ADS)
Boughn, S. P.; Partridge, R. B.
2008-03-01
Cross-correlations of the Wilkinson Microwave Anisotropy Probe (WMAP) full sky K-, Ka-, Q-, V-, and W-band maps with the 1.4 GHz NVSS source count map and the HEAO I A2 2-10 keV full sky X-ray flux map are used to constrain rms fluctuations due to unresolved microwave sources in the WMAP frequency range. In the Q band (40.7 GHz), a lower limit, taking account of only those fluctuations correlated with the 1.4 GHz radio source counts and X-ray flux, corresponds to an rms Rayleigh-Jeans temperature of ˜2 μK for a solid angle of 1 deg2 assuming that the cross-correlations are dominated by clustering, and ˜1 μK if dominated by Poisson fluctuations. The correlated fluctuations at the other bands are consistent with a β = -2.1 ± 0.4 frequency spectrum. If microwave sources are distributed similarly in redshift to the radio and X-ray sources and are similarly clustered, then the implied total rms microwave fluctuations correspond to ˜5 μK. While this value should be considered no more than a plausible estimate, it is similar to that implied by the excess, small angular scale fluctuations observed in the Q band by WMAP and is consistent with estimates made by extrapolating low-frequency source counts.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions
NASA Technical Reports Server (NTRS)
Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.;
2008-01-01
Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.
NASA Astrophysics Data System (ADS)
Davies, R. D.; Dickinson, C.; Banday, A. J.; Jaffe, T. R.; Górski, K. M.; Davis, R. J.
2006-08-01
Wilkinson Microwave Anisotropy Probe (WMAP) data when combined with ancillary data on free-free, synchrotron and dust allow an improved understanding of the spectrum of emission from each of these components. Here, we examine the sky variation at intermediate latitudes using a cross-correlation technique. In particular, we compare the observed emission in 15 selected sky regions to three `standard' templates. The free-free emission of the diffuse ionized gas is fitted by a well-known spectrum at K and Ka band, but the derived emissivity corresponds to a mean electron temperature of ~4000-5000 K. This is inconsistent with estimates from Galactic HII regions although a variation in the derived ratio of Hα to free-free intensity by a factor of ~2 is also found from region to region. The origin of the discrepancy is unclear. The anomalous emission associated with dust is clearly detected in most of the 15 fields studied. The anomalous emission correlates well with the Finkbeiner, Davis & Schlegel model 8 predictions (FDS8) at 94 GHz, with an effective spectral index between 20 and 60 GHz, of β ~ -2.85. Furthermore, the emissivity varies by a factor of ~2 from cloud to cloud. A modestly improved fit to the anomalous dust at K band is provided by modulating the template by an estimate of the dust colour temperature, specifically FDS8 × T^n. We find a preferred value n ~ 1.6, although there is a scatter from region to region. Nevertheless, the preferred index drops to zero at higher frequencies where the thermal dust emission dominates. The synchrotron emission steepens between GHz frequencies and the WMAP bands. There are indications of spectral index variations across the sky but the current data are not precise enough to accurately quantify this from region to region. Our analysis of the WMAP data indicates strongly that the dust-correlated emission at the low WMAP frequencies has a spectrum which is compatible with spinning dust; we find no evidence for a synchrotron component correlated with dust. The importance of these results for the correction of cosmic microwave background data for Galactic foreground emission is discussed.
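The cross-correlation technique amounts to a generalized least-squares fit of the sky data to a set of templates, a = (TᵗN⁻¹T)⁻¹ TᵗN⁻¹d. A minimal sketch with synthetic templates and diagonal noise follows; all inputs are placeholders, not the Hα, 408 MHz, or FDS8 maps used in the paper.

```python
# Generalized least-squares template fit with a diagonal noise covariance.
import numpy as np

rng = np.random.default_rng(6)
npix = 5000
templates = rng.standard_normal((npix, 3))     # toy stand-ins for 3 templates
a_true = np.array([3.0, 1.5, 10.0])            # toy coupling coefficients
noise_var = np.full(npix, 4.0)
data = templates @ a_true + np.sqrt(noise_var) * rng.standard_normal(npix)

Ninv = 1.0 / noise_var                          # diagonal N^-1
F = templates.T @ (templates * Ninv[:, None])   # T^t N^-1 T
b = templates.T @ (data * Ninv)                 # T^t N^-1 d
a_hat = np.linalg.solve(F, b)
a_err = np.sqrt(np.diag(np.linalg.inv(F)))
for name, a, e in zip(["free-free", "synchrotron", "dust"], a_hat, a_err):
    print(f"{name:12s}: {a:6.3f} +/- {e:.3f}")
```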
What do parameterized Om(z) diagnostics tell us in light of recent observations?
NASA Astrophysics Data System (ADS)
Qi, Jing-Zhao; Cao, Shuo; Biesiada, Marek; Xu, Teng-Peng; Wu, Yan; Zhang, Si-Xuan; Zhu, Zong-Hong
2018-06-01
In this paper, we propose a new parametrization for the Om(z) diagnostic and show how the most recent and significantly improved observations concerning the H(z) and SN Ia measurements can be used to probe the consistency or tension between the ΛCDM model and observations. Our results demonstrate that H₀ plays a very important role in the consistency test of ΛCDM with H(z) data. Adopting the Hubble constant priors from Planck 2013 and Riess, one finds considerable tension between the current H(z) data and the ΛCDM model, confirming the conclusions obtained previously by others. However, with the Hubble constant prior taken from WMAP9, the discrepancy between the H(z) data and ΛCDM disappears, i.e., the current H(z) observations still support the cosmological constant scenario. This conclusion is also supported by the results derived from the Joint Light-curve Analysis (JLA) SN Ia sample. The best-fit Hubble constant from the combination of H(z)+JLA (H₀ = 68.81 (+1.50, −1.49) km s⁻¹ Mpc⁻¹) is very consistent with the results derived from both Planck 2013 and WMAP9, but is significantly different from the recent local measurement by Riess.
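The Om(z) diagnostic itself is simple to evaluate: Om(z) = [E²(z) − 1]/[(1+z)³ − 1] with E(z) = H(z)/H₀, which is constant and equal to Ω_m for flat ΛCDM, so any trend with redshift signals tension. A sketch using mock H(z) points drawn from a fiducial model; the data and errors are illustrative, not the compilation used in the paper.

```python
# Evaluate the Om(z) diagnostic on mock H(z) measurements.
import numpy as np

H0, om = 70.0, 0.30

def Om_diag(z, Hz, H0):
    E2 = (Hz / H0) ** 2
    return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

rng = np.random.default_rng(7)
z = np.linspace(0.1, 2.0, 15)
Hz_true = H0 * np.sqrt(om * (1 + z) ** 3 + 1 - om)   # fiducial flat LCDM
Hz_obs = Hz_true + 3.0 * rng.standard_normal(z.size) # mock measurement noise

for zi, omi in zip(z, Om_diag(z, Hz_obs, H0)):
    print(f"z = {zi:4.2f}  Om(z) = {omi:5.3f}")      # scatters around 0.30
```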
Mapping the CMB with the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2007-01-01
The data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission results will be discussed and commented on. WMAP, part of NASA's Explorers program, was launched on June 30, 2001. The WMAP satellite was produced in a partnership between the Goddard Space Flight Center and Princeton University. The WMAP team also includes researchers at the Johns Hopkins University; the Canadian Institute of Theoretical Astrophysics; University of Texas; University of Chicago; Brown University; University of British Columbia; and University of California, Los Angeles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib
We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground contaminated maps. The power spectrum is estimated by using a model-independent method, which does not directly utilize the diffuse foreground templates nor the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foreground contamination by making linear combinations of individual maps in harmonic space, and (2) cross-correlation of foreground-cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrum using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in the TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power (l(l + 1)C_l/2π) is 532 μK², approximately 2.5 times the estimate (213.4 μK²) made by the WMAP team.
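Step (2), cross-correlating two maps of the same sky with independent noise so that the noise bias cancels, can be sketched with healpy (assumed available as a dependency); the input spectrum, maps, and noise levels below are synthetic.

```python
# Auto vs cross power spectra: <n1 n2> averages to zero in the cross spectrum,
# so the noise bias present in the auto spectrum drops out.
import numpy as np
import healpy as hp

nside, lmax = 64, 128
cl_in = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2      # toy input spectrum
sky = hp.synfast(cl_in, nside, lmax=lmax)            # common signal map

rng = np.random.default_rng(8)
npix = hp.nside2npix(nside)
map_a = sky + 0.01 * rng.standard_normal(npix)       # independent noise draws
map_b = sky + 0.01 * rng.standard_normal(npix)

cl_auto = hp.anafast(map_a, lmax=lmax)               # biased high by noise
cl_cross = hp.anafast(map_a, map_b, lmax=lmax)       # noise bias cancels
print("mean auto-minus-cross bias:", np.mean(cl_auto[2:] - cl_cross[2:]))
```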
Taking the Measure of the Universe: Cosmology from the WMAP Mission
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2003-01-01
The data from the first year of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide the first detailed full sky map of the cosmic microwave background radiation. The anisotropy in the radiation temperature provides a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission will be discussed. The WMAP satellite was built in a close partnership between Princeton University and the Goddard Space Flight Center.
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the Cl of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs to both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C_{l=2}^{TT} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
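The pixel-based likelihood referred to here is, for temperature alone, a multivariate Gaussian whose pixel-pixel covariance follows from the power spectrum through a Legendre sum. A toy temperature-only sketch of a conditional likelihood slice (real codes include polarization, realistic noise and masks; the resolution and spectrum here are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def pixel_cov(cl, cos_ang):
    """S_ij = sum_l (2l+1)/(4 pi) C_l P_l(cos theta_ij)."""
    coef = (2 * np.arange(len(cl)) + 1) / (4 * np.pi) * cl
    return legendre.legval(cos_ang, coef)

def loglike(cl, tmap, cos_ang, nvar=1e-4):
    """Exact Gaussian log-likelihood of a low-resolution T map."""
    C = pixel_cov(cl, cos_ang) + nvar * np.eye(len(tmap))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (tmap @ np.linalg.solve(C, tmap) + logdet)

# Toy sky: 200 random pixel directions, fiducial spectrum to l = 16.
v = rng.normal(size=(200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
cos_ang = np.clip(v @ v.T, -1, 1)
cl = np.zeros(17)
cl[2:] = 1.0 / (np.arange(2, 17) * np.arange(3, 18))
tmap = rng.multivariate_normal(np.zeros(200),
                               pixel_cov(cl, cos_ang) + 1e-4 * np.eye(200))

# Conditional slice in the quadrupole, all other C_l held fixed:
for c2 in [0.5, 1.0, 1.5, 2.0]:
    test = cl.copy()
    test[2] = c2 * cl[2]
    print(c2, loglike(test, tmap, cos_ang))
```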
Luminet, Jean-Pierre; Weeks, Jeffrey R; Riazuelo, Alain; Lehoucq, Roland; Uzan, Jean-Philippe
2003-10-09
The current 'standard model' of cosmology posits an infinite flat universe forever expanding under the pressure of dark energy. First-year data from the Wilkinson Microwave Anisotropy Probe (WMAP) confirm this model to spectacular precision on all but the largest scales. Temperature correlations across the microwave sky match expectations on angular scales narrower than 60 degrees but, contrary to predictions, vanish on scales wider than 60 degrees. Several explanations have been proposed. One natural approach questions the underlying geometry of space--namely, its curvature and topology. In an infinite flat space, waves from the Big Bang would fill the universe on all length scales. The observed lack of temperature correlations on scales beyond 60 degrees means that the broadest waves are missing, perhaps because space itself is not big enough to support them. Here we present a simple geometrical model of a finite space--the Poincaré dodecahedral space--which accounts for WMAP's observations with no fine-tuning required. The predicted density is Ω0 ≈ 1.013 > 1, and the model also predicts temperature correlations in matching circles on the sky.
Exact likelihood evaluations and foreground marginalization in low resolution WMAP data
NASA Astrophysics Data System (ADS)
Slosar, Anže; Seljak, Uroš; Makarov, Alexey
2004-06-01
The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favored cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK². On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK², the probability of having the true value above 1200 μK² (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving αs = −0.022 ± 0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ωm, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.
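Template marginalization of the kind described can be implemented by assigning the template modes formally infinite variance, in practice adding λ f fᵀ to the covariance with a very large λ, so the likelihood becomes insensitive to anything along f. A minimal sketch (λ and the synthetic template are illustrative):

```python
import numpy as np

def loglike_marg(C, tmap, templates, lam=1e6):
    """Gaussian log-likelihood with foreground templates projected out
    by inflating their variance: C -> C + lam * sum_k f_k f_k^T."""
    Cm = C + lam * sum(np.outer(f, f) for f in templates)
    _, logdet = np.linalg.slogdet(Cm)
    return -0.5 * (tmap @ np.linalg.solve(Cm, tmap) + logdet)

# Any component of the map along a template no longer affects the fit
# (up to O(1/lam) corrections):
rng = np.random.default_rng(2)
n = 100
C = np.eye(n)
f = rng.normal(size=n)
tmap = rng.normal(size=n)
print(loglike_marg(C, tmap, [f]))
print(loglike_marg(C, tmap + 3.0 * f, [f]))   # nearly identical value
```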
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
We present a measurement of the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz. The measurement uses maps with 1.4' angular resolution made with data from the Atacama Cosmology Telescope (ACT). The observations cover 228 deg² of the southern sky, in a 4.2°-wide strip centered on declination 53° south. The CMB at arcminute angular scales is particularly sensitive to the Silk damping scale, to the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters, and to emission by radio sources and dusty galaxies. After masking the 108 brightest point sources in our maps, we estimate the power spectrum in the range 600 < l < 8000 using the adaptive multi-taper method to minimize spectral leakage and maximize use of the full data set. Our absolute calibration is based on observations of Uranus. To verify the calibration and test the fidelity of our map at large angular scales, we cross-correlate the ACT map with the WMAP map and recover the WMAP power spectrum over 250 < l < 1150. The power beyond the Silk damping tail of the CMB (l ≈ 5000) is consistent with models of the emission from point sources. We quantify the contribution of SZ clusters to the power spectrum by fitting to a model normalized to σ8 = 0.8. We constrain the model's amplitude A_SZ < 1.63 (95% CL). If interpreted as a measurement of σ8, this implies σ8^SZ < 0.86 (95% CL) given our SZ model. A fit of ACT and WMAP five-year data jointly to a 6-parameter ΛCDM model plus point sources and the SZ effect is consistent with these results.
WMAP Observatory Thermal Design and On-Orbit Thermal Performance
NASA Technical Reports Server (NTRS)
Glazer, Stuart D.; Brown, Kimberly D.; Michalek, Theodore J.; Ancarrow, Walter C.
2003-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) observatory, launched June 30, 2001, is designed to measure the cosmic microwave background radiation with unprecedented precision and accuracy while orbiting the second Lagrange point (L2). The instrument cold stage must be cooled passively to <95K, and systematic thermal variations in selected instrument components controlled to less than 0.5 mK (rms) per spin period. This paper describes the thermal design and testing of the WMAP spacecraft and instrument. Flight thermal data for key spacecraft and instrument components are presented from launch through the first year of mission operations. Effects of solar flux variation due to the Earth's elliptical orbit about the sun, surface thermo-optical property degradations, and solar flares on instrument thermal stability are discussed.
Searching for CPT violation with cosmic microwave background data from WMAP and BOOMERANG.
Feng, Bo; Li, Mingzhe; Xia, Jun-Qing; Chen, Xuelei; Zhang, Xinmin
2006-06-09
We search for signatures of Lorentz and CPT violations in the cosmic microwave background (CMB) temperature and polarization anisotropies by using the Wilkinson Microwave Anisotropy Probe (WMAP) and the 2003 flight of BOOMERANG (B03) data. We note that if the Lorentz and CPT symmetries are broken by a Chern-Simons term in the effective Lagrangian, which couples the dual electromagnetic field strength tensor to an external four-vector, the polarization vectors of propagating CMB photons will be rotated. Using the WMAP data alone, one can put an interesting constraint on the size of such a term. Combined with the B03 data, we find that a nonzero rotation angle of the photons is mildly favored: [Formula: see text].
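The physical effect being constrained is a uniform rotation of each photon's polarization plane by an angle Δα, which mixes Q and U and converts E-mode power into parity-violating TB and EB correlations. A sketch of the standard rotation relations (uniform rotation assumed; the toy band powers are ours):

```python
import numpy as np

def rotate_stokes(Q, U, alpha):
    """Uniform rotation of the polarization plane by alpha (radians)."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return Q * c - U * s, Q * s + U * c

def rotated_spectra(cl_te, cl_ee, cl_bb, alpha):
    """Induced parity-violating spectra for a uniform rotation:
    TB sourced by TE, EB sourced by the EE-BB difference."""
    cl_tb = cl_te * np.sin(2 * alpha)
    cl_eb = 0.5 * (cl_ee - cl_bb) * np.sin(4 * alpha)
    return cl_tb, cl_eb

# Nonzero TB/EB in WMAP+B03 data is what constrains alpha:
cl_te, cl_ee, cl_bb = 1.0, 0.8, 0.0    # toy band powers, not real data
print(rotated_spectra(cl_te, cl_ee, cl_bb, np.radians(2.0)))
```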
Jee, M. James; Tyson, J. Anthony; Hilbert, Stefan; ...
2016-06-15
Here, we present a tomographic cosmic shear study from the Deep Lens Survey (DLS), which, providing a limiting magnitude r_lim ~ 27 (5σ), is designed as a precursor to the Large Synoptic Survey Telescope (LSST) survey with an emphasis on depth. Using five tomographic redshift bins, we study their auto- and cross-correlations to constrain cosmological parameters. We use a luminosity-dependent nonlinear model to account for the astrophysical systematics originating from intrinsic alignments of galaxy shapes. We find that the cosmological leverage of the DLS is among the highest of existing >10 deg² cosmic shear surveys. Combining the DLS tomography with the 9 yr results of the Wilkinson Microwave Anisotropy Probe (WMAP9) gives Ωm = 0.293 +0.012/−0.014, σ8 = 0.833 +0.011/−0.018, H0 = 68.6 +1.4/−1.2 km s⁻¹ Mpc⁻¹, and Ωb = 0.0475 ± 0.0012 for ΛCDM, reducing the uncertainties of the WMAP9-only constraints by ~50%. When we do not assume flatness for ΛCDM, we obtain the curvature constraint Ωk = −0.010 +0.013/−0.015 from the DLS+WMAP9 combination, which is not well constrained when WMAP9 is used alone. The dark energy equation-of-state parameter w is tightly constrained when baryon acoustic oscillation (BAO) data are added, yielding w = −1.02 +0.10/−0.09 with the DLS+WMAP9+BAO joint probe. The addition of supernova constraints further tightens the parameter to w = −1.03 ± 0.03. Our joint constraints are fully consistent with the final Planck results and also with the predictions of a ΛCDM universe.
NASA Technical Reports Server (NTRS)
Bennett, Charles
2004-01-01
The first findings from a year of WMAP satellite operations provide a detailed full sky map of the cosmic microwave background radiation. The observed temperature anisotropy, combined with the associated polarization information, encodes a wealth of cosmological information. The results have implications for the history, content, and evolution of the universe, and its large scale properties. These and other aspects of the mission will be discussed.
Polynomial interpretation of multipole vectors
NASA Astrophysics Data System (ADS)
Katz, Gabriel; Weeks, Jeff
2004-09-01
Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson Microwave Anisotropy Probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout's theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.
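A concrete realization of the polynomial picture is Maxwell's (Majorana) representation: the 2l roots of a polynomial built from the a_lm, mapped back to the sphere by inverse stereographic projection, give l antipodal pairs of directions, i.e. the multipole vectors. A sketch under that convention (coefficient normalizations differ between references, so treat this as illustrative rather than the authors' exact algorithm):

```python
import numpy as np
from math import comb

def multipole_vectors(alm):
    """alm: complex array [a_{l,-l}, ..., a_{l,l}] for a single l.
    Roots of P(z) = sum_m sqrt(C(2l, l+m)) a_{lm} z^{l+m} give the
    multipole directions via inverse stereographic projection."""
    l = (len(alm) - 1) // 2
    coeffs = [np.sqrt(comb(2 * l, l + m)) * alm[l + m]
              for m in range(-l, l + 1)]
    roots = np.roots(coeffs[::-1])     # np.roots wants highest degree first
    vecs = []
    for z in roots:
        # Convention: z = tan(theta/2) e^{i phi} -> unit vector (theta, phi)
        theta = 2 * np.arctan(abs(z))
        phi = np.angle(z)
        vecs.append([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    return np.array(vecs)   # 2l vectors in l antipodal pairs for a real map

# Toy quadrupole (l = 2) obeying the reality condition
# a_{l,-m} = (-1)^m conj(a_{lm}):
alm = np.array([1 - 2j, -0.5 + 1j, 0.3 + 0j, 0.5 + 1j, 1 + 2j])
print(multipole_vectors(alm))
```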
Wilkinson Microwave Anisotropy Probe (WMAP) Battery Operations Problem Resolution Team (PRT)
NASA Technical Reports Server (NTRS)
Keys, Denney J.
2010-01-01
The NASA Technical Discipline Fellow for Electrical Power was requested to form a Problem Resolution Team (PRT) to help assess the health of the flight battery currently operating aboard NASA's Wilkinson Microwave Anisotropy Probe (WMAP) and to provide recommendations for battery operations that mitigate the risk of impacting science operations for the rest of the mission. This report contains the outcome of the PRT's assessment.
A Bayesian Estimate of the CMB-Large-scale Structure Cross-correlation
NASA Astrophysics Data System (ADS)
Moura-Santos, E.; Carvalho, F. C.; Penna-Lima, M.; Novaes, C. P.; Wuensche, C. A.
2016-08-01
Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs-Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter- and dark energy (DE)-dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB-LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature map and a galaxy contrast map, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.
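The basic data product entering such an analysis is the cross-power spectrum between the temperature and galaxy-contrast maps, which in harmonic space is the m-averaged product of the two sets of alm. A minimal sketch with healpy (the maps below are synthetic stand-ins for the WMAP9 and 2MASS maps):

```python
import healpy as hp
import numpy as np

nside = 64
# Stand-ins: cmb_map plays the WMAP9 temperature map, gal_map a
# correlated galaxy-contrast tracer (both would normally be masked
# and degraded to a common resolution first).
cmb_map = hp.synfast(np.ones(3 * nside), nside)
gal_map = cmb_map + hp.synfast(np.ones(3 * nside), nside)

# anafast with two maps returns the cross-spectrum C_l^{Tg}
cl_cross = hp.anafast(cmb_map, gal_map)
print(cl_cross[:10])   # an ISW signal would live at the lowest multipoles
```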
Cosmic Microwave Background Data Analysis
NASA Astrophysics Data System (ADS)
Paykari, Paniez; Starck, Jean-Luc
2012-03-01
About 400,000 years after the Big Bang the temperature of the Universe fell to a few thousand degrees. As a result, the previously free electrons and protons combined and the Universe became neutral. This released a radiation which we now observe as the cosmic microwave background (CMB). The tiny fluctuations in the temperature and polarization of the CMB carry a wealth of cosmological information. These so-called temperature anisotropies were predicted as the imprints of the initial density perturbations which gave rise to the present large-scale structures such as galaxies and clusters of galaxies. This relation between the present-day Universe and its initial conditions has made the CMB radiation one of the most preferred tools for understanding the history of the Universe. The CMB radiation was discovered by radio astronomers Arno Penzias and Robert Wilson in 1965 [72] and earned them the 1978 Nobel Prize. This discovery supported the Big Bang theory and ruled out the only other available theory at that time, the steady-state theory. Crucial observations of the CMB radiation were made by the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite [86], which operated from 1989 to 1996. COBE made the most accurate measurements of the CMB frequency spectrum and confirmed it as being a black body to within experimental limits. This made the CMB spectrum the most precisely measured black-body spectrum in nature. The CMB has a thermal black-body spectrum at a temperature of 2.725 K: the spectrum peaks at a microwave frequency of 160.2 GHz, corresponding to a 1.9 mm wavelength. The results of COBE inspired a series of ground- and balloon-based experiments, which measured CMB anisotropies on smaller scales over the next decade. During the 1990s, the first acoustic peak of the CMB power spectrum (see Figure 5.1) was measured with increasing sensitivity, and by 2000 the BOOMERanG experiment [26] reported that the highest power fluctuations occur at scales of about one degree. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array [16], the Degree Angular Scale Interferometer (DASI) [61], and the Cosmic Background Imager (CBI) [78]. DASI was the first to detect the polarization of the CMB, and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the temperature spectrum. In June 2001, NASA launched its second CMB mission (after COBE), the Wilkinson Microwave Anisotropy Probe (WMAP) [44], to make much more precise measurements of the CMB sky. WMAP measured the differences in the CMB temperature across the sky, creating a full-sky map of the CMB in five different frequency bands. The mission also measured the CMB's E-mode and the foreground polarization. In October 2010, the WMAP spacecraft ended its mission after nine years of operation. Although WMAP provided very accurate measurements of the large angular-scale fluctuations in the CMB, it did not have the angular resolution to cover the smaller-scale fluctuations that had been observed by previous ground-based interferometers. A third space mission, the Planck Surveyor [1], was launched by ESA in May 2009 to measure the CMB on smaller scales than WMAP, as well as to make precise measurements of the polarization of the CMB.
Planck represents an advance over WMAP in several respects: it observes at higher resolution, hence allowing one to probe the CMB power spectrum to smaller scales; it has a higher sensitivity and observes in nine frequency bands rather than five, hence improving the astrophysical foreground models. The mission has a wide variety of scientific aims, including: (1) detecting the total intensity/polarization of the primordial CMB anisotropies; (2) creating a galaxy-cluster catalogue through the Sunyaev-Zel'dovich (SZ) effect [93]; (3) observing the gravitational lensing of the CMB and the integrated Sachs-Wolfe (ISW) effect [82]; (4) observing bright extragalactic radio and infrared sources; (5) observing the local interstellar medium, distributed synchrotron emission, and the galactic magnetic field; (6) studying the local Solar System (planets, asteroids, comets, and the zodiacal light). Planck is expected to yield data on a number of astronomical issues by 2012. It is thought that Planck measurements will mostly be limited by the efficiency of foreground removal, rather than by the detector performance or the duration of the mission; this is particularly important for the polarization measurements. Technological developments over the last two decades have accelerated the progress in observational cosmology. The interplay between new theoretical ideas and new observational data has taken cosmology from a purely theoretical domain into a field of rigorous experimental science, and we are now in what is called the precision cosmology era. The CMB measurements have made the inflationary Big Bang theory the standard model of the early Universe. This theory predicts a roughly Gaussian distribution for the initial conditions of the Universe. The power spectrum of these fluctuations agrees well with the observations, although certain observables, such as the overall amplitude of the fluctuations, remain free parameters of the cosmic inflation model.
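The black-body numbers quoted above are easy to verify: the peak of the Planck spectrum in frequency units solves x = 3(1 − e⁻ˣ) with x = hν/kT, giving x ≈ 2.821. A quick check:

```python
import numpy as np
from scipy.constants import h, k, c
from scipy.optimize import brentq

# Peak of B_nu: x = h*nu/(k*T) solves x = 3*(1 - exp(-x))
x = brentq(lambda x: x - 3 * (1 - np.exp(-x)), 1, 5)
T = 2.725
nu_peak = x * k * T / h
print(nu_peak / 1e9)        # ~160.2 GHz, as quoted above
print(c / nu_peak * 1e3)    # ~1.9 mm corresponding wavelength
```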
Planck 2015 results: V. LFI calibration
Ade, P. A. R.; Aghanim, N.; Ashdown, M.; ...
2016-09-20
In this paper, we present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent Solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature from WMAP and a general overhaul of the iterative calibration code increases the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. Finally, we provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.
Planck 2015 results. V. LFI calibration
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Romelli, E.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent Solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature from WMAP and a general overhaul of the iterative calibration code increases the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. We provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.
A hint of Poincaré dodecahedral topology in the WMAP first year sky map
NASA Astrophysics Data System (ADS)
Roukema, B. F.; Lew, B.; Cechowska, M.; Marecki, A.; Bajtlik, S.
2004-09-01
It has recently been suggested by Luminet et al. (2003) that the WMAP data are better matched by a geometry in which the topology is that of a Poincaré dodecahedral model and the curvature is "slightly" spherical, rather than by an (effectively) infinite flat model. A general back-to-back matched circles analysis by Cornish et al. (2003) for angular radii in the range 25°-90°, using a correlation statistic for signal detection, failed to support this. In this paper, a matched circles analysis specifically designed to detect dodecahedral patterns of matched circles is performed over angular radii in the range 1°-40° on the one-year WMAP data. Signal detection is attempted via a correlation statistic and an rms difference statistic. Extreme value distributions of these statistics are calculated for one orientation of the 36° "screw motion" (Clifford translation) when matching circles, for the opposite screw motion, and for a zero (unphysical) rotation. The most correlated circles appear for circle radii of α = 11° ± 1°, for the left-handed screw motion, but not for the right-handed one, nor for the zero rotation. The favoured six dodecahedral face centres in galactic coordinates are (l, b) ≈ (252°, +65°), (51°, +51°), (144°, +38°), (207°, +10°), (271°, +3°), (332°, +25°) and their opposites. The six pairs of circles independently each favour a circle angular radius of 11° ± 1°. The temperature fluctuations along the matched circles are plotted and are clearly highly correlated. Whether these six circle pairs centred on dodecahedral faces match via a 36° rotation only because of unexpected statistical properties of the WMAP ILC map, or whether they match because of global geometry, it is clear that the WMAP ILC map has some unusual statistical properties which mimic a potentially interesting cosmological signal.
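The correlation statistic used in matched-circle searches compares temperatures sampled along candidate circle pairs; a common form, following Cornish et al. up to weighting details, is S = 2⟨T₁T₂⟩/⟨T₁² + T₂²⟩, equal to 1 for perfectly matched circles and near 0 for unrelated ones. A toy sketch (the sample shift stands in for the 36° screw motion):

```python
import numpy as np

def match_statistic(t1, t2, shift=0):
    """S = 2<T1*T2>/<T1^2 + T2^2> with t2 rotated by 'shift' samples
    (e.g. 36 samples = 36 degrees for the dodecahedral screw motion)."""
    t2s = np.roll(t2, shift)
    return 2 * np.mean(t1 * t2s) / np.mean(t1**2 + t2s**2)

rng = np.random.default_rng(3)
n = 360                            # one sample per degree along the circle
common = rng.normal(size=n)
t1 = common + 0.2 * rng.normal(size=n)
t2 = np.roll(common, 36) + 0.2 * rng.normal(size=n)   # matched pair

print(match_statistic(t1, t2, shift=-36))        # ~0.96: matched circles
print(match_statistic(t1, rng.normal(size=n)))   # ~0: unrelated circles
```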
NASA Technical Reports Server (NTRS)
Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, J. Richard;
2010-01-01
We present constraints on cosmological parameters based on a sample of Sunyaev-Zel'dovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = −1.14 ± 0.35 for a spatially flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = −1.05 ± 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus Type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = −0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.
NASA Astrophysics Data System (ADS)
Benetti, Micol; Pandolfi, Stefania; Lattanzi, Massimiliano; Martinelli, Matteo; Melchiorri, Alessandro
2013-01-01
Using the most recent data from the WMAP, ACT and SPT experiments, we update the constraints on models with oscillatory features in the primordial power spectrum of scalar perturbations. This kind of feature can appear in models of inflation where slow-roll is interrupted, such as multifield models. We also derive constraints for the case in which, in addition to cosmic microwave background observations, we also consider the data on the spectrum of luminous red galaxies from the 7th SDSS catalog, and the SNIa Union Compilation 2 data. We have found that: (i) considering a model with features in the primordial power spectrum increases the agreement with data compared to the featureless "vanilla" ΛCDM model by Δχ² = 6.7, representing an improvement with respect to the expected value Δχ² = 3 for an equivalent model with three additional parameters; (ii) the uncertainty on the determination of the standard parameters is not degraded when features are included; (iii) the best fit for the features model locates the step in the primordial spectrum at a scale k ≃ 0.005 Mpc⁻¹, corresponding to the scale where the outliers in the WMAP7 data at ℓ = 22 and ℓ = 40 are located; (iv) a distinct, albeit less statistically significant, peak is present in the likelihood at smaller scales, whose presence might be related to the WMAP7 preference for a negative value of the running of the scalar spectral index parameter; (v) the inclusion of the LRG-7 data does not change significantly the best-fit model, but allows us to better constrain the amplitude of the oscillations.
Planck CMB Anomalies: Astrophysical and Cosmological Secondary Effects and the Curse of Masking
NASA Astrophysics Data System (ADS)
Rassat, Anais
2016-07-01
Large-scale anomalies have been reported in CMB data with both WMAP and Planck data. These could be due to foreground residuals and/or systematic effects, though their confirmation with Planck data suggests they are not due to a problem in the WMAP or Planck pipelines. If these anomalies are in fact primordial, then understanding their origin is fundamental to either validate the standard model of cosmology or to explore new physics. We investigate three other possible issues: 1) the trade-off between minimising systematics due to foreground contamination (with a conservative mask) and minimising systematics due to masking, 2) astrophysical secondary effects (the kinetic Doppler quadrupole and kinetic Sunyaev-Zel'dovich effect), and 3) secondary cosmological signals (the integrated Sachs-Wolfe effect). We address the masking issue by considering new procedures that use both WMAP and Planck to produce higher quality full-sky maps using the sparsity methodology (LGMCA maps). We show the impact of masking is dominant over that of residual foregrounds, and the LGMCA full-sky maps can be used without further processing to study anomalies. We consider four official Planck PR1 and two LGMCA CMB maps. Analysis of the observed CMB maps shows that only the low quadrupole and quadrupole-octopole alignment seem significant, but that the planar octopole, Axis of Evil, mirror parity and cold spot are not significant in nearly all maps considered. After subtraction of astrophysical and cosmological secondary effects, only the low quadrupole may still be considered anomalous, meaning the significance of only one anomaly is affected by secondary effect subtraction out of six anomalies considered. In the spirit of reproducible research all reconstructed maps and codes are available online.
Planck CMB anomalies: astrophysical and cosmological secondary effects and the curse of masking
NASA Astrophysics Data System (ADS)
Rassat, A.; Starck, J.-L.; Paykari, P.; Sureau, F.; Bobin, J.
2014-08-01
Large-scale anomalies have been reported in CMB data with both WMAP and Planck data. These could be due to foreground residuals and/or systematic effects, though their confirmation with Planck data suggests they are not due to a problem in the WMAP or Planck pipelines. If these anomalies are in fact primordial, then understanding their origin is fundamental to either validate the standard model of cosmology or to explore new physics. We investigate three other possible issues: 1) the trade-off between minimising systematics due to foreground contamination (with a conservative mask) and minimising systematics due to masking, 2) astrophysical secondary effects (the kinetic Doppler quadrupole and kinetic Sunyaev-Zel'dovich effect), and 3) secondary cosmological signals (the integrated Sachs-Wolfe effect). We address the masking issue by considering new procedures that use both WMAP and Planck to produce higher quality full-sky maps using the sparsity methodology (LGMCA maps). We show the impact of masking is dominant over that of residual foregrounds, and the LGMCA full-sky maps can be used without further processing to study anomalies. We consider four official Planck PR1 and two LGMCA CMB maps. Analysis of the observed CMB maps shows that only the low quadrupole and quadrupole-octopole alignment seem significant, but that the planar octopole, Axis of Evil, mirror parity and cold spot are not significant in nearly all maps considered. After subtraction of astrophysical and cosmological secondary effects, only the low quadrupole may still be considered anomalous, meaning the significance of only one anomaly is affected by secondary effect subtraction out of six anomalies considered. In the spirit of reproducible research all reconstructed maps and codes will be made available for download here http://www.cosmostat.org/anomaliesCMB.html.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, J.E.; Alcaniz, J.S.; Carvalho, J.C., E-mail: javierernesto@on.br, E-mail: alcaniz@on.br, E-mail: jcarvalho@on.br
The existing degeneracy between different dark energy and modified gravity cosmologies at the background level may be broken by analyzing quantities at the perturbative level. In this work, we apply a non-parametric smoothing (NPS) method to reconstruct the expansion history of the Universe H(z) from model-independent cosmic chronometers and high-z quasar data. Assuming a homogeneous and isotropic flat universe and general relativity (GR) as the gravity theory, we calculate the non-relativistic matter perturbations in the linear regime using the H(z) reconstruction and realistic values of Ωm0 and σ8 from the Planck and WMAP-9 collaborations. We find a good agreement between the measurements of the growth rate and fσ8(z) from current large-scale structure observations and the estimates obtained from the reconstruction of the cosmic expansion history. Considering a recently proposed null test for GR using matter perturbations, we also apply the NPS method to reconstruct fσ8(z). For this case, we find a ∼3σ tension (good agreement) with the standard relativistic cosmology when the Planck (WMAP-9) priors are used.
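Given a reconstructed H(z), the linear matter perturbations referred to here follow from the standard GR growth equation, written in the scale factor a as δ'' + (3/a + E'/E)δ' = (3/2)Ωm0 δ/(a⁵E²) with E = H/H0. A minimal sketch that uses flat ΛCDM in place of the non-parametric reconstruction (fiducial numbers are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0, s8 = 0.3, 0.8

def E(a):                      # H/H0 for flat LCDM (stand-in for NPS H(z))
    return np.sqrt(Om0 / a**3 + 1 - Om0)

def dlnE_da(a, eps=1e-6):      # numerical E'/E
    return (np.log(E(a + eps)) - np.log(E(a - eps))) / (2 * eps)

def rhs(a, y):                 # y = [delta, d delta / d a]
    d, dp = y
    return [dp, -(3 / a + dlnE_da(a)) * dp
                + 1.5 * Om0 * d / (a**5 * E(a)**2)]

# Matter-dominated initial conditions: delta ~ a at early times.
a0 = 1e-3
sol = solve_ivp(rhs, [a0, 1.0], [a0, 1.0], dense_output=True, rtol=1e-8)

a = np.linspace(0.2, 1.0, 5)
d, dp = sol.sol(a)
f = a * dp / d                          # growth rate f = dln(delta)/dln(a)
fs8 = f * s8 * d / sol.sol(1.0)[0]      # fsigma8, normalized to today
print(np.c_[1 / a - 1, fs8])            # columns: z, fsigma8(z)
```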
Fermi-Lat and WMAP Observations of the Puppis a Supernova Remnant
NASA Technical Reports Server (NTRS)
Hewitt, John William; Grondin, M. H.; Lemoine-Goumard, M.; Reposeur, T.; Ballet, J.; Tanaka, T.
2012-01-01
We report the detection of GeV gamma-ray emission from the supernova remnant Puppis A with the Fermi Gamma-Ray Space Telescope. Puppis A is among the faintest supernova remnants yet detected at GeV energies, with a luminosity of only 2.7 × 10³⁴ (D/2.2 kpc)² erg s⁻¹ between 1 and 100 GeV. The gamma-ray emission from the remnant is spatially extended, with a morphology matching that of the radio and X-ray emission, and is well described by a simple power law with an index of 2.1. We attempt to model the broadband spectral energy distribution, from radio to gamma rays, using standard nonthermal emission mechanisms. To constrain the relativistic electron population we use 7 years of WMAP data to extend the radio spectrum up to 93 GHz. Both leptonic- and hadronic-dominated models can reproduce the nonthermal spectral energy distribution, requiring a total content of cosmic-ray (CR) electrons and protons accelerated in Puppis A of at least W_CR ≈ (1-5) × 10⁴⁹ erg.
NASA Technical Reports Server (NTRS)
Cudmore, Alan; Leath, Tim; Ferrer, Art; Miller, Todd; Walters, Mark; Savadkin, Bruce; Wu, Ji-Wei; Slegel, Steve; Stagmer, Emory
2007-01-01
The command-and-data-handling (C&DH) software of the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft functions as the sole interface between (1) the spacecraft and its instrument subsystem and (2) ground operations equipment. This software includes a command-decoding and -distribution system, a telemetry/data-handling system, and a data-storage-and-playback system. The software performs onboard processing of attitude sensor data and generates commands for attitude-control actuators in a closed-loop fashion. It also processes stored commands and monitors health and safety functions for the spacecraft and its instrument subsystems. The basic functionality of this software is the same as that of the older C&DH software of the Rossi X-Ray Timing Explorer (RXTE) spacecraft, the main difference being the addition of the attitude-control functionality. Previously, the C&DH and attitude-control computations were performed by different processors because a single RXTE processor did not have enough processing power. The WMAP spacecraft includes a more powerful processor capable of performing both computations.
A Close Look At The Relationship Between WMAP (ILC) Small-Scale Features And Galactic HI Structure
NASA Astrophysics Data System (ADS)
Verschuur, Gerrit L.
2012-05-01
Galactic HI emission profiles surrounding two pairs of features located where large-scale filaments at very different velocities overlap were decomposed into Gaussian components. Families of components defined by similarity of center velocities and line widths were identified and found to be spatially related. Each of the two pairs of HI peaks straddles a high-frequency continuum source revealed in the WMAP survey data. It is suggested that where filamentary HI features are directly interacting, high-frequency continuum radiation is being produced. The previously hypothesized mechanism for producing high-frequency continuum radiation, involving free-free emission from electrons in the interstellar medium, in this case created where HI filaments interact to produce fractional ionizations of order 5 to 15%, fits the data very closely. The results confirm that WMAP data on small-scale structures believed to be cosmological in origin are in fact compromised by the presence of intervening galactic sources of interstellar electrons clumped on scales typical of interstellar HI structure.
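Gaussian decomposition of a 21-cm profile is a least-squares fit of a sum of components amp × exp(−(v − v₀)²/2σ²), after which families are grouped by similar center velocity and width. A minimal two-component sketch with scipy (the profile and initial guesses are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(v, *p):
    """Sum of Gaussian components; p = (amp, v0, sigma) per component."""
    model = np.zeros_like(v)
    for amp, v0, sig in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - v0) / sig) ** 2)
    return model

v = np.linspace(-60, 60, 300)                 # LSR velocity axis, km/s
rng = np.random.default_rng(4)
profile = gaussians(v, 40, -25, 4, 25, 10, 8) + rng.normal(0, 0.8, v.size)

guess = [35, -20, 5, 20, 5, 10]               # amp, v0, sigma per component
popt, pcov = curve_fit(gaussians, v, profile, p0=guess)
print(popt.reshape(-1, 3))                    # recovered (amp, v0, sigma)
```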
Väliviita, Jussi; Muhonen, Vesa
2003-09-26
In general correlated models, in addition to the usual adiabatic component with a spectral index n_ad1, there is another adiabatic component with a spectral index n_ad2 generated by entropy perturbation during inflation. We extend the analysis of a correlated mixture of adiabatic and isocurvature cosmic microwave background fluctuations of the Wilkinson Microwave Anisotropy Probe (WMAP) group, who set the two adiabatic spectral indices equal. Allowing n_ad1 and n_ad2 to vary independently, we find that the WMAP data favor models where the two adiabatic components have opposite spectral tilts. Using the WMAP data only, the 2σ upper bound for the isocurvature fraction f_iso of the initial power spectrum at k_0 = 0.05 Mpc⁻¹ increases somewhat, e.g., from 0.76 for n_ad2 = n_ad1 models to 0.84 with a prior n_iso < 1.84 for the isocurvature spectral index.
Big bang nucleosynthesis: An update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olive, Keith A.
An update on the standard model of big bang nucleosynthesis (BBN) is presented. With the value of the baryon-to-photon ratio determined to high precision by WMAP, standard BBN is a parameter-free theory. In this context, the theoretical prediction for the abundances of D, ⁴He, and ⁷Li is discussed and compared to their observational determination. While concordance for D and ⁴He is satisfactory, the prediction for ⁷Li exceeds the observational determination by a factor of about four. Possible solutions to this problem are discussed.
Testing the cosmological principle of isotropy: local power-spectrum estimates of the WMAP data
NASA Astrophysics Data System (ADS)
Hansen, F. K.; Banday, A. J.; Górski, K. M.
2004-11-01
We apply the Gabor transform methodology proposed by Hansen et al. to the WMAP data in order to test the statistical properties of the cosmic microwave background (CMB) fluctuation field and specifically to evaluate the fundamental assumption of cosmological isotropy. In particular, we apply the transform with several apodization scales, thus allowing the determination of the positional dependence of the angular power spectrum with either high spatial localization or high angular resolution (i.e. narrow bins in multipole space). Practically, this implies that we estimate the angular power spectrum locally in discs of various sizes positioned in different directions: small discs allow the greatest sensitivity to positional dependence, whereas larger discs allow greater sensitivity to variations over different angular scales. In addition, we determine whether the spatial position of a few outliers in the angular power spectrum could suggest the presence of residual foregrounds or systematic effects. For multipoles close to the first peak, the most deviant local estimates from the best-fitting WMAP model are associated with a few particular areas close to the Galactic plane. Such deviations also include the 'dent' in the spectrum just shortward of the first peak which was remarked upon by the WMAP team. Estimating the angular power spectrum excluding these areas gives a slightly higher first Doppler peak amplitude. Finally, we probe the isotropy of the largest angular scales by estimating the power spectrum on hemispheres and reconfirm strong indications of a north-south asymmetry previously reported by other authors. Indeed, there is a remarkable lack of power in a region associated with the North Ecliptic Pole. With the greater fidelity in l-space allowed by this larger sky coverage, we find tentative evidence for residual foregrounds in the range l = 2-4, which could be associated with the low measured quadrupole amplitudes and other anomalies on these angular scales (e.g. planarity and alignment). However, over the range l = 5-40 the observed asymmetry is much harder to explain in terms of residual foregrounds and known systematic effects. By reorienting the coordinate axes, we partition the sky into different hemispheres and search for the reference frame which maximizes the asymmetric distribution of power. The North Pole for this coordinate frame is found to intersect the sphere at (80°, 57°) in Galactic colatitude and longitude over almost the entire multipole range l = 5-40. Furthermore, the strong negative outlier at l = 21 and the strong positive outlier at l = 39, as determined from the global power spectrum by the WMAP team, are found to be associated with the Northern and Southern hemispheres, respectively (in this frame of maximum asymmetry). Thus, these two outliers follow the general tendency of the multipoles l = 5-40 to be of systematically lower amplitude in the north and higher in the south. Such asymmetric distributions of power on the sky provide a serious test for the cosmological principle of isotropy.
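Estimating the spectrum locally amounts to windowing the sky with a disc, computing the pseudo-spectrum of the windowed map, and correcting for the missing sky; the crudest correction divides by the disc's sky fraction and ignores mode coupling. A minimal sketch with healpy (resolution and spectrum are illustrative):

```python
import healpy as hp
import numpy as np

nside, lmax = 64, 128
cl_in = np.concatenate([[0, 0], 1.0 / np.arange(2, lmax + 1) ** 2])
sky = hp.synfast(cl_in, nside)                      # isotropic Gaussian sky

def local_cl(sky_map, center_vec, radius_deg):
    """Pseudo-C_l on a disc, crudely debiased by 1/f_sky."""
    npix = hp.nside2npix(nside)
    mask = np.zeros(npix)
    mask[hp.query_disc(nside, center_vec, np.radians(radius_deg))] = 1.0
    fsky = mask.sum() / npix
    return hp.anafast(sky_map * mask, lmax=lmax) / fsky

# Compare local estimates in two opposite discs (north-south asymmetry test):
cl_north = local_cl(sky, [0, 0, 1], 30)
cl_south = local_cl(sky, [0, 0, -1], 30)
print(np.mean(cl_north[20:40] / cl_south[20:40]))   # ~1 for an isotropic sky
```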
WMAP - A Glimpse of the Early Universe
NASA Technical Reports Server (NTRS)
Wollack, Edward
2009-01-01
The early Universe was incredibly hot, dense, and homogeneous. A powerful probe of this time is provided by the relic radiation which we refer to today as the Cosmic Microwave Background (CMB). Images produced from this light contain the earliest glimpse of the Universe after the "Big Bang" and the signature of the evolution of its contents. By exploiting these clues, precise constraints on the age, mass density, and geometry of the early Universe can be derived. The history of this intriguing cosmological detective story will be reviewed. Recent results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP) will be presented.
NASA Astrophysics Data System (ADS)
Clery, Daniel
2009-04-01
For the last seven years, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) has kept a lonely vigil at a spot in space some 1.5 million kilometres beyond the Earth, on the side away from the Sun. Known as Lagrange point L2, it is where a space probe can usefully hover, little disturbed by stray signals from home and without having to use much fuel to keep it in position. But WMAP will soon have company: two groundbreaking missions from the European Space Agency (ESA), due to be launched on the same Ariane-5 rocket later this month, will take up their positions next to NASA's craft.
Taking the Measure of the Universe: Cosmology from the WMAP Mission
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2007-01-01
The data from the first three years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission will be discussed.
Particle transport and stochastic acceleration in the giant lobes of Centaurus A
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Sullivan, Stephen
2011-09-22
The conditions within the giant lobes of Centaurus A are reviewed in light of recent radio and γ-ray observations. Data from WMAP and ground-based telescopes in conjunction with measurements from Fermi-LAT constrain the characteristic field strength and the maximum electron energy. The implications for the transport of energetic particles are discussed in terms of residence times and cooling times within the lobes. Acceleration of electrons and UHECRs via the second-order Fermi mechanism is discussed.
Constraints on single entity driven inflationary and radiation eras
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Chen, Pisin; Liu, Yen-Wei
2012-07-01
We present a model that attempts to fuse the inflationary era and the subsequent radiation dominated era under a unified framework so as to provide a smooth transition between the two. The model is based on a modification of the generalized Chaplygin gas. We constrain the model observationally by mapping the primordial power spectrum of the scalar perturbations to the latest data of WMAP7. We compute as well the spectrum of the primordial gravitational waves as would be measured today.
Bayesian data analysis in observational comparative effectiveness research: rationale and examples.
Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M
2013-11-01
Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, the structural analysis model, and the material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large errors if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
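The adaptive scheme can be prototyped with a grid posterior: split the observation domain into segments, score each segment with a validation check, and apply Bayes' rule only with data from segments that pass. A toy sketch (the model, threshold, and segments are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
theta_grid = np.linspace(0, 2, 401)              # parameter grid
posterior = np.ones_like(theta_grid)             # flat prior

def model(x, theta):                             # toy computational model
    return theta * x

x_obs = np.linspace(0.1, 10, 60)
y_obs = 1.2 * x_obs + rng.normal(0, 0.5, 60)
bad = x_obs > 7.5
y_obs[bad] += 3 * np.sin(5 * x_obs[bad])         # unmodeled physics at large x

for lo, hi in [(0, 2.5), (2.5, 5), (5, 7.5), (7.5, 10.1)]:   # segments
    seg = (x_obs >= lo) & (x_obs < hi)
    x, y = x_obs[seg], y_obs[seg]
    # Validation step: can the model class fit this segment at all?
    theta_hat = (x @ y) / (x @ x)
    if np.mean((y - model(x, theta_hat)) ** 2) > 1.0:
        continue                                 # unreliable: skip updating
    for xi, yi in zip(x, y):                     # Bayesian update, Gaussian noise
        posterior *= np.exp(-0.5 * ((yi - model(xi, theta_grid)) / 0.5) ** 2)
    posterior /= np.trapz(posterior, theta_grid)

print(theta_grid[np.argmax(posterior)])          # ~1.2, bad segment excluded
```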
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Michelle; Page, Lyman; Dunkley, Joanna
In 1969 Edward Conklin measured the anisotropy in celestial emission at 8 GHz with a resolution of 16.2° and used the data to report a detection of the cosmic microwave background dipole. Given the paucity of 8 GHz observations over large angular scales and the clear evidence for non-power-law Galactic emission near 8 GHz, a new analysis of Conklin's data is informative. In this paper, we compare Conklin's data to that from Haslam et al. (0.4 GHz), Reich and Reich (1.4 GHz), and the Wilkinson Microwave Anisotropy Probe (WMAP; 23-94 GHz). We show that the spectral index between Conklin's data and the 23 GHz WMAP data is β = −1.7 ± 0.1, where we model the emission temperature as T ∝ ν^β. Free-free emission has β ≈ −2.15 and synchrotron emission has β ≈ −2.7 to −3. Thermal dust emission (β ≈ 1.7) is negligible at 8 GHz. We conclude that there must be another distinct non-power-law component of diffuse foreground emission that emits near 10 GHz, consistent with other observations in this frequency range. By comparing to the full complement of data sets, we show that a model with an anomalous emission component, assumed to be spinning dust, is preferred over a model without spinning dust at 5σ (Δχ² = 31). However, the source of the new component cannot be determined uniquely.
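The quoted spectral index is the log-log slope of antenna temperature between two frequencies, β = ln(T₁/T₂)/ln(ν₁/ν₂). A two-line check with made-up temperatures (not Conklin's or WMAP's actual values):

```python
import numpy as np

def spectral_index(T1, nu1, T2, nu2):
    """beta in T proportional to nu^beta, from two antenna temperatures."""
    return np.log(T1 / T2) / np.log(nu1 / nu2)

# Illustrative: 1 mK at 8 GHz falling to 0.16 mK at 23 GHz
beta = spectral_index(1.0, 8.0, 0.16, 23.0)
print(beta)   # ~ -1.7: flatter than free-free (-2.15) or synchrotron (-2.7..-3)
```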
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mody, Krishnan; Hajian, Amir, E-mail: kmody@princeton.edu, E-mail: ahajian@cita.utoronto.ca
We present our measurement of the 'bulk flow' using the kinetic Sunyaev-Zel'dovich (kSZ) effect in the Wilkinson Microwave Anisotropy Probe (WMAP) seven-year data. As the tracer of peculiar velocities, we use the Planck Early Sunyaev-Zel'dovich Detected Cluster Catalog and a compilation of X-ray-detected galaxy cluster catalogs based on the ROSAT All-Sky Survey. We build a full-sky kSZ template and fit it to the WMAP data in the W band. Using a Wiener filter we maximize the signal-to-noise ratio of the kSZ cluster signal in the data. We find no significant detection of the bulk flow, and our results are consistent with the ΛCDM prediction.
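In harmonic space the Wiener filter takes the simple form W_l = C_l/(C_l + N_l): signal-dominated modes pass, noise-dominated modes are suppressed. A minimal sketch (the spectra are illustrative):

```python
import numpy as np

def wiener_filter(alm_data, cl_signal, nl_noise):
    """Filter data modes by W_l = C_l / (C_l + N_l), maximizing S/N of a
    signal of known spectrum in noise of known spectrum. For simplicity
    alm_data is indexed by l only here."""
    wl = cl_signal / (cl_signal + nl_noise)
    return wl * alm_data

ell = np.arange(2, 512)
cl = 1.0 / ell**2                 # illustrative signal spectrum (kSZ template)
nl = np.full_like(cl, 1e-4)       # illustrative white-noise level
rng = np.random.default_rng(6)
alm = rng.normal(size=ell.size) * np.sqrt(cl + nl)
filtered = wiener_filter(alm, cl, nl)
print(filtered[:5])               # low-l modes kept, high-l noise suppressed
```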
FERMI LAT and WMAP observations of the supernova remnant HB 21
Pivato, Giovanna; Hewitt, John W.; Tibaldo, L.; ...
2013-12-04
Here, we present the analysis of Fermi Large Area Telescope γ-ray observations of HB 21 (G89.0+4.7). We detect significant γ-ray emission associated with the remnant: the flux >100 MeV is 9.4 ± 0.8 (stat) ± 1.6 (syst) × 10^{-11} erg cm^{-2} s^{-1}. HB 21 is well modeled by a uniform disk centered at l = 88.75° ± 0.04°, b = +4.65° ± 0.06° with a radius of 1.19° ± 0.06°. The γ-ray spectrum shows clear evidence of curvature, suggesting a cutoff or break in the underlying particle population at an energy of a few GeV. We complement γ-ray observations with the analysis of the WMAP 7 yr data from 23 to 93 GHz, achieving the first detection of HB 21 at these frequencies. In combination with archival radio data, the radio spectrum shows a spectral break, which helps to constrain the relativistic electron spectrum, and, in turn, parameters of simple non-thermal radiation models. In one-zone models multiwavelength data favor the origin of γ rays from nucleon-nucleon collisions. A single population of electrons cannot produce both γ rays through bremsstrahlung and radio emission through synchrotron radiation. A predominantly inverse-Compton origin of the γ-ray emission is disfavored because it requires lower interstellar densities than are inferred for HB 21. In the hadronic-dominated scenarios, accelerated nuclei contribute a total energy of ~3 × 10^{49} erg, while, in a two-zone bremsstrahlung-dominated scenario, the total energy in accelerated particles is ~1 × 10^{49} erg.
NASA Astrophysics Data System (ADS)
Hinshaw, G.; Weiland, J. L.; Hill, R. S.; Odegard, N.; Larson, D.; Bennett, C. L.; Dunkley, J.; Gold, B.; Greason, M. R.; Jarosik, N.; Komatsu, E.; Nolta, M. R.; Page, L.; Spergel, D. N.; Wollack, E.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wright, E. L.
2009-02-01
We present new full-sky temperature and polarization maps in five frequency bands from 23 to 94 GHz, based on data from the first five years of the Wilkinson Microwave Anisotropy Probe (WMAP) sky survey. The new maps are consistent with previous maps and are more sensitive. The five-year maps incorporate several improvements in data processing made possible by the additional years of data and by a more complete analysis of the instrument calibration and in-flight beam response. We present several new tests for systematic errors in the polarization data and conclude that W-band polarization data is not yet suitable for cosmological studies, but we suggest directions for further study. We do find that Ka-band data is suitable for use; in conjunction with the additional years of data, the addition of Ka band to the previously used Q- and V-band channels significantly reduces the uncertainty in the optical depth parameter, τ. Further scientific results from the five-year data analysis are presented in six companion papers and are summarized in Section 7 of this paper. With the five-year WMAP data, we detect no convincing deviations from the minimal six-parameter ΛCDM model: a flat universe dominated by a cosmological constant, with adiabatic and nearly scale-invariant Gaussian fluctuations. Using WMAP data combined with measurements of Type Ia supernovae and Baryon Acoustic Oscillations in the galaxy distribution, we find (68% CL uncertainties): Ω_b h² = 0.02267^{+0.00058}_{-0.00059}, Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10^{-9} at k = 0.002 Mpc^{-1}. From these we derive σ_8 = 0.812 ± 0.026, H_0 = 70.5 ± 1.3 km s^{-1} Mpc^{-1}, Ω_b = 0.0456 ± 0.0015, Ω_c = 0.228 ± 0.013, Ω_m h² = 0.1358^{+0.0037}_{-0.0036}, z_reion = 10.9 ± 1.4, and t_0 = 13.72 ± 0.12 Gyr. The new limit on the tensor-to-scalar ratio is r < 0.22 (95% CL), while the evidence for a running spectral index is insignificant, dn_s/d ln k = -0.028 ± 0.020 (68% CL). We obtain tight, simultaneous limits on the (constant) dark energy equation of state and the spatial curvature of the universe: -0.14 < 1 + w < 0.12 (95% CL) and -0.0179 < Ω_k < 0.0081 (95% CL). The number of relativistic degrees of freedom, expressed in units of the effective number of neutrino species, is found to be N_eff = 4.4 ± 1.5 (68% CL), consistent with the standard value of 3.04. Models with N_eff = 0 are disfavored at >99.5% confidence. Finally, new limits on physically motivated primordial non-Gaussianity parameters are -9 < f_NL^local < 111 (95% CL) and -151 < f_NL^equil < 253 (95% CL) for the local and equilateral models, respectively. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
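Several of the derived numbers quoted above follow arithmetically from the fitted densities; a quick consistency check in Python:

    # Omega_m h^2 = Omega_b h^2 + Omega_c h^2, and physical-to-fractional
    # densities via h = H0 / 100
    omega_b, omega_c, h = 0.02267, 0.1131, 0.705
    print(omega_b + omega_c)   # 0.1358, matching Omega_m h^2 above
    print(omega_b / h**2)      # ≈ 0.0456 (Omega_b)
    print(omega_c / h**2)      # ≈ 0.228  (Omega_c)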
BATSE gamma-ray burst line search. 2: Bayesian consistency methodology
NASA Technical Reports Server (NTRS)
Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.
1994-01-01
We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.
Component separation for cosmic microwave background radiation
NASA Astrophysics Data System (ADS)
Fernández-Cobos, R.; Vielva, P.; Barreiro, R. B.; Martínez-González, E.
2011-11-01
Cosmic microwave background (CMB) radiation data obtained by different experiments contain, besides the desired signal, a superposition of microwave sky contributions due mainly to, on the one hand, synchrotron radiation, free-free emission and re-emission from dust clouds in our Galaxy and, on the other hand, extragalactic sources. We present an analytical method, using a wavelet decomposition on the sphere, to recover the CMB signal from microwave maps. Applied to both temperature and polarization data, it proves to be a particularly powerful tool in heavily contaminated regions of the sky. The wavelet used has the advantages of requiring little computing time, of being adapted to the HEALPix pixelization scheme (the format in which the community reports CMB data) and of offering the possibility of multi-resolution analysis. The decomposition is implemented as part of a template fitting method that minimizes the variance of the resulting map. The method was tested on simulations of WMAP data with positive results: improvements of up to 12% in the variance of the resulting full-sky map and of about 3% in regions of low contamination. Finally, we also present some preliminary results with WMAP data in the form of an angular cross power spectrum C_ℓ^{TE}, consistent with the spectrum reported by the WMAP team.
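For a single template, the variance-minimizing coefficient has a closed form; a minimal sketch (applying it per wavelet scale, as the abstract describes, is left out):

    import numpy as np

    def min_variance_coeff(d, t):
        # coefficient a minimizing Var(d - a * t)
        c = np.cov(d, t)
        return c[0, 1] / c[1, 1]

    # usage: cleaned = d - min_variance_coeff(d, t) * t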
Estimation of primordial spectrum with post-WMAP 3-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Souradeep, Tarun
2008-07-15
In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the Wilkinson Microwave Anisotropy Probe (WMAP) 3 year data to determine the primordial power spectrum, assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present preliminary results of cosmological parameter estimation assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of points in the cosmological parameter space has a likelihood far better than a 'best fit' power-law spectrum, up to Δχ²_eff ≈ -30. We use the discrete wavelet transform (DWT) to smooth the raw spectrum recovered from the binned data. The results obtained here reconfirm and sharpen the conclusions drawn from our previous analysis of the WMAP 1st year data. A sharp cut off around the horizon scale and a bump after the horizon scale seem to be a common feature of all of these reconstructed primordial spectra. We have shown that although the WMAP 3 year data prefer a lower value of matter density for a power-law form of the primordial spectrum, for a free form of the spectrum we can get a very good likelihood to the data for higher values of matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retain discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
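For reference, the basic Richardson-Lucy iteration that the paper's error-sensitive variant improves upon can be written in a few lines; here G is assumed to be the nonnegative kernel relating the primordial spectrum to the measured angular power spectrum:

    import numpy as np

    def richardson_lucy(d, G, n_iter=200):
        # iteratively deconvolve d = G @ p for a nonnegative estimate p
        p = np.ones(G.shape[1])
        norm = G.sum(axis=0)
        for _ in range(n_iter):
            p *= (G.T @ (d / (G @ p))) / norm
        return p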
On the impact of large angle CMB polarization data on cosmological parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lattanzi, Massimiliano; Mandolesi, Nazzareno; Natoli, Paolo
We study the impact of the large-angle CMB polarization datasets publicly released by the WMAP and Planck satellites on the estimation of cosmological parameters of the ΛCDM model. To complement large-angle polarization, we consider the high resolution (or 'high-ℓ') CMB datasets from either WMAP or Planck, as well as CMB lensing as traced by Planck's measured four-point correlation function. In the case of WMAP, we compute the large-angle polarization likelihood starting over from low resolution frequency maps and their covariance matrices, and perform our own foreground mitigation technique, which includes as a possible alternative Planck 353 GHz data to trace polarized dust. We find that the latter choice induces a downward shift in the optical depth τ, roughly of order 2σ, robust to the choice of the complementary high resolution dataset. When the Planck 353 GHz data are consistently used to minimize polarized dust emission, WMAP and Planck 70 GHz large-angle polarization data are in remarkable agreement: by combining them we find τ = 0.066^{+0.012}_{-0.013}, again very stable against the particular choice for high-ℓ data. We find that the amplitude of primordial fluctuations A_s, notoriously degenerate with τ, is the parameter second most affected by the assumptions on polarized dust removal, but the other parameters are also affected, typically between 0.5 and 1σ. In particular, cleaning dust with Planck's 353 GHz data imposes a 1σ downward shift in the value of the Hubble constant H_0, significantly contributing to the tension reported between CMB-based and direct measurements of the present expansion rate. On the other hand, we find that the appearance of the so-called low-ℓ anomaly, a well-known tension between the high- and low-resolution CMB anisotropy amplitude, is not significantly affected by the details of large-angle polarization, or by the particular high-ℓ dataset employed.
Planck 2015 results. X. Diffuse component separation: Foreground maps
NASA Astrophysics Data System (ADS)
Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Orlando, E.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
Planck has mapped the microwave sky in temperature over nine frequency bands between 30 and 857 GHz and in polarization over seven frequency bands between 30 and 353 GHz. In this paper we consider the problem of diffuse astrophysical component separation, and process these maps within a Bayesian framework to derive an internally consistent set of full-sky astrophysical component maps. Component separation dedicated to cosmic microwave background (CMB) reconstruction is described in a companion paper. For the temperature analysis, we combine the Planck observations with the 9-yr Wilkinson Microwave Anisotropy Probe (WMAP) sky maps and the Haslam et al. 408 MHz map, to derive a joint model of CMB, synchrotron, free-free, spinning dust, CO, line emission in the 94 and 100 GHz channels, and thermal dust emission. Full-sky maps are provided for each component, with an angular resolution varying between 7.5 arcmin and 1 deg. Global parameters (monopoles, dipoles, relative calibration, and bandpass errors) are fitted jointly with the sky model, and best-fit values are tabulated. For polarization, the model includes CMB, synchrotron, and thermal dust emission. These models provide excellent fits to the observed data, with rms temperature residuals smaller than 4 μK over 93% of the sky for all Planck frequencies up to 353 GHz, and fractional errors smaller than 1% in the remaining 7% of the sky. The main limitations of the temperature model at the lower frequencies are internal degeneracies among the spinning dust, free-free, and synchrotron components; additional observations from external low-frequency experiments will be essential to break these degeneracies. The main limitations of the temperature model at the higher frequencies are uncertainties in the 545 and 857 GHz calibration and zero-points. For polarization, the main outstanding issues are instrumental systematics in the 100-353 GHz bands on large angular scales in the form of temperature-to-polarization leakage, uncertainties in the analogue-to-digital conversion, and corrections for the very long time constant of the bolometer detectors, all of which are expected to improve in the near future.
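At its core, multi-frequency component separation fits a mixing matrix to each pixel. A deliberately simplified sketch with three temperature components and power-law scalings; the spectral indices, pivot frequencies, and the plain least-squares solver are illustrative assumptions, not the Bayesian machinery of the paper:

    import numpy as np

    def mixing_matrix(freqs_ghz, beta_s=-3.0, beta_d=1.6):
        # columns: CMB (flat in these units), synchrotron, thermal dust
        f = np.asarray(freqs_ghz, dtype=float)
        A = np.ones((f.size, 3))
        A[:, 1] = (f / 30.0) ** beta_s    # synchrotron, pivot 30 GHz
        A[:, 2] = (f / 353.0) ** beta_d   # dust, pivot 353 GHz
        return A

    def separate_pixel(data, A):
        # least-squares amplitudes of the three components in one pixel
        amps, *_ = np.linalg.lstsq(A, data, rcond=None)
        return amps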
Searching for oscillations in the primordial power spectrum. II. Constraints from Planck data
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Spergel, David N.; Wandelt, Benjamin D.
2014-03-01
In this second of two papers we apply our recently developed code to search for resonance features in the Planck CMB temperature data. We search for both log-spaced and linear-spaced oscillations and compare our findings with results of our WMAP9 analysis and the Planck team analysis [P. A. R. Ade et al. (Planck Collaboration), arXiv:1303.5082]. While there are hints of log-spaced resonant features present in the WMAP9 data, the significance of these features weakens with more data. With more accurate small-scale measurements, we also find that the best-fit frequency has shifted and the amplitude has been reduced. We confirm the presence of several low-frequency peaks, earlier identified by the Planck team, but with a larger improvement of fit (Δχ²_eff ~ 12). We further investigate this improvement by allowing the lensing potential to vary as well, showing mild correlation between the amplitude of the oscillations and the lensing amplitude. We find that the improvement of the fit increases even more (Δχ²_eff ~ 14) for the low frequencies that modify the spectrum in a way that mimics the lensing effect. Since these features were not present in the WMAP data, they are primarily due to better measurements by Planck at small angular scales. For linear-spaced oscillations we find a maximum Δχ²_eff ~ 13 scanning two orders of magnitude in frequency space, and the biggest improvements are at extremely high frequencies. Again, we recover a best-fit frequency very close to the one found in WMAP9, which confirms that the fit improvement is driven by low ℓ. Further comparisons with WMAP9 show Planck contains many more features, both for linear- and log-spaced oscillations, but with a smaller improvement of fit. We discuss the improvement as a function of the number of modes and study the effect of the 217 GHz map, which appears to drive most of the improvement for log-spaced oscillations. Two points strongly suggest that the detected features are fitting a combination of the noise and the dip at ℓ ~ 1800 in the 217 GHz map: the fit improvement mostly comes from a small range of ℓ, and comparison with simulations shows that the fit improvement is consistent with a statistical fluctuation. We conclude that none of the detected features are statistically significant.
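The search logic can be illustrated with a toy scan that modulates a baseline spectrum with a log-spaced oscillation and records the χ² improvement; in the paper the oscillation lives in the primordial spectrum and is projected to C_ℓ, so modulating C_ℓ directly with diagonal errors is a simplification:

    import numpy as np

    def delta_chi2(ell, C_obs, C_base, sigma, A, omega, phi):
        # chi^2 improvement of a log-spaced modulation over the baseline
        C_mod = C_base * (1.0 + A * np.sin(omega * np.log(ell) + phi))
        chi2 = lambda C: np.sum(((C_obs - C) / sigma) ** 2)
        return chi2(C_base) - chi2(C_mod)

    # scan (A, omega, phi) on a grid and keep the largest delta_chi2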
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
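A minimal sketch of the reweighting step for a single ensemble-averaged observable, using the stationarity condition of a posterior of the form exp(-χ²/2 + θS), with S the relative entropy of the weights; the variable names and root bracket are illustrative:

    import numpy as np
    from scipy.optimize import brentq

    def reweight(y, y_exp, sigma, theta):
        # y: observable per ensemble member; returns refined weights.
        # theta balances the prior ensemble against the data.
        w0 = np.full(y.size, 1.0 / y.size)
        ys = y - y.mean()  # shift for numerical stability

        def weights(lam):
            w = w0 * np.exp(-lam * ys)
            return w / w.sum()

        # stationarity: lam = (<y> - y_exp) / (theta * sigma**2)
        f = lambda lam: lam - (weights(lam) @ y - y_exp) / (theta * sigma**2)
        return weights(brentq(f, -50.0, 50.0))  # bracket is illustrative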
Cosmic microwave background anomalies in an open universe.
Liddle, Andrew R; Cortês, Marina
2013-09-13
We argue that the observed large-scale cosmic microwave anomalies, discovered by WMAP and confirmed by the Planck satellite, are most naturally explained in the context of a marginally open universe. Particular focus is placed on the dipole power asymmetry, via an open universe implementation of the large-scale gradient mechanism of Erickcek et al. Open inflation models, which are motivated by the string landscape and which can excite "supercurvature" perturbation modes, can explain the presence of a very-large-scale perturbation that leads to a dipole modulation of the power spectrum measured by a typical observer. We provide a specific implementation of the scenario which appears compatible with all existing constraints.
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data, supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approximately 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on a Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces results similar to those obtainable by standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids where standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM's Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
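The effect of an informative prior bounding σ can be reproduced numerically without the paper's closed-form factor: place a flat prior on μ and a uniform prior on σ between the stated bounds, then marginalize on a grid. The grid ranges and the n = 2 example below are assumptions of this sketch:

    import numpy as np

    def type_a_sd(x, sig_lo, sig_hi, ngrid=400):
        # posterior sd of mu for normal data, flat prior on mu,
        # uniform prior on sigma in [sig_lo, sig_hi]
        n, m, s = len(x), x.mean(), x.std(ddof=1)
        mu = np.linspace(m - 5 * s, m + 5 * s, ngrid)
        sig = np.linspace(sig_lo, sig_hi, ngrid)
        M, S = np.meshgrid(mu, sig, indexing="ij")
        ss = ((x[:, None, None] - M) ** 2).sum(axis=0)
        p = np.exp(-n * np.log(S) - ss / (2 * S**2))
        p_mu = p.sum(axis=1)        # marginalize over sigma
        p_mu /= p_mu.sum()
        mean = p_mu @ mu
        return np.sqrt(p_mu @ (mu - mean) ** 2)

    x = np.array([10.1, 10.3])  # n = 2, where frequentist Type A is fragile
    print(type_a_sd(x, 0.05, 0.5))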
NASA Astrophysics Data System (ADS)
Zhao, W.; Baskaran, D.; Grishchuk, L. P.
2009-10-01
The relic gravitational waves are the cleanest probe of the violent times in the very early history of the Universe. They are expected to leave signatures in the observed cosmic microwave background anisotropies. We significantly improved our previous analysis [W. Zhao, D. Baskaran, and L. P. Grishchuk, Phys. Rev. D 79, 023002 (2009)] of the 5-year WMAP TT and TE data at lower multipoles ℓ. This more general analysis returned essentially the same maximum likelihood result (unfortunately, surrounded by large remaining uncertainties): the relic gravitational waves are present and they are responsible for approximately 20% of the temperature quadrupole. We identify and discuss the reasons by which the contribution of gravitational waves can be overlooked in a data analysis. One of the reasons is a misleading reliance on data from very high multipoles ℓ, and another a too narrow understanding of the problem as the search for B modes of polarization, rather than the detection of relic gravitational waves with the help of all correlation functions. Our analysis of WMAP5 data has led to the identification of a whole family of models characterized by relatively high values of the likelihood function. Using the Fisher matrix formalism we formulated forecasts for the Planck mission in the context of this family of models. We explore in detail various "optimistic," "pessimistic," and "dream case" scenarios. We show that in some circumstances the B-mode detection may be very inconclusive, at the level of signal-to-noise ratio S/N = 1.75, whereas a smarter data analysis can reveal the same gravitational wave signal at S/N = 6.48. The final result is encouraging: even under unfavorable conditions in terms of instrumental noises and foregrounds, the relic gravitational waves, if they are characterized by the maximum likelihood parameters that we found from WMAP5 data, will be detected by Planck at the level S/N = 3.65.
Searching for the missing baryons with the VSA and WMAP
NASA Astrophysics Data System (ADS)
Genova-Santos, Ricardo
2004-12-01
The hot diffuse gas in the local Universe which could host the missing baryons could produce a detectable thermal Sunyaev-Zel'dovich effect (tSZE). With this aim, this work presents a search for this gas in two different ways. Firstly, we search for the imprint of the tSZE in the first-year data of the WMAP satellite, by applying a pixel-to-pixel correlation method between these data and a template constructed from the Two Micron All Sky Survey (2MASS) Extended Source Catalogue, which is assumed to trace the distribution of this hot gas. This analysis has yielded a detection of 35 ± 7 µK in the 26 deg² of the sky containing the largest projected galaxy density. Nevertheless, this signal is mostly due to the contribution from galaxy clusters subtending angular sizes of 20-30 arcmin. When the regions affected by the clusters are removed from the analysis, a decrement of 96 ± 37 µK is found in 0.8 deg² of the sky. Nevertheless, most of this signal comes from five different cluster candidates in the Zone of Avoidance (ZoA), present in the Clusters in the ZoA catalogue (CIZA). Hence, no clear evidence is found of structures larger than clusters, as would be the case for this hot gas, contributing to the tSZE signal in the WMAP data. Secondly, interferometric imaging at 33 GHz of the well-known Corona Borealis supercluster with the Very Small Array (VSA). The maps built from these observations, apart from the common cosmic microwave background (CMB) primordial fluctuations, show the presence of two intriguing strong negative features near the centre of the core of the supercluster [1]. The possibility is discussed that they are caused by CMB fluctuations, or by tSZ signals related either to unknown distant galaxy clusters or to diffuse extended warm/hot gas.
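In its simplest form, the first method is a linear cross-correlation of the CMB map against the projected galaxy-density template over unmasked pixels; a sketch with hypothetical array names:

    import numpy as np

    def template_correlation(cmb, template, mask):
        # amplitude (map units per unit template) of the template in the map
        d, t = cmb[mask], template[mask]
        t = t - t.mean()
        return (d @ t) / (t @ t)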
NASA Astrophysics Data System (ADS)
Liu, Xiangkun; Li, Baojiu; Zhao, Gong-Bo; Chiu, Mu-Chen; Fang, Wei; Pan, Chuzhong; Wang, Qiao; Du, Wei; Yuan, Shuo; Fu, Liping; Fan, Zuhui
2016-07-01
In this Letter, we report the observational constraints on the Hu-Sawicki f(R) theory derived from weak lensing peak abundances, which are closely related to the mass function of massive halos. In comparison with studies using optical or X-ray clusters of galaxies, weak lensing peak analyses have the advantage of not relying on mass-baryonic observable calibrations. With observations from the Canada-France-Hawaii-Telescope Lensing Survey, our peak analyses give rise to a tight constraint on the model parameter |f_{R0}| for n = 1. The 95% C.L. limit is log_10 |f_{R0}| < -4.82 given WMAP9 priors on (Ω_m, A_s). With Planck15 priors, the corresponding result is log_10 |f_{R0}| < -5.16.
The Weird Side of the Universe: Preferred Axis
NASA Astrophysics Data System (ADS)
Zhao, Wen; Santos, Larissa
In both WMAP and Planck observations of the temperature anisotropy of the cosmic microwave background (CMB) radiation, a number of large-scale anomalies have been discovered in past years, including the CMB parity asymmetry in the low multipoles. By defining a directional statistic, we find that the CMB parity asymmetry is direction dependent, and that the preferred axis is stable: it is independent of the chosen CMB map, the definition of the statistic, and the CMB masks. Meanwhile, we find that this preferred axis strongly aligns with those of the CMB quadrupole and octopole, as well as those of other large-scale observations. In addition, all of them align with the CMB kinematic dipole, which hints at a non-cosmological origin for these directional anomalies in cosmological observations.
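One common non-directional form of the parity statistic compares even- and odd-multipole power at low ℓ; the sketch below shows that baseline quantity (this simple even/odd ratio is an assumption of the illustration; the paper's directional statistic is built on top of such a measure):

    import numpy as np

    def parity_ratio(C, lmax):
        # C[0] corresponds to l = 2; returns mean even / mean odd power
        l = np.arange(2, lmax + 1)
        D = l * (l + 1) * np.asarray(C[: l.size]) / (2 * np.pi)
        return D[l % 2 == 0].mean() / D[l % 2 == 1].mean()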
Constraints on wrapped Dirac-Born-Infeld inflation in a warped throat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Takeshi; Mukohyama, Shinji; Kinoshita, Shunichiro, E-mail: tkobayashi@utap.phys.s.u-tokyo.ac.jp, E-mail: mukoyama@phys.s.u-tokyo.ac.jp, E-mail: kinoshita@utap.phys.s.u-tokyo.ac.jp
2008-01-15
We derive constraints on the tensor-to-scalar ratio and on the background charge of the warped throat for Dirac-Born-Infeld inflation driven by D5- and D7-branes wrapped over cycles of the throat. It is shown that a background charge well beyond the known maximal value is required in most cases for Dirac-Born-Infeld inflation to generate cosmological observables compatible with the WMAP3 (Wilkinson Microwave Anisotropy Probe 3) data. Most of the results derived in this paper are insensitive to the details of the inflaton potential, and could be applied to generic warped throats.
Origin of ΔN_eff as a result of an interaction between dark radiation and dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjaelde, Ole Eggers; Das, Subinoy; Moss, Adam, E-mail: oeb@phys.au.dk, E-mail: subinoy@physik.rwth-aachen.de, E-mail: Adam.Moss@nottingham.ac.uk
2012-10-01
Results from the Wilkinson Microwave Anisotropy Probe (WMAP), Atacama Cosmology Telescope (ACT) and recently from the South Pole Telescope (SPT) have indicated the possible existence of an extra radiation component in addition to the well known three neutrino species predicted by the Standard Model of particle physics. In this paper, we explore the possibility of the apparent extra dark radiation being linked directly to the physics of cold dark matter (CDM). In particular, we consider a generic scenario where dark radiation, as a result of an interaction, is produced directly by a fraction of the dark matter density effectively decaying into dark radiation. At an early epoch when the dark matter density is negligible, as an obvious consequence, the density of dark radiation is also very small. As the Universe approaches matter-radiation equality, the dark matter density starts to dominate, thereby increasing the content of dark radiation and changing the expansion rate of the Universe. As this increase in dark radiation content happens naturally after Big Bang Nucleosynthesis (BBN), it can relax the possible tension between lower values of radiation degrees of freedom measured from light element abundances and those from the CMB. We numerically confront this scenario with WMAP+ACT and WMAP+SPT data and derive an upper limit on the allowed fraction of dark matter decaying into dark radiation.
Galactic magnetic deflections and Centaurus A as a UHECR source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Glennys R.; Jansson, Ronnie; Feain, Ilana J.
2013-01-01
We evaluate the validity of leading models of the Galactic magnetic field for predicting UHECR deflections from Cen A. The Jansson-Farrar 2012 GMF model (JF12), which includes striated and random components as well as an out-of-plane contribution to the regular field not considered in other models, gives by far the best fit globally to all-sky data including the WMAP7 22 GHz synchrotron emission maps for Q, U and I and ≈40,000 extragalactic Rotation Measures (RMs). Here we test the models specifically in the Cen A region, using 160 well-measured RMs and the Polarized Intensity from WMAP, nearby but outside the Cen A radio lobes. The JF12 model predictions are in excellent agreement with the observations, justifying confidence in its predictions for deflections of UHECRs from Cen A. We find that up to six of the 69 Auger events above 55 EeV are consistent with originating in Cen A and being deflected ≤18°; in this case three are protons and three have Z = 2-4. Others of the 13 events within 18° must have another origin. In order for a random extragalactic magnetic field between Cen A and the Milky Way to appreciably alter these conclusions, its strength would have to be ≳80 nG, far larger than normally imagined.
High-precision spectra for dynamical Dark Energy cosmologies from constant-w models
NASA Astrophysics Data System (ADS)
Casarini, Luciano
2010-08-01
Spanning the whole functional space of cosmologies with any admissible DE state equation w(a) seems necessary in view of forthcoming observations, namely those aiming to provide a tomography of cosmic shear. In this paper I show that this task can be eased and that a suitable use of results for constant-w cosmologies can be sufficient. In more detail, I 'assign' six cosmologies, aiming to span the space of state equations w(a) = w_0 + w_a(1-a), for w_0 and w_a values consistent with the WMAP5 and WMAP7 releases, and run N-body simulations to work out their non-linear fluctuation spectra at various redshifts z. Such spectra are then compared with those of suitable auxiliary models, characterized by constant w. For each z a different auxiliary model is needed. Spectral discrepancies between the assigned and auxiliary models, up to k ≃ 2-3 h Mpc^{-1}, are shown to keep within 1%. Quite generally, discrepancies are smaller at greater z and exhibit a specific trend across the w_0-w_a plane. Besides aiming to simplify the evaluation of spectra for a wide range of models, this paper also outlines a specific danger for future studies of the DE state equation, as models fairly distant on the w_0-w_a plane can be easily confused.
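The auxiliary-model idea can be illustrated by choosing, at each z, the constant w whose comoving distance matches that of the w(a) = w_0 + w_a(1-a) model; the paper's actual prescription works at the level of spectra, so this distance-matching version, and the Ω_m value below, are assumptions of the sketch:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    OM = 0.27  # assumed flat matter density

    def E(z, w0, wa):
        # dimensionless Hubble rate for a CPL dark energy model
        a = 1.0 / (1.0 + z)
        de = (1 - OM) * a ** (-3 * (1 + w0 + wa)) * np.exp(-3 * wa * (1 - a))
        return np.sqrt(OM * (1 + z) ** 3 + de)

    def dist(z, w0, wa):
        # dimensionless comoving distance to redshift z
        return quad(lambda zz: 1.0 / E(zz, w0, wa), 0.0, z)[0]

    def equivalent_w(z, w0, wa):
        # constant w reproducing the comoving distance of the CPL model
        target = dist(z, w0, wa)
        return brentq(lambda w: dist(z, w, 0.0) - target, -2.0, -0.3)

    print(equivalent_w(1.0, w0=-0.9, wa=0.2))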
Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan
NASA Astrophysics Data System (ADS)
Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung
2010-08-01
Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changing magnitude and space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, using 27-year monthly average precipitation data obtained from 51 stations in Pakistan. The results of transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides greater accuracy than the non-transformed hierarchical Bayesian method.
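The Box-Cox step itself is standard; a minimal example with made-up, strictly positive precipitation values, with the exponent chosen by maximum likelihood via SciPy:

    import numpy as np
    from scipy import stats

    rain = np.array([1.2, 0.3, 5.6, 2.2, 0.8, 9.4, 3.1])  # made-up, > 0
    transformed, lam = stats.boxcox(rain)
    print(lam)           # fitted Box-Cox exponent
    print(transformed)   # approximately symmetrized values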
NASA Astrophysics Data System (ADS)
Kumar, Suresh; Xu, Lixin
2014-10-01
In this paper, we study a cosmological model in general relativity within the framework of a spatially flat Friedmann-Robertson-Walker space-time filled with ordinary (baryonic) matter, radiation, dark matter and dark energy, where the latter two components are described by Chevallier-Polarski-Linder equation-of-state parametrizations. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of the matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation-of-state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on the CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with cosmological-constant-type dark energy at the present epoch.
Cosmic microwave background snapshots: pre-WMAP and post-WMAP.
Bond, J Richard; Contaldi, Carlo; Pogosyan, Dmitry
2003-11-15
We highlight the remarkable evolution in the cosmic microwave background (CMB) power spectrum C_ℓ as a function of multipole ℓ over the past few years, and in the cosmological parameters for minimal inflation models derived from it: from anisotropy results before 2000; in 2000 and 2001 from Boomerang, Maxima and the Degree Angular Scale Interferometer (DASI), extending ℓ to ~1000; and in 2002 from the Cosmic Background Imager (CBI), Very Small Array (VSA), ARCHEOPS and Arcminute Cosmology Bolometer Array Receiver (ACBAR), extending ℓ to ~3000, with more from Boomerang and DASI as well. Pre-WMAP (pre-Wilkinson Microwave Anisotropy Probe) optimal band powers are in good agreement with each other and with the exquisite one-year WMAP results, unveiled in February 2003, which now dominate the ℓ ≲ 600 bands. These CMB experiments significantly increased the case for accelerated expansion in the early Universe (the inflationary paradigm) and at the current epoch (dark energy dominance) when they were combined with "prior" probabilities on the parameters. The minimal inflation parameter set, {ω_b, ω_cdm, Ω_tot, Ω_Λ, n_s, τ_C, σ_8}, is applied in the same way to the evolving data. C_ℓ database and Monte Carlo Markov Chain (MCMC) methods are shown to give similar values, which are highly stable over time and for different prior choices, with the increasing precision best characterized by decreasing errors on uncorrelated "parameter eigenmodes". Priors applied range from weak ones to stronger constraints from the expansion rate (HST-h), from cosmic acceleration from supernovae (SN1) and from galaxy clustering, gravitational lensing and local cluster abundance (LSS). After marginalizing over the other cosmic and experimental variables for the weak + LSS prior, the pre-WMAP data of January 2003 compared with the post-WMAP data of March 2003 give Ω_tot = 1.03^{+0.05}_{-0.04} compared with 1.02^{+0.04}_{-0.03}, consistent with (non-Baroque) inflation theory. Adding the flat Ω_tot = 1 prior, we find a nearly scale-invariant spectrum, n_s = 0.95^{+0.07}_{-0.04} compared with 0.97^{+0.02}_{-0.02}. The evidence for a logarithmic variation of the spectral tilt is ≲2σ. The densities are: for baryons, ω_b ≡ Ω_b h² = 0.0217^{+0.002}_{-0.002} (compared with 0.0228^{+0.001}_{-0.001}), near the Big Bang nucleosynthesis (BBN) estimate of 0.0214 ± 0.002; for CDM, ω_cdm ≡ Ω_cdm h² = 0.126^{+0.012}_{-0.012} (compared with 0.121^{+0.010}_{-0.010}); for the substantial dark (unclustered) energy, Ω_Λ ≈ 0.66^{+0.07}_{-0.09} (compared with 0.70^{+0.05}_{-0.05}). The dark energy pressure-to-density ratio w_Q is not well constrained by our weak + LSS prior, but adding SN1 gives w_Q ≲ -0.7 for January 2003 and March 2003, consistent with the w_Q = -1 cosmological constant case. We find σ_8 = 0.89^{+0.06}_{-0.07} (compared with 0.86^{+0.04}_{-0.04}), implying a sizable Sunyaev-Zel'dovich (SZ) effect from clusters and groups; the high-ℓ power found in the January 2003 data suggests σ_8 ≈ 0.94^{+0.08}_{-0.16} is needed to be SZ-compatible.
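"Parameter eigenmodes" are the eigenvectors of the parameter covariance matrix: linear combinations of parameters with uncorrelated errors. A two-parameter illustration with an assumed covariance:

    import numpy as np

    cov = np.array([[0.04, 0.03],
                    [0.03, 0.04]])     # strongly correlated pair (assumed)
    var, vec = np.linalg.eigh(cov)     # eigenmode variances and directions
    print(np.sqrt(var))  # one tight and one loose combination
    print(vec.T)         # rows are the eigenmode coefficient vectors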
Large-angle correlations in the cosmic microwave background
NASA Astrophysics Data System (ADS)
Efstathiou, George; Ma, Yin-Zhe; Hanson, Duncan
2010-10-01
It has been argued recently by Copi et al. 2009 that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, inflationary Lambda cold dark matter (ΛCDM) cosmology. We compare various estimators of the temperature correlation function showing how they depend on assumptions of statistical isotropy and how they perform on the Wilkinson Microwave Anisotropy Probe (WMAP) 5-yr Internal Linear Combination (ILC) maps with and without a sky cut. We show that the low multipole harmonics that determine the large-scale features of the temperature correlation function can be reconstructed accurately from the data that lie outside the sky cuts. The reconstructions are only weakly dependent on the assumed statistical properties of the temperature field. The temperature correlation functions computed from these reconstructions are in good agreement with those computed from the ILC map over the whole sky. We conclude that the large-scale angular correlation function for our realization of the sky is well determined. A Bayesian analysis of the large-scale correlations is presented, which shows that the data cannot exclude the standard ΛCDM model. We discuss the differences between our results and those of Copi et al. Either there exists a violation of statistical isotropy as claimed by Copi et al., or these authors have overestimated the significance of the discrepancy because of a posteriori choices of estimator, statistic and sky cut.
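The large-angle correlation function at issue here is the Legendre transform of the angular power spectrum, C(θ) = Σ_ℓ (2ℓ+1)/(4π) C_ℓ P_ℓ(cos θ); a direct evaluation:

    import numpy as np
    from numpy.polynomial import legendre

    def corr_from_cl(C, theta_deg):
        # C indexed from l = 0; evaluates C(theta) via a Legendre series
        l = np.arange(len(C))
        coeffs = (2 * l + 1) / (4 * np.pi) * np.asarray(C)
        return legendre.legval(np.cos(np.radians(theta_deg)), coeffs)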
Cosmology favoring extra radiation and sub-eV mass sterile neutrinos as an option.
Hamann, Jan; Hannestad, Steen; Raffelt, Georg G; Tamborra, Irene; Wong, Yvonne Y Y
2010-10-29
Precision cosmology and big-bang nucleosynthesis mildly favor extra radiation in the Universe beyond photons and ordinary neutrinos, lending support to the existence of low-mass sterile neutrinos. We use the WMAP 7-year data, small-scale cosmic microwave background observations from ACBAR, BICEP, and QuAD, the SDSS 7th data release, and measurement of the Hubble parameter from HST observations to derive credible regions for the assumed common mass scale m_s and effective number N_s of thermally excited sterile neutrino states. Our results are compatible with the existence of one or perhaps two sterile neutrinos, as suggested by LSND and MiniBooNE, if m_s is in the sub-eV range.
A Bayesian observer replicates convexity context effects in figure-ground perception.
Goldreich, Daniel; Peterson, Mary A
2012-01-01
Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty like sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handle this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is used to compute the most probable fault candidates.
Ade, P A R; Ahmed, Z; Aikin, R W; Alexander, K D; Barkats, D; Benton, S J; Bischoff, C A; Bock, J J; Bowens-Rubin, R; Brevik, J A; Buder, I; Bullock, E; Buza, V; Connors, J; Crill, B P; Duband, L; Dvorkin, C; Filippini, J P; Fliescher, S; Grayson, J; Halpern, M; Harrison, S; Hilton, G C; Hui, H; Irwin, K D; Karkare, K S; Karpel, E; Kaufman, J P; Keating, B G; Kefeli, S; Kernasovskiy, S A; Kovac, J M; Kuo, C L; Leitch, E M; Lueker, M; Megerian, K G; Netterfield, C B; Nguyen, H T; O'Brient, R; Ogburn, R W; Orlando, A; Pryke, C; Richter, S; Schwarz, R; Sheehy, C D; Staniszewski, Z K; Steinbach, B; Sudiwala, R V; Teply, G P; Thompson, K L; Tolan, J E; Tucker, C; Turner, A D; Vieregg, A G; Weber, A C; Wiebe, D V; Willmert, J; Wong, C L; Wu, W L K; Yoon, K W
2016-01-22
We present results from an analysis of all data taken by the BICEP2 and Keck Array cosmic microwave background (CMB) polarization experiments up to and including the 2014 observing season. This includes the first Keck Array observations at 95 GHz. The maps reach a depth of 50 nK deg in Stokes Q and U in the 150 GHz band and 127 nK deg in the 95 GHz band. We take auto- and cross-spectra between these maps and publicly available maps from WMAP and Planck at frequencies from 23 to 353 GHz. An excess over lensed ΛCDM is detected at modest significance in the 95×150 BB spectrum, and is consistent with the dust contribution expected from our previous work. No significant evidence for synchrotron emission is found in spectra such as 23×95, or for correlation between the dust and synchrotron sky patterns in spectra such as 23×353. We take the likelihood of all the spectra for a multicomponent model including lensed ΛCDM, dust, synchrotron, and a possible contribution from inflationary gravitational waves (as parametrized by the tensor-to-scalar ratio r) using priors on the frequency spectral behaviors of dust and synchrotron emission from previous analyses of WMAP and Planck data in other regions of the sky. This analysis yields an upper limit r_{0.05}<0.09 at 95% confidence, which is robust to variations explored in analysis and priors. Combining these B-mode results with the (more model-dependent) constraints from Planck analysis of CMB temperature plus baryon acoustic oscillations and other data yields a combined limit r_{0.05}<0.07 at 95% confidence. These are the strongest constraints to date on inflationary gravitational waves.
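The final inference step can be caricatured in a few lines of Python: a grid likelihood over r with a dust amplitude marginalized under a flat prior, and the 95% upper limit read off the posterior CDF. The bandpowers, templates, and error bars below are synthetic stand-ins, not BICEP/Keck data:

import numpy as np

# Synthetic BB bandpowers: d = r*s + A*f + noise (all values invented).
rng = np.random.default_rng(1)
s = np.array([0.8, 0.6, 0.4, 0.3, 0.2])      # tensor template per bandpower
f = np.array([0.1, 0.2, 0.4, 0.7, 1.0])      # dust template per bandpower
sigma = np.full(5, 0.05)
d = 0.0 * s + 0.05 * f + rng.normal(0, sigma)   # true r = 0, some dust

r_grid = np.linspace(0.0, 0.3, 301)
A_grid = np.linspace(0.0, 0.2, 201)
R, A = np.meshgrid(r_grid, A_grid, indexing="ij")
resid = d - (R[..., None] * s + A[..., None] * f)
chi2 = ((resid / sigma) ** 2).sum(-1)
like = np.exp(-0.5 * (chi2 - chi2.min()))
post_r = like.sum(axis=1)                    # marginalize dust (flat prior, uniform grid)
post_r /= post_r.sum()
cdf = np.cumsum(post_r)
print("95% upper limit on r:", r_grid[np.searchsorted(cdf, 0.95)])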
Planck 2013 results. V. LFI calibration
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cappellini, B.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Kangaslahti, P.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leach, S.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Osborne, S.; Paci, F.; Pagano, L.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Ricciardi, S.; Riller, T.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
We discuss the methods employed to photometrically calibrate the data acquired by the Low Frequency Instrument on Planck. Our calibration is based on a combination of the orbital dipole plus the solar dipole, caused respectively by the motion of the Planck spacecraft with respect to the Sun and by the motion of the solar system with respect to the cosmic microwave background (CMB) rest frame. The latter provides a signal of a few mK with the same spectrum as the CMB anisotropies and is visible throughout the mission. In this data release we rely on the characterization of the solar dipole as measured by WMAP. We also present preliminary results (at 44 GHz only) on the study of the orbital dipole, which agree with the WMAP value of the solar system speed within our uncertainties. We compute the calibration constant for each radiometer roughly once per hour, in order to keep track of changes in the detectors' gain. Since non-idealities in the optical response of the beams proved to be important, we implemented a fast convolution algorithm which considers the full beam response in estimating the signal generated by the dipole. Moreover, in order to further reduce the impact of residual systematics due to sidelobes, we estimated time variations in the calibration constant of the 30 GHz radiometers (the ones with the largest sidelobes) using the signal of an internal reference load at 4 K instead of the CMB dipole. We have estimated the accuracy of the LFI calibration following two strategies: (1) we have run a set of simulations to assess the impact of statistical errors and systematic effects in the instrument and in the calibration procedure; and (2) we have performed a number of internal consistency checks on the data and on the brightness temperature of Jupiter. Errors in the calibration of this Planck/LFI data release are expected to be about 0.6% at 44 and 70 GHz, and 0.8% at 30 GHz. The results at low and high ℓ are consistent with WMAP within uncertainties, and comparison of power spectra indicates good consistency in the absolute calibration with HFI (0.3%) and a 1.4σ discrepancy with WMAP (0.9%).
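The hourly gain estimate reduces, in its simplest form, to a linear regression of the timeline against the dipole template. A schematic Python version, ignoring the beam convolution, noise correlations, and destriping that the real pipeline handles (the function name and all numbers here are invented):

import numpy as np

def hourly_gains(tod, dipole, samples_per_hour):
    """Least-squares gain per hourly chunk: the calibration constant g
    minimizing |tod - g * dipole|^2 over each chunk (baseline removed)."""
    gains = []
    for start in range(0, len(tod), samples_per_hour):
        d = tod[start:start + samples_per_hour]
        t = dipole[start:start + samples_per_hour]
        d = d - d.mean()          # remove a per-chunk offset/baseline
        t = t - t.mean()
        gains.append(np.dot(d, t) / np.dot(t, t))
    return np.array(gains)

# Synthetic demo: a slowly drifting gain recovered from a dipole-dominated TOD.
rng = np.random.default_rng(0)
n, per_hour = 24 * 3600, 3600
phase = np.linspace(0, 40 * np.pi, n)
dipole = 3.35e-3 * np.cos(phase)              # ~3.35 mK solar dipole, in kelvin
true_gain = 1.0 + 0.02 * np.sin(np.arange(n) / n * 2 * np.pi)
tod = true_gain * dipole + rng.normal(0, 1e-3, n)
print(hourly_gains(tod, dipole, per_hour)[:4])   # ~1.00, drifting slowly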
Order priors for Bayesian network discovery with an application to malware phylogeny
Oyen, Diane; Anderson, Blake; Sentz, Kari; ...
2017-09-15
Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.
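A schematic of how a partial-order prior stays locally modular: it contributes a per-family log-penalty that depends only on a node and its parent set, so it can be added to any decomposable structure score. The penalty weight and the stubbed data score are assumptions for illustration, not the paper's formulation:

def order_prior_logscore(node, parents, order_pairs, lam=2.0):
    """Log prior for one family (node, parents) under a partial-order prior.

    order_pairs -- set of (a, b) pairs meaning 'a is believed to precede b'.
    An edge parent -> node violates the prior when the order says the node
    should precede that parent. The prior factorizes over families, which is
    the local modularity needed by score-based structure search.
    """
    violations = sum(1 for p in parents if (node, p) in order_pairs)
    return -lam * violations

def family_score(node, parents, data, order_pairs):
    # Stub: a real learner would use, e.g., a BDeu or BIC local data score here.
    local_data_score = 0.0
    return local_data_score + order_prior_logscore(node, parents, order_pairs)

# Example: prior knowledge that 'ancestor' samples precede 'descendant' ones.
order = {("A", "B"), ("A", "C"), ("B", "C")}
print(family_score("C", {"A", "B"}, None, order))   # no violations: 0.0
print(family_score("A", {"C"}, None, order))        # violates A-before-C: -2.0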
DAMPE electron-positron excess in leptophilic Z' model
NASA Astrophysics Data System (ADS)
Ghorbani, Karim; Ghorbani, Parsa Hossein
2018-05-01
Recently the DArk Matter Particle Explorer (DAMPE) has reported an excess in the cosmic-ray electron-positron flux, which has been interpreted as the signature of a dark matter particle with a mass of about 1.5 TeV. We propose a leptophilic Z' scenario including a Dirac fermion dark matter candidate which, besides explaining the observed DAMPE excess, is able to pass various experimental and observational constraints, including the relic density value from WMAP/Planck, the invisible Higgs decay bound at the LHC, the LEP bounds on electron-positron scattering, the muon anomalous magnetic moment constraint, Fermi-LAT data, and finally the direct detection limits from XENON1T/LUX. By computing the electron-positron flux produced by a dark matter particle with a mass of about 1.5 TeV, we show that the model predicts the peak observed by DAMPE.
Bayesian estimates of the incidence of rare cancers in Europe.
Botta, Laura; Capocaccia, Riccardo; Trama, Annalisa; Herrmann, Christian; Salmerón, Diego; De Angelis, Roberta; Mallone, Sandra; Bidoli, Ettore; Marcos-Gragera, Rafael; Dudek-Godeau, Dorota; Gatta, Gemma; Cleries, Ramon
2018-04-21
The RARECAREnet project has updated the estimates of the burden of the 198 rare cancers in each European country. Suspecting that scant data could affect the reliability of statistical analysis, we employed a Bayesian approach to estimate the incidence of these cancers. We analyzed about 2,000,000 rare cancers diagnosed in 2000-2007, provided by 83 population-based cancer registries from 27 European countries. We considered European incidence rates (IRs), calculated over all the data available in RARECAREnet, as a valid a priori to merge with country-specific observed data. Thus we provided (1) Bayesian estimates of IRs and the yearly numbers of cases of rare cancers in each country; (2) the expected time (T) in years needed to observe one new case; and (3) practical criteria to decide when to use the Bayesian approach. Bayesian and classical estimates did not differ much; substantial differences (>10%) ranged from 77 rare cancers in Iceland to 14 in England. The smaller the population, the larger the number of rare cancers needing a Bayesian approach. Bayesian estimates were useful for cancers with fewer than 150 observed cases in a country during the study period; this occurred mostly when the population of the country was small. For the first time, Bayesian estimates of IRs and the yearly expected numbers of cases for each rare cancer in each individual European country were calculated. Moreover, the indicator T is useful to convey incidence estimates for exceptionally rare cancers and in small countries, where it can far exceed the professional lifespan of a medical doctor.
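The flavor of such estimates can be conveyed with the standard Gamma-Poisson conjugate construction in Python, shrinking a country's rate toward the European rate and converting the posterior rate into the indicator T. The prior weight and all numbers are invented; the project's actual prior specification may differ:

def bayes_incidence(cases, person_years, eu_rate, prior_weight_py=5e6):
    """Gamma-Poisson shrinkage of a country rate toward the European rate.

    The European incidence rate acts as the prior mean, with a weight
    equivalent to `prior_weight_py` person-years of observation (an
    illustrative choice). Returns the posterior mean rate per person-year.
    """
    a = eu_rate * prior_weight_py          # Gamma shape
    b = prior_weight_py                    # Gamma rate (person-years)
    return (a + cases) / (b + person_years)

# A very rare cancer in a small country (all numbers invented):
rate = bayes_incidence(cases=3, person_years=8 * 3.2e5,  # 8 years, pop. 320k
                       eu_rate=2.0e-6)                   # 2 cases/million/yr
yearly_cases = rate * 3.2e5
T = 1.0 / yearly_cases                     # expected years to see one new case
print(f"posterior rate {rate:.2e}/py, {yearly_cases:.2f} cases/yr, T = {T:.1f} yr")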
Can particle-creation phenomena replace dark energy?
NASA Astrophysics Data System (ADS)
Debnath, Subhra; Sanyal, Abhik Kumar
2011-07-01
Particle creation at the expense of the gravitational field might be sufficient to explain the cosmic evolution history, without the need of dark energy at all. This phenomenon was investigated in a recent work by Lima et al (Class. Quantum Grav. 2008 25 205006), assuming particle creation at the cost of gravitational energy in the late Universe. However, that model does not satisfy the WMAP constraint on the matter-radiation equality (Steigman et al 2009 J. Cosmol. Astropart. Phys. JCAP06(2009)033). Here, we suggest a model, in the same framework, which fits SNIa data at low redshift as well as the early integrated Sachs-Wolfe constraint on the matter-radiation equality determined by WMAP at high redshift. Such a model requires the presence of nearly 26% primeval matter in the form of baryons and cold dark matter.
Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies
ERIC Educational Resources Information Center
Chen, Jianshen; Kaplan, David
2015-01-01
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…
C-BASS: The C-Band All Sky Survey
NASA Astrophysics Data System (ADS)
Pearson, Timothy J.; C-BASS Collaboration
2016-06-01
The C-Band All Sky Survey (C-BASS) is a project to image the whole sky at a wavelength of 6 cm (frequency 5 GHz), measuring both the brightness and the polarization of the sky. Correlation polarimeters are mounted on two separate telescopes, one at the Owens Valley Radio Observatory (OVRO) in California and another in South Africa, allowing C-BASS to map the whole sky. The OVRO instrument has completed observations for the northern part of the survey. We are working on final calibration of intensity and polarization. The southern instrument has recently started observations for the southern part of the survey from its site at Klerefontein near Carnarvon in South Africa. The principal aim of C-BASS is to allow the subtraction of polarized Galactic synchrotron emission from the data produced by CMB polarization experiments, such as WMAP, Planck, and dedicated B-mode polarization experiments. In addition it will contribute to studies of: (1) the local (< 1 kpc) Galactic magnetic field and cosmic-ray propagation; (2) the distribution of the anomalous dust emission, its origin and the physical processes that affect it; (3) modeling of Galactic total intensity emission, which may allow CMB experiments access to the currently inaccessible region close to the Galactic plane. Observations at many wavelengths from radio to infrared are needed to fully understand the foregrounds. At 5 GHz, C-BASS maps synchrotron polarization with minimal corruption by Faraday rotation, and complements the full-sky maps from WMAP and Planck. I will present the project status, show results of component separation in selected sky regions, and describe the northern survey data products. C-BASS (http://www.astro.caltech.edu/cbass/) is a collaborative project between the Universities of Oxford and Manchester in the UK, the California Institute of Technology (supported by the National Science Foundation and NASA) in the USA, the Hartebeesthoek Radio Astronomy Observatory (supported by the Square Kilometre Array project) in South Africa, and the King Abdulaziz City for Science and Technology (KACST) in Saudi Arabia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pittori, Carlotta; Cavazzuti, Elisabetta; Colafrancesco, Sergio
2011-11-29
We take into account the constraints from the observed extragalactic γ-ray background to estimate the maximum duty cycle allowed for a selected sample of WMAP blazars, in order for them to be detectable by the AGILE and GLAST γ-ray experiments. For the nominal sensitivity values of both instruments, we identify a subset of sources which can in principle be detectable also in a steady state without over-predicting the extragalactic background. This work is based on the results of a recently derived blazar radio logN-logS obtained by combining several multi-frequency surveys.
Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837
ERIC Educational Resources Information Center
Levy, Roy
2014-01-01
Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
The Universe Comes into Sharper Focus
2013-03-21
This graphic illustrates the evolution of satellites designed to measure the ancient light left over from the big bang that created our universe 13.8 billion years ago: NASA's COBE (left) and WMAP (middle), and ESA's Planck (right).
Extreme data compression for the CMB
NASA Astrophysics Data System (ADS)
Zablocki, Alan; Dodelson, Scott
2016-04-01
We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
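A compact numpy sketch in the spirit of this compression: one weighting vector per parameter, built from the inverse covariance and the derivatives of the mean spectrum, so that for a linearized model the compressed statistics preserve the Fisher matrix. The toy two-parameter "spectrum" and covariance are invented:

import numpy as np

ells = np.arange(2, 200)
def model(theta):                      # toy spectrum: amplitude and tilt (invented)
    A, n = theta
    return A * (ells / 80.0) ** n * np.exp(-ells / 120.0)

theta0 = np.array([1.0, 0.0])
C = np.diag((0.05 * model(theta0)) ** 2)        # diagonal data covariance
Cinv = np.linalg.inv(C)

eps = 1e-5                                      # numerical derivatives of the mean
dmu = np.array([(model(theta0 + eps * e) - model(theta0 - eps * e)) / (2 * eps)
                for e in np.eye(2)])

F_full = dmu @ Cinv @ dmu.T                     # Fisher matrix of the full data
B = Cinv @ dmu.T                                # one weighting vector per parameter
M = dmu @ B                                     # response of the compressed means
F_comp = M @ np.linalg.inv(B.T @ C @ B) @ M.T   # Fisher matrix after compression
print(np.allclose(F_full, F_comp))              # True: no information lost

Plotting the columns of B against l is what gives the "intuitive feel" the abstract mentions: each vector shows where on the spectrum a parameter does its work.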
First measurement of the bulk flow of nearby galaxies using the cosmic microwave background
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Afshordi, Niayesh; Hudson, Michael J.
2013-04-01
Peculiar velocities in the nearby Universe can be measured via the kinetic Sunyaev-Zel'dovich (kSZ) effect. Using a statistical method based on an optimized cross-correlation with nearby galaxies, we extract the kSZ signal generated by the plasma halos of galaxies from the cosmic microwave background (CMB) temperature anisotropies observed by the Wilkinson Microwave Anisotropy Probe (WMAP). Marginalizing over the thermal Sunyaev-Zel'dovich contribution from clusters of galaxies, possible unresolved point source contamination, and Galactic foregrounds, we find a kSZ bulk flow signal present at the ~90 per cent confidence level in the seven-year WMAP data. When only galaxies within 50 h^-1 Mpc are included in the kSZ template, we find a bulk flow in the CMB frame of |V| = 533 ± 263 km s^-1, in the direction l = 324° ± 27°, b = -7° ± 17°, consistent with bulk flow measurements on a similar scale using classical distance indicators. We show how this comparison constrains, for the first time, the (ionized) baryonic budget in the local universe. On very large (~500 h^-1 Mpc) scales, we find a 95 per cent upper limit of 470 km s^-1, inconsistent with some analyses of the bulk flow of clusters from the kSZ. We estimate that the significance of the bulk flow signal may increase to 3σ-5σ using data from the Planck probe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajian, Amir; Bond, J. Richard; Battaglia, Nicholas
We measure a significant correlation between the thermal Sunyaev-Zel'dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum. We check our measurements for contamination from dusty galaxies using the cross correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on cosmological parameters that determine the growth of structure (σ_8 and Ω_M) and scales as σ_8^7.4 and Ω_M^1.9 around multipole ℓ ~ 1000. We constrain σ_8 and Ω_M from the cross-power spectrum to be σ_8(Ω_M/0.30)^0.26 = 0.8 ± 0.02. Since this cross spectrum produces a tight constraint in the σ_8-Ω_M plane, the errors on a σ_8 constraint will be mostly limited by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross-spectrum measurement. In principle this analysis can be extended beyond σ_8 and Ω_M to constrain dark energy or the sum of the neutrino masses.
Cosmology Favoring Extra Radiation and Sub-eV Mass Sterile Neutrinos as an Option
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, Jan; Hannestad, Steen; Raffelt, Georg G.
2010-10-29
Precision cosmology and big-bang nucleosynthesis mildly favor extra radiation in the Universe beyond photons and ordinary neutrinos, lending support to the existence of low-mass sterile neutrinos. We use the WMAP 7-year data, small-scale cosmic microwave background observations from ACBAR, BICEP, and QUaD, the SDSS 7th data release, and measurement of the Hubble parameter from HST observations to derive credible regions for the assumed common mass scale m_s and effective number N_s of thermally excited sterile neutrino states. Our results are compatible with the existence of one or perhaps two sterile neutrinos, as suggested by LSND and MiniBooNE, if m_s is in the sub-eV range.
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
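The heart of such an algorithm, returning a full posterior rather than a point estimate, fits in a short grid computation. This one-channel caricature with an invented saturating forward model also reproduces the weak high-rain-rate constraint the authors describe:

import numpy as np

# Grid posterior for rain rate R from one brightness temperature (toy numbers).
R = np.linspace(0.0, 50.0, 2001)                 # rain rate grid, mm/h
prior = np.exp(-R / 5.0)
prior /= prior.sum()                             # broad exponential prior

def tb_model(rr):                                # saturating forward model (invented)
    return 180.0 + 100.0 * (1.0 - np.exp(-rr / 8.0))

def posterior(tb_obs, sigma=3.0):
    like = np.exp(-0.5 * ((tb_obs - tb_model(R)) / sigma) ** 2)
    post = like * prior
    return post / post.sum()

for tb in (200.0, 270.0):                        # light rain vs near saturation
    p = posterior(tb)
    mean = (R * p).sum()
    mode = R[np.argmax(p)]
    print(f"Tb={tb:.0f} K  posterior mean={mean:5.1f}  mode={mode:5.1f} mm/h")

Near saturation the likelihood barely distinguishes neighboring rain rates, so the prior dominates and the posterior mean and mode separate, which is the bias mechanism discussed above.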
Bayesian accounts of covert selective attention: A tutorial review.
Vincent, Benjamin T
2015-05-01
Decision making and optimal observer models offer an important theoretical approach to the study of covert selective attention. While their probabilistic formulation allows quantitative comparison to human performance, the models can be complex and their insights are not always immediately apparent. Part 1 establishes the theoretical appeal of the Bayesian approach, and introduces the way in which probabilistic approaches can be applied to covert search paradigms. Part 2 presents novel formulations of Bayesian models of 4 important covert attention paradigms, illustrating optimal observer predictions over a range of experimental manipulations. Graphical model notation is used to present models in an accessible way and Supplementary Code is provided to help bridge the gap between model theory and practical implementation. Part 3 reviews a large body of empirical and modelling evidence showing that many experimental phenomena in the domain of covert selective attention are a set of by-products. These effects emerge as the result of observers conducting Bayesian inference with noisy sensory observations, prior expectations, and knowledge of the generative structure of the stimulus environment.
Flood quantile estimation at ungauged sites by Bayesian networks
NASA Astrophysics Data System (ADS)
Mediero, L.; Santillán, D.; Garrote, L.
2012-04-01
Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of the magnitude of floods, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals of the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which has been widely and successfully applied to many scientific fields like medicine and informatics, but application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as they give a probability distribution as the result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data are scarce, a stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of observed physical and climatic variables in the homogeneous region. The synthetic flood quantiles were stochastically generated taking the regression equation as a basis. The learnt Bayesian network was validated by the reliability diagram, the Brier score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. Summarising, flood quantile estimation through Bayesian networks supplies information about the prediction uncertainty, as a probability distribution function of discharges is given as the result. Therefore, the Bayesian network model has application as a decision support for water resources planning and management.
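A small sketch of that validation step, with synthetic forecasts in place of the network's outputs (the Bayesian network itself is not reproduced here):

import numpy as np

def brier_score(p_forecast, outcome):
    """Mean squared error of probabilistic forecasts of a binary event
    (here: 'flood quantile exceeded'), as used to validate the network."""
    return np.mean((p_forecast - outcome) ** 2)

def reliability_table(p_forecast, outcome, bins=5):
    """Observed event frequency per forecast-probability bin (reliability diagram)."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(p_forecast, edges) - 1, 0, bins - 1)
    return [(edges[k], edges[k + 1], outcome[idx == k].mean())
            for k in range(bins) if np.any(idx == k)]

rng = np.random.default_rng(4)
p = rng.uniform(0, 1, 5000)                       # synthetic forecast probabilities
y = (rng.uniform(0, 1, 5000) < p).astype(float)   # well-calibrated outcomes
print("Brier:", brier_score(p, y))
for lo, hi, freq in reliability_table(p, y):
    print(f"forecast [{lo:.1f},{hi:.1f}) -> observed {freq:.2f}")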
Learning Bayesian Networks from Correlated Data
NASA Astrophysics Data System (ADS)
Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola
2016-05-01
Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.
Sparse Bayesian Inference and the Temperature Structure of the Solar Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Harry P.; Byers, Jeff M.; Crump, Nicholas A.
Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are "inverted" to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.
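The full sparse Bayesian machinery is beyond a snippet, but a crude stand-in conveys the idea: a MAP fit of nonnegative basis-function weights under a Laplace (sparsity) prior, which for w >= 0 reduces to a smooth L1-penalized least squares. The response matrix, basis functions, and noise level below are all invented:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
logT = np.linspace(5.5, 7.5, 41)                 # temperature grid
centers = np.linspace(5.6, 7.4, 10)              # basis-function centers
basis = np.exp(-0.5 * ((logT[:, None] - centers) / 0.15) ** 2)
resp = rng.uniform(0.0, 1.0, (12, 41))           # toy 'emission-line responses'

w_true = np.zeros(10)
w_true[[3, 7]] = [2.0, 1.0]                      # a sparse true solution
y = resp @ basis @ w_true
sigma = 0.05 * y.max()
y = y + rng.normal(0, sigma, y.shape)

def neglogpost(w, alpha=2.0):
    r = (y - resp @ basis @ w) / sigma
    return 0.5 * np.dot(r, r) + alpha * w.sum()  # Laplace prior == L1 for w >= 0

fit = minimize(neglogpost, np.full(10, 0.1), bounds=[(0, None)] * 10,
               method="L-BFGS-B")
print(np.round(fit.x, 2))                        # weight concentrates on few bases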
Patchy screening of the cosmic microwave background by inhomogeneous reionization
NASA Astrophysics Data System (ADS)
Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan
2013-02-01
We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole L_C. We relate these parameters to a comoving coherence scale, or bubble size, R_C, in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of "Swiss cheese" reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables
ERIC Educational Resources Information Center
Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan
2017-01-01
We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…
A Bitter Pill: The Cosmic Lithium Problem
NASA Astrophysics Data System (ADS)
Fields, Brian
2014-03-01
Primordial nucleosynthesis describes the production of the lightest nuclides in the first three minutes of cosmic time. We will discuss the transformative influence of the WMAP and Planck determinations of the cosmic baryon density. Coupled with nucleosynthesis theory, these measurements make tight predictions for the primordial light element abundances: deuterium observations agree spectacularly with these predictions, helium observations are in good agreement, but lithium observations (in ancient halo stars) are significantly discrepant; this is the "lithium problem." Over the past decade, the lithium discrepancy has become more severe, and very recently the solution space has shrunk. A solution due to new nuclear resonances has now been essentially ruled out experimentally. Stellar evolution solutions remain viable but must be finely tuned. Observational systematics are now being probed by qualitatively new methods of lithium observation. Finally, new physics solutions are now strongly constrained by the combination of the precision baryon determination by Planck and the need to match the D/H abundances now measured to unprecedented precision at high redshift. Supported in part by NSF grant PHY-1214082.
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
The Markov chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimation with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess how the primordial helium fraction in the universe can be constrained by present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better; on the other hand they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
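Of SCoPE's ingredients, delayed rejection is the easiest to show in isolation. A one-dimensional Python sketch with symmetric Gaussian proposals, using the standard Tierney-Mira second-stage acceptance ratio (pre-fetching and the inter-chain covariance update are not shown):

import math, random

def norm_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def dr_metropolis(logpi, x0, n, s1=2.0, s2=0.5):
    """Metropolis with one delayed-rejection stage (symmetric Gaussian proposals)."""
    a1 = lambda a, b: min(1.0, math.exp(logpi(b) - logpi(a)))
    chain, x = [], x0
    for _ in range(n):
        y1 = random.gauss(x, s1)                   # bold first-stage proposal
        if random.random() < a1(x, y1):
            x = y1
        else:
            y2 = random.gauss(x, s2)               # timid second-stage proposal
            # Tierney-Mira second-stage ratio; the symmetric q2 factors cancel.
            num = math.exp(logpi(y2)) * norm_pdf(y1, y2, s1) * (1 - a1(y2, y1))
            den = math.exp(logpi(x)) * norm_pdf(y1, x, s1) * (1 - a1(x, y1))
            if den > 0 and random.random() < min(1.0, num / den):
                x = y2
        chain.append(x)
    return chain

c = dr_metropolis(lambda t: -0.5 * t * t, 0.0, 20000)   # standard normal target
print(sum(c) / len(c), sum(t * t for t in c) / len(c))  # ~0 and ~1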
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
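The cell-by-cell mapping can be sketched with a best-first frontier over integer grid indices: always expand the most likely unexplored cell, and stop once everything left on the frontier sits below a log-likelihood threshold. The toy two-parameter Gaussian likelihood and the threshold are invented; evidence integration and conditional-distribution extraction are omitted:

import heapq

def snake_explore(loglike, origin, step, drop=12.5):
    """Map a likelihood grid outward from a peak in order of decreasing
    likelihood; `drop` is how far (in log-likelihood) below the best cell
    we bother to go. Cells are integer offsets from `origin`."""
    def value(idx):
        return loglike(tuple(o + i * s for o, i, s in zip(origin, idx, step)))
    start = (0,) * len(origin)
    best = value(start)
    frontier = [(-best, start)]            # max-heap via negated log-likelihood
    explored = {start: best}
    while frontier:
        negL, cell = heapq.heappop(frontier)
        if -negL < best - drop:
            break                          # all remaining cells are negligible
        for dim in range(len(cell)):
            for sgn in (-1, 1):
                nb = list(cell)
                nb[dim] += sgn
                nb = tuple(nb)
                if nb not in explored:
                    L = value(nb)
                    explored[nb] = L
                    best = max(best, L)
                    heapq.heappush(frontier, (-L, nb))
    return explored

ll = lambda p: -0.5 * ((p[0] - 1.0) ** 2 / 0.04 + (p[1] + 0.5) ** 2 / 0.01)
cells = snake_explore(ll, origin=(1.0, -0.5), step=(0.02, 0.01))
print(len(cells), "cells evaluated")       # far fewer than a brute-force grid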
Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field
NASA Astrophysics Data System (ADS)
Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen
2017-10-01
Traffic video imagery is dynamic: its background and foreground change continually, which results in occlusion. In this case, general methods struggle to produce accurate image segmentation. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for motion sequence images with the Markov property; then, following Bayes' rule, it uses the interaction between the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, to obtain the maximum a posteriori estimate of the label field, and it uses the ICM algorithm to extract the moving object, completing the segmentation. Finally, the plain ST-MRF method and the Bayesian method combined with ST-MRF were compared. Experimental results show that segmentation with the Bayesian ST-MRF algorithm is faster than with ST-MRF alone, with a small computing workload; especially in heavy-traffic dynamic scenes the method achieves a better segmentation effect.
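A minimal numpy version of the ICM extraction step named above: each pixel repeatedly takes the label minimizing a Gaussian data term plus a Potts smoothness term over its 4-neighbourhood. The image, class means, and weights are synthetic, and the paper's spatio-temporal energy terms are reduced here to the spatial part:

import numpy as np

def icm_segment(img, mus, sigma=0.3, beta=1.5, iters=10):
    """MAP label field by Iterated Conditional Modes: each pixel takes the
    label minimizing a Gaussian data term plus a Potts smoothness term
    counting disagreeing 4-neighbours."""
    labels = np.abs(img[..., None] - mus).argmin(-1)     # init: nearest mean
    H, W = img.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                best, best_u = labels[y, x], np.inf
                for k in range(len(mus)):
                    u = (img[y, x] - mus[k]) ** 2 / (2 * sigma ** 2)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != k:
                            u += beta
                    if u < best_u:
                        best, best_u = k, u
                labels[y, x] = best
    return labels

rng = np.random.default_rng(3)
truth = np.zeros((40, 40), int)
truth[10:30, 15:35] = 1                                  # a 'vehicle' blob
img = np.choose(truth, [0.2, 0.8]) + rng.normal(0, 0.25, truth.shape)
seg = icm_segment(img, mus=np.array([0.2, 0.8]))
print("pixel accuracy:", (seg == truth).mean())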
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background: Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose: We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results: For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations: Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions: The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process.
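The computation itself is compact for a single-arm binomial trial with a conjugate Beta prior. A sketch, where the final-analysis success rule and all numbers are assumptions for illustration:

from scipy.stats import beta, betabinom

def predictive_prob_success(s, n, N, p0=0.3, post_thresh=0.975, a=1.0, b=1.0):
    """Probability, at an interim with s responses in n patients, that the
    final analysis of N patients will declare success, where success means
    P(p > p0 | all data) > post_thresh under a Beta(a, b) prior."""
    m = N - n                                   # patients still to be observed
    pp = 0.0
    for j in range(m + 1):                      # possible future responses j
        # posterior tail probability at the final analysis
        final_tail = beta.sf(p0, a + s + j, b + (N - (s + j)))
        if final_tail > post_thresh:
            # predictive probability of exactly j future responses
            pp += betabinom.pmf(j, m, a + s, b + n - s)
    return pp

print(predictive_prob_success(s=12, n=30, N=60))   # e.g. a futility check

Summing the Beta-binomial predictive mass only over the future outcomes that would make the final analysis succeed is what makes the quantity "predictive": it accounts explicitly for the data remaining to be observed.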
Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Meegan, Charles A.
1997-01-01
This presentation will describe two applications of Bayesian statistics to gamma-ray bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB 970228.
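The first application can be caricatured in a few lines: the cosmological hypothesis pins the true moment at zero (isotropy), the galactic hypothesis spreads it uniformly over positive values, and the Bayes factor is the ratio of marginal likelihoods. The observed moment, its error, and the upper bound below are invented, so the output will not be the paper's factor of ~300:

from scipy.stats import norm

def bayes_factor_isotropy(m_obs, sigma, m_max):
    """Bayes factor (cosmological/galactic) for an angular-moment statistic.

    Cosmological hypothesis: the true moment is 0 (isotropy), so the
    observed value is ~ N(0, sigma). Galactic hypothesis: the true moment
    is uniform on [0, m_max]; its marginal likelihood is the average of
    N(m_obs; m, sigma) over that range.
    """
    like_cosmo = norm.pdf(m_obs, 0.0, sigma)
    like_gal = (norm.cdf(m_obs / sigma)
                - norm.cdf((m_obs - m_max) / sigma)) / m_max
    return like_cosmo / like_gal

# Invented numbers: a small observed dipole moment with known error.
print(bayes_factor_isotropy(m_obs=0.01, sigma=0.02, m_max=0.5))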
Bayesian statistics: estimating plant demographic parameters
James S. Clark; Michael Lavine
2001-01-01
There are times when external information should be brought to bear on an ecological analysis; experiments are never conducted in a knowledge-free context. The inference we draw from an observation may depend on everything else we know about the process. Bayesian analysis is a method that brings outside evidence into the analysis of experimental and observational data...
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
Tribrid Inflation in Supergravity
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Dutta, Koushik; Kostka, Philipp M.
We propose a novel class of F-term hybrid inflation models in supergravity (SUGRA) where the η-problem is resolved using either a Heisenberg symmetry or a shift symmetry of the Kähler potential. In addition to the inflaton and the waterfall field, this class (referred to as tribrid inflation) contains a third "driving" field which contributes the large vacuum energy during inflation by its F-term. In contrast to the "standard" hybrid scenario, it has several attractive features due to the property of vanishing inflationary superpotential (Winf = 0) during inflation. Quantum corrections induced by symmetry breaking terms in the superpotential generate a slope of the potential and lead to a spectral tilt consistent with recent WMAP observations.
Extreme data compression for the CMB
Zablocki, Alan; Dodelson, Scott
2016-04-28
We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. Furthermore, after showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
First Year Wilkinson Microwave Anisotropy Probe(WMAP)Observations: The Angular Power Spectrum
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Spergel, D. N.; Verde, L.; Hill, R. S.; Meyer, S. S.; Barnes, C.; Bennett, C. L.; Halpern, M.; Jarosik, N.; Kogut, A.
2003-01-01
We present the angular power spectrum derived from the first-year Wilkinson Microwave Anisotropy Probe (WMAP) sky maps. We study a variety of power spectrum estimation methods and data combinations and demonstrate that the results are robust. The data are modestly contaminated by diffuse Galactic foreground emission, but we show that a simple Galactic template model is sufficient to remove the signal. Point sources produce a modest contamination in the low frequency data. After masking approximately 700 known bright sources from the maps, we estimate residual sources contribute approximately 3500 μK² at 41 GHz, and approximately 130 μK² at 94 GHz, to the power spectrum [l(l+1)C_l/2π] at l = 1000. Systematic errors are negligible compared to the (modest) level of foreground emission. Our best estimate of the power spectrum is derived from 28 cross-power spectra of statistically independent channels. The final spectrum is essentially independent of the noise properties of an individual radiometer. The resulting spectrum provides a definitive measurement of the CMB power spectrum, with uncertainties limited by cosmic variance, up to l ≈ 350. The spectrum clearly exhibits a first acoustic peak at l = 220 and a second acoustic peak at l ≈ 540, and it provides strong support for adiabatic initial conditions. Researchers have analyzed the TE power spectrum and present evidence for a relatively high optical depth and an early period of cosmic reionization. Among other things, this implies that the temperature power spectrum has been suppressed by approximately 30% on degree angular scales, due to secondary scattering.
ERIC Educational Resources Information Center
Griffiths, Thomas L.; Tenenbaum, Joshua B.
2011-01-01
Predicting the future is a basic problem that people have to solve every day and a component of planning, decision making, memory, and causal reasoning. In this article, we present 5 experiments testing a Bayesian model of predicting the duration or extent of phenomena from their current state. This Bayesian model indicates how people should…
WMAP7 constraints on oscillations in the primordial power spectrum
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter
2012-03-01
We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.
A Test of Bayesian Observer Models of Processing in the Eriksen Flanker Task
ERIC Educational Resources Information Center
White, Corey N.; Brown, Scott; Ratcliff, Roger
2012-01-01
Two Bayesian observer models were recently proposed to account for data from the Eriksen flanker task, in which flanking items interfere with processing of a central target. One model assumes that interference stems from a perceptual bias to process nearby items as if they are compatible, and the other assumes that the interference is due to…
Neutrino cosmology after WMAP 7-year data and LHC first Z' bounds.
Anchordoqui, Luis Alfredo; Goldberg, Haim
2012-02-24
The gauge-extended U(1)_C × SU(2)_L × U(1)_{I_R} × U(1)_L model elevates the global symmetries of the standard model (baryon number B and lepton number L) to local gauge symmetries. The U(1)_L symmetry leads to three superweakly interacting right-handed neutrinos. This also renders a B-L symmetry nonanomalous. The superweak interactions of these Dirac states permit ν_R decoupling just above the QCD phase transition: 175 ≲ T_dec(ν_R)/MeV ≲ 250. In this transitional region, the residual temperature ratio between ν_L and ν_R generates extra relativistic degrees of freedom at the BBN and CMB epochs. Consistency with both WMAP 7-year data and recent estimates of the primordial ⁴He mass fraction is achieved for 3
Cosmological Parameters From Pre-Planck CMB Measurements: A 2017 Update
NASA Technical Reports Server (NTRS)
Calabrese, Erminia; Hlozek, Renée A.; Bond, J. Richard; Devlin, Mark J.; Dunkley, Joanna; Halpern, Mark; Hincks, Adam D.; Irwin, Kent D.; Kosowsky, Arthur; Moodley, Kavilan;
2017-01-01
We present cosmological constraints from the combination of the full-mission nine-year WMAP release and small-scale temperature data from the pre-Planck Atacama Cosmology Telescope (ACT) and South Pole Telescope (SPT) generation of instruments. This is an update of the analysis presented in Calabrese et al. [Phys. Rev. D 87, 103012 (2013)], and highlights the impact on ΛCDM cosmology of a 0.06 eV massive neutrino, which was assumed in the Planck analysis but not in the ACT/SPT analyses, and of a Planck-cleaned measurement of the optical depth to reionization. We show that cosmological constraints are now strong enough that small differences in assumptions about reionization and neutrino mass give systematic differences which are clearly detectable in the data. We recommend that these updated results be used when comparing cosmological constraints from WMAP, ACT and SPT with other surveys or with current and future full-mission Planck cosmology. Cosmological parameter chains are publicly available on NASA's LAMBDA data archive.
Non-minimal derivative coupling gravity in cosmology
NASA Astrophysics Data System (ADS)
Gumjudpai, Burin; Rangdee, Phongsaphat
2015-11-01
We give a brief review of the non-minimal derivative coupling (NMDC) scalar field theory, in which there is non-minimal coupling between the scalar field derivative term and the Einstein tensor. We assume that the expansion is of power-law type or super-acceleration type at small redshift. The Lagrangian includes the NMDC term, a free kinetic term, a cosmological constant term and a barotropic matter term. For a value of the coupling constant that is compatible with inflation, we use the combined WMAP9 (WMAP9 + eCMB + BAO + H_0) dataset, the PLANCK + WP dataset, and the PLANCK TT, TE, EE + lowP + Lensing + ext datasets to find the value of the cosmological constant in the model. Modeling the expansion with a power law gives a negative cosmological constant, while the phantom power-law (super-acceleration) expansion gives a positive cosmological constant with a large error bar. The value obtained is of the same order as in the ΛCDM model, since at late times the NMDC effect is tiny due to the small curvature.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate the uncertainty resulting from the fit to data through to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare the results of our framework with other fitting procedures, interpreting their underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
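For a linear toy model, the role of a naturalness prior in Bayesian parameter estimation can be sketched with a conjugate Gaussian calculation. The design matrix, noise level, and prior width below are illustrative assumptions, not the EFT setup of the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "observables": y = X @ theta + noise, with the columns of X
# playing the role of operator matrix elements and theta the LECs.
n_obs, n_lec, sigma = 50, 3, 0.1
X = rng.normal(size=(n_obs, n_lec))
theta_true = np.array([0.5, -1.2, 0.8])          # "natural" O(1) values
y = X @ theta_true + sigma * rng.normal(size=n_obs)

# Naturalness prior: theta ~ N(0, tau^2 I) with tau of order one.
tau = 1.0

# Conjugate Gaussian posterior for a linear model:
#   cov  = (X^T X / sigma^2 + I / tau^2)^(-1)
#   mean = cov @ X^T y / sigma^2
cov = np.linalg.inv(X.T @ X / sigma**2 + np.eye(n_lec) / tau**2)
mean = cov @ X.T @ y / sigma**2

for i, (m, s) in enumerate(zip(mean, np.sqrt(np.diag(cov)))):
    print(f"LEC {i}: {m:+.3f} +/- {s:.3f}")
```

The prior term I/τ² in the precision matrix is what pulls weakly constrained combinations of parameters back toward natural sizes, which is the effect the abstract describes.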
Bayesian least squares deconvolution
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
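A minimal sketch of the idea, assuming the standard linear LSD model V = W Z + noise with a squared-exponential GP prior on the common profile Z; all sizes, line depths and hyperparameters below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multiline spectrum: V = W @ Z + noise, where each block of W places
# a scaled copy of the common profile Z at one spectral line.
p, n_lines = 40, 200
v = np.linspace(-1, 1, p)                    # velocity grid
z_true = -0.3 * np.exp(-(v / 0.25) ** 2)     # common line profile
weights = rng.uniform(0.2, 1.0, n_lines)     # line depths
W = np.kron(weights[:, None], np.eye(p))     # one p x p block per line
sigma = 0.2
V = W @ z_true + sigma * rng.normal(size=n_lines * p)

# Squared-exponential GP prior on the profile: K_ij = a^2 exp(-(dv/l)^2/2).
a, ell = 0.3, 0.15
K = a**2 * np.exp(-0.5 * ((v[:, None] - v[None, :]) / ell) ** 2)

# Gaussian posterior mean of Z (GP-regularized least squares deconvolution):
A = W.T @ W / sigma**2 + np.linalg.inv(K + 1e-10 * np.eye(p))
z_post = np.linalg.solve(A, W.T @ V / sigma**2)
print("max abs error:", np.abs(z_post - z_true).max())  # should be small
```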
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction, and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis, but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the overall best performance.
SEVEN-YEAR WILKINSON MICROWAVE ANISOTROPY PROBE (WMAP) OBSERVATIONS: COSMOLOGICAL INTERPRETATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komatsu, E.; Smith, K. M.; Spergel, D. N.
2011-02-01
The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant (H_0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is n_s = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison-Zel'dovich-Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, ΣM_ν < 0.58 eV (95% CL), and the effective number of neutrino species, N_eff = 4.34^{+0.86}_{-0.88} (68% CL), which benefit from better determinations of the third peak and H_0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+H_0, without high-redshift Type Ia supernovae, is w = -1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Y_p = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations. The seven-year polarization data have significantly improved: we now detect the temperature-E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature-B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to Δα = -1.1° ± 1.4° (statistical) ± 1.5° (systematic) (68% CL). We report significant detections of the Sunyaev-Zel'dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5-0.7 times the predictions from the 'universal profile' of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.
NASA Astrophysics Data System (ADS)
Singh, S. Surendra
2018-05-01
Considering the locally rotationally symmetric (LRS) Bianchi type-I metric with cosmological constant Λ, Einstein's field equations are discussed in the background of an anisotropic fluid. We assume the condition A = B^{1/m} for the metric potentials A and B, where m is a positive constant, to obtain a viable model of the Universe. It is found that Λ(t) is positive and inversely proportional to time. The values of the matter-energy density Ω_m, dark energy density Ω_Λ and deceleration parameter q are found to be consistent with the WMAP observations. Statefinder parameters and the anisotropy deviation parameter are also investigated. It is also observed that the derived model is an accelerating, shearing and non-rotating Universe. Some of the asymptotic and geometrical behaviors of the derived models are investigated together with the age of the Universe.
Integrated Sachs-Wolfe effect in massive bigravity
NASA Astrophysics Data System (ADS)
Enander, Jonas; Akrami, Yashar; Mörtsell, Edvard; Renneby, Malin; Solomon, Adam R.
2015-04-01
We study the integrated Sachs-Wolfe (ISW) effect in ghost-free, massive bigravity. We focus on the infinite-branch bigravity (IBB) model which exhibits viable cosmic expansion histories and stable linear perturbations, while the cosmological constant is set to zero and the late-time accelerated expansion of the Universe is due solely to the gravitational interaction terms. The ISW contribution to the CMB auto-correlation power spectrum is predicted, as well as the cross-correlation between the CMB temperature anisotropies and large-scale structure. We use ISW amplitudes as inferred from the WMAP 9-year temperature data together with galaxy and AGN data provided by the WISE mission in order to compare the theoretical predictions to the observations. The ISW amplitudes in IBB are found to be larger than the corresponding ones in the standard ΛCDM model by roughly a factor of 1.5, but are still consistent with the observations.
NASA Astrophysics Data System (ADS)
Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
Practical differences among probabilities, possibilities, and credibilities
NASA Astrophysics Data System (ADS)
Grandin, Jean-Francois; Moulin, Caroline
2002-03-01
This paper presents some important differences between theories that allow uncertainty management in data fusion. The main comparative results illustrated in this paper are the following. Incompatibility between decisions obtained from probabilities and credibilities is highlighted. In the dynamic frame, as remarked in [19] or [17], the belief and plausibility of the Dempster-Shafer model do not bracket the Bayesian probability. This bracketing can, however, be obtained by the Modified Dempster-Shafer approach. It can also be obtained in the Bayesian framework, either by simulation techniques or with a studentization. The uncommitted mass in the Dempster-Shafer approach, i.e., the mass assigned to ignorance, provides a mechanism similar to reliability in the Bayesian model. Uncommitted mass in Dempster-Shafer theory, or reliability in Bayes theory, acts like a filter that weakens extracted information and improves robustness to outliers. It is therefore logical to observe, on examples like the one presented in particular by D. M. Buede, faster convergence of a Bayesian method that does not take reliability into account compared with a Dempster-Shafer method that uses uncommitted mass. But on Bayesian masses, if reliability is taken into account at the same level as the uncommitted mass, i.e., F = 1 - m, we observe an equivalent rate of convergence. When the Dempster-Shafer and Bayes operators are informed by uncertainty, faster or slower convergence can be exhibited on non-Bayesian masses. This is due to positive or negative synergy between the information delivered by the sensors, a direct consequence of non-additivity when considering non-Bayesian masses. Ignorance of the prior in Bayesian techniques can be quickly compensated by the information accumulated over time by a set of sensors. All these results are presented on simple examples and developed when necessary.
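A compact sketch of the two fusion rules being compared, with an uncommitted mass on the Dempster-Shafer side and a reliability discount F = 1 - m(AB) on the Bayesian side; the sensor masses are hypothetical.

```python
import numpy as np

def dempster(m1, m2):
    """Dempster's rule of combination over focal sets 'A', 'B', 'AB'
    ('AB' carries the uncommitted mass, i.e. the mass given to ignorance)."""
    out = {'A': 0.0, 'B': 0.0, 'AB': 0.0}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            cap = ''.join(sorted(set(s1) & set(s2)))
            if cap:
                out[cap] += v1 * v2
            else:
                conflict += v1 * v2          # mass lost to conflict
    return {k: v / (1.0 - conflict) for k, v in out.items()}

def bayes(prior, like, F):
    """Bayes update with a reliability discount F: the likelihood is
    flattened toward uniform before conditioning."""
    like = F * np.asarray(like) + (1.0 - F) * 0.5
    post = prior * like
    return post / post.sum()

m_sensor = {'A': 0.7, 'B': 0.1, 'AB': 0.2}   # hypothetical report favouring A
m = {'A': 0.0, 'B': 0.0, 'AB': 1.0}          # vacuous initial belief
p = np.array([0.5, 0.5])                     # uniform Bayesian prior
for step in range(1, 4):                     # fuse three identical reports
    m = dempster(m, m_sensor)
    p = bayes(p, [0.875, 0.125], F=0.8)      # same evidence, reliability 0.8
    print(f"step {step}: DS belief(A) = {m['A']:.3f}, Bayes P(A) = {p[0]:.3f}")
```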
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
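A numerical sketch of the Bayesian construction for this setting, assuming the standard posterior for source counts S with known mean background B and a flat prior on S ≥ 0; normalization and the highest-density interval are computed on a grid rather than analytically.

```python
import numpy as np

def posterior(N, B, s):
    """Posterior density for source intensity S given N total counts and
    known mean background B, with a flat prior on S >= 0:
        p(S | N, B) proportional to exp(-(S + B)) * (S + B)^N."""
    logp = -(s + B) + N * np.log(s + B)
    p = np.exp(logp - logp.max())
    return p / (p.sum() * (s[1] - s[0]))

def credible_interval(N, B, cl=0.90, s_max=50.0, n=20001):
    """Highest-posterior-density interval found by thresholding."""
    s = np.linspace(0.0, s_max, n)
    p = posterior(N, B, s)
    order = np.argsort(p)[::-1]                # densest grid points first
    k = np.searchsorted(np.cumsum(p[order]) * (s[1] - s[0]), cl) + 1
    sel = np.sort(order[:k])
    return s[sel[0]], s[sel[-1]]

# Few counts with a comparable expected background: unlike some classical
# constructions, the Bayesian interval is non-negative by construction.
print(credible_interval(N=3, B=2.5))
```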
Chaplygin gas inspired scalar fields inflation via well-known potentials
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Butt, Sadaf; Rani, Shamaila
2016-08-01
Brane inflationary universe models in the context of modified Chaplygin gas and generalized cosmic Chaplygin gas are studied. We develop these models in view of standard scalar and tachyon fields. In both models, the inflationary parameters, such as the scalar and tensor power spectra, the scalar spectral index and the tensor-to-scalar ratio, are derived under slow-roll approximations. We also use chaotic and exponential potentials in the high-energy limit and discuss the characteristics of the inflationary parameters for both potentials. These models are compatible with recent astronomical observations provided by the WMAP7+9 and Planck data, i.e., n_s = 1.027 ± 0.051, 1.009 ± 0.049, 0.096 ± 0.025 and r < 0.38, 0.36, 0.11.
Signals of leptophilic dark matter at the ILC
NASA Astrophysics Data System (ADS)
Dutta, Sukanta; Rawat, Bharti; Sachdeva, Divya
2017-09-01
Adopting a model independent approach, we constrain the various effective interactions of leptophilic DM particles with the visible world from the WMAP and Planck data. The thermally averaged indirect DM annihilation cross section and the DM-electron direct-detection cross section for such a DM candidate are observed to be consistent with the respective experimental data. We study the production of cosmologically allowed leptophilic DM in association with Z (Z → f f̄), f = q, e⁻, μ⁻, at the ILC. We perform a χ² analysis and compute the 99% C.L. acceptance contours in the (m_χ, Λ) plane from the two-dimensional differential distributions of various kinematic observables, obtained after applying parton showering and hadronisation to the simulated data. We observe that the dominant hadronic channel provides the best kinematic reach of 2.62 TeV (m_χ = 25 GeV), which further improves to ~3 TeV for polarised beams at √s = 1 TeV and an integrated luminosity of 1 ab⁻¹.
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With these two methods, we can estimate parameters given both hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the groundwater model MODFLOW, and develop the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering method while also considering the uncertainty of the data. The study was conducted through numerical model experiments, combining the Bayesian maximum entropy filter with a hypothesized architecture of the MODFLOW groundwater model. Virtual observation wells were used to simulate periodic observations of the groundwater model. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimation.
NASA Astrophysics Data System (ADS)
Plant, N. G.; Thieler, E. R.; Gutierrez, B.; Lentz, E. E.; Zeigler, S. L.; Van Dongeren, A.; Fienen, M. N.
2016-12-01
We evaluate the strengths and weaknesses of Bayesian networks that have been used to address scientific and decision-support questions related to coastal geomorphology. We will provide an overview of coastal geomorphology research that has used Bayesian networks and describe what this approach can do and when it works (or fails to work). Over the past decade, Bayesian networks have been formulated to analyze the multi-variate structure and evolution of coastal morphology and associated human and ecological impacts. The approach relates observable system variables to each other by estimating discrete correlations. The resulting Bayesian networks make predictions that propagate errors, conduct inference via Bayes' rule, or both. In scientific applications, the model results are useful for hypothesis testing, using confidence estimates to gauge the strength of tests, while applications to coastal resource management are aimed at decision support, where the probabilities of desired ecosystem outcomes are evaluated. The range of Bayesian-network applications to coastal morphology includes emulation of high-resolution wave transformation models to make oceanographic predictions, morphologic response to storms and/or sea-level rise, groundwater response to sea-level rise and morphologic variability, habitat suitability for endangered species, and assessment of monetary or human-life risk associated with storms. All of these examples are based on vast observational data sets, numerical model output, or both. We will discuss the progression of our experiments, which has included testing whether the Bayesian-network approach can be implemented and is appropriate for addressing basic and applied scientific problems, and evaluating the hindcast and forecast skill of these implementations. We will present and discuss calibration/validation tests that are used to assess the robustness of Bayesian-network models and we will compare these results to tests of other models. This will demonstrate how Bayesian networks are used to extract new insights about coastal morphologic behavior, assess impacts to societal and ecological systems, and communicate probabilistic predictions to decision makers.
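As a minimal illustration of the approach, the sketch below encodes a hypothetical two-node coastal network (wave forcing → shoreline change) as discrete conditional probability tables and performs both forward prediction and inference via Bayes' rule; the probabilities are invented for the example.

```python
import numpy as np

# A two-node discrete Bayesian network: wave height -> shoreline change.
# p_wave is P(wave) over states (low, high); cpt rows give P(change | wave).
p_wave = np.array([0.7, 0.3])
cpt = np.array([[0.9, 0.1],     # P(stable, erode | low waves)
                [0.4, 0.6]])    # P(stable, erode | high waves)

# Forward (predictive) pass: marginal distribution of shoreline change.
p_change = p_wave @ cpt
print("P(erode) =", p_change[1])

# Inference via Bayes' rule: observe erosion, update belief about waves.
post_wave = p_wave * cpt[:, 1]
post_wave /= post_wave.sum()
print("P(high waves | erosion) =", post_wave[1])
```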
Bayesian Estimation and Inference Using Stochastic Electronics
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
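A grid-based sketch of the BEAST-style recursion (predict with the transition model, update with the observation likelihood) in plain Python rather than stochastic hardware; the motion and sensor models below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pos = 20                                   # discretized 1-D positions

# Transition model: the target performs a lazy random walk.
T = np.zeros((n_pos, n_pos))
for i in range(n_pos):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n_pos:
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)

# Observation model: the sensor reports the true cell with prob 0.8,
# otherwise a uniformly random cell (a "distractor").
def likelihood(z):
    lik = np.full(n_pos, 0.2 / n_pos)
    lik[z] += 0.8
    return lik

# Bayesian recursion: predict with T, then update with the likelihood.
belief = np.full(n_pos, 1.0 / n_pos)
true_pos = 5
for _ in range(30):
    true_pos = int(np.clip(true_pos + rng.integers(-1, 2), 0, n_pos - 1))
    z = true_pos if rng.uniform() < 0.8 else int(rng.integers(n_pos))
    belief = belief @ T                      # predict
    belief *= likelihood(z)                  # update
    belief /= belief.sum()
print("MAP estimate:", int(np.argmax(belief)), "true:", true_pos)
```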
An approach to quantifying the efficiency of a Bayesian filter
USDA-ARS?s Scientific Manuscript database
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...
Further investigation about inflation and reheating stages based on the Planck and WMAP-9
NASA Astrophysics Data System (ADS)
Ghayour, Basem
The potential V(ϕ) = λϕⁿ is responsible for the inflation of the universe as the scalar field ϕ oscillates quickly around a point where V(ϕ) has a minimum. The end of this stage plays an important role in the further evolution of the universe, since the particles created at its end reheat the universe. The behavior of the inflation and reheating stages is often described by power-law expansion, S(η) ∝ η^{1+β} and S(η) ∝ η^{1+β_s}, respectively. The reheating temperature T_rh and β_s give us valuable information about the reheating stage. Recently, the behavior of T_rh has been studied based on slow-roll inflation and the initial condition of quantum normalization, and a discrepancy in T_rh, due to the value of β_s, was found between the slow-roll and quantum-normalization conditions [M. Tong, Class. Quantum Grav. 30 (2013) 055013]. The author of that work therefore suggested that quantum normalization may not be a good initial condition; it seems, however, that we can remove this discrepancy by determining the appropriate parameter β_s, so that the temperatures obtained from the calculated β_s are consistent with both conditions. From a given β_s, we can then calculate T_rh, the tensor-to-scalar ratio r, and the parameters β and n based on the Planck and WMAP-9 data. The results for r, β_s, β and n are consistent with their constraints. The results for T_rh also agree with its general range and with the specific range probed by the DECIGO and BBO detectors.
Constraints on isocurvature models from the WMAP first-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moodley, K.; Bucher, M.
2004-11-15
We investigate the constraints imposed by the first-year Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background (CMB) data extended to higher multipoles by data from ACBAR, BOOMERANG, CBI, and the VSA and by the large-scale structure data from the 2dF galaxy redshift survey on the possible amplitude of primordial isocurvature modes. A flat universe with cold dark matter (CDM) and cosmological constant Λ is assumed, and the baryon and CDM isocurvature (CI), neutrino density (NID), and neutrino velocity (NIV) isocurvature modes are considered. Constraints on the allowed isocurvature contributions are established from the data for various combinations of the adiabatic mode and one, two, and three isocurvature modes, with intermode cross correlations allowed. Since baryon and CDM isocurvature are observationally virtually indistinguishable, these modes are not considered separately. We find that when just a single isocurvature mode is added, the present data allows an isocurvature fraction, in terms of the nonadiabatic contribution to the power in the CMB anisotropy, as large as 13 ± 6, 7 ± 4, and 13 ± 7 percent for adiabatic plus the CI, NID, and NIV modes, respectively. When two isocurvature modes plus the adiabatic mode and cross correlations are allowed, these percentages rise to 47 ± 16, 34 ± 12, and 44 ± 12 for the combinations CI+NID, CI+NIV, and NID+NIV, respectively. Finally, when all three isocurvature modes and cross correlations are allowed, the admissible isocurvature fraction rises to 57 ± 9 percent. In our analysis we consider only scalar modes with a single common tilt parameter for all the modes and do not consider any possible primordial anisotropies in the local neutrino velocity distribution beyond quadrupole order. The sensitivity of the results to the choice of prior probability distribution is examined.
NASA Astrophysics Data System (ADS)
Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun
2017-06-01
We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the old and usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, now, to nullify the leakage, during each iteration, some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
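The core of any ILC-type method is the variance-minimizing weight vector w = C⁻¹e / (eᵀC⁻¹e), subject to Σ w_i = 1 so that the CMB passes through unchanged. The sketch below applies it in pixel space to toy multifrequency maps; the paper's HILC computes analogous weights in harmonic space, and all map ingredients here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multifrequency data: each map = CMB + scaled foreground + noise.
n_freq, n_pix = 5, 4000
cmb = rng.normal(size=n_pix)                       # unit response at all freqs
fg_template = rng.normal(size=n_pix) ** 2          # non-Gaussian foreground
fg_scaling = np.linspace(2.0, 0.5, n_freq)         # frequency dependence
maps = (np.outer(np.ones(n_freq), cmb)
        + np.outer(fg_scaling, fg_template)
        + 0.3 * rng.normal(size=(n_freq, n_pix)))

# ILC: minimize the variance of sum_i w_i * map_i subject to sum_i w_i = 1.
# Closed form: w = C^-1 e / (e^T C^-1 e).
C = np.cov(maps)
e = np.ones(n_freq)
w = np.linalg.solve(C, e)
w /= e @ w
cleaned = w @ maps

print("weights:", np.round(w, 3), " sum:", w.sum())
print("residual rms:", np.std(cleaned - cmb))
```

The unit-sum constraint preserves the CMB while the variance minimization suppresses anything not common to all channels; controlling where the weights are computed and applied is what the multiphase iteration above addresses.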
NASA Astrophysics Data System (ADS)
Cacciato, Marcello; van den Bosch, Frank C.; More, Surhud; Mo, Houjun; Yang, Xiaohu
2013-04-01
We simultaneously constrain cosmology and galaxy bias using measurements of galaxy abundances, galaxy clustering and galaxy-galaxy lensing taken from the Sloan Digital Sky Survey. We use the conditional luminosity function (which describes the halo occupation statistics as a function of galaxy luminosity) combined with the halo model (which describes the non-linear matter field in terms of its halo building blocks) to describe the galaxy-dark matter connection. We explicitly account for residual redshift-space distortions in the projected galaxy-galaxy correlation functions, and marginalize over uncertainties in the scale dependence of the halo bias and the detailed structure of dark matter haloes. Under the assumption of a spatially flat, vanilla Λ cold dark matter (ΛCDM) cosmology, we focus on constraining the matter density, Ω_m, and the normalization of the matter power spectrum, σ_8, and we adopt 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) priors for the spectral index, n, the Hubble parameter, h, and the baryon density, Ω_b. We obtain that Ω_m = 0.278^{+0.023}_{-0.026} and σ_8 = 0.763^{+0.064}_{-0.049} (95 per cent CL). These results are robust to uncertainties in the radial number density distribution of satellite galaxies, while allowing for non-Poisson satellite occupation distributions results in a slightly lower value for σ_8 (0.744^{+0.056}_{-0.047}). These constraints are in excellent agreement (at the 1σ level) with the cosmic microwave background constraints from WMAP. This demonstrates that the use of a realistic and accurate model for galaxy bias, down to the smallest non-linear scales currently observed in galaxy surveys, leads to results perfectly consistent with the vanilla ΛCDM cosmology.
Cosmic discordance: are Planck CMB and CFHTLenS weak lensing measurements out of tune?
MacCrann, Niall; Zuntz, Joe; Bridle, Sarah; ...
2015-06-17
We examine the level of agreement between low-redshift weak lensing data and the cosmic microwave background using measurements from the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) and Planck+Wilkinson Microwave Anisotropy Probe (WMAP) polarization. We perform an independent analysis of the CFHTLenS six bin tomography results of Heymans et al. We extend their systematics treatment and find the cosmological constraints to be relatively robust to the choice of non-linear modelling, extension to the intrinsic alignment model and inclusion of baryons. We find that when marginalized in the Ωm–σ8 plane, the 95 per cent confidence contours of CFHTLenS and Planck+WMAP only just touch, but the discrepancy is less significant in the full six-dimensional parameter space of Λ cold dark matter (ΛCDM). Allowing a massive active neutrino or tensor modes does not significantly resolve the tension in the full n-dimensional parameter space. Our results differ from some in the literature because we use the full tomographic information in the weak lensing data and marginalize over systematics. We note that adding a sterile neutrino to ΛCDM brings the 2D marginalized contours into greater overlap, mainly due to the extra effective number of neutrino species, which we find to be 0.88 ± 0.43 (68 per cent) greater than standard on combining the data sets. We discuss why this is not a completely satisfactory resolution, leaving open the possibility of other new physics or observational systematics as contributing factors. We provide updated cosmology fitting functions for the CFHTLenS constraints and discuss the differences from ones used in the literature.
NASA Astrophysics Data System (ADS)
Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos
2016-10-01
Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting, where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound, which bounds the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular for faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case, where no unbiased estimator precisely reaches the CR bound.
Gu, Weidong; Medalla, Felicita; Hoekstra, Robert M
2018-02-01
The National Antimicrobial Resistance Monitoring System (NARMS) at the Centers for Disease Control and Prevention tracks resistance among Salmonella infections. The annual number of Salmonella isolates of a particular serotype from states may be small, making direct estimation of resistance proportions unreliable. We developed a Bayesian hierarchical model to improve estimation by borrowing strength from relevant sampling units. We illustrate the models with different specifications of spatio-temporal interaction using 2004-2013 NARMS data for ceftriaxone-resistant Salmonella serotype Heidelberg. Our results show that Bayesian estimates of resistance proportions were smoother than observed values, and the difference between predicted and observed proportions was inversely related to the number of submitted isolates. The model with interaction allowed for tracking of annual changes in resistance proportions at the state level. We demonstrated that Bayesian hierarchical models provide a useful tool to examine spatio-temporal patterns of small sample size such as those found in NARMS.
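A much-simplified stand-in for the hierarchical idea: shrinking small-sample state proportions toward a pooled estimate via a shared Beta prior. The prior strength is an assumed value, and the paper's model is a full spatio-temporal hierarchy fit by Bayesian computation, not this empirical-Bayes toy.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy surveillance data: per-state isolate counts and resistant counts.
n_isolates = np.array([2, 5, 12, 40, 150, 600])
resistant = rng.binomial(n_isolates, 0.10)

# Shared Beta(a, b) prior centred on the pooled proportion; the posterior
# mean shrinks small-sample states toward it, large states barely move.
pooled = resistant.sum() / n_isolates.sum()
strength = 20.0                        # prior "pseudo-isolates" (assumed)
a, b = pooled * strength, (1 - pooled) * strength

post_mean = (resistant + a) / (n_isolates + a + b)
for n, r, pm in zip(n_isolates, resistant, post_mean):
    print(f"n={n:4d}  observed={r / n:5.2f}  shrunk={pm:5.2f}")
```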
Estimating the hatchery fraction of a natural population: a Bayesian approach
Barber, Jarrett J.; Gerow, Kenneth G.; Connolly, Patrick J.; Singh, Sarabdeep
2011-01-01
There is strong and growing interest in estimating the proportion of hatchery fish in a natural population (the hatchery fraction). In a sample of fish from the relevant population, some are observed to be marked, indicating their origin as hatchery fish. The observed proportion of marked fish is usually less than the actual hatchery fraction, since the observed proportion is determined by the proportion originally marked, differential survival (usually lower) of marked fish relative to unmarked hatchery fish, and rates of mark retention and detection. Bayesian methods can work well in a setting such as this, in which empirical data are limited but for which there may be considerable expert judgment regarding these values. We explored Bayesian estimation of the hatchery fraction using Markov chain Monte Carlo methods. Based on our findings, we created an interactive Excel tool to implement the algorithm, which we have made available for free.
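A sketch of the estimation logic under the simplifying assumption P(marked) ≈ H · m₀ · s · r, with hypothetical Beta priors standing in for expert judgment and importance weighting standing in for the paper's MCMC; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sample: n fish inspected, k observed to carry a hatchery mark.
n, k = 500, 60

# Priors (hypothetical) on the chain linking the hatchery fraction H to
# the observed marked proportion (ignoring composition shifts):
n_draws = 200_000
m0 = rng.beta(80, 20, n_draws)       # fraction originally marked (~0.8)
s = rng.beta(60, 40, n_draws)        # relative survival of marked fish (~0.6)
r = rng.beta(90, 10, n_draws)        # mark retention x detection (~0.9)
H = rng.uniform(0, 1, n_draws)       # flat prior on the hatchery fraction

# Posterior via importance weighting with the binomial likelihood.
p_marked = np.clip(H * m0 * s * r, 1e-9, 1 - 1e-9)
logw = k * np.log(p_marked) + (n - k) * np.log(1 - p_marked)
w = np.exp(logw - logw.max())
w /= w.sum()

order = np.argsort(H)
cdf = np.cumsum(w[order])
lo, hi = H[order][np.searchsorted(cdf, [0.025, 0.975])]
print(f"naive k/n = {k/n:.3f}, posterior H = {np.sum(w * H):.3f} "
      f"[{lo:.3f}, {hi:.3f}]")
```

The posterior hatchery fraction comes out well above the naive marked proportion k/n, which is exactly the correction the abstract describes.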
Calibrating Bayesian Network Representations of Social-Behavioral Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Walsh, Stephen J.
2010-04-08
While human behavior has long been studied, recent and ongoing advances in computational modeling present opportunities for recasting research outcomes in human behavior. In this paper we describe how Bayesian networks can represent outcomes of human behavior research. We demonstrate a Bayesian network that represents political radicalization research, and show a corresponding visual representation of aspects of this research outcome. Since Bayesian networks can be quantitatively compared with external observations, the representation can also be used for empirical assessments of the research which the network summarizes. For a political radicalization model based on published research, we show this empirical comparison with data taken from the Minorities at Risk Organizational Behaviors database.
Cosmic bulk flow and the local motion from Cosmicflows-2
NASA Astrophysics Data System (ADS)
Hoffman, Yehuda; Courtois, Hélène M.; Tully, R. Brent
2015-06-01
Full-sky surveys of peculiar velocity are arguably the best way to map the large-scale structure (LSS) out to distances of a few × 100 h⁻¹ Mpc. Using the largest and most accurate catalogue of galaxy peculiar velocities to date, Cosmicflows-2, the LSS has been reconstructed by means of the Wiener filter (WF) and constrained realizations (CRs), assuming as a Bayesian prior model the Λ cold dark matter model with the WMAP-inferred cosmological parameters. This paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R = 500 h⁻¹ Mpc. The estimated LSS, in general, and the bulk flow, in particular, are determined by the tension between the observational data and the assumed prior model. A prerequisite for such an analysis is the requirement that the estimated bulk flow is consistent with the prior model. Such a consistency is found here. At R = 50 (150) h⁻¹ Mpc, the estimated bulk velocity is 250 ± 21 (239 ± 38) km s⁻¹. The corresponding cosmic variance at these radii is 126 (60) km s⁻¹, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ≈ 200 h⁻¹ Mpc, where the cosmic variance on the individual supergalactic Cartesian components (of the rms values) exceeds the variance of the CRs by at least a factor of 2. The SGX and SGY components of the cosmic microwave background dipole velocity are recovered by the WF velocity field down to a very few km s⁻¹. The SGZ component of the estimated velocity, the one that is most affected by the zone of avoidance, is off by 126 km s⁻¹ (an almost 2σ discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias, and very similar results are obtained for the data with and without the bias correction.
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...
Community-based approaches to strategic environmental assessment: Lessons from Costa Rica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinclair, A. John; Sims, Laura; Spaling, Harry
This paper describes a community-based approach to strategic environmental assessment (SEA) using a case study of the Instituto Costarricense de Electricidad's (ICE) watershed management agricultural program (WMAP) in Costa Rica. The approach focused on four highly interactive workshops that used visioning, brainstorming and critical reflection exercises. Each workshop represented a critical step in the SEA process. Through this approach, communities in two rural watersheds assessed the environmental, social and economic impacts of a proposed second phase for WMAP. Lessons from this community-based approach to strategic environmental assessment include a recognition of participants learning what a participatory SEA is conceptually and methodologically; the role of interactive techniques for identifying positive and negative impacts of the proposed program and generating creative mitigation strategies; the effect of workshops in reducing power differentials among program participants (proponent, communities, government agencies); and the logistical importance of notice, timing and location for meaningful participation. The community-based approach to SEA offers considerable potential for assessing regional (watershed) development programs focused on sustainable resource-based livelihoods.
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
NASA Astrophysics Data System (ADS)
Barenboim, Gabriela; Lykken, Joseph D.
2006-12-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However, once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As an example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
McCarron, C Elizabeth; Pullenayegum, Eleanor M; Thabane, Lehana; Goeree, Ron; Tarride, Jean-Eric
2013-04-01
Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion to estimate the hyperparameters (related to the chosen covariance model) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. To test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY² = 1.0 and σY² = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even though the variance of the strongly heterogeneous transmissivity field can be considered high for the application of a perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows one to compute the posterior probability distribution of the target quantities and to quantify the uncertainty in the model prediction. Bayesian updating has advantages relative to both Monte Carlo (MC) and non-MC approaches: like MC methods, it allows direct computation of the posterior probability distribution of the target quantities, and like non-MC methods it has computational times on the order of seconds.
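The two-step scheme can be illustrated with a linear-Gaussian toy model: a Gaussian prior over Y = lnT on a 1-D grid is first conditioned on direct Y* observations, then updated again with head data assumed linear in Y (standing in for the perturbative linearization). The grid, covariance hyperparameters, and observation operators below are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D grid standing in for the transmissivity field Y = ln T.
x = np.linspace(0.0, 10.0, 60)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)  # exponential covariance, assumed hyperparameters
mu = np.zeros_like(x)

def gaussian_update(mu, C, Hmat, y, R):
    """Condition N(mu, C) on y = Hmat @ Y + noise, noise ~ N(0, R)."""
    S = Hmat @ C @ Hmat.T + R
    K = C @ Hmat.T @ np.linalg.solve(S, np.eye(len(y)))
    return mu + K @ (y - Hmat @ mu), C - K @ Hmat @ C

# Step 1: "empirical Bayesian interpolation" from direct Y* observations.
obs_idx = [5, 20, 40, 55]
H1 = np.eye(len(x))[obs_idx]
y_obs = rng.normal(0.0, 1.0, size=len(obs_idx))
mu1, C1 = gaussian_update(mu, C, H1, y_obs, 0.01 * np.eye(len(obs_idx)))

# Step 2: "empirical Bayesian update" with head data, linearized as h = A @ Y + e
# (A would come from a perturbative solution of the flow equation; random here).
A = rng.normal(size=(8, len(x))) / len(x)
h_obs = A @ mu1 + rng.normal(0.0, 0.05, size=8)
mu2, C2 = gaussian_update(mu1, C1, A, h_obs, 0.05**2 * np.eye(8))

print("posterior sd shrank from %.2f to %.2f (grid average)"
      % (np.sqrt(np.diag(C1)).mean(), np.sqrt(np.diag(C2)).mean()))
```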
Bayesian characterization of uncertainty in species interaction strengths.
Wolf, Christopher; Novak, Mark; Gitelman, Alix I
2017-06-01
Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.
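A toy numeric contrast under stated assumptions (a single binomial feeding count, a Beta stand-in for the prior, invented numbers; the paper's "neutral" prior is not reproduced here) shows the behavior described: the bootstrap interval can collapse to zero while the Bayesian interval stays strictly positive once the interaction has actually been observed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical feeding survey: k of n surveyed predators observed feeding
# on a given prey; sparse counts are typical of such surveys.
n, k = 200, 1

# Bootstrap CI for the feeding proportion: with k = 1 the resampled mean is
# often exactly zero, so the lower bound collapses to 0.
obs = np.zeros(n); obs[:k] = 1
boot = np.array([rng.choice(obs, n, replace=True).mean() for _ in range(5000)])
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))

# Beta posterior under a stand-in (Jeffreys-type) prior: the lower bound of
# the 95% interval stays strictly positive once feeding has been observed.
a, b = 0.5, 0.5   # assumed prior; the paper argues prior choice matters here
post = stats.beta(a + k, b + n - k)
print("Bayesian 95% interval: (%.4f, %.4f)" % (post.ppf(0.025), post.ppf(0.975)))
```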
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
Spectral Bayesian Knowledge Tracing
ERIC Educational Resources Information Center
Falakmasir, Mohammad; Yudelson, Michael; Ritter, Steve; Koedinger, Ken
2015-01-01
Bayesian Knowledge Tracing (BKT) has been in wide use for modeling student skill acquisition in Intelligent Tutoring Systems (ITS). BKT tracks and updates a student's latent mastery of a skill as a probability distribution of a binary variable. BKT does so by accounting for observed student successes in applying the skill correctly, where success is…
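For reference, a sketch of the classic BKT update the abstract builds on (the spectral variant itself is not described here); the guess, slip, and learning-rate values are illustrative, not fitted.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """One standard BKT step: Bayes update on the observation, then learning."""
    if correct:
        cond = p_mastery * (1 - p_slip) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        cond = p_mastery * p_slip / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # Learning transition: a non-master may become a master after practice.
    return cond + (1 - cond) * p_transit

# Trace a student's estimated mastery over a sequence of observed responses.
p = 0.1  # initial mastery probability (p-init), illustrative
for outcome in [False, True, True, False, True, True, True]:
    p = bkt_update(p, outcome)
    print("correct" if outcome else "wrong  ", "-> p(mastery) = %.3f" % p)
```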
The Astrophysics Science Division Annual Report 2008
NASA Technical Reports Server (NTRS)
Oegerle, William; Reddy, Francis; Tyler, Pat
2009-01-01
The Astrophysics Science Division (ASD) at Goddard Space Flight Center (GSFC) is one of the largest and most diverse astrophysical organizations in the world, with activities spanning a broad range of topics in theory, observation, and mission and technology development. Scientific research is carried out over the entire electromagnetic spectrum, from gamma rays to radio wavelengths, as well as in particle physics and gravitational radiation. Members of ASD also provide the scientific operations for three orbiting astrophysics missions: WMAP, RXTE, and Swift, as well as the Science Support Center for the Fermi Gamma-ray Space Telescope. A number of key technologies for future missions are also under development in the Division, including X-ray mirrors and new detectors operating at gamma-ray, X-ray, ultraviolet, infrared, and radio wavelengths. This report covers the Division's activities during 2008.
Tribrid Inflation in Supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antusch, Stefan; Dutta, Koushik; Kostka, Philipp M.
2010-02-10
We propose a novel class of F-term hybrid inflation models in supergravity (SUGRA) where the eta-problem is resolved using either a Heisenberg symmetry or a shift symmetry of the Kaehler potential. In addition to the inflaton and the waterfall field, this class (referred to as tribrid inflation) contains a third 'driving' field which contributes the large vacuum energy during inflation by its F-term. In contrast to the 'standard' hybrid scenario, it has several attractive features due to the property of vanishing inflationary superpotential (W{sub inf} = 0) during inflation. While the symmetries of the Kaehler potential ensure a flat inflaton potential at tree-level, quantum corrections induced by symmetry breaking terms in the superpotential generate a slope of the potential and lead to a spectral tilt consistent with recent WMAP observations.
Cosmological constraints on neutrinos with Planck data
NASA Astrophysics Data System (ADS)
Spinelli, M.
2015-07-01
Neutrinos take part in the dance of the evolving Universe, influencing its history from leptogenesis, through Big Bang nucleosynthesis, to late-time structure formation. This makes cosmology, and in particular one of its primary observables, the Cosmic Microwave Background (CMB), an unusual but valuable tool for testing neutrino physics. The best measurement to date of full-sky CMB anisotropies comes from the Planck satellite, launched in 2009 by the European Space Agency (ESA) as the successor to COBE and WMAP. Testing Planck data against precise theoretical predictions allows us to shed light on various interesting open questions, such as the value of the absolute scale of neutrino masses or their energy density. We review here the results concerning neutrinos obtained by the Planck Collaboration in the 2013 data release.
A study of non-Gaussianity in the CMB anisotropies measured by WMAP
NASA Astrophysics Data System (ADS)
Andrade, A. P. A.; Wuensche, C. A.; Ribeiro, A. L. B.
2003-08-01
The investigation of the fluctuation field of the Cosmic Microwave Background (CMB) can offer an important test of the cosmological models that describe the origin and evolution of the primordial fluctuations. On one side stands the inflationary model, which predicts a spectrum of adiabatic fluctuations distributed as a Gaussian; on the other, the topological defect models (among others), which describe a mechanism for the generation of isocurvature fluctuations obeying a non-Gaussian distribution. This work aims to characterize signatures of the non-Gaussian mixed-field model (combining adiabatic and isocurvature fluctuations) in the maps of the Wilkinson Microwave Anisotropy Probe (WMAP). Simulations of the CMB anisotropies in the mixed context indicate marked signatures in the distribution of the temperature fluctuations, even when only small contributions of the isocurvature field (of order 0.001) are considered. The effect of the mixture between the fields is a transfer of fluctuation power from intermediate angular scales to small angular scales. This effect can be characterized by the relation between the amplitudes of the first acoustic peaks in the CMB power spectrum. In this work, we investigate the contribution of the isocurvature field, in the mixed context, to the recent CMB observations made by WMAP. The predictions of the mixed-field model, once confronted with the observations at small angular scales, can help to reveal the nature of the primordial fluctuations.
Primordial power spectrum from Planck
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in
2014-11-01
Using a modified Richardson-Lucy algorithm, we reconstruct the primordial power spectrum (PPS) from Planck Cosmic Microwave Background (CMB) temperature anisotropy data. In our analysis we use different combinations of angular power spectra from Planck to reconstruct the shape of the primordial power spectrum and locate possible features. Performing an extensive error analysis, we find that the dip near ℓ ∼ 750–850 represents the most prominent feature in the data. The feature near ℓ ∼ 1800–2000 is detectable with high confidence only in the 217 GHz spectrum and is apparently the consequence of a small systematic, as described in the revised Planck 2013 papers. Fixing the background cosmological parameters and the foreground nuisance parameters to their best-fit baseline values, we report that the best-fit power-law primordial power spectrum is consistent with the reconstructed form of the PPS at the 2σ C.L. of the estimated errors (apart from the local features mentioned above). As a consistency test, we find that the reconstructed primordial power spectrum from Planck temperature data can also substantially improve the fit to the WMAP-9 angular power spectrum data (with respect to the power-law form of the PPS), allowing an overall amplitude shift of ∼2.5%. In this context, the low-ℓ and 100 GHz spectra from Planck, which have proper overlap in multipole range with WMAP data, are found to be completely consistent with WMAP-9 (allowing an amplitude shift). As another important result of our analysis, we report evidence of gravitational lensing through the reconstruction analysis. Finally, we present two smooth forms of the PPS containing only the important features. These smooth forms of the PPS can provide significant improvements in fitting the data (with respect to the power-law PPS) and can be helpful in giving hints for inflationary model building.
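A toy sketch of the Richardson-Lucy deconvolution at the heart of the method, using an invented smoothing kernel in place of the true radiative-transport kernel and a synthetic local feature; the paper's modifications (convergence criteria, error analysis) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the deconvolution problem C_l = sum_k G[l, k] P[k]:
# G is a smoothing kernel standing in for the transfer of the primordial
# power spectrum (PPS) into the angular power spectrum.
nk, nl = 80, 60
k = np.linspace(0, 1, nk)
ell = np.linspace(0, 1, nl)
G = np.exp(-0.5 * ((ell[:, None] - k[None, :]) / 0.05) ** 2)
G /= G.sum(axis=1, keepdims=True)

P_true = 1.0 + 0.3 * np.exp(-0.5 * ((k - 0.45) / 0.03) ** 2)  # PPS with a feature
C_obs = G @ P_true * rng.normal(1.0, 0.01, size=nl)            # noisy "data"

# Plain Richardson-Lucy iterations: multiplicative, positivity-preserving.
P = np.ones(nk)
for _ in range(200):
    ratio = C_obs / (G @ P)
    P *= (G.T @ ratio) / G.sum(axis=0)

print("max fractional error of recovered PPS: %.3f"
      % np.max(np.abs(P - P_true) / P_true))
```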
NASA Astrophysics Data System (ADS)
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches can adequately treat the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
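A minimal sketch of the standard-Bayesian side of the comparison: random-walk Metropolis-Hastings over one parameter of a toy rainfall-runoff model with AR(1)-plus-Normal errors (in the spirit of Model 1). WASMOD itself is not reproduced; the forcing, parameter values, and proposal scale are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for a hydrological model: predicted flow = a * forcing.
forcing = rng.gamma(2.0, 2.0, size=300)
a_true, rho_true, sig = 0.8, 0.6, 0.4
eps = np.zeros(300)
for t in range(1, 300):                    # AR(1) residuals, as in Model 1
    eps[t] = rho_true * eps[t - 1] + rng.normal(0, sig)
q_obs = a_true * forcing + eps

def log_lik(a, rho, sigma):
    r = q_obs - a * forcing
    innov = r[1:] - rho * r[:-1]           # AR(1): innovations are iid normal
    return -0.5 * np.sum(innov**2) / sigma**2 - len(innov) * np.log(sigma)

# Random-walk Metropolis-Hastings over the model parameter a, with a flat
# prior (rho and sigma fixed for brevity; a full treatment samples them too).
a, chain = 0.5, []
for _ in range(5000):
    prop = a + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_lik(prop, rho_true, sig) - log_lik(a, rho_true, sig):
        a = prop
    chain.append(a)
print("posterior mean of a: %.3f (true %.3f)" % (np.mean(chain[1000:]), a_true))
```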
Astronomers Find Enormous Hole in the Universe
NASA Astrophysics Data System (ADS)
2007-08-01
Astronomers have found an enormous hole in the Universe, nearly a billion light-years across, empty of both normal matter such as stars, galaxies, and gas, and the mysterious, unseen "dark matter." While earlier studies have shown holes, or voids, in the large-scale structure of the Universe, this new discovery dwarfs them all. [Illustration: hole in the Universe revealed by its effect on Cosmic Microwave Background radiation. Credit: Bill Saxton, NRAO/AUI/NSF, NASA] "Not only has no one ever found a void this big, but we never even expected to find one this size," said Lawrence Rudnick of the University of Minnesota. Rudnick, along with Shea Brown and Liliya R. Williams, also of the University of Minnesota, reported their findings in a paper accepted for publication in the Astrophysical Journal. Astronomers have known for years that, on large scales, the Universe has voids largely empty of matter. However, most of these voids are much smaller than the one found by Rudnick and his colleagues. In addition, the number of discovered voids decreases as the size increases. "What we've found is not normal, based on either observational studies or on computer simulations of the large-scale evolution of the Universe," Williams said. The astronomers drew their conclusion by studying data from the NRAO VLA Sky Survey (NVSS), a project that imaged the entire sky visible to the Very Large Array (VLA) radio telescope, part of the National Science Foundation's National Radio Astronomy Observatory (NRAO). Their careful study of the NVSS data showed a remarkable drop in the number of galaxies in a region of sky in the constellation Eridanus. "We already knew there was something different about this spot in the sky," Rudnick said. The region had been dubbed the "WMAP Cold Spot," because it stood out in a map of the Cosmic Microwave Background (CMB) radiation made by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, launched by NASA in 2001. The CMB, faint radio waves that are the remnant radiation from the Big Bang, is the earliest "baby picture" available of the Universe. Irregularities in the CMB show structures that existed only a few hundred thousand years after the Big Bang. The WMAP satellite measured temperature differences in the CMB that are only millionths of a degree. The cold region in Eridanus was discovered in 2004. Astronomers wondered if the cold spot was intrinsic to the CMB, and thus indicated some structure in the very early Universe, or whether it could be caused by something more nearby through which the CMB had to pass on its way to Earth. Finding the dearth of galaxies in that region by studying NVSS data resolved that question. "Although our surprising results need independent confirmation, the slightly colder temperature of the CMB in this region appears to be caused by a huge hole devoid of nearly all matter roughly 6-10 billion light-years from Earth," Rudnick said. How does a lack of matter cause a cooler temperature in the Big Bang's remnant radiation as seen from Earth? Photons of the CMB gain a small amount of energy when they pass through a region of space populated by matter. This effect is caused by the enigmatic "dark energy" that is accelerating the expansion of the Universe. This gain in photon energy makes the CMB appear slightly warmer in that direction.
When the photons pass through an empty void, they lose a small amount of energy from this effect, and so the CMB radiation passing through such a region appears cooler. The acceleration of the Universe's expansion, and thus dark energy, were discovered less than a decade ago. The physical properties of dark energy are unknown, though it is by far the most abundant form of energy in the Universe today. Learning its nature is one of the most fundamental current problems in astrophysics. The NVSS imaged the roughly 82 percent of the sky visible from the New Mexico site of the VLA. The survey consists of 217,446 individual observations that consumed 2,940 hours of telescope time between 1993 and 1997. A set of 2,326 images was produced from the data, and these images are available via the NRAO Web site. The survey also produced a catalog of more than 1.8 million individual objects identifiable in the images. The NVSS has been cited in more than 1,200 scientific papers. NASA's WMAP satellite, using microwave amplifiers produced by NRAO's Central Development Laboratory, has yielded a wealth of new information about the age and history of the Universe, the emergence of the first stars, and the composition of the Universe. WMAP results have been extensively cited by scientists in a wide variety of astrophysical specialties. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. This research at the University of Minnesota is supported by individual investigator grants from the NSF and NASA.
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
Bayesian Inference in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2008-01-01
This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
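A worked instance of the update described above, in the conjugate normal case (all numbers invented): prior knowledge of a calibration coefficient is merged with a new observation, weighted by their precisions.

```python
# Conjugate-normal illustration of the Bayes update described above: prior
# knowledge of a calibration coefficient is revised by a new measurement.
prior_mean, prior_var = 1.00, 0.05**2     # prior knowledge from earlier tests
meas, meas_var = 1.08, 0.03**2            # new wind-tunnel observation

post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
post_mean = post_var * (prior_mean / prior_var + meas / meas_var)

print(f"posterior: {post_mean:.3f} +/- {post_var**0.5:.3f}")
# The posterior (about 1.059 +/- 0.026) sits between prior and measurement,
# weighted by their precisions, which is exactly the merger Bayes' theorem performs.
```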
NASA Astrophysics Data System (ADS)
Berliner, M.
2017-12-01
Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, for organizing and communicating our analyses. Bayesian hierarchical modeling is valuable in quantifying and managing uncertainty in such cases. I review some aspects of the idea, emphasizing statistical model development and use in the context of sea-level rise.
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space, where the string-induced CMB component has statistical properties distinct from those of the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10-7 for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
A Bayesian hierarchical approach to comparative audit for carotid surgery.
Kuhan, G; Marshall, E C; Abidia, A F; Chetter, I C; McCollum, P T
2002-12-01
The aim of this study was to illustrate how a Bayesian hierarchical modelling approach can aid the reliable comparison of outcome rates between surgeons. Retrospective analysis of prospective and retrospective data. Binary outcome data (death/stroke within 30 days), together with information on 15 possible risk factors specific for CEA, were available on 836 CEAs performed by four vascular surgeons from 1992-99. The median patient age was 68 (range 38-86) years and 60% were men. The model was developed using the WinBUGS software. After adjusting for patient-level risk factors, a cross-validatory approach was adopted to identify "divergent" performance. A ranking exercise was also carried out. The overall observed 30-day stroke/death rate was 3.9% (33/836). The model found diabetes, stroke and heart disease to be significant risk factors. There was no significant difference between the predicted and observed outcome rates for any surgeon (Bayesian p-value > 0.05). Each surgeon had a median rank of 3 with associated 95% CI 1.0-5.0, despite observed stroke/death rates varying from 2.9% to 4.4%. After risk adjustment, there was very little residual between-surgeon variability in outcome rate. Bayesian hierarchical models can help to accurately quantify the uncertainty associated with surgeons' performance and ranking.
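A simplified stand-in for the hierarchical comparison (Beta-Binomial shrinkage with an empirical-Bayes prior instead of the full WinBUGS model with patient-level risk factors; counts invented to mimic the quoted 2.9-4.4% range) reproduces the qualitative findings: rates shrink together and rank intervals are wide.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical 30-day stroke/death counts per surgeon: (events, operations).
data = [(6, 210), (9, 205), (8, 201), (10, 220)]

# Surgeon-specific rates share a Beta population prior, fixed here by a crude
# empirical-Bayes step; the prior strength 80 is an assumption.
pooled = sum(e for e, _ in data) / sum(n for _, n in data)
a0, b0 = pooled * 80, (1 - pooled) * 80

post = [stats.beta(a0 + e, b0 + n - e) for e, n in data]
draws = np.column_stack([p.rvs(20000, random_state=rng) for p in post])
ranks = np.argsort(np.argsort(draws, axis=1), axis=1) + 1  # 1 = lowest rate

for i, (e, n) in enumerate(data):
    lo, hi = np.percentile(ranks[:, i], [2.5, 97.5])
    print("surgeon %d: raw %.1f%%, shrunk %.1f%%, rank 95%% CI (%.0f, %.0f)"
          % (i + 1, 100 * e / n, 100 * post[i].mean(), lo, hi))
```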
Bayesian Network Meta-Analysis for Unordered Categorical Outcomes with Incomplete Data
ERIC Educational Resources Information Center
Schmid, Christopher H.; Trikalinos, Thomas A.; Olkin, Ingram
2014-01-01
We develop a Bayesian multinomial network meta-analysis model for unordered (nominal) categorical outcomes that allows for partially observed data in which exact event counts may not be known for each category. This model properly accounts for correlations of counts in mutually exclusive categories and enables proper comparison and ranking of…
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
NASA Technical Reports Server (NTRS)
Bennett, C. L.; Halpern, M.; Hinshaw, G.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.; Page, L.; Spergel, D. N.; Tucker, G. S.
2003-01-01
We present full-sky microwave maps in five frequency bands (23 to 94 GHz) from the WMAP first-year sky survey. Calibration errors are less than 0.5% and the low systematic error level is well specified. The cosmic microwave background (CMB) is separated from the foregrounds using multifrequency data. The sky maps are consistent with the 7 deg full-width at half-maximum (FWHM) Cosmic Background Explorer (COBE) maps. We report more precise, but consistent, dipole and quadrupole values. The CMB anisotropy obeys Gaussian statistics with -58 < f(sub NL) < 134 (95% CL). The 2 <= l <= 900 anisotropy power spectrum is cosmic variance limited for l < 354, with a signal-to-noise ratio greater than 1 per mode to l = 658. The temperature-polarization cross-power spectrum reveals both acoustic features and a large-angle correlation from reionization. The optical depth of reionization is tau = 0.17 +/- 0.04, which implies a reionization epoch of t(sub r) = 180(sup +220, sub -80) Myr (95% CL) after the Big Bang at a redshift of z(sub r) = 20(sup +10, sub -9) (95% CL) for a range of ionization scenarios. This early reionization is incompatible with the presence of a significant warm dark matter density. A best-fit cosmological model to the CMB and other measures of large-scale structure works remarkably well with only a few parameters. The age of the best-fit universe is t(sub 0) = 13.7 +/- 0.2 Gyr. Decoupling was t(sub dec) = 379(sup +8, sub -7) kyr after the Big Bang at a redshift of z(sub dec) = 1089 +/- 1. The thickness of the decoupling surface was Delta(sub z(sub dec)) = 195 +/- 2. The matter density of the universe is Omega(sub m)h(sup 2) = 0.135(sup +0.008, sub -0.009), the baryon density is Omega(sub b)h(sup 2) = 0.0224 +/- 0.0009, and the total mass-energy of the universe is Omega(sub tot) = 1.02 +/- 0.02. There is progressively less fluctuation power on smaller scales, from WMAP to fine-scale CMB measurements to galaxies and finally to the Ly-alpha forest. This is accounted for with a running spectral index, significant at the approx. 2 sigma level. The spectral index of scalar fluctuations is fit as n(sub s) = 0.93 +/- 0.03 at wavenumber k(sub 0) = 0.05/Mpc (l(sub eff) approx. = 700), with a slope of dn(sub s)/d ln k = -0.031(sup +0.016, sub -0.018) in the best-fit model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocquet, S.; Saro, A.; Mohr, J. J.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg{sup 2} of the survey along with 63 velocity dispersion (σ {sub v}) and 16 X-ray Y {sub X} measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ {sub v} and Y {sub X} are consistent at the 0.6σ level, with the σ {sub v} calibration preferring ∼16% higher masses. We use the full SPT{sub CL} data set (SZ clusters+σ {sub v}+Y {sub X}) to measure σ{sub 8}(Ω{sub m}/0.27){sup 0.3} = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m {sub ν} = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m {sub ν} further reconciles the results. When we combine the SPT{sub CL} and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y {sub X} calibration and 0.8σ higher than the σ {sub v} calibration. Given the scale of these shifts (∼44% and ∼23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω{sub m} = 0.299 ± 0.009 and σ{sub 8} = 0.829 ± 0.011. Within a νCDM model we find ∑m {sub ν} = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = –1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = –1).
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Planck intermediate results. LII. Planet flux densities
NASA Astrophysics Data System (ADS)
Planck Collaboration; Akrami, Y.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Chiang, H. C.; Colombo, L. P. L.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; González-Nuevo, J.; Górski, K. M.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kim, J.; Kisner, T. S.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Lellouch, E.; Levrier, F.; Liguori, M.; Lilje, P. B.; Lindholm, V.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Natoli, P.; Oxborrow, C. A.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Perdereau, O.; Piacentini, F.; Plaszczynski, S.; Polenta, G.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Romelli, E.; Rosset, C.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirri, G.; Spencer, L. D.; Suur-Uski, A.-S.; Tauber, J. A.; Tavagnacco, D.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wehus, I. K.; Zacchei, A.
2017-11-01
Measurements of flux density are described for five planets, Mars, Jupiter, Saturn, Uranus, and Neptune, across the six Planck High Frequency Instrument frequency bands (100-857 GHz) and these are then compared with models and existing data. In our analysis, we have also included estimates of the brightness of Jupiter and Saturn at the three frequencies of the Planck Low Frequency Instrument (30, 44, and 70 GHz). The results provide constraints on the intrinsic brightness and the brightness time-variability of these planets. The majority of the planet flux density estimates are limited by systematic errors, but still yield better than 1% measurements in many cases. Applying data from Planck HFI, the Wilkinson Microwave Anisotropy Probe (WMAP), and the Atacama Cosmology Telescope (ACT) to a model that incorporates contributions from Saturn's rings to the planet's total flux density suggests a best fit value for the spectral index of Saturn's ring system of βring = 2.30 ± 0.03 over the 30-1000 GHz frequency range. Estimates of the polarization amplitude of the planets have also been made in the four bands that have polarization-sensitive detectors (100-353 GHz); this analysis provides a 95% confidence level upper limit on Mars's polarization of 1.8, 1.7, 1.2, and 1.7% at 100, 143, 217, and 353 GHz, respectively. The average ratio between the Planck-HFI measurements and the adopted model predictions for all five planets (excluding Jupiter observations for 353 GHz) is 1.004, 1.002, 1.021, and 1.033 for 100, 143, 217, and 353 GHz, respectively. Model predictions for planet thermodynamic temperatures are therefore consistent with the absolute calibration of Planck-HFI detectors at about the three-percent level. We compare our measurements with published results from recent cosmic microwave background experiments. In particular, we observe that the flux densities measured by Planck HFI and WMAP agree to within 2%. These results allow experiments operating in the mm-wavelength range to cross-calibrate against Planck and improve models of radiative transport used in planetary science.
Eser, Alexander; Primas, Christian; Reinisch, Sieglinde; Vogelsang, Harald; Novacek, Gottfried; Mould, Diane R; Reinisch, Walter
2018-01-30
Despite a robust exposure-response relationship of infliximab in inflammatory bowel disease (IBD), attempts to adjust dosing to individually predicted serum concentrations of infliximab (SICs) are lacking. Compared with labor-intensive conventional software for pharmacokinetic (PK) modeling (e.g., NONMEM), dashboards are easy-to-use programs incorporating complex Bayesian statistics to determine individual pharmacokinetics. We evaluated various infliximab detection assays and the number of samples needed to precisely forecast individual SICs using a Bayesian dashboard. We assessed long-term infliximab retention in patients being dosed concordantly versus discordantly with Bayesian dashboard recommendations. Three hundred eighty-two serum samples from 117 adult IBD patients on infliximab maintenance therapy were analyzed by 3 commercially available assays. Data from each assay were modeled using NONMEM and a Bayesian dashboard. PK parameter precision and residual variability were assessed. Forecast concentrations from both systems were compared with observed concentrations. Infliximab retention was assessed by prediction for dose intensification via the Bayesian dashboard versus real-life practice. Forecast precision of SICs varied between detection assays. At least 3 SICs from a reliable assay are needed for an accurate forecast. The Bayesian dashboard performed similarly to NONMEM in predicting SICs. Patients dosed concordantly with Bayesian dashboard recommendations had a significantly longer median drug survival than those dosed discordantly (51.5 versus 4.6 months, P < .0001). The Bayesian dashboard helps to assess the diagnostic performance of infliximab detection assays. Three, not single, SICs provide sufficient information for individualized dose adjustment when incorporated into the Bayesian dashboard. Treatment adjusted to forecasted SICs is associated with longer drug retention of infliximab. © 2018, The American College of Clinical Pharmacology.
Online Variational Bayesian Filtering-Based Mobile Target Tracking in Wireless Sensor Networks
Zhou, Bingpeng; Chen, Qingchun; Li, Tiffany Jing; Xiao, Pei
2014-01-01
The received signal strength (RSS)-based online tracking for a mobile node in wireless sensor networks (WSNs) is investigated in this paper. Firstly, a multi-layer dynamic Bayesian network (MDBN) is introduced to characterize the target mobility with either directional or undirected movement. In particular, it is proposed to employ the Wishart distribution to approximate the time-varying randomness of the RSS measurement precision due to the target movement. It is shown that the proposed MDBN offers a more general analysis model via incorporating the underlying statistical information of both the target movement and observations, which can be utilized to improve the online tracking capability by exploiting the Bayesian statistics. Secondly, based on the MDBN model, a mean-field variational Bayesian filtering (VBF) algorithm is developed to realize the online tracking of a mobile target in the presence of nonlinear observations and time-varying RSS precision, wherein the traditional Bayesian filtering scheme cannot be directly employed. Thirdly, a joint optimization between the real-time velocity and its prior expectation is proposed to enable online velocity tracking in the proposed online tracking scheme. Finally, the associated Bayesian Cramer–Rao Lower Bound (BCRLB) analysis and numerical simulations are conducted. Our analysis unveils that, by exploiting the potential state information via the general MDBN model, the proposed VBF algorithm provides a promising solution to the online tracking of a mobile node in WSNs. In addition, it is shown that the final tracking accuracy linearly scales with its expectation when the RSS measurement precision is time-varying. PMID:25393784
Bayesian Forecasting Tool to Predict the Need for Antidote in Acute Acetaminophen Overdose.
Desrochers, Julie; Wojciechowski, Jessica; Klein-Schwartz, Wendy; Gobburu, Jogarao V S; Gopalakrishnan, Mathangi
2017-08-01
Acetaminophen (APAP) overdose is the leading cause of acute liver injury in the United States. Patients with elevated plasma acetaminophen concentrations (PACs) require hepatoprotective treatment with N-acetylcysteine (NAC). These patients have been primarily risk-stratified using the Rumack-Matthew nomogram. Previous studies of acute APAP overdoses found that the nomogram failed to accurately predict the need for the antidote. The objectives of this study were to develop a population pharmacokinetic (PK) model for APAP following acute overdose and to evaluate the utility of population PK model-based Bayesian forecasting in NAC administration decisions. Limited APAP concentrations from a retrospective cohort of acutely overdosed subjects from the Maryland Poison Center were used to develop the population PK model and to investigate the effect of the type of APAP product and other prognostic factors. The externally validated population PK model was used as a prior for Bayesian forecasting to predict the individual PK profile when one or two observed PACs were available. The utility of Bayesian-forecasted APAP concentration-time profiles inferred from one (first) or two (first and second) PAC observations was also tested in their ability to predict the observed NAC decisions. A one-compartment model with first-order absorption and elimination adequately described the data, with single-dose activated charcoal and APAP product type as significant covariates on absorption and bioavailability. The Bayesian-forecasted individual concentration-time profiles had acceptable bias (6.2% and 9.8%) and accuracy (40.5% and 41.9%) when either one or two PACs were considered, respectively. The sensitivity and negative predictive value of the Bayesian-forecasted NAC decisions using one PAC were 84% and 92.6%, respectively. The population PK analysis provided a platform for acceptably predicting an individual's concentration-time profile following acute APAP overdose from at least one PAC and the individual's covariate profile, and can potentially be used for making early NAC administration decisions. © 2017 Pharmacotherapy Publications, Inc.
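A sketch of the forecasting step under stated assumptions: a one-compartment model with first-order absorption and elimination, lognormal population priors (values invented), and a MAP fit to two hypothetical PACs, from which the individual concentration-time profile is forecast.

```python
import numpy as np
from scipy.optimize import minimize

# One-compartment model with first-order absorption and elimination;
# parameter values, priors, dose, and observations are all invented.
def conc(t, dose, ka, ke, v):
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Population priors (lognormal) for ka [1/h], ke [1/h], V [L]; assumed.
mu = np.log([1.2, 0.18, 45.0])
sd = np.array([0.4, 0.3, 0.3])

t_obs = np.array([4.0, 8.0])        # hours post-ingestion: two observed PACs
c_obs = np.array([160.0, 90.0])     # mg/L, hypothetical
dose = 25000.0                      # mg, hypothetical acute overdose

def neg_log_post(theta, sigma=15.0):
    ka, ke, v = np.exp(theta)
    resid = c_obs - conc(t_obs, dose, ka, ke, v)
    return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * np.sum(((theta - mu) / sd)**2)

map_fit = minimize(neg_log_post, mu, method="Nelder-Mead")
ka, ke, v = np.exp(map_fit.x)

# Forecast the full concentration-time profile from the individualized fit,
# e.g. to check when it crosses a treatment threshold.
for t in (4, 8, 12, 16, 24):
    print("t=%2dh  predicted PAC = %5.1f mg/L" % (t, conc(t, dose, ka, ke, v)))
```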
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
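A toy illustration of the statistical point, with synthetic "images": each model is treated as a set of variable snapshots and the likelihood is averaged over them, rather than compared against a single time-averaged image. Pixel counts, noise levels, and the variability amplitude are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# "Data" is one noisy snapshot; each model is a set of simulation snapshots.
n_pix = 50
truth = rng.normal(0.0, 1.0, n_pix)
data = truth + rng.normal(0.0, 0.3, n_pix)

def snapshots(center, n_snap=40, var_amp=0.8):
    # Variability: each snapshot scatters around the model's mean image.
    return center[None, :] + var_amp * rng.normal(0.0, 1.0, (n_snap, n_pix))

model_a = snapshots(truth)          # mean image matches the data
model_b = snapshots(truth + 0.4)    # offset mean image

def log_evidence(snaps, sigma=0.3):
    # log mean_t L(data | snapshot_t): marginalize over the variability,
    # computed with a log-sum-exp for numerical stability.
    chi2 = np.sum((data[None, :] - snaps) ** 2, axis=1) / sigma**2
    return -0.5 * chi2.min() + np.log(np.mean(np.exp(-0.5 * (chi2 - chi2.min()))))

print("log evidence ratio A vs B: %.1f"
      % (log_evidence(model_a) - log_evidence(model_b)))
```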
Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R
2016-09-20
Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.
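A generic sketch of the response-adaptive ingredient (Thompson sampling with Beta posteriors; this is not the actual RE-ADAPT design, and all rates are invented) shows the reported behavior: enrollment drifts toward the better-performing arm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Event = bad outcome, so the better arm has the lower rate; rates invented.
true_rate = {"A": 0.10, "B": 0.13}
events = {"A": 0, "B": 0}; n = {"A": 0, "B": 0}

for _ in range(2000):  # patients enrolled sequentially in this sketch
    # Thompson sampling: draw each arm's rate from its Beta posterior and
    # randomize the next patient toward the arm that looks better.
    draw = {arm: stats.beta(1 + events[arm],
                            1 + n[arm] - events[arm]).rvs(random_state=rng)
            for arm in ("A", "B")}
    arm = min(draw, key=draw.get)
    n[arm] += 1
    events[arm] += rng.uniform() < true_rate[arm]

print({arm: (n[arm], events[arm]) for arm in ("A", "B")})
# More patients end up on the better-performing arm, the behavior the
# RE-ADAPT re-execution reports for ALLHAT.
```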
Calculation of primordial abundances of light nuclei including a heavy sterile neutrino
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosquera, M.E.; Civitarese, O., E-mail: mmosquera@fcaglp.unlp.edu.ar, E-mail: osvaldo.civitarese@fisica.unlp.edu.ar
2015-08-01
We include the coupling of a heavy sterile neutrino with active neutrinos in the calculation of primordial abundances of light nuclei. We calculate neutrino distribution functions and primordial abundances as functions of a renormalization of the sterile neutrino distribution function (a), the sterile neutrino mass (m{sub s}) and the mixing angle (φ). Using the observational data, we set constraints on these parameters, which take the values 0 < a < 0.4, sin{sup 2} φ ≈ 0.12−0.39 and 0 < m{sub s} < 7 keV at the 1σ level, for a fixed value of the baryon-to-photon ratio. When the baryon-to-photon ratio is allowed to vary, its extracted value is in agreement with the values constrained by Planck observations and by the Wilkinson Microwave Anisotropy Probe (WMAP). It is found that the anomaly in the abundance of {sup 7}Li persists, in spite of the inclusion of a heavy sterile neutrino.
NASA Astrophysics Data System (ADS)
Mukherjee, Suvodip; Souradeep, Tarun
2016-06-01
Recent measurements of the temperature field of the cosmic microwave background (CMB) provide tantalizing evidence for violation of statistical isotropy (SI), which constitutes a fundamental tenet of contemporary cosmology. The CMB space-based missions WMAP and Planck have observed a 7% departure from SI in the temperature field at large angular scales. However, due to higher cosmic variance at low multipoles, the significance of this measurement is not expected to improve from any future CMB temperature measurements. We demonstrate that weak lensing of the CMB due to scalar perturbations produces a corresponding SI violation in B modes of CMB polarization at smaller angular scales. The measurability of this phenomenon depends upon the scales (l range) over which power asymmetry is present. Power asymmetry that is restricted to l < 64 in the temperature field cannot lead to any significant observable effect from this new window. However, this effect can put an independent bound on the spatial range of scales of the hemispherical asymmetry present in the scalar sector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinney, William H., E-mail: whkinney@buffalo.edu
We consider observational limits on a proposed model of the string landscape in inflation. In this scenario, effects from the decoherence of entangled quantum states in long-wavelength modes in the universe result in modifications to the Friedmann Equation and a corresponding modification to inflationary dynamics. Previous work [1, 2] suggested that such effects could provide an explanation for well-known anomalies in the Cosmic Microwave Background (CMB), such as the lack of power on large scales and the "cold spot" seen by both the WMAP and Planck satellites. In this paper, we compute limits on these entanglement effects from the Planck CMB data combined with the BICEP/Keck polarization measurement, and find no evidence for observable modulations to the power spectrum from landscape entanglement, and no sourcing of observable CMB anomalies. The originally proposed model with an exponential potential is ruled out to high significance. Assuming a Starobinsky-type R {sup 2} inflation model, which is consistent with CMB constraints, data place a 2σ lower bound of b > 6.46 × 10{sup 7} GeV on the Supersymmetry breaking scale associated with entanglement corrections.
The influence of super-horizon scales on cosmological observables generated during inflation
NASA Astrophysics Data System (ADS)
Matarrese, Sabino; Musso, Marcello A.; Riotto, Antonio
2004-05-01
Using the techniques of out-of-equilibrium field theory, we study how fluctuations corresponding today to scales much bigger than the present Hubble radius influence the properties of cosmological perturbations generated during inflation on observable scales. We write the effective action for the coarse-grained inflaton perturbations, integrating out the sub-horizon modes, which manifest themselves as coloured noise and lead to memory effects. Using the simple model of a scalar field with cubic self-interactions evolving in a fixed de Sitter background, we evaluate the two- and three-point correlation functions on observable scales. Our basic procedure shows that perturbations do preserve some memory of the super-horizon-scale dynamics, in the form of scale-dependent imprints in the statistical moments. In particular, we find a blue tilt of the power spectrum on large scales, in agreement with the recent results of the WMAP collaboration, which show a suppression of the lower multipoles in the cosmic microwave background anisotropies, and a substantial enhancement of the intrinsic non-Gaussianity on large scales.
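For orientation, the coarse-grained dynamics referred to above is usually summarized by a Langevin equation for the long-wavelength field; the white-noise form below is the textbook stochastic-inflation limit, and the paper's effective action replaces the delta-correlated noise with a coloured kernel carrying the memory effects:

```latex
% Textbook stochastic-inflation (white-noise) limit, for orientation only:
\frac{\mathrm{d}\phi_{>}}{\mathrm{d}t} \;=\; -\frac{V'(\phi_{>})}{3H} \;+\; \xi(t),
\qquad
\langle \xi(t)\,\xi(t') \rangle \;=\; \frac{H^{3}}{4\pi^{2}}\,\delta(t-t').
% The effective action derived in the paper promotes \xi to a coloured noise,
% \langle \xi(t)\,\xi(t') \rangle \propto K(t-t'), whose memory kernel K
% encodes the influence of the integrated-out modes.
```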
Probing interaction and spatial curvature in the holographic dark energy model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-12-01
In this paper we place observational constraints on the interaction and spatial curvature in the holographic dark energy model. We consider three kinds of phenomenological interactions between holographic dark energy and matter, i.e., the interaction term Q is proportional to the energy densities of dark energy (ρ{sub Λ}), matter (ρ{sub m}), and matter plus dark energy (ρ{sub m}+ρ{sub Λ}). For probing the interaction and spatial curvature in the holographic dark energy model, we use the latest observational data including the type Ia supernovae (SNIa) Constitution data, the shift parameter of the cosmic microwave background (CMB) given by the five-year Wilkinson Microwave Anisotropy Probe (WMAP5) observations, and the baryon acoustic oscillation (BAO) measurement from the Sloan Digital Sky Survey (SDSS). Our results show that the interaction and spatial curvature in the holographic dark energy model are both rather small. Besides, it is interesting to find that there exists significant degeneracy between the phenomenological interaction and the spatial curvature in the holographic dark energy model.
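For reference, a common way the three phenomenological couplings are written, together with the continuity equations they enter (the normalization 3b²H and the sign convention are assumptions here, not taken from the paper):

```latex
% Assumed normalization (common in the literature); b is a dimensionless coupling:
Q_{1} = 3 b^{2} H \rho_{\Lambda}, \qquad
Q_{2} = 3 b^{2} H \rho_{m}, \qquad
Q_{3} = 3 b^{2} H (\rho_{m} + \rho_{\Lambda}),
\\[4pt]
% Energy exchange enters the continuity equations with opposite signs:
\dot{\rho}_{\Lambda} + 3 H (1 + w_{\Lambda})\,\rho_{\Lambda} = -Q, \qquad
\dot{\rho}_{m} + 3 H \rho_{m} = Q.
```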
The Cosmic Abundance of 3He: Green Bank Telescope Observations
NASA Astrophysics Data System (ADS)
Balser, Dana; Bania, Thomas
2018-01-01
The Big Bang theory for the origin of the Universe predicts that during the first ~1,000 seconds significant amounts of the light elements (2H, 3He, 4He, and 7Li) were produced. Many generations of stellar evolution in the Galaxy modify these primordial abundances. Observations of the 3He+ hyperfine transition in Galactic HII regions reveal a 3He/H abundance ratio that is constant with Galactocentric radius to within the uncertainties, and is consistent with the primordial value as determined from cosmic microwave background experiments (e.g., WMAP). This "3He Plateau" indicates that the net production and destruction of 3He in stars is approximately zero. Recent stellar evolution models that include thermohaline mixing, however, predict that 3He/H abundance ratios should slightly decrease with Galactocentric radius, or in places in the Galaxy with lower star formation rates. Here we discuss sensitive Green Bank Telescope (GBT) observations of 3He+ at 3.46 cm in a subset of our HII region sample. We develop HII region models and derive accurate 3He/H abundance ratios to better constrain these new stellar evolution models.
Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W
2016-09-01
Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies. In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.
Sequential Inverse Problems: Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
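As a toy illustration of sequential Bayesian state estimation for this map (a bootstrap particle filter in the perfect-model scenario; the paper's own filter and settings are not reproduced here, and all numbers are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)

def logistic(x, r=4.0):
    # One step of the logistic map; r = 4 keeps the state in [0, 1].
    return r * x * (1.0 - x)

# "Truth" trajectory and noisy observations (perfect-model scenario).
T, sigma_obs = 50, 0.05
x_true = np.empty(T)
x_true[0] = 0.3
for t in range(1, T):
    x_true[t] = logistic(x_true[t - 1])
y = x_true + sigma_obs * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight by the Gaussian
# observation likelihood, resample. A tiny jitter keeps particle
# diversity under the deterministic, chaotic dynamics.
N = 5000
particles = rng.uniform(0.0, 1.0, N)
estimate = np.empty(T)
for t in range(T):
    if t > 0:
        particles = logistic(particles) + 1e-3 * rng.standard_normal(N)
        particles = np.clip(particles, 1e-6, 1.0 - 1e-6)
    w = np.exp(-0.5 * ((y[t] - particles) / sigma_obs) ** 2) + 1e-300
    w /= w.sum()
    estimate[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)

print("RMS state error:", np.sqrt(np.mean((estimate - x_true) ** 2)))

Running an imperfect-model variant (e.g., filtering with a slightly wrong r) lets one compare assimilation behaviour between the PMS and IMS, in the spirit of the paper.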
Population forecasts for Bangladesh, using a Bayesian methodology.
Mahsin, Md; Hossain, Syed Shahadat
2012-12-01
Population projection for many developing countries can be quite a challenging task for demographers, mostly due to the lack of sufficient reliable data. The objective of this paper is to present an overview of the existing methods for population forecasting and to propose an alternative based on Bayesian statistics, combining the formality of inference with expert judgement. The analysis has been made using the Markov chain Monte Carlo (MCMC) technique for Bayesian methodology available with the software WinBUGS. Convergence diagnostic techniques available with the WinBUGS software have been applied to ensure the convergence of the chains necessary for the implementation of MCMC. The Bayesian approach allows for the use of observed data and expert judgements by means of appropriate priors, and more realistic population forecasts, along with associated uncertainty, have been possible.
Exoplanet Biosignatures: Future Directions.
Walker, Sara I; Bains, William; Cronin, Leroy; DasSarma, Shiladitya; Danielache, Sebastian; Domagal-Goldman, Shawn; Kacar, Betul; Kiang, Nancy Y; Lenardic, Adrian; Reinhard, Christopher T; Moore, William; Schwieterman, Edward W; Shkolnik, Evgenya L; Smith, Harrison B
2018-06-01
We introduce a Bayesian method for guiding future directions for detection of life on exoplanets. We describe empirical and theoretical work necessary to place constraints on the relevant likelihoods, including those emerging from better understanding stellar environment, planetary climate and geophysics, geochemical cycling, the universalities of physics and chemistry, the contingencies of evolutionary history, the properties of life as an emergent complex system, and the mechanisms driving the emergence of life. We provide examples of how the Bayesian formalism could guide future search strategies, including determining observations to prioritize or deciding between targeted searches or larger lower-resolution surveys to generate ensemble statistics, and we address how a Bayesian methodology could constrain the prior probability of life with or without a positive detection. Key Words: Exoplanets-Biosignatures-Life detection-Bayesian analysis. Astrobiology 18, 779-824.
Cosmic Bulk Flow and the Local Motion from Cosmicflows-2
NASA Astrophysics Data System (ADS)
Courtois, Helene M.; Hoffman, Yehuda; Tully, R. Brent
2015-08-01
Full-sky surveys of peculiar velocity are arguably the best way to map the large-scale structure out to distances of a few times 100 Mpc/h. Using the largest and most accurate catalogue of galaxy peculiar velocities to date, Cosmicflows-2, the large-scale structure has been reconstructed by means of the Wiener filter and constrained realizations, assuming as a Bayesian prior model the LCDM standard model of cosmology. The present paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R = 500 Mpc/h. Our main result is that the estimated bulk flow is consistent with the LCDM model with the WMAP-inferred cosmological parameters. At R = 50 (150) Mpc/h the estimated bulk velocity is 250 +/- 21 (239 +/- 38) km/s. The corresponding cosmic variance at these radii is 126 (60) km/s, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ~ 200 Mpc/h, where the cosmic variance on the individual Supergalactic Cartesian components (of the r.m.s. values) exceeds the variance of the constrained realizations by at least a factor of 2. The SGX and SGY components of the CMB dipole velocity are recovered by the Wiener filter velocity field to within a few km/s. The SGZ component of the estimated velocity, the one that is most affected by the Zone of Avoidance, is off by 126 km/s (an almost 2σ discrepancy). The bulk velocity analysis reported here is virtually unaffected by Malmquist bias, and very similar results are obtained for the data with and without the bias correction.
Exoplanet Biosignatures: A Framework for Their Assessment.
Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn
2018-04-20
Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
Bayesian statistics in radionuclide metrology: measurement of a decaying source
NASA Astrophysics Data System (ADS)
Bochud, François O.; Bailat, Claude J.; Laedermann, Jean-Pascal
2007-08-01
The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of a yttrium-90 source typically encountered in environmental surveillance measurement. Because of the very low activity of this kind of source and the short half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant, and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals from a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation.
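A minimal numerical sketch of this kind of inference (not the paper's computation): Poisson counts from a decaying source plus a known constant background, with a grid posterior over the initial number of nuclei N0 and the decay constant. All numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Simulated counting experiment: a decaying source (half-life ~64 h,
# roughly that of 90Y) plus constant background, counted in 1-h bins.
lam_true = np.log(2) / 64.0       # decay constant per hour (assumed)
N0_true, bkg = 5e4, 20.0          # initial nuclei, background counts/h
t = np.arange(0.0, 240.0, 1.0)    # ten days of hourly bins
expected = N0_true * lam_true * np.exp(-lam_true * t) + bkg
counts = rng.poisson(expected)

# Posterior on an (N0, lambda) grid with flat priors, background known.
N0_grid = np.linspace(3e4, 7e4, 200)
lam_grid = np.linspace(0.005, 0.02, 200)
logpost = np.empty((N0_grid.size, lam_grid.size))
for i, N0 in enumerate(N0_grid):
    for j, lam in enumerate(lam_grid):
        mu = N0 * lam * np.exp(-lam * t) + bkg
        # Poisson log-likelihood, up to a data-dependent constant
        logpost[i, j] = np.sum(counts * np.log(mu) - mu)
post = np.exp(logpost - logpost.max())
post /= post.sum()
print("posterior mean N0    :", np.sum(post.sum(axis=1) * N0_grid))
print("posterior mean lambda:", np.sum(post.sum(axis=0) * lam_grid))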
Finite‐fault Bayesian inversion of teleseismic body waves
Clayton, Brandon; Hartzell, Stephen; Moschetti, Morgan P.; Minson, Sarah E.
2017-01-01
Inverting geophysical data has provided fundamental information about the behavior of earthquake rupture. However, inferring kinematic source model parameters for finite‐fault ruptures is an intrinsically underdetermined problem (the problem of nonuniqueness), because we are restricted to finite noisy observations. Although many studies use least‐squares techniques to make the finite‐fault problem tractable, these methods generally lack the ability to apply non‐Gaussian error analysis and the imposition of nonlinear constraints. However, the Bayesian approach can be employed to find a Gaussian or non‐Gaussian distribution of all probable model parameters, while utilizing nonlinear constraints. We present case studies to quantify the resolving power and associated uncertainties using only teleseismic body waves in a Bayesian framework to infer the slip history for a synthetic case and two earthquakes: the 2011 Mw 7.1 Van, east Turkey, earthquake and the 2010 Mw 7.2 El Mayor–Cucapah, Baja California, earthquake. In implementing the Bayesian method, we further present two distinct solutions to investigate the uncertainties by performing the inversion with and without velocity structure perturbations. We find that the posterior ensemble becomes broader when including velocity structure variability and introduces a spatial smearing of slip. Using the Bayesian framework solely on teleseismic body waves, we find rake is poorly constrained by the observations and rise time is poorly resolved when slip amplitude is low.
Bayesian inference of a historical bottleneck in a heavily exploited marine mammal.
Hoffman, J I; Grant, S M; Forcada, J; Phillips, C D
2011-10-01
Emerging Bayesian analytical approaches offer increasingly sophisticated means of reconstructing historical population dynamics from genetic data, but have been little applied to scenarios involving demographic bottlenecks. Consequently, we analysed a large mitochondrial and microsatellite dataset from the Antarctic fur seal Arctocephalus gazella, a species subjected to one of the most extreme examples of uncontrolled exploitation in history when it was reduced to the brink of extinction by the sealing industry during the late eighteenth and nineteenth centuries. Classical bottleneck tests, which exploit the fact that rare alleles are rapidly lost during demographic reduction, yielded ambiguous results. In contrast, a strong signal of recent demographic decline was detected using both Bayesian skyline plots and Approximate Bayesian Computation, the latter also allowing derivation of posterior parameter estimates that were remarkably consistent with historical observations. This was achieved using only contemporary samples, further emphasizing the potential of Bayesian approaches to address important problems in conservation and evolutionary biology. © 2011 Blackwell Publishing Ltd.
Bayesian conditional-independence modeling of the AIDS epidemic in England and Wales
NASA Astrophysics Data System (ADS)
Gilks, Walter R.; De Angelis, Daniela; Day, Nicholas E.
We describe the use of conditional-independence modeling, Bayesian inference and Markov chain Monte Carlo, to model and project the HIV-AIDS epidemic in homosexual/bisexual males in England and Wales. Complexity in this analysis arises through selectively missing data, indirectly observed underlying processes, and measurement error. Our emphasis is on presentation and discussion of the concepts, not on the technicalities of this analysis, which can be found elsewhere [D. De Angelis, W.R. Gilks, N.E. Day, Bayesian projection of the acquired immune deficiency syndrome epidemic (with discussion), Applied Statistics, in press].
ΛCDM is Consistent with SPARC Radial Acceleration Relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, B. W.; Wadsley, J. W., E-mail: kellerbw@mcmaster.ca
2017-01-20
Recent analysis of the Spitzer Photometry and Accurate Rotation Curve (SPARC) galaxy sample found a surprisingly tight relation between the radial acceleration inferred from the rotation curves and the acceleration due to the baryonic components of the disk. It has been suggested that this relation may be evidence for new physics, beyond ΛCDM. In this Letter, we show that 32 galaxies from the MUGS2 simulations match the SPARC acceleration relation. These cosmological simulations of star-forming, rotationally supported disks were simulated with a WMAP3 ΛCDM cosmology, and match the SPARC acceleration relation with less scatter than the observational data. These results show that this acceleration relation is a consequence of dissipative collapse of baryons, rather than being evidence for exotic dark-sector physics or new dynamical laws.
Relic galaxies: where are they?
NASA Astrophysics Data System (ADS)
Peralta de Arriba, L.; Quilis, V.; Trujillo, I.; Cebrián, M.; Balcells, M.
2017-03-01
The finding that massive galaxies grow with cosmic time fired the starting gun for the search for objects which could have survived up to the present day without suffering substantial changes (neither in their structures nor in their stellar populations). Nevertheless, and despite the community's efforts, up to now only one firm candidate to be considered one of these relics is known: NGC 1277. Curiously, this galaxy is located at the centre of one of the richest nearby galaxy clusters: Perseus. Is its location a matter of chance? Should relic hunters focus their search on galaxy clusters? In order to answer this question, we have performed a simultaneous and analogous analysis using simulations (Millennium I-WMAP7) and observations (New York University Value-Added Galaxy Catalogue). Our results in both frameworks agree: it is more probable to find relics in high-density environments.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base-rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions has been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
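One simple way to see how distorted subjective probabilities change Bayesian belief revision is to pass base rates and likelihoods through an (inverted) S-shaped weighting function before applying Bayes' rule. The sketch below uses the Prelec (1998) form; the parameter values and urn numbers are illustrative, not those fitted in the paper.

import numpy as np

def prelec(p, alpha=0.6, beta=1.0):
    # (Inverted) S-shaped probability weighting: for alpha < 1 small
    # probabilities are overweighted and large ones underweighted.
    p = np.clip(p, 1e-12, 1.0)
    return np.exp(-beta * (-np.log(p)) ** alpha)

def posterior(prior, likelihood, alpha):
    # Bayesian belief revision with distorted subjective probabilities:
    # both base rates and likelihoods pass through the weighting function.
    num = prelec(prior, alpha) * prelec(likelihood, alpha)
    return num / num.sum()

# Urn-ball task: two urns with prior 0.7/0.3; the drawn ball colour has
# likelihood 0.8 under urn A and 0.4 under urn B (illustrative numbers).
prior = np.array([0.7, 0.3])
like = np.array([0.8, 0.4])
print("undistorted (alpha = 1):", posterior(prior, like, alpha=1.0))
print("distorted   (alpha = .6):", posterior(prior, like, alpha=0.6))

With alpha = 1 (and beta = 1) the weighting is the identity and the standard posterior 0.824/0.176 is recovered; smaller alpha pulls the revised beliefs toward the middle, mimicking conservative updating.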
Johnson, Eric D; Tubau, Elisabet
2017-06-01
Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct-reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percentage of measurements below the LOD ranging from 0 to 50% showed the lowest root-mean-squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the autoregressive moving-average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
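A stripped-down illustration of the left-censoring treatment only (not the paper's spline-based hierarchical model): detects contribute a density term to the likelihood, non-detects contribute the left-tail probability at the LOD, and a random-walk Metropolis sampler explores the posterior. All settings are illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated exposures: lognormal, with values below the LOD censored.
mu_true, sd_true, lod = 1.0, 0.8, 1.5
x = rng.lognormal(mu_true, sd_true, 300)
detected = x >= lod
logx = np.log(x[detected])
n_cens = np.count_nonzero(~detected)

def loglike(mu, sd):
    if sd <= 0:
        return -np.inf
    ll = np.sum(norm.logpdf(logx, mu, sd))           # detects: density
    ll += n_cens * norm.logcdf(np.log(lod), mu, sd)  # censored: left tail
    return ll

# Random-walk Metropolis with flat priors.
theta = np.array([0.0, 1.0])
ll_cur = loglike(*theta)
samples = []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)
    ll_prop = loglike(*prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        theta, ll_cur = prop, ll_prop
    if it >= 5000:  # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)
print("posterior mean (mu, sd):", samples.mean(axis=0))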
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave-breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R² = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with results of previous studies, but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
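The inverse step in such a network is, at its core, Bayes' rule on discrete nodes. A toy two-node sketch (not the trained surf-zone network of the paper; all states and conditional probabilities are invented for illustration):

import numpy as np

# Offshore wave height H (metres) drives observed onshore wave-breaking
# intensity B (low/medium/high) through a conditional probability table.
H_states = np.array([0.5, 1.5, 3.0])
P_H = np.array([0.5, 0.35, 0.15])            # prior on offshore height
P_B_given_H = np.array([[0.70, 0.25, 0.05],  # rows: H, cols: B
                        [0.20, 0.60, 0.20],
                        [0.05, 0.25, 0.70]])

def invert(b_index):
    # Bayes' rule: P(H | B = b) is proportional to P(B = b | H) P(H).
    post = P_B_given_H[:, b_index] * P_H
    return post / post.sum()

post = invert(2)  # observed: high breaking intensity
print("P(H | B = high):", post)
print("posterior mean offshore height:", post @ H_states)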
Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert
2016-01-01
State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems for marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of the observational error typical of data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
Impact assessment of extreme storm events using a Bayesian network
den Heijer, C.(Kees); Knipping, Dirk T.J.A.; Plant, Nathaniel G.; van Thiel de Vries, Jaap S. M.; Baart, Fedor; van Gelder, Pieter H. A. J. M.
2012-01-01
This paper describes an investigation of the usefulness of Bayesian networks in the safety assessment of dune coasts. A network has been created that predicts the erosion volume based on hydraulic boundary conditions and a number of cross-shore profile indicators. Field measurement data along a large part of the Dutch coast have been used to train the network. The corresponding storm impact on the dunes was calculated with an empirical dune erosion model named duros+. Comparison between the Bayesian network predictions and the original duros+ results, here considered as observations, yields a skill of up to 0.88, provided that the training data cover the range of predictions. Hence, the predictions of a deterministic model (duros+) can be captured in a probabilistic model (Bayesian network) such that both the process knowledge and the uncertainties can be included in impact and vulnerability assessments.
Virtual Representation of IID Observations in Bayesian Belief Networks
1994-04-01
Programs for structuring and using Bayesian inference include ERGO (Noetic Systems, Inc., 1991) and HUGIN (Andersen, Jensen, Olesen, & Jensen, 1989).
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially estimation of the unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data should be found.
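A stripped-down sequential ABC sketch of source tracing under strong simplifying assumptions: a stationary Gaussian-shaped concentration field stands in for the dispersion model, only location x0 and release rate q are inferred, and the rounds below perturb accepted particles without the importance weights of a full ABC-SMC scheme. All numbers are illustrative.

import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: c(x) = q * exp(-(x - x0)^2 / (2 L^2)) sampled by a
# line of sensors (a stand-in for the real dispersion model).
L, sensors = 2.0, np.linspace(0.0, 20.0, 15)
def forward(x0, q):
    return q * np.exp(-(sensors - x0) ** 2 / (2 * L ** 2))

x0_true, q_true = 7.5, 4.0
data = forward(x0_true, q_true) + 0.1 * rng.standard_normal(sensors.size)

def abc_round(n_draws, eps, x0_pool=None, q_pool=None):
    # Round 1 draws from the prior; later rounds jitter the previously
    # accepted particles. Keep candidates whose simulated readings lie
    # within eps (Euclidean distance) of the observed sensor data.
    if x0_pool is None:
        x0 = rng.uniform(0.0, 20.0, n_draws)
        q = rng.uniform(0.0, 10.0, n_draws)
    else:
        idx = rng.integers(0, x0_pool.size, n_draws)
        x0 = x0_pool[idx] + 0.3 * rng.standard_normal(n_draws)
        q = q_pool[idx] + 0.3 * rng.standard_normal(n_draws)
    dist = np.array([np.linalg.norm(forward(a, b) - data)
                     for a, b in zip(x0, q)])
    keep = dist < eps
    return x0[keep], q[keep]

# Sequential ABC: tighten the tolerance over successive rounds.
x0_acc, q_acc = abc_round(20000, eps=3.0)
for eps in (1.5, 0.8, 0.5):
    x0_acc, q_acc = abc_round(20000, eps, x0_acc, q_acc)
print("x0: %.2f +/- %.2f" % (x0_acc.mean(), x0_acc.std()))
print("q : %.2f +/- %.2f" % (q_acc.mean(), q_acc.std()))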
NASA Astrophysics Data System (ADS)
Olson, R.; An, S. I.
2016-12-01
Atlantic Meridional Overturning Circulation (AMOC) in the ocean might slow down in the future, which can lead to a host of climatic effects in the North Atlantic and throughout the world. Despite improvements in climate models and the availability of new observations, AMOC projections remain uncertain. Here we constrain CMIP5 multi-model ensemble output with observations of a recently developed AMOC index to provide improved Bayesian predictions of future AMOC. Specifically, we first calculate a yearly AMOC index loosely based on Rahmstorf et al. (2015) for the years 1880-2004 for both the observations and the CMIP5 models for which relevant output is available. We then assign a weight to each model based on a Bayesian model averaging method that accounts for differential model skill in terms of both mean state and variability. We include the temporal autocorrelation in climate model errors, and account for the uncertainty in the parameters of our statistical model. We use the weights to provide future weighted projections of AMOC, and compare them to unweighted ones. Our projections use bootstrapping to account for uncertainty in internal AMOC variability. We also perform spectral and other statistical analyses to show that AMOC index variability, both in models and in observations, is consistent with red noise. Our results improve on and complement previous work by using a new ensemble of climate models, a different observational metric, and an improved Bayesian weighting method that accounts for differential model skill at reproducing internal variability. Reference: Rahmstorf, S., Box, J. E., Feulner, G., Mann, M. E., Robinson, A., Rutherford, S., & Schaffernicht, E. J. (2015). Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation. Nature Climate Change, 5(5), 475-480. doi:10.1038/nclimate2554
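The core of Bayesian model averaging here is turning each model's hindcast skill into a posterior weight. A minimal sketch under strong simplifying assumptions (synthetic data, independent Gaussian errors with a common sigma, equal model priors; the paper additionally models autocorrelated errors and variability skill):

import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" index and a few model hindcasts over the same
# years; each model also provides a future projection (all invented).
years = np.arange(1880, 2005)
obs = -0.002 * (years - 1880) + 0.1 * rng.standard_normal(years.size)
hindcasts = np.array([
    a * (years - 1880) + 0.1 * rng.standard_normal(years.size)
    for a in (-0.004, -0.003, -0.002, -0.001, 0.000)])
projections = np.array([-1.0, -0.75, -0.5, -0.25, 0.0])  # e.g. 2100 change

# BMA weights: posterior model probability proportional to the Gaussian
# likelihood of the observations under each model's hindcast.
sigma = 0.1
loglike = -0.5 * np.sum((hindcasts - obs) ** 2, axis=1) / sigma ** 2
w = np.exp(loglike - loglike.max())
w /= w.sum()
print("weights              :", np.round(w, 3))
print("weighted projection  :", w @ projections)
print("unweighted projection:", projections.mean())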
Unraveling multiple changes in complex climate time series using Bayesian inference
NASA Astrophysics Data System (ADS)
Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias
2016-04-01
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes is therefore of particular importance in the environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. Thus, based on a generic transition model, a probability expression is derived that is capable of indicating multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to environmental time series (about 100 a) from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967, interpreted as climate indicators of the African region of the Plio-Pleistocene period (about 5 Ma). The detailed inference unravels multiple transitions underlying the indirect climate observations, coinciding with established global climate events.
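For a single shift in central tendency, the Bayesian change-point posterior has a simple closed form: for each candidate split, the two segment means can be integrated out analytically (flat priors, known variance), leaving within-segment scatter plus an Occam penalty. A minimal sketch on synthetic data (not the paper's kernel-based multi-transition method):

import numpy as np

rng = np.random.default_rng(5)

# Synthetic record with one shift in the mean at t = 60.
n, k_true, sigma = 100, 60, 1.0
x = np.concatenate([rng.normal(0.0, sigma, k_true),
                    rng.normal(1.2, sigma, n - k_true)])

def log_marginal(seg):
    # Marginal likelihood of one segment with its mean integrated out:
    # -0.5 * SS / sigma^2 - 0.5 * log(m), dropping k-independent terms.
    m = seg.size
    ss = np.sum((seg - seg.mean()) ** 2)
    return -0.5 * ss / sigma ** 2 - 0.5 * np.log(m)

ks = np.arange(2, n - 1)
logp = np.array([log_marginal(x[:k]) + log_marginal(x[k:]) for k in ks])
post = np.exp(logp - logp.max())
post /= post.sum()
print("MAP change point:", ks[np.argmax(post)])
print("95% credible set size:",
      np.sum(np.sort(post)[::-1].cumsum() < 0.95) + 1)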
Kennedy, Paula L; Woodbury, Allan D
2002-01-01
In groundwater flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural-log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m²/s followed a natural-log-normal distribution for both aquifers, with a mean of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal behaviour was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data were available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, homogeneous in regions where little or no information is available to alter an initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
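Bayesian updating of a Gaussian field from sparse, noisy measurements amounts to conditioning a multivariate normal prior on the data. A 1-D sketch (the prior mean -7.2 matches the paper's carbonate aquifer; the variance, correlation length, noise level and covariance form are illustrative assumptions):

import numpy as np

# Prior for ln-transmissivity along a 1-D transect: constant mean and
# an exponential covariance.
xs = np.linspace(0.0, 100.0, 200)          # km
mu0, var, corr_len, tau = -7.2, 1.0, 20.0, 0.2
C = var * np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)

# Sparse, uncertain measurements at three wells.
obs_idx = np.array([20, 90, 160])
y = np.array([-6.5, -7.8, -7.0])

# Bayesian updating = Gaussian conditioning of the prior on the data:
# posterior mean mu0 + K (y - mu0), posterior covariance C - K Cxo^T.
Coo = C[np.ix_(obs_idx, obs_idx)] + tau ** 2 * np.eye(obs_idx.size)
Cxo = C[:, obs_idx]
K = Cxo @ np.linalg.inv(Coo)
post_mean = mu0 + K @ (y - mu0)
post_var = np.diag(C - K @ Cxo.T)
print("posterior mean at the wells:", post_mean[obs_idx].round(2))
print("posterior sd at the wells  :", np.sqrt(post_var[obs_idx]).round(2))

Away from the wells the posterior mean relaxes back to the prior, which is exactly the behaviour the paper describes in data-poor regions.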
NASA Astrophysics Data System (ADS)
Sheldrake, T. E.; Aspinall, W. P.; Odbert, H. M.; Wadge, G.; Sparks, R. S. J.
2017-07-01
Following a cessation in eruptive activity it is important to understand how a volcano will behave in the future and when it may next erupt. Such an assessment can be based on the volcano's long-term pattern of behaviour and insights into its current state via monitoring observations. We present a Bayesian network that integrates these two strands of evidence to forecast future eruptive scenarios using expert elicitation. The Bayesian approach provides a framework to quantify the magmatic causes in terms of volcanic effects (i.e., eruption and unrest). In October 2013, an expert elicitation was performed to populate a Bayesian network designed to help forecast future eruptive (in-)activity at Soufrière Hills Volcano. The Bayesian network was devised to assess the state of the shallow magmatic system, as a means to forecast the future eruptive activity in the context of the long-term behaviour at similar dome-building volcanoes. The findings highlight coherence amongst experts when interpreting the current behaviour of the volcano, but reveal considerable ambiguity when relating this to longer patterns of volcanism at dome-building volcanoes, as a class. By asking questions in terms of magmatic causes, the Bayesian approach highlights the importance of using short-term unrest indicators from monitoring data as evidence in long-term forecasts at volcanoes. Furthermore, it highlights potential biases in the judgements of volcanologists and identifies sources of uncertainty in terms of magmatic causes rather than scenario-based outcomes.
Cosmological study with galaxy clusters detected by the Sunyaev-Zel'dovich effect
NASA Astrophysics Data System (ADS)
Mak, Suet-Ying
In this work, we present various studies to forecast the power of galaxy clusters detected by the Sunyaev-Zel'dovich (SZ) effect in constraining cosmological models. The SZ effect is regarded as one of the new and promising techniques for identifying clusters and studying cluster physics. With the latest data released in recent years from the SZ telescopes, it is essential to explore their potential in providing cosmological information and to investigate their relative strengths with respect to galaxy cluster data from X-ray and optical surveys, as well as other cosmological probes such as the Cosmic Microwave Background (CMB). One of the topics concerns resolving the debate on the existence of an anomalous large-scale bulk flow as measured from the kinetic SZ signal of galaxy clusters in the WMAP CMB data. We predict that if such a measurement is done with the latest CMB data from the Planck satellite, the sensitivity will be improved by a factor of >5, and thus be able to provide an independent view of its existence. As it turns out, the Planck data, when using the technique developed in this work, find that the observed bulk flow amplitude is consistent with that expected from ΛCDM, which is in clear contradiction to the previous claim of a significant bulk flow detection in the WMAP data. We also forecast the capability of the ongoing and future cluster surveys identified through the thermal SZ (tSZ) effect in constraining three extensions to the ΛCDM model: the modified gravity f(R) model, primordial non-Gaussianity of density perturbations, and the presence of massive neutrinos. We do so by employing their effects on the cluster number count and power spectrum and using Fisher matrix analysis to estimate the errors on the model parameters. We find that SZ cluster surveys can provide vital complementary information to that expected from non-cluster probes. Our results therefore give confidence for pursuing these extended cosmological models with SZ clusters.
Carnegie Hubble Program: A Mid-Infrared Calibration of the Hubble Constant
NASA Technical Reports Server (NTRS)
Freedman, Wendy L.; Madore, Barry F.; Scowcroft, Victoria; Burns, Chris; Monson, Andy; Persson, S. Eric; Seibert, Mark; Rigby, Jane
2012-01-01
Using a mid-infrared calibration of the Cepheid distance scale based on recent observations at 3.6 micrometers with the Spitzer Space Telescope, we have obtained a new, high-accuracy calibration of the Hubble constant. We have established the mid-IR zero point of the Leavitt law (the Cepheid period-luminosity relation) using time-averaged 3.6 micrometer data for 10 high-metallicity Milky Way Cepheids having independently measured trigonometric parallaxes. We have adopted the slope of the PL relation using time-averaged 3.6 micrometer data for 80 long-period Large Magellanic Cloud (LMC) Cepheids falling in the period range 0.8 < log(P) < 1.8. We find a new reddening-corrected distance modulus to the LMC of 18.477 +/- 0.033 (systematic) mag. We re-examine the systematic uncertainties in H_0, also taking into account new data over the past decade. In combination with the new Spitzer calibration, the systematic uncertainty in H_0 over that obtained by the Hubble Space Telescope Key Project has decreased by over a factor of three. Applying the Spitzer calibration to the Key Project sample, we find a value of H_0 = 74.3 +/- 2.1 (systematic) km/s/Mpc, corresponding to a 2.8% systematic uncertainty in the Hubble constant. This result, in combination with WMAP7 measurements of the cosmic microwave background anisotropies and assuming a flat universe, yields a value of the equation of state for dark energy, w_0 = -1.09 +/- 0.10. Alternatively, relaxing the constraints on flatness and the numbers of relativistic species, and combining our results with those of WMAP7, Type Ia supernovae and baryon acoustic oscillations, yields w_0 = -1.08 +/- 0.10 and a value of N_eff = 4.13 +/- 0.67, mildly consistent with the existence of a fourth neutrino species.
The Atacama Cosmology Telescope: cosmological parameters from three seasons of data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sievers, Jonathan L.; Appel, John William; Hlozek, Renée A.
2013-10-01
We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²C_ℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 ± 1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N_eff = 2.79 ± 0.56, in agreement with the canonical value of N_eff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σm_ν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y_p = 0.225 ± 0.034, and measure no variation in the fine structure constant α since recombination, with α/α_0 = 1.004 ± 0.005. We also find no evidence for any running of the scalar spectral index, dn_s/d ln k = −0.004 ± 0.012.
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination with the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of heteroscedasticity in future observation data as well as uncertainty sources on potential pumping and observation locations.
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
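A parameter-free Gaussian cue-combination sketch of the idea: the memory colour acts as a prior that is combined with the sensory measurement in a reliability-weighted average. The chromatic axis, prior and noise values below are illustrative assumptions, not the paper's measured parameters.

# One chromatic dimension (arbitrary units): 0 = true grey, positive =
# the object's typical hue (e.g. yellow for a banana).
c_typical, sd_prior = 10.0, 8.0   # memory colour prior
sd_sensory = 4.0                  # sensory measurement noise

# Gaussian prior times Gaussian likelihood: the posterior mean is the
# reliability-weighted average of memory colour and sensory evidence.
w = sd_sensory ** 2 / (sd_sensory ** 2 + sd_prior ** 2)

def perceived(stimulus):
    return w * c_typical + (1.0 - w) * stimulus

# A colourimetrically grey object (stimulus = 0) still looks slightly
# coloured; the achromatic setting is the stimulus that appears grey,
# i.e. the solution of perceived(c) = 0.
print("perceived colour of a physically grey object:", perceived(0.0))
print("achromatic adjustment:", -w * c_typical / (1.0 - w))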
BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliadis, C.; Anderson, K. S.; Coc, A.
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
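A heavily simplified sketch of one ingredient, treating a systematic normalization of a data set as a nuisance parameter with a lognormal prior while fitting a quadratic S-factor by random-walk Metropolis. The polynomial form, prior width and all numbers are illustrative assumptions, not the paper's models.

import numpy as np

rng = np.random.default_rng(8)

# Synthetic "measured" S-factor data: quadratic S(E) scaled by an
# unknown normalization f (a systematic effect), plus statistical noise.
S_true = np.array([0.20, 0.01, 0.005])     # S0, S1, S2
f_true, E = 1.10, np.linspace(0.05, 1.0, 20)
S = np.polyval(S_true[::-1], E)
y = f_true * S * (1 + 0.03 * rng.standard_normal(E.size))
sigma = 0.03 * y

def logpost(p):
    S0, S1, S2, f = p
    if f <= 0:
        return -np.inf
    model = f * (S0 + S1 * E + S2 * E ** 2)
    loglike = -0.5 * np.sum(((y - model) / sigma) ** 2)
    # lognormal prior on the normalization: median 1, 5% uncertainty
    logprior = -0.5 * (np.log(f) / 0.05) ** 2
    return loglike + logprior

# Random-walk Metropolis over (S0, S1, S2, f).
p = np.array([0.2, 0.0, 0.0, 1.0])
lp = logpost(p)
chain = []
for it in range(40000):
    q = p + np.array([2e-3, 5e-3, 5e-3, 5e-3]) * rng.standard_normal(4)
    lq = logpost(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    if it >= 10000:  # discard burn-in
        chain.append(p.copy())
chain = np.array(chain)
print("posterior means (S0, S1, S2, f):", chain.mean(axis=0).round(4))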
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.
2004-01-01
This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data, with specific application to the Ismenius area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inversion of gravity for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with the assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty and what has been learned from observations. We will discuss advanced numerical techniques, including Markov chain Monte Carlo methods.
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper presents a study of censored survival data from cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. With a gamma prior, the likelihood function yields a gamma posterior distribution. The posterior distribution is used to find the estimator $\hat{\lambda}_{BL}$ via the Linex approximation. From $\hat{\lambda}_{BL}$, the estimators of the hazard function $\hat{h}_{BL}$ and the survival function $\hat{S}_{BL}$ follow. Finally, we compare maximum likelihood estimation (MLE) with the Linex approach, identifying the better method for this observation as the one with the smaller mean squared error (MSE). The MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while those under the Bayesian Linex approach are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than MLE.
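For reference, a short sketch of the Linex machinery the abstract invokes, assuming a loss shape parameter a ≠ 0 and a Gamma(α, β) posterior in the rate parametrization (the symbols are generic labels, not necessarily the paper's notation):

```latex
\[
L(\Delta) = e^{a\Delta} - a\Delta - 1, \qquad \Delta = \hat{\lambda} - \lambda ,
\]
\[
\hat{\lambda}_{BL}
= -\frac{1}{a}\,\ln \mathbb{E}\!\left[ e^{-a\lambda} \mid \text{data} \right]
= \frac{\alpha}{a}\,\ln\!\left( 1 + \frac{a}{\beta} \right).
\]
```

The last equality uses the gamma moment generating function, and the estimator tends to the posterior mean α/β as a → 0, recovering the squared-error answer.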
Robust Learning of High-dimensional Biological Networks with Bayesian Networks
NASA Astrophysics Data System (ADS)
Nägele, Andreas; Dejori, Mathäus; Stetter, Martin
Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes infeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding to variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected by the omission of genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.
NASA Astrophysics Data System (ADS)
Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick
2017-09-01
Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
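The paper's model is fully Bayesian, but the underlying convergence-pattern idea can be caricatured with point estimates. A minimal sketch with made-up order-by-order predictions, where y_ref and Q stand in for the reference scale and expansion parameter of the EKM analysis:

```python
import numpy as np

# Hypothetical order-by-order EFT predictions for one observable
# (LO, NLO, N2LO, N3LO); all numbers are invented for illustration.
y = np.array([10.0, 12.5, 12.1, 12.22])
y_ref, Q = 12.0, 0.3

# Dimensionless coefficients from successive corrections:
# y_k = y_ref * sum_{n<=k} c_n Q^n  =>  c_n = (y_n - y_{n-1}) / (y_ref Q^n)
orders = np.arange(len(y))
corrections = np.diff(y, prepend=0.0)
c = corrections / (y_ref * Q**orders)

# Naive "naturalness" scale: root-mean-square of the observed coefficients.
cbar = np.sqrt(np.mean(c**2))

# First-omitted-term estimate of the truncation error, summing the
# geometric tail: |delta y| ~ y_ref * cbar * Q^(k+1) / (1 - Q).
k = len(y) - 1
trunc_err = y_ref * cbar * Q**(k + 1) / (1 - Q)
print(f"coefficients: {np.round(c, 2)}, cbar = {cbar:.2f}")
print(f"estimated truncation error at order {k}: {trunc_err:.3g}")
```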
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
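A minimal sketch of the linearized step described above, with a hypothetical 6×6 element covariance and Jacobian (in practice the Jacobian would come from the variational equations or from finite differences of an orbit propagator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sigma_el: 6x6 covariance of the orbital elements at epoch (made-up values);
# J: 2x6 Jacobian d(sky position)/d(elements) at the prediction epoch.
Sigma_el = np.diag([1e-8, 1e-8, 1e-9, 1e-9, 1e-9, 1e-7])
J = rng.normal(size=(2, 6))

# Law of error propagation: Sigma_pos = J Sigma_el J^T
Sigma_pos = J @ Sigma_el @ J.T

# Semi-axes of the 1-sigma positional uncertainty ellipse come from the
# eigendecomposition of the projected 2x2 covariance.
eigvals, _ = np.linalg.eigh(Sigma_pos)
print("1-sigma ellipse semi-axes:", np.sqrt(eigvals))
```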
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
NASA Astrophysics Data System (ADS)
Piccirilli, M. P.; Landau, S. J.; León, G.
2016-08-01
The cosmic microwave background radiation is one of the most powerful tools to study the early Universe and its evolution, providing also a method to test different cosmological scenarios. We consider alternative inflationary models where the emergence of the seeds of cosmic structure from a perfectly isotropic and homogeneous universe can be explained by the self-induced collapse of the inflaton wave function. Some of these alternative models may be indistinguishable from the standard model, while others must be compared with observational data through statistical analysis. In this article we show results concerning the first Planck release, the Atacama Cosmology Telescope, the South Pole Telescope, the WMAP, and Sloan Digital Sky Survey datasets, reaching good agreement between data and theoretical predictions. In future work, we aim to achieve tighter limits on the cosmological parameters using the latest Planck release.
Neutrino Mass Bounds from 0νββ Decays and Large Scale Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keum, Y.-Y.; Department of Physics, National Taiwan University, Taipei, Taiwan 10672; Ichiki, K.
2008-05-21
We investigate how the total mass sum of neutrinos can be constrained from neutrinoless double beta decay and from cosmological probes, using the cosmic microwave background (WMAP 3-year results) and large-scale structure data sets including 2dFGRS and SDSS. First we discuss, in brief, the current status of neutrino mass bounds from neutrino beta decays and cosmological constraints within the flat ΛCDM model. In addition, we explore an interacting neutrino dark-energy model, where the evolution of neutrino masses is determined by a quintessence scalar field which is responsible for the cosmic acceleration today. Assuming the flatness of the universe, the constraint we can derive from the current observations is Σm_ν < 0.87 eV at the 95% confidence level, which is consistent with Σm_ν < 0.68 eV in the flat ΛCDM model.
NASA Technical Reports Server (NTRS)
Gorski, K. M.; Hivon, Eric; Banday, A. J.; Wandelt, Benjamin D.; Hansen, Frode K.; Reinecke, Martin; Bartelmann, Matthias
2005-01-01
HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.
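As a usage illustration (not part of the paper), a short sketch with the healpy Python bindings to the HEALPix library, showing the Nside → Npix relation, pixel lookup, and a harmonic transform of a toy map:

```python
import healpy as hp
import numpy as np

# HEALPix resolution parameter; Npix = 12 * Nside^2 equal-area pixels.
nside = 64
npix = hp.nside2npix(nside)          # 49152 pixels for Nside = 64
print(f"Nside={nside}: {npix} pixels, "
      f"{hp.nside2resol(nside, arcmin=True):.1f} arcmin resolution")

# Map a sky direction (colatitude theta, longitude phi, in radians)
# to its pixel index in the RING ordering scheme.
theta, phi = np.radians(90.0 - 41.0), np.radians(83.0)
pix = hp.ang2pix(nside, theta, phi)

# Build a toy map and take its angular power spectrum C_l.
m = np.random.default_rng(0).normal(size=npix)
cl = hp.anafast(m)                   # fast spherical-harmonic analysis
```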
Large-Scale Corrections to the CMB Anisotropy from Asymptotic de Sitter Mode
NASA Astrophysics Data System (ADS)
Sojasi, A.
2018-01-01
In this study, large-scale effects of an asymptotic de Sitter mode on the CMB anisotropy are investigated. Besides the slow variation of the Hubble parameter at the onset of the last stage of inflation, recent observational constraints from Planck and WMAP on the spectral index confirm that the geometry of the universe cannot be pure de Sitter in this era. Motivated by this evidence, we use this mode to calculate the power spectrum of the CMB anisotropy on large scales. It is found that the CMB spectrum depends on the index ν of the Hankel function; in the de Sitter limit ν → 3/2, the power spectrum reduces to the scale-invariant result. The result also shows that the spectrum of anisotropy depends on the angular scale and the slow-roll parameter, and these additional corrections are swept away by a cutoff scale parameter H ≪ M_* < M_P.
Planck 2015 results. XXV. Diffuse low-frequency Galactic foregrounds
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Orlando, E.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vidal, M.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We discuss the Galactic foreground emission between 20 and 100 GHz based on observations by Planck and WMAP. The total intensity in this part of the spectrum is dominated by free-free and spinning dust emission, whereas the polarized intensity is dominated by synchrotron emission. The Commander component-separation tool has been used to separate the various astrophysical processes in total intensity. Comparison with radio recombination line templates verifies the recovery of the free-free emission along the Galactic plane. Comparison of the high-latitude Hα emission with our free-free map shows residuals that correlate with dust optical depth, consistent with a fraction (≈30%) of Hα having been scattered by high-latitude dust. We highlight a number of diffuse spinning dust morphological features at high latitude. There is substantial spatial variation in the spinning dust spectrum, with the emission peak (in Iν) ranging from below 20 GHz to more than 50 GHz. There is a strong tendency for the spinning dust component near many prominent H II regions to have a higher peak frequency, suggesting that this increase in peak frequency is associated with dust in the photo-dissociation regions around the nebulae. The emissivity of spinning dust in these diffuse regions is of the same order as previous detections in the literature. Over the entire sky, the Commander solution finds more anomalous microwave emission (AME) than the WMAP component maps, at the expense of synchrotron and free-free emission. This can be explained by the difficulty in separating multiple broadband components with a limited number of frequency maps. Future surveys, particularly at 5-20 GHz, will greatly improve the separation by constraining the synchrotron spectrum. We combine Planck and WMAP data to make the highest signal-to-noise ratio maps yet of the intensity of the all-sky polarized synchrotron emission at frequencies above a few GHz. Most of the high-latitude polarized emission is associated with distinct large-scale loops and spurs, and we re-discuss their structure. We argue that nearly all the emission at 40deg > l > -90deg is part of the Loop I structure, and show that the emission extends much further into the southern Galactic hemisphere than previously recognised, giving Loop I an ovoid rather than circular outline. However, it does not continue as far as the "Fermi bubble/microwave haze", making it less probable that these are part of the same structure. We identify a number of new faint features in the polarized sky, including a dearth of polarized synchrotron emission directly correlated with a narrow, roughly 20deg long filament seen in Hα at high Galactic latitude. Finally, we look for evidence of polarized AME; however, many AME regions are significantly contaminated by polarized synchrotron emission, and we find a 2σ upper limit of 1.6% in the Perseus region.
The Atacama Cosmology Telescope: Cosmological Parameters from the 2008 Power Spectrum
NASA Technical Reports Server (NTRS)
Dunkley, J.; Hlozek, R.; Sievers, J.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.;
2011-01-01
We present cosmological parameters derived from the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz and 218 GHz over 296 deg² with the Atacama Cosmology Telescope (ACT) during its 2008 season. ACT measures fluctuations at scales 500 < l < 10,000. We fit a model for the lensed CMB, Sunyaev-Zel'dovich (SZ), and foreground contribution to the 148 GHz and 218 GHz power spectra, including thermal and kinetic SZ, Poisson power from radio and infrared point sources, and clustered power from infrared point sources. At l = 3000, about half the power at 148 GHz comes from primary CMB after masking bright radio sources. The power from thermal and kinetic SZ is estimated to be B_3000 = 6.8 ± 2.9 μK², where B_l ≡ l(l + 1)C_l/2π. The IR Poisson power at 148 GHz is B_3000 = 7.8 ± 0.7 μK² (C_l = 5.5 ± 0.5 nK²), and a clustered IR component is required with B_3000 = 4.6 ± 0.9 μK², assuming an analytic model for its power spectrum shape. At 218 GHz only about 15% of the power, approximately 27 μK², is CMB anisotropy at l = 3000. The remaining 85% is attributed to IR sources (approximately 50% Poisson and 35% clustered), with spectral index α = 3.69 ± 0.14 for flux scaling as S_ν ∝ ν^α. We estimate primary cosmological parameters from the less contaminated 148 GHz spectrum, marginalizing over SZ and source power. The ΛCDM cosmological model is a good fit to the data (χ²/dof = 29/46), and ΛCDM parameters estimated from ACT+Wilkinson Microwave Anisotropy Probe (WMAP) are consistent with the seven-year WMAP limits, with scale-invariant n_s = 1 excluded at 99.7% confidence level (CL) (3σ). A model with no CMB lensing is disfavored at 2.8σ. By measuring the third to seventh acoustic peaks, and probing the Silk damping regime, the ACT data improve limits on cosmological parameters that affect the small-scale CMB power. The ACT data combined with WMAP give a 6σ detection of primordial helium, with Y_p = 0.313 ± 0.044, and a 4σ detection of relativistic species, assumed to be neutrinos, with N_eff = 5.3 ± 1.3 (4.6 ± 0.8 with BAO+H_0 data). From the CMB alone the running of the spectral index is constrained to be dn_s/d ln k = -0.034 ± 0.018, the limit on the tensor-to-scalar ratio is r < 0.25 (95% CL), and the possible contribution of Nambu cosmic strings to the power spectrum is constrained to string tension Gμ < 1.6 × 10^-7 (95% CL).
Bayesian Logic Programs for Plan Recognition and Machine Reading
2012-12-01
A key advantage of these models is that they can handle both uncertainty and structured/relational data. As a result, they are widely used in domains like social network analysis, biological data analysis, and natural language processing. [Figure: (a) the Story Understanding data set; (b) the logical representation of the observations; (c) the set of ground rules obtained from logical abduction.]
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem, there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
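The ABC-SMC scheme assessed above refines the basic ABC rejection idea. A toy sketch of plain rejection ABC, with an invented AR(1) simulator standing in for the stochastic clock model and the sample variance as the summary statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy stochastic simulator: an AR(1) trace whose decay
    rate theta is the unknown parameter."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal(scale=0.1)
    return x

def abc_rejection(data, n_draws=2000, eps=0.005):
    """Plain ABC rejection: keep prior draws whose simulated
    summary statistic lands within eps of the observed one."""
    s_obs = np.var(data)
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)          # draw from the prior
        if abs(np.var(simulate(theta)) - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

observed = simulate(0.7)
posterior = abc_rejection(observed)
print(f"accepted {posterior.size} draws, "
      f"posterior mean theta ~ {posterior.mean():.2f}")
```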
NASA Technical Reports Server (NTRS)
Gilkey, Kelly M.; Myers, Jerry G.; McRae, Michael P.; Griffin, Elise A.; Kallrui, Aditya S.
2012-01-01
The Exploration Medical Capability project is creating a catalog of risk assessments using the Integrated Medical Model (IMM). The IMM is a software-based system intended to assist mission planners in preparing for spaceflight missions by helping them to make informed decisions about medical preparations and supplies needed for combating and treating various medical events using Probabilistic Risk Assessment. The objective is to use statistical analyses to inform the IMM decision tool with estimated probabilities of medical events occurring during an exploration mission. Because data regarding astronaut health are limited, Bayesian statistical analysis is used. Bayesian inference combines prior knowledge, such as data from the general U.S. population, the U.S. Submarine Force, or the analog astronaut population located at the NASA Johnson Space Center, with observed data for the medical condition of interest. The posterior results reflect the best evidence for specific medical events occurring in flight. Bayes theorem provides a formal mechanism for combining available observed data with data from similar studies to support the quantification process. The IMM team performed Bayesian updates on the following medical events: angina, appendicitis, atrial fibrillation, atrial flutter, dental abscess, dental caries, dental periodontal disease, gallstone disease, herpes zoster, renal stones, seizure, and stroke.
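A minimal sketch of the kind of conjugate update described, using a Beta-Binomial model with invented counts (not actual IMM inputs): prior evidence from an analog population is combined with in-flight observations to give a posterior event probability.

```python
from scipy import stats

# Conjugate Beta-Binomial update for the probability of a medical event.
# All counts below are made up for illustration.
prior_events, prior_n = 3, 1000        # e.g. analog-population data
a0, b0 = 1 + prior_events, 1 + prior_n - prior_events   # Beta prior

observed_events, observed_n = 1, 120   # in-flight observations
a1 = a0 + observed_events
b1 = b0 + observed_n - observed_events

posterior = stats.beta(a1, b1)
print(f"posterior mean risk: {posterior.mean():.4f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975])}")
```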
NASA Astrophysics Data System (ADS)
Perkins, S. J.; Marais, P. C.; Zwart, J. T. L.; Natarajan, I.; Tasse, C.; Smirnov, O.
2015-09-01
We present Montblanc, a GPU implementation of the Radio Interferometer Measurement Equation (RIME) in support of the Bayesian Inference for Radio Observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. χ² values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model. As most of the elements of the RIME and χ² calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple χ² values. Modified model parameters are transferred to the GPU between each iteration. We implemented Montblanc as a Python package based upon NVIDIA's CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc's RIME implementation is performant: on an NVIDIA K40 it is approximately 250 times faster than MeqTrees on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator's GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single- and double-precision floating point, respectively. However, OSKAR's RIME implementation is more general than Montblanc's BIRO-tailored RIME. Theoretical analysis of Montblanc's dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 caches.
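A toy sketch of the χ² likelihood term that drives the sampling, in plain NumPy rather than CUDA; the visibilities and noise level are synthetic, and the point is that every term in the sum is independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observed and model complex visibilities with a known
# per-visibility noise level sigma (all numbers invented).
n_vis = 100_000
sigma = 0.1
v_obs = rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis)
v_mod = v_obs + sigma * (rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis))

# chi^2 = sum |V_obs - V_model|^2 / sigma^2; each term is independent,
# which is what makes the evaluation embarrassingly parallel on a GPU.
chi2 = np.sum(np.abs(v_obs - v_mod) ** 2) / sigma**2
log_likelihood = -0.5 * chi2
print(f"chi2 per degree of freedom: {chi2 / (2 * n_vis):.3f}")
```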
Caudek, Corrado; Fantoni, Carlo; Domini, Fulvio
2011-01-01
We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information. PMID:21533197
A Bayesian connectivity-based approach to constructing probabilistic gene regulatory networks.
Zhou, Xiaobo; Wang, Xiaodong; Pal, Ranadip; Ivanov, Ivan; Bittner, Michael; Dougherty, Edward R
2004-11-22
We have hypothesized that the construction of transcriptional regulatory networks using a method that optimizes connectivity would lead to regulation consistent with biological expectations. A key expectation is that the hypothetical networks should produce a few, very strong attractors, highly similar to the original observations, mimicking biological state stability and determinism. Another central expectation is that, since it is expected that the biological control is distributed and mutually reinforcing, interpretation of the observations should lead to a very small number of connection schemes. We propose a fully Bayesian approach to constructing probabilistic gene regulatory networks (PGRNs) that emphasizes network topology. The method computes the possible parent sets of each gene, the corresponding predictors and the associated probabilities based on a nonlinear perceptron model, using a reversible jump Markov chain Monte Carlo (MCMC) technique, and an MCMC method is employed to search the network configurations to find those with the highest Bayesian scores to construct the PGRN. The Bayesian method has been used to construct a PGRN based on the observed behavior of a set of genes whose expression patterns vary across a set of melanoma samples exhibiting two very different phenotypes with respect to cell motility and invasiveness. Key biological features have been faithfully reflected in the model. Its steady-state distribution contains attractors that are either identical or very similar to the states observed in the data, and many of the attractors are singletons, which mimics the biological propensity to stably occupy a given state. Most interestingly, the connectivity rules for the most optimal generated networks constituting the PGRN are remarkably similar, as would be expected for a network operating on a distributed basis, with strong interactions between the components.
Magnetised Strings in Λ-Dominated Anisotropic Universe
NASA Astrophysics Data System (ADS)
Goswami, G. K.; Yadav, Anil Kumar; Dewangan, R. N.
2016-11-01
In this paper, we investigate the existence of a Λ-dominated anisotropic universe filled with magnetized strings. The observed acceleration of the universe is explained by introducing a positive cosmological constant Λ into the Einstein field equations, which is mathematically equivalent to dark energy with equation-of-state (EOS) parameter equal to -1. The present values of the matter and dark energy parameters (Ω_m)_0 and (Ω_Λ)_0 are estimated from high-redshift (0.3 ≤ z ≤ 1.4) SN Ia supernova data of observed apparent magnitudes, with their possible errors, taken from the Union 2.1 compilation. We find that the best-fit values for (Ω_m)_0 and (Ω_Λ)_0 are 0.2920 and 0.7076, respectively, in good agreement with recent astrophysical observations from the latest surveys such as WMAP and Planck. Various physical parameters, such as the matter and dark energy densities, the present age of the universe, and the present value of the deceleration parameter, have been obtained on the basis of these values of (Ω_m)_0 and (Ω_Λ)_0. We also estimate that the acceleration began in the past at z = 0.6845, i.e. 6.2341 Gyr ago.
Bayesian network learning for natural hazard assessments
NASA Astrophysics Data System (ADS)
Vogel, Kristin
2016-04-01
Even though quite different in occurrence and consequences, from a modelling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding. On top of the uncertainty about the modelling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Thus, for reliable natural hazard assessments it is crucial not only to capture and quantify the involved uncertainties, but also to express and communicate them in an intuitive way. Decision-makers, who often find it difficult to deal with uncertainties, might otherwise return to familiar (mostly deterministic) proceedings. In the scope of the DFG research training group "NatRiskChange" we apply the probabilistic framework of Bayesian networks for diverse natural hazard and vulnerability studies. The great potential of Bayesian networks was already shown in previous natural hazard assessments. Treating each model component as a random variable, Bayesian networks aim at capturing the joint distribution of all considered variables. Hence, each conditional distribution of interest (e.g. the effect of precautionary measures on damage reduction) can be inferred. The (in)dependencies between the considered variables can be learned purely data-driven or be given by experts; even a combination of both is possible. By translating the (in)dependencies into a graph structure, Bayesian networks provide direct insights into the workings of the system and allow us to learn about the underlying processes. Despite numerous studies on the topic, learning Bayesian networks from real-world data remains challenging. In previous studies, e.g. on earthquake-induced ground motion and flood damage assessments, we tackled the problems arising with continuous variables and incomplete observations. Further studies raise the challenge of relying on very small data sets. Since parameter estimates for complex models based on few observations are unreliable, it is necessary to focus on simplified, yet still meaningful models. A so-called Markov blanket approach is developed to identify the most relevant model components and to construct a simple Bayesian network based on those findings. Since the procedure is completely data-driven, it can easily be transferred to various applications in natural hazard domains. This study is funded by the Deutsche Forschungsgemeinschaft (DFG) within the research training programme GRK 2043/1 "NatRiskChange - Natural hazards and risks in a changing world" at Potsdam University.
Reducing uncertainties in decadal variability of the global carbon budget with multiple datasets
Li, Wei; Ciais, Philippe; Wang, Yilong; Peng, Shushi; Broquet, Grégoire; Ballantyne, Ashley P.; Canadell, Josep G.; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Peters, Glen P.; Piao, Shilong; Pongratz, Julia
2016-01-01
Conventional calculations of the global carbon budget infer the land sink as a residual between emissions, atmospheric accumulation, and the ocean sink. Thus, the land sink accumulates the errors from the other flux terms and bears the largest uncertainty. Here, we present a Bayesian fusion approach that combines multiple observations in different carbon reservoirs to optimize the land (B) and ocean (O) carbon sinks, land use change emissions (L), and indirectly fossil fuel emissions (F) from 1980 to 2014. Compared with the conventional approach, Bayesian optimization decreases the uncertainties in B by 41% and in O by 46%. The L uncertainty decreases by 47%, whereas the F uncertainty is marginally improved through the knowledge of natural fluxes. Both the ocean and net land uptake (B + L) rates have positive trends of 29 ± 8 and 37 ± 17 Tg C yr⁻² since 1980, respectively. Our Bayesian fusion of multiple observations reduces uncertainties, thereby allowing us to isolate important variability in global carbon cycle processes. PMID:27799533
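The simplest instance of such a fusion is the precision-weighted combination of independent Gaussian constraints on one quantity. A sketch with invented numbers (the paper's optimization couples several reservoirs and years, which this ignores):

```python
import numpy as np

# Independent Gaussian estimates of the same quantity (e.g. the land
# carbon sink, in Pg C/yr); the values are purely illustrative.
means = np.array([1.5, 2.1, 1.8])     # estimates from three data streams
sigmas = np.array([0.9, 0.6, 0.5])    # their 1-sigma uncertainties

w = 1.0 / sigmas**2                   # precisions act as weights
mu_post = np.sum(w * means) / np.sum(w)
sigma_post = np.sqrt(1.0 / np.sum(w))

# The fused sigma is always smaller than the best individual one,
# which is the sense in which combining reservoirs reduces uncertainty.
print(f"fused estimate: {mu_post:.2f} +/- {sigma_post:.2f}")
```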
Bayesian modeling of cue interaction: bistability in stereoscopic slant perception.
van Ee, Raymond; Adams, Wendy J; Mamassian, Pascal
2003-07-01
Our two eyes receive different views of a visual scene, and the resulting binocular disparities enable us to reconstruct its three-dimensional layout. However, the visual environment is also rich in monocular depth cues. We examined the resulting percept when observers view a scene in which there are large conflicts between the surface slant signaled by binocular disparities and the slant signaled by monocular perspective. For a range of disparity-perspective cue conflicts, many observers experience bistability: They are able to perceive two distinct slants and to flip between the two percepts in a controlled way. We present a Bayesian model that describes the quantitative aspects of perceived slant on the basis of the likelihoods of both perspective and disparity slant information, combined with prior assumptions about the shape and orientation of objects in the scene. Our Bayesian approach can be regarded as an overarching framework that allows researchers to study all cue-integration aspects, including perceptual decisions, in a unified manner.
Features in the primordial spectrum from WMAP: A wavelet analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Souradeep, Tarun; Manimaran, P.
2007-06-15
Precise measurements of the anisotropies in the cosmic microwave background enable us to do an accurate study of the form of the primordial power spectrum for a given set of cosmological parameters. In a previous paper [A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004)], we implemented an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the first year of WMAP data to determine the primordial power spectrum assuming a concordance cosmological model. This recovered spectrum has a likelihood far better than a scale-invariant or "best fit" scale-free spectrum (Δln L ≈ 25 with respect to the Harrison-Zeldovich spectrum, and Δln L ≈ 11 with respect to the power-law spectrum with n_s = 0.95). In this paper we use the discrete wavelet transform (DWT) to decompose the local features of the recovered spectrum individually to study their effect and significance on the recovered angular power spectrum and hence the likelihood. We show that besides the infrared cutoff at the horizon scale, the associated features of the primordial power spectrum around the horizon have a significant effect on improving the likelihood. The strong features are localized at the horizon scale.
Information gains from cosmic microwave background experiments
NASA Astrophysics Data System (ADS)
Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël
2014-07-01
To shed light on the fundamental problems posed by dark energy and dark matter, a large number of experiments have been performed and combined to constrain cosmological models. We propose a novel way of quantifying the information gained by updates on the parameter constraints from a series of experiments which can either complement earlier measurements or replace them. For this purpose, we use the Kullback-Leibler divergence or relative entropy from information theory to measure differences in the posterior distributions in model parameter space from a pair of experiments. We apply this formalism to a historical series of cosmic microwave background experiments ranging from Boomerang to WMAP, SPT, and Planck. Considering different combinations of these experiments, we thus estimate the information gain in units of bits and distinguish contributions from the reduction of statistical errors and the "surprise" corresponding to a significant shift of the parameters' central values. For this experiment series, we find individual relative entropy gains ranging from about 1 to 30 bits. In some cases, e.g. when comparing WMAP and Planck results, we find that the gains are dominated by the surprise rather than by improvements in statistical precision. We discuss how this technique provides a useful tool for both quantifying the constraining power of data from cosmological probes and detecting the tensions between experiments.
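For Gaussian posteriors the relative entropy between an earlier constraint P_1 = N(μ_1, Σ_1) and an updated one P_2 = N(μ_2, Σ_2) in a k-dimensional parameter space has a closed form (a standard result, in my notation rather than necessarily the paper's):

```latex
\[
D_{\mathrm{KL}}\!\left(P_2 \,\|\, P_1\right)
= \frac{1}{2}\left[
\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_2\right)
+ \left(\mu_1-\mu_2\right)^{\mathsf{T}}\Sigma_1^{-1}\left(\mu_1-\mu_2\right)
- k
+ \ln\frac{\det\Sigma_1}{\det\Sigma_2}
\right].
\]
```

Dividing by ln 2 converts nats to bits; the quadratic mean-shift term carries the "surprise", while the trace and determinant terms track the change in statistical precision.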
NASA Astrophysics Data System (ADS)
Perera, B. B. P.; Stappers, B. W.; Babak, S.; Keith, M. J.; Antoniadis, J.; Bassa, C. G.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Graikou, E.; Guillemot, L.; Janssen, G. H.; Karuppusamy, R.; Kramer, M.; Lazarus, P.; Lentati, L.; Liu, K.; Lyne, A. G.; McKee, J. W.; Osłowski, S.; Perrodin, D.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Theureau, G.; Verbiest, J. P. W.; Taylor, S. R.
2018-07-01
We search for continuous gravitational waves (CGWs) produced by individual supermassive black hole binaries in circular orbits using high-cadence timing observations of PSR J1713+0747. We observe this millisecond pulsar using the telescopes in the European Pulsar Timing Array with an average cadence of approximately 1.6 d over the period between 2011 April and 2015 July, including an approximately daily average between 2013 February and 2014 April. The high-cadence observations are used to improve the pulsar timing sensitivity across the gravitational wave frequency range of 0.008-5μHz. We use two algorithms in the analysis, including a spectral fitting method and a Bayesian approach. For an independent comparison, we also use a previously published Bayesian algorithm. We find that the Bayesian approaches provide optimal results and the timing observations of the pulsar place a 95 per cent upper limit on the sky-averaged strain amplitude of CGWs to be ≲3.5 × 10-13 at a reference frequency of 1 μHz. We also find a 95 per cent upper limit on the sky-averaged strain amplitude of low-frequency CGWs to be ≲1.4 × 10-14 at a reference frequency of 20 nHz.
Bayesian networks in neuroscience: a survey.
Bielza, Concha; Larrañaga, Pedro
2014-01-01
Bayesian networks are a type of probabilistic graphical model that lies at the intersection between statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where applications have focused on specific problems, like functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: discover associations between variables, perform probabilistic reasoning over the model, and classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics, and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive, and medical) of the brain aspects to be studied.
Application of Bayesian Networks to hindcast barrier island morphodynamics
Wilson, Kathleen E.; Adams, Peter N.; Hapke, Cheryl J.; Lentz, Erika E.; Brenner, Owen T.
2015-01-01
We refine a preliminary Bayesian Network by 1) increasing model experience through additional observations, 2) including anthropogenic modification history, and 3) replacing parameterized wave impact values with maximum run-up elevation. Further, we develop and train a pair of generalized models with an additional dataset encompassing a different storm event, which expands the observations beyond our hindcast objective. We compare the skill of the generalized models against the Nor'Ida-specific model formulation, balancing the reduced skill with an expectation of increased transferability. Results of Nor'Ida hindcasts ranged in skill from 0.37 to 0.51 and in accuracy from 65.0% to 81.9%.
Partitioning Ocean Wave Spectra Obtained from Radar Observations
NASA Astrophysics Data System (ADS)
Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine
2016-08-01
2D spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other on a Bayesian approach. We tested both methods on a set of simulated data for SWIM, the Ku-band real-aperture radar carried on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches, and show that the Bayesian method can also be applied to other kinds of wave spectrum observations, such as those obtained with the airborne radar wave spectrometer KuROS.
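A sketch of the watershed variant only, on a synthetic two-system spectrum, using scikit-image (an assumed tool choice, not necessarily the authors'): spectral peaks seed the markers, and flooding the inverted spectrum delimits one partition per wave system.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Toy 2-D wave spectrum E(f, theta): two smooth Gaussian wave systems.
f, th = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128),
                    indexing="ij")
E = (np.exp(-((f - 0.3)**2 + (th - 0.4)**2) / 0.01) +
     0.6 * np.exp(-((f - 0.7)**2 + (th - 0.7)**2) / 0.02))

# Markers at local spectral peaks; the watershed of the inverted
# spectrum then grows one labelled region around each peak.
coords = peak_local_max(E, min_distance=10)
markers = np.zeros(E.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-E, markers, mask=E > 0.05 * E.max())
print(f"found {labels.max()} wave systems")
```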
The 2.3 GHz continuum survey of the GEM project
NASA Astrophysics Data System (ADS)
Tello, C.; Villela, T.; Torres, S.; Bersanelli, M.; Smoot, G. F.; Ferreira, I. S.; Cingoz, A.; Lamb, J.; Barbosa, D.; Perez-Becker, D.; Ricciardi, S.; Currivan, J. A.; Platania, P.; Maino, D.
2013-08-01
Context. Determining the spectral and spatial characteristics of the radio continuum of our Galaxy is an experimentally challenging endeavour for improving our understanding of the astrophysics of the interstellar medium. This knowledge has also become of paramount significance for cosmology, since Galactic emission is the main source of astrophysical contamination in measurements of the cosmic microwave background (CMB) radiation on large angular scales. Aims: We present a partial-sky survey of the radio continuum at 2.3 GHz within the scope of the Galactic Emission Mapping (GEM) project, an observational program conceived and developed to reveal the large-scale properties of Galactic synchrotron radiation through a set of self-consistent surveys of the radio continuum between 408 MHz and 10 GHz. Methods: The GEM experiment uses a portable and double-shielded 5.5-m radio telescope in altazimuthal configuration to map 60-degree-wide declination bands from different observational sites by circularly scanning the sky at zenithal angles of 30° from a constantly rotating platform. The observations were accomplished with a total power receiver, whose front-end high electron mobility transistor (HEMT) amplifier was matched directly to a cylindrical horn at the prime focus of the parabolic reflector. The Moon was used to calibrate the antenna temperature scale, and the preparation of the map required direct subtraction and destriping algorithms to remove ground contamination as the most significant source of systematic error. Results: We used 484 h of total intensity observations from two locations in Colombia and Brazil to yield 66% sky coverage from to . The observations in Colombia were obtained with a horizontal HPBW of and a vertical HPBW of . The pointing accuracy was and the RMS sensitivity was 11.42 mK. The observations in Brazil were obtained with a horizontal HPBW of and a vertical HPBW of . The pointing accuracy was and the RMS sensitivity was 8.24 mK. The zero-level uncertainty of the combined survey is 103 mK with a temperature scale error of 5% after direct correlation with the Rhodes/HartRAO survey at 2326 MHz on a T-T plot. Conclusions: The sky brightness distribution into regions of low and high emission in the GEM survey is consistent with the appearance of a transition region as seen in the Haslam 408 MHz and WMAP K-band surveys. Preliminary results also show that the temperature spectral index between 408 MHz and the 2.3 GHz band of the GEM survey has a weak spatial correlation with these regions, but it steepens significantly from high to low emission regions with respect to the WMAP K-band survey. The survey is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/556/A1
Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site
NASA Astrophysics Data System (ADS)
Wang, Yu; Aladejare, Adeyemi Emman
2016-09-01
Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from quantitative GSI chart, as an alternative to the direct observational method which requires vast geological experience of rock. GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty which arises from its development. The presence of such model uncertainty affects the GSI estimated from GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty during GSI estimation from the GSI chart. A major challenge for quantifying the GSI chart model uncertainty is a lack of the original datasets that have been used to develop the GSI chart, since the GSI chart was developed from past experience without referring to specific datasets. This paper intends to tackle this problem by developing a Bayesian approach for quantifying the model uncertainty in GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using GSI chart for characterization of site-specific GSI in a transparent manner.
A Bayesian CUSUM plot: Diagnosing quality of treatment.
Rosthøj, Steen; Jacobsen, Rikke-Line
2017-12-01
To present a CUSUM plot based on Bayesian diagnostic reasoning displaying evidence in favour of "healthy" rather than "sick" quality of treatment (QOT), and to demonstrate a technique using Kaplan-Meier survival curves permitting application to case series with ongoing follow-up. For a case series with known final outcomes: Consider each case a diagnostic test of good versus poor QOT (expected vs. increased failure rates), determine the likelihood ratio (LR) of the observed outcome, convert LR to weight taking log to base 2, and add up weights sequentially in a plot showing how many times odds in favour of good QOT have been doubled. For a series with observed survival times and an expected survival curve: Divide the curve into time intervals, determine "healthy" and specify "sick" risks of failure in each interval, construct a "sick" survival curve, determine the LR of survival or failure at the given observation times, convert to weights, and add up. The Bayesian plot was applied retrospectively to 39 children with acute lymphoblastic leukaemia with completed follow-up, using Nordic collaborative results as reference, showing equal odds between good and poor QOT. In the ongoing treatment trial, with 22 of 37 children still at risk for event, QOT has been monitored with average survival curves as reference, odds so far favoring good QOT 2:1. QOT in small patient series can be assessed with a Bayesian CUSUM plot, retrospectively when all treatment outcomes are known, but also in ongoing series with unfinished follow-up. © 2017 John Wiley & Sons, Ltd.
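The weight-of-evidence bookkeeping described above is easy to sketch in code (an illustrative sketch only, with assumed failure probabilities and function names; the survival-curve variant would replace the per-case Bernoulli likelihoods with interval-specific risks):

```python
import numpy as np

def bayesian_cusum(outcomes, p_good, p_poor):
    """Cumulative log2 evidence in favour of good quality of treatment (QOT).

    outcomes : iterable of 0/1 flags per case (1 = treatment failure)
    p_good   : failure probability expected under good QOT
    p_poor   : failure probability assumed under poor QOT
    """
    weights = []
    for failed in outcomes:
        # Likelihood ratio of the observed outcome: P(obs | good) / P(obs | poor)
        lr = (p_good / p_poor) if failed else ((1 - p_good) / (1 - p_poor))
        # log2 converts the LR into a number of doublings of the odds
        weights.append(np.log2(lr))
    return np.cumsum(weights)

# Example: 20 cases, 10% expected failure rate vs. 25% under poor QOT
plot_values = bayesian_cusum([0] * 18 + [1] * 2, p_good=0.10, p_poor=0.25)
```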
Whose statistical reasoning is facilitated by a causal structure intervention?
McNair, Simon; Feeney, Aidan
2015-02-01
People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430-450, 2007) proposed that a causal Bayesian framework accounts for people's errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
Bayesian Analysis of the Association between Family-Level Factors and Siblings' Dental Caries.
Wen, A; Weyant, R J; McNeil, D W; Crout, R J; Neiswanger, K; Marazita, M L; Foxman, B
2017-07-01
We conducted a Bayesian analysis of the association between family-level socioeconomic status and smoking and the prevalence of dental caries among siblings (children from infancy to 14 y) living in rural and urban Northern Appalachia, using data from the Center for Oral Health Research in Appalachia (COHRA). The observed proportion of siblings sharing caries was significantly different from that predicted assuming siblings' caries status was independent. Using a Bayesian hierarchical model, we found that the inclusion of a household factor significantly improved the goodness of fit. Other findings showed an inverse association between parental education and siblings' caries and a positive association between households with smokers and siblings' caries. Our study strengthens existing evidence suggesting that increased parental education and decreased parental cigarette smoking are associated with reduced childhood caries in the household. Our results also demonstrate the value of a Bayesian approach, which allows us to include household as a random effect, thereby providing more accurate estimates than those obtained using generalized linear mixed models.
A Bayesian Approach to More Stable Estimates of Group-Level Effects in Contextual Studies.
Zitzmann, Steffen; Lüdtke, Oliver; Robitzsch, Alexander
2015-01-01
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
BANYAN_Sigma: Bayesian classifier for members of young stellar associations
NASA Astrophysics Data System (ADS)
Gagné, Jonathan; Mamajek, Eric E.; Malo, Lison; Riedel, Adric; Rodriguez, David; Lafrenière, David; Faherty, Jacqueline K.; Roy-Loubier, Olivier; Pueyo, Laurent; Robin, Annie C.; Doyon, René
2018-01-01
BANYAN_Sigma calculates the membership probability that a given astrophysical object belongs to one of the 27 currently known young associations within 150 pc of the Sun, using Bayesian inference. This tool uses the sky position and proper motion measurements of an object, with optional radial velocity (RV) and distance (D) measurements, to derive a Bayesian membership probability. By default, the priors are adjusted such that a probability threshold of 90% will recover 50%, 68%, 82%, or 90% of true association members, depending on which observables are input (only sky position and proper motion; with RV; with D; or with both RV and D, respectively). The algorithm is implemented as a Python package and in IDL, and is also available as an interactive web page.
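The inference step has the familiar Bayes-rule shape, sketched below with toy multivariate-Gaussian hypothesis models (an illustration only; this is not BANYAN_Sigma's actual API or its models):

```python
import numpy as np
from scipy.stats import multivariate_normal

def membership_probability(obs, models, priors):
    """Toy Bayes-rule classifier over competing hypotheses.

    obs    : observable vector (e.g. sky position and proper motion)
    models : dict name -> (mean, cov) of an assumed Gaussian model
    priors : dict name -> prior probability (summing to 1)
    """
    likes = {name: multivariate_normal.pdf(obs, mean=mu, cov=cov)
             for name, (mu, cov) in models.items()}
    evidence = sum(priors[n] * likes[n] for n in models)
    return {n: priors[n] * likes[n] / evidence for n in models}

# Toy usage: one association vs. the field, in a 4-d observable space
models = {"assoc": (np.zeros(4), np.eye(4)),
          "field": (np.zeros(4), 25.0 * np.eye(4))}
priors = {"assoc": 0.05, "field": 0.95}
probs = membership_probability(np.array([0.3, -0.2, 0.1, 0.4]), models, priors)
```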
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly, there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm, which seeks the most probable posterior distribution over the states within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower dimensional systems. However, for higher dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for their relationships through the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems, and we discuss the potential and limitations of the new approach. We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given parameters in the model. This also allows inference of parameters such as observation errors, as well as parameters in the model and the model error representation, particularly if the latter is written as a deterministic form with small additive noise. We stress that our approach can address very long time windows and weak constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of future directions for our approach.
Reasoning and choice in the Monty Hall Dilemma (MHD): implications for improving Bayesian reasoning
Tubau, Elisabet; Aguilar-Lleyda, David; Johnson, Eric D.
2015-01-01
The Monty Hall Dilemma (MHD) is a two-step decision problem involving counterintuitive conditional probabilities. The first choice is made among three equally probable options, whereas the second choice takes place after the elimination of one of the non-selected options which does not hide the prize. Differing from most Bayesian problems, statistical information in the MHD has to be inferred, either by learning outcome probabilities or by reasoning from the presented sequence of events. This often leads to suboptimal decisions and erroneous probability judgments. Specifically, decision makers commonly develop the wrong intuition that the final probabilities are equally distributed, together with a preference for their first choice. Several studies have shown that repeated practice enhances sensitivity to the different reward probabilities, but does not facilitate correct Bayesian reasoning. However, modest improvements in probability judgments have been observed after guided explanations. To explain these dissociations, the present review focuses on two types of causes producing the observed biases: emotion-based choice biases and cognitive limitations in understanding probabilistic information. Among the latter, we identify a crucial cause for the universal difficulty in overcoming the equiprobability illusion: incomplete representation of prior and conditional probabilities. We conclude that repeated practice and/or high incentives can be effective for overcoming choice biases, but promoting an adequate partitioning of possibilities seems to be necessary for overcoming cognitive illusions and improving Bayesian reasoning. PMID:25873906
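For readers who want to check the counterintuitive conditionals directly, a small Monte Carlo simulation (added here for illustration; not part of the reviewed study) reproduces the 2/3 versus 1/3 split:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Monte Carlo estimate of the MHD win probability."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a non-chosen, non-prize door; switching then wins
        # exactly when the first pick was wrong.
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

# monty_hall(switch=True) -> ~2/3; monty_hall(switch=False) -> ~1/3
```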
Bayesian power spectrum inference with foreground and target contamination treatment
NASA Astrophysics Data System (ADS)
Jasche, J.; Lavaux, G.
2017-10-01
This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power spectra and three-dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block-sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large-scale structure analyses. As a result, the method infers jointly and fully self-consistently three-dimensional density fields, cosmological power spectra, luminosity-dependent galaxy biases, noise levels of the respective galaxy distributions, and coefficients for a set of a priori specified foreground templates. In addition, this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power spectra via applications to realistic mock galaxy observations that are subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels, our method reliably and robustly infers three-dimensional density fields and corresponding cosmological power spectra from deep galaxy surveys. Furthermore, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power spectrum. This effect amounts to correlations and anti-correlations of up to 10 per cent across wide ranges in Fourier space.
ERIC Educational Resources Information Center
Ayaburi, Emmanuel Wusuhon Yanibo
2017-01-01
This dissertation investigates the effect of observational learning in crowdsourcing markets as a lens to identify appropriate mechanism(s) for sustaining this increasingly popular business model. Observational learning occurs when crowdsourcing participating agents obtain knowledge from signals they observe in the marketplace and incorporate such…
How Recent History Affects Perception: The Normative Approach and Its Heuristic Approximation
Raviv, Ofri; Ahissar, Merav; Loewenstein, Yonatan
2012-01-01
There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in their failure to fully adapt to novel environments. PMID:23133343
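The proposed comparator is simple enough to sketch (an illustration of the decaying-average idea described above; the mixing weight eta and the function name are assumptions, not fitted values from the paper):

```python
def two_tone_responses(trials, eta=0.4):
    """Heuristic comparator: second tone vs. a decaying average of first tones.

    trials : list of (f1, f2) tone magnitudes per trial
    eta    : assumed weight on the history (illustrative, not fitted)
    Returns True where the model reports "second tone larger".
    """
    responses, reference = [], None
    for f1, f2 in trials:
        # Exponentially decaying average of the current and past first tones;
        # pulling the reference toward history produces the contraction bias.
        reference = f1 if reference is None else (1 - eta) * f1 + eta * reference
        responses.append(f2 > reference)
    return responses
```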
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis, and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
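The role of the fitted mixture is easy to see in a stripped-down sketch (plain importance sampling rather than the bridge sampling used by GMIS; function names are illustrative and posterior draws are assumed to be available, e.g. from DREAM):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def is_evidence(posterior_samples, log_post_unnorm, n_draws=10_000, n_components=3):
    """Importance-sampling estimate of the log evidence using a fitted mixture.

    posterior_samples : (n, d) array of posterior draws (e.g. from DREAM)
    log_post_unnorm   : callable giving log prior + log likelihood at theta
    Note: GMIS itself uses bridge sampling; plain importance sampling is
    shown here only to illustrate the role of the mixture proposal.
    """
    gm = GaussianMixture(n_components=n_components).fit(posterior_samples)
    draws, _ = gm.sample(n_draws)
    log_q = gm.score_samples(draws)                     # proposal log-density
    log_p = np.array([log_post_unnorm(t) for t in draws])
    log_w = log_p - log_q                               # importance weights
    m = log_w.max()                                     # log-sum-exp stability
    return m + np.log(np.mean(np.exp(log_w - m)))       # log evidence estimate
```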
Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko
2018-05-31
The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as a default in the absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold, and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones even when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge, and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
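A single-level conjugate sketch shows how the exceedance probability falls out of the posterior (the paper's model is hierarchical over clusters; the Gamma(a0, b0) prior here is an assumed weakly informative placeholder):

```python
from scipy.stats import gamma

def cdr_exceedance(deaths, person_days, threshold, a0=1.0, b0=1.0):
    """P(CDR > threshold) under a conjugate Gamma-Poisson model.

    deaths      : total observed deaths in the survey
    person_days : total observed person-time
    threshold   : emergency threshold, e.g. 1 death/10,000 person-days = 1e-4
    a0, b0      : assumed weakly informative Gamma(shape, rate) prior
    """
    a_post = a0 + deaths                 # conjugate posterior shape
    b_post = b0 + person_days            # conjugate posterior rate
    return gamma.sf(threshold, a_post, scale=1.0 / b_post)

# Example: 25 deaths over 200,000 person-days against the 1/10,000/day threshold
p_emergency = cdr_exceedance(25, 200_000, 1e-4)
```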
Schold, Jesse D; Miller, Charles M; Henry, Mitchell L; Buccini, Laura D; Flechner, Stuart M; Goldfarb, David A; Poggio, Emilio D; Andreoni, Kenneth A
2017-06-01
Scientific Registry of Transplant Recipients (SRTR) report cards of US organ transplant center performance are publicly available and used for quality oversight. Low-performance (LP) evaluations are associated with changes in practice, including reduced transplant rates and increased waitlist removals. In 2014, the SRTR implemented new Bayesian methodology to evaluate performance, which was not adopted by the Centers for Medicare and Medicaid Services (CMS). In May 2016, CMS altered their performance criteria, reducing the likelihood of LP evaluations. Our aims were to evaluate incidence, survival rates, and volume of LP centers under the Bayesian, historical (old-CMS), and new-CMS criteria using 6 consecutive program-specific reports (PSRs), January 2013 to July 2015, among adult kidney transplant centers. Bayesian, old-CMS, and new-CMS criteria identified 13.4%, 8.3%, and 6.1% LP PSRs, respectively. Over the 3-year period, 31.9% (Bayesian), 23.4% (old-CMS), and 19.8% (new-CMS) of centers had 1 or more LP evaluations. For small centers (<83 transplants/PSR), there were 4-fold more LP evaluations (52 vs 13 PSRs) for 1-year mortality with Bayesian versus new-CMS criteria. For large centers (>183 transplants/PSR), there were 3-fold more LP evaluations for 1-year mortality with Bayesian versus new-CMS criteria, with median differences in observed and expected patient survival of -1.6% and -2.2%, respectively. A significant proportion of kidney transplant centers are identified as low performing, with relatively small survival differences compared with expected. Bayesian criteria have significantly higher flagging rates, and the new-CMS criteria modestly reduce flagging. Critical appraisal of performance criteria is needed to assess whether quality oversight is meeting intended goals and whether further modifications could reduce risk aversion, more efficiently allocate resources, and increase transplant opportunities.
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
Spertus, Jacob V; Normand, Sharon-Lise T
2018-04-23
High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
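The beta-binomial conjugacy underlying the new estimator can be sketched in a few lines (a bare-bones illustration for two randomized arms with assumed uniform priors; the paper's estimator additionally conditions on the modeled propensity scores):

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_binomial_ate(y_treat, y_ctrl, n_draws=5000, a0=1.0, b0=1.0):
    """Posterior draws of a risk difference via beta-binomial conjugacy.

    y_treat, y_ctrl : 0/1 outcome arrays for treated and control units
    a0, b0          : assumed uniform Beta(1, 1) priors
    """
    p1 = rng.beta(a0 + y_treat.sum(), b0 + len(y_treat) - y_treat.sum(), n_draws)
    p0 = rng.beta(a0 + y_ctrl.sum(), b0 + len(y_ctrl) - y_ctrl.sum(), n_draws)
    return p1 - p0                       # posterior of the treatment effect

# Example: summarize with a posterior mean and 95% credible interval
ate = beta_binomial_ate(np.array([1, 0, 1, 1]), np.array([0, 0, 1, 0]))
lo, hi = np.percentile(ate, [2.5, 97.5])
```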
Lo, Benjamin W. Y.; Macdonald, R. Loch; Baker, Andrew; Levine, Mitchell A. H.
2013-01-01
Objective. The novel clinical prediction approach of Bayesian neural networks with fuzzy logic inferences is created and applied to derive prognostic decision rules in cerebral aneurysmal subarachnoid hemorrhage (aSAH). Methods. The approach of Bayesian neural networks with fuzzy logic inferences was applied to data from five trials of Tirilazad for aneurysmal subarachnoid hemorrhage (3551 patients). Results. Bayesian meta-analyses of observational studies on aSAH prognostic factors gave generalizable posterior distributions of population mean log odds ratios (ORs). Similar trends were noted in Bayesian and linear regression ORs. Significant outcome predictors include normal motor response, cerebral infarction, history of myocardial infarction, cerebral edema, history of diabetes mellitus, fever on day 8, prior subarachnoid hemorrhage, admission angiographic vasospasm, neurological grade, intraventricular hemorrhage, ruptured aneurysm size, history of hypertension, vasospasm day, age, and mean arterial pressure. Heteroscedasticity was present in the nontransformed dataset. Artificial neural networks found nonlinear relationships with 11 hidden variables in 1 layer, using the multilayer perceptron model. Fuzzy logic decision rules (centroid defuzzification technique) denoted cut-off points for poor prognosis at greater than 2.5 clusters. Discussion. This aSAH prognostic system makes use of existing knowledge, recognizes unknown areas, incorporates one's clinical reasoning, and compensates for uncertainty in prognostication. PMID:23690884
Bayesian approach for counting experiment statistics applied to a neutrino point source analysis
NASA Astrophysics Data System (ADS)
Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.
2013-12-01
In this paper we present a model-independent analysis method, following Bayesian statistics, to analyse data from a generic counting experiment, and apply it to the search for neutrinos from point sources. We discuss a test statistic defined within a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation compares directly with a frequentist approach and is robust in the case of low-count observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via the concept of prior information, without the need for the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and have obtained a flux upper limit which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube using the same data set.
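For intuition, the simplest version of such a counting-experiment posterior and its credible upper limit can be written in a few lines (a textbook sketch assuming a known background and a flat prior on the signal rate; this is not the paper's full test statistic):

```python
import numpy as np

def signal_upper_limit(n_obs, background, cl=0.90):
    """Credible upper limit on a Poisson signal rate with known background.

    Assumes a flat prior on s >= 0; the n_obs! term is a constant and
    drops out of the normalization.
    """
    s_grid = np.linspace(0.0, 50.0, 5001)
    log_post = n_obs * np.log(background + s_grid) - (background + s_grid)
    post = np.exp(log_post - log_post.max())   # unnormalized posterior
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s_grid[np.searchsorted(cdf, cl)]

# Example: 5 events observed on an expected background of 3.2
ul90 = signal_upper_limit(5, 3.2)
```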
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
NASA Astrophysics Data System (ADS)
Arregui, Iñigo
2018-01-01
In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology consists in the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology and a prospect of possible future extensions.
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
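In symbols, the construction amounts to a scale mixture of Gaussians (a hedged sketch: the zero mean and the variance scaling σ² = cI are assumptions made here for illustration, not notation taken from the paper):

```latex
% E: a field component; I: configurationally averaged spectral intensity,
% acting as the Bayesian prior p(I); sigma^2 = c I is an assumed scaling.
P(E) = \int_0^{\infty} \frac{1}{\sqrt{2\pi c I}}
       \exp\!\left(-\frac{E^{2}}{2 c I}\right) p(I)\,\mathrm{d}I
```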
Exploiting Cross-sensitivity by Bayesian Decoding of Mixed Potential Sensor Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreller, Cortney
LANL mixed-potential electrochemical sensor (MPES) device arrays were coupled with advanced Bayesian inference treatment of the physical model of relevant sensor-analyte interactions. We demonstrated that our approach could be used to uniquely discriminate the composition of ternary gas mixtures with three discrete MPES sensors, with an average error of less than 2%. We also observed that the MPES exhibited excellent stability over a year of operation at elevated temperatures in the presence of test gases.
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to compare the detection performance of the proposed Bayesian method and a control chart method, a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior and avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel closed-form derivation introduced in the paper. This paper thus introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
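A toy version of the posterior computation reads as follows (illustrative only: the paper's semi-informative prior over anomaly patterns is richer than the uniform grid of rate multipliers assumed here, and all names are placeholders):

```python
import numpy as np
from scipy.stats import poisson

def outbreak_posterior(count, baseline_rate, prior_outbreak=0.01,
                       multipliers=np.linspace(1.2, 3.0, 10)):
    """Posterior probability that today's count reflects an increased rate.

    prior_outbreak and the grid of rate multipliers stand in for the
    paper's semi-informative prior over anomalous patterns.
    """
    like_base = poisson.pmf(count, baseline_rate)
    # Marginal likelihood under "outbreak": average over increased rates
    like_out = np.mean(poisson.pmf(count, baseline_rate * multipliers))
    num = prior_outbreak * like_out
    return num / (num + (1.0 - prior_outbreak) * like_base)

# Example: 18 respiratory complaints against a baseline mean of 10/day
p_outbreak = outbreak_posterior(18, 10.0)
```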
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h of total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method occurs when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Huff, Eric Michael
Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin2 and median redshift of zmed = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations σ8 to vary, the best-fit value of the amplitude of matter fluctuations is σ8 = 0.636 +0.109/-0.154 (1σ); without systematic errors this would be σ8 = 0.636 +0.099/-0.137 (1σ). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are σ8 = 0.784 +0.028/-0.026 (1σ). The 2σ error range is 14 percent smaller than from WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.
Assessment of CT image quality using a Bayesian approach
NASA Astrophysics Data System (ADS)
Reginatto, M.; Anton, M.; Elster, C.
2017-08-01
One of the most promising approaches for evaluating CT image quality is task-specific quality assessment. This involves a simplified version of a clinical task, e.g. deciding whether an image belongs to the class of images that contain the signature of a lesion or not. Task-specific quality assessment can be done by model observers, which are mathematical procedures that carry out the classification task. The most widely used figure of merit for CT image quality is the area under the ROC curve (AUC), a quantity which characterizes the performance of a given model observer. In order to estimate AUC from a finite sample of images, different approaches from classical statistics have been suggested. The goal of this paper is to introduce task-specific quality assessment of CT images to metrology and to propose a novel Bayesian estimation of AUC for the channelized Hotelling observer (CHO) applied to the task of detecting a lesion at a known image location. It is assumed that signal-present and signal-absent images follow multivariate normal distributions with the same covariance matrix. The Bayesian approach results in a posterior distribution for the AUC of the CHO which provides in addition a complete characterization of the uncertainty of this figure of merit. The approach is illustrated by its application to both simulated and experimental data.
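Under the stated equal-covariance Gaussian assumption, a plug-in (non-Bayesian) CHO AUC can be computed directly, which clarifies the figure of merit the posterior is built around (a sketch with illustrative names; this is not the paper's Bayesian estimator):

```python
import numpy as np
from scipy.stats import norm

def cho_auc(signal_imgs, noise_imgs, channels):
    """Plug-in AUC of a channelized Hotelling observer (CHO).

    signal_imgs, noise_imgs : (n, npix) arrays of sample images
    channels                : (npix, nchan) channel matrix
    Assumes equal-covariance multivariate normal image classes, as in
    the abstract.
    """
    vs = signal_imgs @ channels                    # channel outputs, signal present
    vn = noise_imgs @ channels                     # channel outputs, signal absent
    dmu = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    d2 = dmu @ np.linalg.solve(S, dmu)             # Hotelling detectability^2
    return norm.cdf(np.sqrt(d2) / np.sqrt(2.0))    # AUC = Phi(d_A / sqrt(2))
```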
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hausegger, Sebastian von; Liu, Hao; Sarkar, Subir
Cosmology has made enormous progress through studies of the cosmic microwave background; however, the subtle signals now being sought, such as B-mode polarisation due to primordial gravitational waves, are increasingly hard to disentangle from residual Galactic foregrounds in the derived CMB maps. We revisit our finding that on large angular scales there are traces of the nearby old supernova remnant Loop I in the WMAP 9-year map of the CMB, and confirm this with the new SMICA map from the Planck satellite.
Cosmic microwave background reconstruction from WMAP and Planck PR2 data
NASA Astrophysics Data System (ADS)
Bobin, J.; Sureau, F.; Starck, J.-L.
2016-06-01
We describe a new estimate of the cosmic microwave background (CMB) intensity map reconstructed by a joint analysis of the full Planck 2015 data (PR2) and nine years of WMAP data. The proposed map provides more than a mere update of the CMB map introduced in a previous paper since it benefits from an improvement of the component separation method L-GMCA (Local-Generalized Morphological Component Analysis), which facilitates efficient separation of correlated components. Based on the most recent CMB data, we further confirm previous results showing that the proposed CMB map estimate exhibits appealing characteristics for astrophysical and cosmological applications: I) it is a full-sky map as it did not require any inpainting or interpolation postprocessing; II) foreground contamination is very low even on the galactic center; and III) the map does not exhibit any detectable trace of thermal Sunyaev-Zel'dovich contamination. We show that its power spectrum is in good agreement with the Planck PR2 official theoretical best-fit power spectrum. Finally, following the principle of reproducible research, we provide the codes to reproduce the L-GMCA, which makes it the only reproducible CMB map. The reconstructed CMB map and the code are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A50
Constraints on Dark Energy from Baryon Acoustic Peak and Galaxy Cluster Gas Mass Measurements
NASA Astrophysics Data System (ADS)
Samushia, Lado; Ratra, Bharat
2009-10-01
We use baryon acoustic peak measurements by Eisenstein et al. and Percival et al., together with the Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the apparent acoustic horizon angle, and galaxy cluster gas mass fraction measurements of Allen et al., to constrain a slowly rolling scalar field dark energy model, phiCDM, in which dark energy's energy density changes in time. We also compare our phiCDM results with those derived for two more common dark energy models: the time-independent cosmological constant model, ΛCDM, and the XCDM parameterization of dark energy's equation of state. For time-independent dark energy, the Percival et al. measurements effectively constrain spatial curvature and favor a model close to spatially flat, mostly due to the WMAP cosmic microwave background prior used in the analysis. In a spatially flat model, the Percival et al. data less effectively constrain time-varying dark energy. The joint baryon acoustic peak and galaxy cluster gas mass constraints on the phiCDM model are consistent with, but tighter than, those derived from other data. A time-independent cosmological constant in a spatially flat model provides a good fit to the joint data, while the α parameter in the inverse power-law potential phiCDM model is constrained to be less than about 4 at the 3σ confidence level.
Was Star Formation Suppressed in High-Redshift Minihalos?
NASA Astrophysics Data System (ADS)
Haiman, Zoltán; Bryan, Greg L.
2006-10-01
The primordial gas in the earliest dark matter halos, collapsing at redshifts z~20, with masses Mhalo~106 Msolar and virial temperatures Tvir<104 K, relied on the presence of molecules for cooling. Several theoretical studies have suggested that gas contraction and star formation in these minihalos was suppressed by radiative, chemical, thermal, and dynamical feedback processes. The recent measurement by the Wilkinson Microwave Anisotropy Probe (WMAP) of the optical depth to electron scattering, τ~0.09+/-0.03, provides the first empirical evidence for this suppression. The new WMAP result is consistent with vanilla models of reionization, in which ionizing sources populate cold dark matter halos down to a virial temperature of Tvir=104 K. On the other hand, we show that in order to avoid overproducing the optical depth, the efficiency for the production of ionizing photons in minihalos must have been about an order of magnitude lower than expected from massive metal-free stars and lower than the efficiency in large halos that can cool via atomic hydrogen (Tvir>104 K). This conclusion is insensitive to assumptions about the efficiency of ionizing photon production in the large halos, as long as reionization ends by z=6, as required by the spectra of bright quasars at z<~6. Our conclusion is strengthened if the clumping of the ionized gas evolves with redshift, as suggested by semianalytical predictions and three-dimensional numerical simulations.
E and B families of the Stokes parameters in the polarized synchrotron and thermal dust foregrounds
NASA Astrophysics Data System (ADS)
Liu, Hao; Creswell, James; Naselsky, Pavel
2018-05-01
Better understanding of Galactic foregrounds is one of the main obstacles to detection of primordial gravitational waves through measurement of the B mode in the polarized microwave sky. We generalize the method proposed in [1] and decompose the polarization signals into the E and B families directly in the domain of the Stokes Q, U parameters as (Q,U) ≡ (QE,UE) + (QB,UB). This also enables an investigation of the morphology and the frequency dependence of these two families, which has been done in the WMAP K, Ka (tracing synchrotron emission) and Planck 2015 HFI maps (tracing thermal dust). The results reveal significant differences in spectra between the E and B families. The spectral index of the E family fluctuates less across the sky than that of the B family, and the same tendency occurs for the polarization angles of the dust and synchrotron channels. Applying our method to the North Polar Spur and BICEP2 zones in the WMAP and Planck data clearly indicates that these zones are characterized by very low polarization intensity of the B family compared to the E family. We have detected a global structure of the B family polarization angles at high Galactic latitudes which cannot be attributed to the cosmic microwave background or instrumental noise. However, we cannot exclude instrumental systematics as a partial contributor to these anomalies.
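For reference, the split quoted above rests on the standard spin-2 harmonic decomposition (sign conventions here follow common CMB usage and may differ from those of Ref. [1]):

```latex
% Spin-2 harmonic decomposition behind (Q,U) = (Q_E,U_E) + (Q_B,U_B).
(Q \pm iU)(\hat{n}) = -\sum_{\ell m}\left(a_{E,\ell m} \pm i\,a_{B,\ell m}\right)\,
                      {}_{\pm 2}Y_{\ell m}(\hat{n})
% The E family keeps only the a_{E,\ell m} (set a_{B,\ell m} = 0 before
% transforming back to maps); the B family keeps only the a_{B,\ell m}.
```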
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice sheet instability can trigger rapid retreat of marine ice streams, and recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI), here the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices tractable using a combination of polynomial chaos and Monte Carlo techniques.
NASA Technical Reports Server (NTRS)
Vangelder, B. H. W.
1978-01-01
Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and the earth's rotation, as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass, and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered as best (or solely) determinable by SLR, in contrast to VLBI and LLR.
A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.
Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F
2016-01-01
Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimality criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure in which the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential, or adaptive (either frequentist or fully Bayesian) designs. Non-informative normal priors on the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive versus non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs without stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials as well as optimizing the ethical concerns for patients enrolled in the trial.
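The conjugate update at the heart of the procedure is elementary (generic normal-normal algebra consistent with "combining the prior with the normal likelihood" above; the optimal-allocation formula and stopping rules are not reproduced here, and all numbers are illustrative):

```python
def posterior_log_hr(prior_mean, prior_var, obs_log_hr, obs_var):
    """Normal-normal conjugate update for the log hazard ratio."""
    w = prior_var / (prior_var + obs_var)              # weight on the data
    post_mean = (1.0 - w) * prior_mean + w * obs_log_hr
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Example: skeptical prior centred at 0, observed log HR of -0.3 (illustrative)
mean, var = posterior_log_hr(0.0, 0.5 ** 2, -0.3, 0.2 ** 2)
```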
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
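Given per-site log-likelihoods evaluated at posterior samples, CPO and LPML follow from the standard harmonic-mean identity (a generic sketch, not tied to any particular phylogenetics package; array names are illustrative):

```python
import numpy as np

def lpml(site_loglikes):
    """LPML and per-site log CPO from posterior-sample log-likelihoods.

    site_loglikes : (n_samples, n_sites) array with entry [t, i] equal to
                    log p(y_i | theta_t) for posterior sample theta_t
    """
    neg = -site_loglikes
    m = neg.max(axis=0)                            # per-site stabilizer
    # Harmonic-mean identity: CPO_i = 1 / E_post[1 / p(y_i | theta)]
    log_mean_inv = m + np.log(np.mean(np.exp(neg - m), axis=0))
    log_cpo = -log_mean_inv
    return log_cpo.sum(), log_cpo                  # (LPML, site-level values)
```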
Dolan, Raymond J.
2016-01-01
The weight with which a specific outcome feature contributes to preference quantifies a person's 'taste' for that feature. However, far from being fixed personality characteristics, tastes are plastic. They tend to align, for example, with those of others, even if such conformity is not rewarded. We hypothesised that people can be uncertain about their tastes. Personal tastes are therefore uncertain beliefs. People can thus learn about them by considering evidence, such as the preferences of relevant others, and then performing Bayesian updating. If a person's choice variability reflects uncertainty, as in random-preference models, then a signature of Bayesian updating is that the degree of taste change should correlate with that person's choice variability. Temporal discounting coefficients are an important example of taste, in this case for patience. These coefficients quantify impulsivity, have good psychometric properties, and can change upon observing others' choices. We examined discounting preferences in a novel, large community study of 14-24 year olds. We assessed discounting behaviour, including decision variability, before and after participants observed another person's choices. We found good evidence for taste uncertainty and for Bayesian taste updating. First, participants displayed decision variability which was better accounted for by a random-taste than by a response-noise model. Second, apparent taste shifts were well described by a Bayesian model taking into account taste uncertainty and the relevance of social information. Our findings have important neuroscientific, clinical and developmental significance. PMID:27447491
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. The method is effective and efficient in diagnosing faults from uncertain, incomplete information. PMID:25938760
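A sketch of the feature-extraction step, assuming the third-party PyEMD package provides the EEMD implementation; the signal and features are toy stand-ins for real vibration data.

```python
import numpy as np
from PyEMD import EEMD   # assumes the PyEMD package (pip install EMD-signal)

# Decompose a vibration signal with EEMD and use the IMF energies as fault
# features for the feature layer of a diagnostic Bayesian network.
t = np.linspace(0, 1, 2048)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)  # toy signal

eemd = EEMD()
imfs = eemd.eemd(signal, t)                  # intrinsic mode functions
energies = (imfs ** 2).sum(axis=1)           # energy of each IMF
features = energies / energies.sum()         # normalised fault features
print(features)                              # evidence for the BN's feature layer
```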
NASA Astrophysics Data System (ADS)
Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.
2017-02-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(E, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy’s gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10^11 M_⊙ with a 95% Bayesian credible region of (4.0–5.8) × 10^11 M_⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.
Bayesian analysis of rare events
NASA Astrophysics Data System (ADS)
Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
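The rejection-sampling reinterpretation at the heart of BUS can be sketched in a few lines: draw from the prior, accept with probability L(θ)/c, and treat the acceptance event like the rare "failure" event of structural reliability, so that rare-event estimators (FORM, IS, SuS) become applicable. This toy 1D version uses a Gaussian prior and likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood(theta, obs=2.5, sigma=0.5):
    # Unnormalised Gaussian likelihood of a single observation
    return np.exp(-0.5 * ((obs - theta) / sigma) ** 2)

c = 1.0                                   # upper bound on the likelihood
theta = rng.normal(0.0, 1.0, 100_000)     # prior samples
u = rng.uniform(size=theta.size)
accepted = theta[u <= likelihood(theta) / c]   # the BUS "observation domain"
print(accepted.mean(), accepted.std())    # posterior mean/sd estimates (~2.0, ~0.45)
```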
Application of Bayesian networks to real-time flood risk estimation
NASA Astrophysics Data System (ADS)
Garrote, L.; Molina, M.; Blasco, G.
2003-04-01
This paper presents the application of a computational paradigm taken from the field of artificial intelligence, the Bayesian network, to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to building decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, in the process of real-time decision making during floods. A rainfall-runoff model is only a step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated in the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network, a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to representing hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
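A toy causal network in this spirit (not the authors' model), assuming the pgmpy library: rainfall drives runoff, runoff drives flooding, and we query the flood probability given observed heavy rain.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Directed acyclic graph: Rain -> Runoff -> Flood (all illustrative CPTs)
model = BayesianNetwork([("Rain", "Runoff"), ("Runoff", "Flood")])
model.add_cpds(
    TabularCPD("Rain", 2, [[0.8], [0.2]]),                     # P(heavy rain) = 0.2
    TabularCPD("Runoff", 2, [[0.9, 0.3], [0.1, 0.7]],
               evidence=["Rain"], evidence_card=[2]),
    TabularCPD("Flood", 2, [[0.95, 0.4], [0.05, 0.6]],
               evidence=["Runoff"], evidence_card=[2]),
)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["Flood"], evidence={"Rain": 1}))            # P(Flood | heavy rain)
```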
Long-lived stops in MSSM scenarios with a neutralino LSP
NASA Astrophysics Data System (ADS)
Johansen, M.; Edsjö, J.; Hellman, S.; Milstead, D.
2010-08-01
This work investigates the possibility of a long-lived stop squark in supersymmetric models with the neutralino as the lightest supersymmetric particle (LSP). We study the implications of meta-stable stops on the sparticle mass spectra and the dark matter density. We find that in order to obtain a sufficiently long stop lifetime so as to be observable as a stable R-hadron at an LHC experiment, we need to fine-tune the mass degeneracy between the stop and the LSP considerably. This increases the stop-neutralino co-annihilation cross section, leaving the neutralino relic density lower than what is expected from the WMAP results for stop masses ≲ 1.5 TeV/c². However, if such scenarios are realised in nature, we demonstrate that the long-lived stops will be produced at the LHC and that stop-based R-hadrons with masses up to 1 TeV/c² can be detected after one year of running at design luminosity.
Cosmic microwave background trispectrum and primordial magnetic field limits.
Trivedi, Pranjal; Seshadri, T R; Subramanian, Kandaswamy
2012-06-08
Primordial magnetic fields will generate non-Gaussian signals in the cosmic microwave background (CMB), as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. We compute a new measure of magnetic non-Gaussianity, the CMB trispectrum, on large angular scales, sourced via the Sachs-Wolfe effect. The trispectra induced by magnetic energy density and by magnetic scalar anisotropic stress are found to have typical magnitudes of approximately a few times 10^-29 and 10^-19, respectively. Observational limits on CMB non-Gaussianity from WMAP data allow us to conservatively set upper limits of order a nG, and plausibly sub-nG, on the present value of the primordial cosmic magnetic field. This represents the tightest limit so far on the strength of primordial magnetic fields, on Mpc scales, and is better than limits from the CMB bispectrum and all modes in the CMB power spectrum. Thus, the CMB trispectrum is a new and more sensitive probe of primordial magnetic fields on large scales.
Entropy corrected holographic dark energy models in modified gravity
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila
We consider the power-law and the entropy-corrected holographic dark energy (HDE) models with the Hubble horizon in the dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy-corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to the quintessence regime and the Λ cold dark matter (ΛCDM) limit. The ωΛ-ωΛ′ plane approaches the ΛCDM limit and the freezing region in both entropy-corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid in all cases of the interacting parameter. It is interesting to mention here that our results for the Hubble parameter, the EoS parameter and the ωΛ-ωΛ′ plane are consistent with present observations such as Planck, WP, BAO, H0, SNLS and nine-year WMAP data.
Bockman, Alexander; Fackler, Cameron; Xiang, Ning
2015-04-01
Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.
Evaluation of Oceanic Transport Statistics By Use of Transient Tracers and Bayesian Methods
NASA Astrophysics Data System (ADS)
Trossman, D. S.; Thompson, L.; Mecking, S.; Bryan, F.; Peacock, S.
2013-12-01
Key variables that quantify the time scales over which atmospheric signals penetrate into the oceanic interior, and their uncertainties, are computed using Bayesian methods and transient tracers from both models and observations. First, the mean residence times, subduction rates, and formation rates of Subtropical Mode Water (STMW) and Subpolar Mode Water (SPMW) in the North Atlantic and Subantarctic Mode Water (SAMW) in the Southern Ocean are estimated by combining a model and observations of chlorofluorocarbon-11 (CFC-11) via Bayesian Model Averaging (BMA), a statistical technique that weights model estimates according to how closely they agree with observations. Second, a Bayesian method is presented to find two oceanic transport parameters associated with the age distribution of ocean waters, the transit-time distribution (TTD), by combining an eddying global ocean model's estimate of the TTD with hydrographic observations of CFC-11, temperature, and salinity. Uncertainties associated with objectively mapping irregularly spaced bottle data are quantified by making use of a thin-plate spline and then propagated via the two Bayesian techniques. It is found that the subduction of STMW, SPMW, and SAMW is mostly an advective process, but up to about one-third of STMW subduction is likely due to non-advective processes. Also, while the formation of STMW is mostly due to subduction, the formation of SPMW is mostly due to other processes. About half of the formation of SAMW is due to subduction and half is due to other processes. A combination of air-sea flux, acting on relatively short time scales, and turbulent mixing, acting on a wide range of time scales, is likely the dominant SPMW erosion mechanism. Air-sea flux is likely responsible for most STMW erosion, and turbulent mixing is likely responsible for most SAMW erosion. Two oceanic transport parameters, the mean age of a water parcel and the half-variance associated with the TTD, estimated using the model's tracers as data (BayesPOP) and those estimated using tracer observations as data (BayesObs) provide information about the sources of model biases, and give a more nuanced picture than can be found by comparing the simulated CFC-11 concentrations with observed CFC-11 concentrations. Using the differences between the two oceanic transport parameters from BayesObs and those from BayesPOP, with and without a constant Peclet number assumption along each of the hydrographic cross-sections considered here, it is found that the model's diffusivity tensor biases lead to larger model errors than the model's mean advection time biases. However, it is also found that mean advection time biases in the model are statistically significant at the 95% level where mode water is found.
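A minimal sketch of the BMA weighting idea, under the simplifying assumption of a Gaussian model-observation misfit with known error standard deviation; numbers are illustrative.

```python
import numpy as np

def bma_weights(predictions, observation, obs_sd):
    """Weight each model by its (Gaussian) likelihood given the observation,
    then normalise; the BMA estimate is the weighted mean of the models."""
    misfit = (predictions - observation) / obs_sd
    w = np.exp(-0.5 * misfit ** 2)
    return w / w.sum()

model_estimates = np.array([12.0, 15.0, 9.0])   # e.g. residence times (yr)
w = bma_weights(model_estimates, observation=11.0, obs_sd=2.0)
print(w, (w * model_estimates).sum())           # weights and BMA estimate
```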
Proton-hydrogen collisions for Rydberg n,l-changing transitions in the early Universe
NASA Astrophysics Data System (ADS)
Vrinceanu, Daniel
2013-05-01
The cosmic microwave background (CMB) is relic radiation generated during the recombination era, some 390,000 years after the Big Bang, when the Universe had become transparent for the first time. Initial observations of the CMB made by the Wilkinson Microwave Anisotropy Probe (WMAP) led to determining the age of the Universe. The mechanisms that drove the recombination have been discovered by modeling the primordial plasma and seeking agreement with the observations. The Planck Surveyor instrument, launched in 2009, is expected to produce data about the recombination era of unprecedented accuracy, which will require including better information about the basic atomic physics processes in the present models. In this talk, I will review the results for various collisions between Rydberg atoms and charged particles and establish their relative importance during the stages of the recombination era, with respect to each other and to radiative processes. Energy-changing and angular-momentum-changing collisions with electrons and ions are considered. This work has been supported by NSF through grants to the Institute for Theoretical Atomic and Molecular Physics at the Harvard-Smithsonian Center for Astrophysics and to the Center for Research on Complex Networks at Texas Southern University.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Timbie, Peter T.; Bunn, Emory F.
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation-Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar, and often identical, inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor
2018-02-01
Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls and can easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, so that the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of the input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
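A stripped-down sketch of the surrogate-based MCMC workflow, with a trivial regression emulator standing in for BMARS and a placeholder function standing in for a MODFLOW run.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(k):                 # placeholder for a MODFLOW forward run
    return 3.0 * np.log(k) + 1.0

# "Training" data from a handful of forward runs, then a cheap surrogate
k_train = np.linspace(0.5, 5.0, 20)
h_train = expensive_model(k_train)
coef = np.polyfit(np.log(k_train), h_train, 1)      # trivial stand-in emulator
surrogate = lambda k: np.polyval(coef, np.log(k))

obs, sd = 4.5, 0.2                                  # observed head, error sd
log_post = lambda k: -0.5 * ((obs - surrogate(k)) / sd) ** 2 if k > 0 else -np.inf

k, chain = 1.0, []
for _ in range(20_000):                             # Metropolis on the surrogate
    prop = k + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)
print(np.mean(chain[5000:]), np.std(chain[5000:]))  # posterior of the parameter
```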
Past and present cosmic structure in the SDSS DR7 main sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr
2015-01-01
We present a chrono-cosmography project, aiming at the inference of the four-dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity-dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high-precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergent results are achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data: the variance of the uncertain parameters decreases and the probability of the observed data improves. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
Bayesian analysis of U.S. hurricane climate
Elsner, James B.; Bossak, Brian H.
2001-01-01
Predictive climate distributions of U.S. landfalling hurricanes are estimated from observational records over the period 1851–2000. The approach is Bayesian, combining the reliable records of hurricane activity during the twentieth century with the less precise accounts of activity during the nineteenth century to produce a best estimate of the posterior distribution on the annual rates. The methodology provides a predictive distribution of future activity that serves as a climatological benchmark. Results are presented for the entire coast as well as for the Gulf Coast, Florida, and the East Coast. Statistics on the observed annual counts of U.S. hurricanes, both for the entire coast and by region, are similar within each of the three consecutive 50-yr periods beginning in 1851. However, evidence indicates that the records during the nineteenth century are less precise. Bayesian theory provides a rational approach for defining hurricane climate that uses all available information and that makes no assumption about whether the 150-yr record of hurricanes has been adequately or uniformly monitored. The analysis shows that the number of major hurricanes expected to reach the U.S. coast over the next 30 yr is 18 and the number of hurricanes expected to hit Florida is 20.
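The conjugate gamma-Poisson machinery behind this kind of analysis can be sketched as follows, with illustrative counts rather than the paper's data: a gamma prior on the annual landfall rate yields a closed-form posterior and a simulated predictive distribution for 30-year totals.

```python
import numpy as np

a0, b0 = 2.0, 1.0                             # gamma prior (shape, rate) on annual rate
counts = np.array([2, 1, 0, 3, 1, 2, 0, 1])   # toy annual hurricane counts
a, b = a0 + counts.sum(), b0 + counts.size    # conjugate gamma posterior

rng = np.random.default_rng(3)
rate = rng.gamma(a, 1.0 / b, size=50_000)     # posterior draws of the annual rate
next30 = rng.poisson(30.0 * rate)             # predictive 30-yr landfall totals
print(rate.mean(), np.percentile(next30, [5, 50, 95]))
```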
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a d^b + c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a two-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc, based on the proximity to seismic stations, with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion, and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of the seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
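A sketch of the two BMC ingredients under simple assumptions: fitting the Mc_pred(d) = a d^b + c relationship with scipy, and merging the prior prediction with a locally observed Mc by inverse-variance (precision) weighting; the data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def mc_pred(d, a, b, c):
    # Empirical completeness-vs-distance relationship
    return a * d**b + c

d = np.array([5.0, 10, 20, 40, 80])          # km to 5th nearest station (toy)
mc = np.array([1.2, 1.5, 1.9, 2.4, 3.0])     # locally estimated Mc (toy)
params, _ = curve_fit(mc_pred, d, mc, p0=(0.1, 1.0, 1.0))

def bmc_merge(mc_prior, sd_prior, mc_obs, sd_obs):
    """Normal-conjugate merge of prior and observed Mc, weighted by precision."""
    w = 1 / sd_prior**2 + 1 / sd_obs**2
    return (mc_prior / sd_prior**2 + mc_obs / sd_obs**2) / w, np.sqrt(1 / w)

prior = mc_pred(15.0, *params)               # prior Mc from station geometry
print(bmc_merge(prior, 0.2, mc_obs=1.6, sd_obs=0.3))
```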
Bayesian Atmospheric Radiative Transfer (BART) Code and Application to WASP-43b
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Cubillos, Patricio; Bowman, Oliver; Rojo, Patricio; Stemm, Madison; Lust, Nathaniel B.; Challener, Ryan; Foster, Austin James; Foster, Andrew S.; Blumenthal, Sarah D.; Bruce, Dylan
2016-01-01
We present a new open-source Bayesian radiative-transfer framework, Bayesian Atmospheric Radiative Transfer (BART, https://github.com/exosports/BART), and its application to WASP-43b. BART initializes a model for the atmospheric retrieval calculation, generates thousands of theoretical model spectra using parametrized pressure and temperature profiles and line-by-line radiative-transfer calculations, and employs a statistical package to compare the models with the observations. It consists of three self-sufficient modules available to the community under the reproducible-research license: the Thermochemical Equilibrium Abundances module (TEA, https://github.com/dzesmin/TEA, Blecic et al. 2015), the radiative-transfer module (Transit, https://github.com/exosports/transit), and the Multi-core Markov-chain Monte Carlo statistical module (MCcubed, https://github.com/pcubillos/MCcubed, Cubillos et al. 2015). We applied BART to all available WASP-43b secondary-eclipse data from space- and ground-based observations, constraining the temperature-pressure profile and molecular abundances of the dayside atmosphere of WASP-43b. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
Uncertainty Quantification of Hypothesis Testing for the Integrated Knowledge Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuellar, Leticia
2012-05-31
The Integrated Knowledge Engine (IKE) is a tool for Bayesian analysis, based on Bayesian belief networks, or Bayesian networks for short. A Bayesian network is a graphical model (directed acyclic graph) that represents the probabilistic structure of many variables assuming a localized type of dependency called the Markov property. The Markov property in this instance makes any node (random variable) independent of its non-descendants given information about its parents. A direct consequence of this property is that it is relatively easy to incorporate new evidence and derive the appropriate consequences, which in general is not an easy or feasible task. Typically we use Bayesian networks as predictive models for a small subset of the variables, either the leaf nodes or the root nodes. In IKE, since most applications deal with diagnostics, we are interested in predicting the likelihood of the root nodes given new observations on any of the children nodes. The root nodes represent the various possible outcomes of the analysis, and an important problem is to determine when we have gathered enough evidence to lean toward one of these particular outcomes. This document presents criteria to decide when the evidence gathered is sufficient to draw a particular conclusion or decide in favor of a particular outcome by quantifying the uncertainty in the conclusions that are drawn from the data. The material in this document is organized as follows: Section 2 presents briefly a forensics Bayesian network, and we explore evaluating the information provided by new evidence by looking first at the posterior distribution of the nodes of interest, and then at the corresponding posterior odds ratios. Section 3 presents a third alternative: Bayes factors. In Section 4 we show the relation between posterior odds ratios and Bayes factors with examples, and in Section 5 we conclude by providing clear guidelines on how to use these for the type of Bayesian networks used in IKE.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
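The mixed linear/non-linear strategy can be sketched on a toy interseismic profile: condition on the non-linear parameter (a locking depth D), solve the linear slip amplitude analytically by least squares, and sample D by Metropolis. This illustrates the general idea only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-50, 50, 40)                        # station positions (km)

def design(D):
    # Design matrix: surface velocity is linear in slip for a given depth D
    return (np.arctan(x / D) / np.pi).reshape(-1, 1)

obs = design(12.0) @ np.array([20.0]) + rng.normal(0, 0.3, x.size)  # synthetic data

def loglike(D):
    G = design(D)
    m, *_ = np.linalg.lstsq(G, obs, rcond=None)     # analytic linear solve for slip
    r = obs - G @ m
    return -0.5 * (r @ r) / 0.3**2

D, chain = 8.0, []
for _ in range(10_000):                             # Metropolis on the depth only
    prop = D + rng.normal(0, 0.5)
    if prop > 0 and np.log(rng.uniform()) < loglike(prop) - loglike(D):
        D = prop
    chain.append(D)
print(np.mean(chain[2000:]))                        # should approach ~12 km
```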
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
The NIFTy way of Bayesian signal inference
NASA Astrophysics Data System (ADS)
Selig, Marco
2014-12-01
We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and maximum entropy methods for 1D signal reconstruction, 2D imaging, and 3D tomography appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTy is applicable to, and has already been applied in, 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm, targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high-energy astronomy.
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and non-stationarity are the keys in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge for inferring the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimations in the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, with Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to multiple-tissue (liver, skeletal muscle, and kidney) Affymetrix time course data sets following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approaches could be expanded and applied to other large-scale genomic data, such as next-generation sequencing (NGS) data combined with real-time, time-varying electronic health record (EHR) data, for more comprehensive and robust systematic and network-based analysis in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.
Bayesian Redshift Classification of Emission-line Galaxies with Photometric Equivalent Widths
NASA Astrophysics Data System (ADS)
Leung, Andrew S.; Acquaviva, Viviana; Gawiser, Eric; Ciardullo, Robin; Komatsu, Eiichiro; Malz, A. I.; Zeimann, Gregory R.; Bridge, Joanna S.; Drory, Niv; Feldmeier, John J.; Finkelstein, Steven L.; Gebhardt, Karl; Gronwall, Caryl; Hagen, Alex; Hill, Gary J.; Schneider, Donald P.
2017-07-01
We present a Bayesian approach to the redshift classification of emission-line galaxies when only a single emission line is detected spectroscopically. We consider the case of surveys for high-redshift Lyα-emitting galaxies (LAEs), which have traditionally been classified via an inferred rest-frame equivalent width W_Lyα greater than 20 Å. Our Bayesian method relies on known prior probabilities in measured emission-line luminosity functions and equivalent width distributions for the galaxy populations, and returns the probability that an object in question is an LAE given the characteristics observed. This approach will be directly relevant for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), which seeks to classify ~10^6 emission-line galaxies into LAEs and low-redshift [O II] emitters. For a simulated HETDEX catalog with realistic measurement noise, our Bayesian method recovers 86% of LAEs missed by the traditional W_Lyα > 20 Å cutoff over 2 < z < 3, outperforming the EW cut in both contamination and incompleteness. This is due to the method's ability to trade off between the two types of binary classification error by adjusting the stringency of the probability requirement for classifying an observed object as an LAE. In our simulations of HETDEX, this method reduces the uncertainty in cosmological distance measurements by 14% with respect to the EW cut, equivalent to recovering 29% more cosmological information. Rather than using binary object labels, this method enables the use of classification probabilities in large-scale structure analyses. It can be applied to narrowband emission-line surveys as well as upcoming large spectroscopic surveys including Euclid and WFIRST.
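The classification rule reduces to Bayes' theorem over the two populations. The sketch below uses stand-in likelihoods in place of the measured luminosity functions and EW distributions, and the prior LAE fraction is an assumed number.

```python
import numpy as np

def p_lyman_alpha(f, ew, like_lae, like_oii, p_lae=0.3):
    """Posterior probability that a detected line is Lyα rather than [O II],
    given line flux f and equivalent width ew. p_lae is the assumed prior
    fraction of LAEs among detectable emitters."""
    num = p_lae * like_lae(f, ew)
    den = num + (1 - p_lae) * like_oii(f, ew)
    return num / den

# Toy likelihoods standing in for the measured luminosity functions and
# EW distributions of each population:
like_lae = lambda f, ew: np.exp(-f / 5.0) * np.exp(-ew / 100.0) / 500.0
like_oii = lambda f, ew: np.exp(-f / 20.0) * np.exp(-ew / 15.0) / 300.0

p = p_lyman_alpha(f=8.0, ew=60.0, like_lae=like_lae, like_oii=like_oii)
print(p)          # classify as LAE if p exceeds the chosen probability threshold
```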
Evidence cross-validation and Bayesian inference of MAST plasma equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nessi, G. T. von; Hole, M. J.; Svensson, J.
2012-01-15
In this paper, current profiles for plasma discharges on the Mega-Ampere Spherical Tokamak are directly calculated from pickup coil, flux loop, and motional Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing the toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of Biot-Savart's law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams is subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable a good agreement between the Bayesian inference of the last-closed flux-surface and other corroborating data, such as that from force balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction", Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry, as well as directly predicting the Shafranov shift of the plasma core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
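A hedged sketch of what such a PyMC3 setup can look like, using a simplified Poisson TCP model without the proliferation terms; it is not the authors' code, and the dose bins, counts and the deliberately small clonogen number are illustrative choices made to keep the sketch numerically tame.

```python
import numpy as np
import pymc3 as pm

eqd2 = np.array([40.0, 50, 60, 70, 80])        # EQD2 per dose group (Gy)
n = np.array([120, 150, 140, 110, 100])        # patients per group
relapses = np.array([50, 40, 25, 12, 4])       # observed local relapses

with pm.Model():
    alpha = pm.Lognormal("alpha", mu=np.log(0.1), sigma=0.5)     # radiosensitivity (Gy^-1)
    log_n0 = pm.Normal("log_n0", mu=np.log(30.0), sigma=1.0)     # log clonogen number
    # Poisson TCP: exp(-N0 * exp(-alpha * D)), written for numerical stability
    tcp = pm.math.exp(-pm.math.exp(log_n0 - alpha * eqd2))
    pm.Binomial("relapse", n=n, p=1.0 - tcp, observed=relapses)
    trace = pm.sample(2000, tune=2000, step=pm.Slice(),
                      return_inferencedata=True)

print(pm.summary(trace, var_names=["alpha", "log_n0"]))
```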
Yen, A M-F; Liou, H-H; Lin, H-L; Chen, T H-H
2006-01-01
The study aimed to develop a predictive model to deal with data fraught with heterogeneity that cannot be explained by sampling variation or measured covariates. The random-effect Poisson regression model was first proposed to deal with over-dispersion for data fraught with heterogeneity after making allowance for measured covariates. A Bayesian acyclic graphical model in conjunction with the Markov chain Monte Carlo (MCMC) technique was then applied to estimate the parameters of both the relevant covariates and the random effect. A predictive distribution was then generated to compare the predicted with the observed for the Bayesian model with and without the random effect. Data from repeated measurements of episodes among 44 patients with intractable epilepsy were used as an illustration. The application of Poisson regression without taking heterogeneity into account to the epilepsy data yielded a large value of heterogeneity (heterogeneity factor = 17.90, deviance = 1485, degrees of freedom (df) = 83). After taking the random effect into account, the heterogeneity factor was greatly reduced (heterogeneity factor = 0.52, deviance = 42.5, df = 81). The Pearson χ² values for the comparison between the expected seizure frequencies and the observed ones at two and three months for the model with and without the random effect were 34.27 (p = 1.00) and 1799.90 (p < 0.0001), respectively. The Bayesian acyclic model using the MCMC method was demonstrated to have great potential for disease prediction when data show over-dispersion attributable either to correlated structure or to subject-to-subject variability.
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
ESS++: a C++ object-oriented algorithm for Bayesian stochastic search model exploration
Bottolo, Leonardo; Langley, Sarah R.; Petretto, Enrico; Tiret, Laurence; Tregouet, David; Richardson, Sylvia
2011-01-01
Summary: ESS++ is a C++ implementation of a fully Bayesian variable selection approach for single and multiple response linear regression. ESS++ works well both when the number of observations is larger than the number of predictors and in the ‘large p, small n’ case. In the current version, ESS++ can handle several hundred observations, thousands of predictors and a few responses simultaneously. The core engine of ESS++ for the selection of relevant predictors is based on Evolutionary Monte Carlo. Our implementation is open source, allowing community-based alterations and improvements. Availability: C++ source code and documentation including compilation instructions are available under GNU licence at http://bgx.org.uk/software/ESS.html. Contact: l.bottolo@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21233165
Application of Bayesian a Priori Distributions for Vehicles' Video Tracking Systems
NASA Astrophysics Data System (ADS)
Mazurek, Przemysław; Okarma, Krzysztof
Intelligent Transportation Systems (ITS) help to improve both the quality and the quantity of measured road-traffic parameters. The use of ITS is possible when adequate measuring infrastructure is available. Video systems allow for its implementation at relatively low cost, since a few lanes of the road can be recorded simultaneously at a considerable distance from the camera. Tracking can be realized with different algorithms; the most attractive are Bayesian, because they use a priori information derived from previous observations or known constraints. Use of this information is crucial for improving the quality of tracking, especially under the difficult observability conditions that occur in video systems due to smog, fog, rain, snow, and poor lighting.
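The recursive use of a priori information described here can be illustrated with a one-dimensional grid-based Bayes filter; the motion model, noise levels, and measurement below are hypothetical stand-ins for a real video-tracking pipeline:

```python
import numpy as np

x = np.linspace(0.0, 100.0, 501)                   # positions along a lane (m)
belief = np.exp(-0.5 * ((x - 20.0) / 5.0) ** 2)    # prior from the last frame
belief /= belief.sum()

def predict(belief, v_dt=2.0, q=3.0):
    """Shift the belief by the expected displacement, then diffuse it
    to account for process noise (abrupt speed changes)."""
    dx = x[1] - x[0]
    pred = np.roll(belief, int(round(v_dt / dx)))
    kernel = np.exp(-0.5 * (np.arange(-25, 26) * dx / q) ** 2)
    pred = np.convolve(pred, kernel / kernel.sum(), mode="same")
    return pred / pred.sum()

def update(belief, z, r=4.0):
    """Multiply the prior by the measurement likelihood; poor visibility
    (fog, rain, night) is expressed as a larger measurement noise r."""
    like = np.exp(-0.5 * ((x - z) / r) ** 2)
    post = belief * like
    return post / post.sum()

belief = update(predict(belief), z=23.5)
print("MAP position:", x[np.argmax(belief)])
```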
NASA Astrophysics Data System (ADS)
Kumari, K.; Oberheide, J.
2017-12-01
Nonmigrating tidal diagnostics of SABER temperature observations in the ionospheric dynamo region reveal a large amount of variability on time scales of a few days to weeks. In this paper, we discuss the physical reasons for the observed short-term tidal variability using a novel approach based on information theory and Bayesian statistics. We diagnose short-term tidal variability as a function of season, QBO, ENSO, solar cycle, and other drivers using time-dependent probability density functions, Shannon entropy, and Kullback-Leibler divergence. The statistical significance of the approach and its predictive capability are exemplified using SABER tidal diagnostics, with emphasis on the responses to the QBO and solar cycle. Implications for F-region plasma density are also discussed.
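The two information-theoretic diagnostics named here reduce to a few lines of NumPy once the probability density functions are histogrammed; the two example histograms below are invented for illustration and do not come from SABER:

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, float) / np.sum(p)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def kl_divergence(p, q):
    """KL(p || q): information gained when a reference PDF q
    (e.g., climatology) is updated to the observed PDF p."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.clip(np.asarray(q, float) / np.sum(q), 1e-12, None)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

# Hypothetical histograms of tidal amplitude under two QBO phases
p_east = np.array([2, 8, 20, 30, 25, 10, 5], float)
p_west = np.array([5, 15, 30, 25, 15, 7, 3], float)
print("entropy (east):", shannon_entropy(p_east))
print("KL(east || west):", kl_divergence(p_east, p_west))
```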
Bayesian Inference of High-Dimensional Dynamical Ocean Models
NASA Astrophysics Data System (ADS)
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
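A toy version of this loss-based update can make the idea concrete: the posterior is proportional to the prior times exp(-w × cumulative loss), with the traditional likelihood recovered when the loss is a negative log-likelihood. The sketch below targets a median through absolute loss on a parameter grid; the data, prior, and learning rate w are illustrative choices (calibrating w is a separate question treated in this literature):

```python
import numpy as np

# Generalized-Bayes update: post(theta) ∝ prior(theta) * exp(-w * sum_i loss(y_i, theta))
rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=50) + 1.0     # hypothetical observations

theta = np.linspace(-3, 5, 1601)               # grid over the median
log_prior = -0.5 * (theta / 10.0) ** 2         # weak Gaussian prior
loss = np.abs(data[:, None] - theta[None, :]).sum(axis=0)  # absolute loss -> median
w = 1.0                                        # learning rate (assumed)
log_post = log_prior - w * loss
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta)
print("posterior mean of the median:", np.trapz(theta * post, theta))
```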
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. First, a new model is proposed to detect different types of outliers simultaneously, based on the idea of introducing a classification variable for each type of outlier; the outlier-detection problem is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities with a standard Gibbs sampler is designed. Second, we analyze in depth the causes of masking and swamping when detecting patches of additive outliers, and propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Third, the correctness of the proposed theory and methods is illustrated with simulated data and then with real GNSS observations, such as cycle-slip detection in carrier-phase data. The examples show that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also patches of additive outliers. Furthermore, the approach can be used successfully to process cycle slips in phase data, solving the problem of small cycle slips.
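The classification-variable idea can be sketched with a deliberately simplified Gibbs sampler for additive outliers in a constant-mean series; the contamination model and all settings below are toy choices, not the paper's full adaptive scheme, and the final-iteration probabilities are printed for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = rng.normal(0.0, 1.0, n)
y[[50, 51, 52, 120]] += 8.0          # an additive outlier patch plus an isolated one

p, sigma, tau = 0.02, 1.0, 8.0       # prior outlier prob., noise sd, outlier sd
mu = y.mean()
for it in range(2000):
    # 1) classification variables: posterior probability each point is an outlier
    s_out = np.hypot(sigma, tau)
    clean = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / sigma
    outl = np.exp(-0.5 * ((y - mu) / s_out) ** 2) / s_out
    prob = p * outl / (p * outl + (1 - p) * clean)
    delta = rng.random(n) < prob
    # 2) mean given the currently "clean" points (flat prior on mu)
    keep = ~delta
    mu = rng.normal(y[keep].mean(), sigma / np.sqrt(keep.sum()))

print("flagged indices:", np.where(prob > 0.5)[0])
```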
2010-01-01
Background: Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods: A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results: The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed, and are related to the observations found. Conclusions: Aggregation of motor unit potential conditional probability distributions estimated using quantitative electromyographic analysis may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be applicable to other types of electrodiagnostic data. PMID:20156353
Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang
2013-01-01
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
NASA Astrophysics Data System (ADS)
Bowman, C.; Gibson, K. J.; La Haye, R. J.; Groebner, R. J.; Taylor, N. Z.; Grierson, B. A.
2014-10-01
A Bayesian inference framework has been developed for the DIII-D charge-exchange recombination (CER) system, capable of computing probability distribution functions (PDFs) for desired parameters. CER is a key diagnostic system at DIII-D, measuring important physics parameters such as plasma rotation and impurity ion temperature. This work is motivated by a case in which the CER system was used to probe the plasma rotation radial profile around an m/n = 2/1 tearing mode island rotating at ~ 1 kHz. Due to limited resolution in the tearing mode phase and short integration time, it has proven challenging to observe the structure of the rotation profile across the island. We seek to solve this problem by using the Bayesian framework to improve the estimation accuracy of the plasma rotation, helping to reveal details of how it is perturbed in the magnetic island vicinity. Examples of the PDFs obtained through the Bayesian framework will be presented, and compared with results from a conventional least-squares analysis of the CER data. Work supported by the US DOE under DE-FC02-04ER54698 and DE-AC02-09CH11466.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator representing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
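A compact numerical caricature of this marginalization, assuming a linear Gaussian inversion with a single unknown observation-error variance r weighted by its Gaussian marginal likelihood (the operator, dimensions, and priors are invented, and the real method marginalizes over far richer error structures):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 5, 12
H = rng.normal(size=(ny, nx))                # toy observation operator (transport)
x_true = rng.normal(size=nx)
y = H @ x_true + rng.normal(0.0, 0.5, ny)    # synthetic observations
xa, B = np.zeros(nx), np.eye(nx)             # prior fluxes and prior covariance

def analysis(r):
    """Classical Bayesian analysis for a given observation-error variance r."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + r * np.eye(ny))
    return xa + K @ (y - H @ xa)

def log_marginal(r):
    """Log-probability of the observations given r (maximum-likelihood weighting)."""
    S = H @ B @ H.T + r * np.eye(ny)
    d = y - H @ xa
    return -0.5 * (np.linalg.slogdet(S)[1] + d @ np.linalg.solve(S, d))

rs = np.exp(rng.uniform(np.log(0.01), np.log(10.0), 400))  # plausible variances
lw = np.array([log_marginal(r) for r in rs])
w = np.exp(lw - lw.max()); w /= w.sum()
x_marg = sum(wi * analysis(ri) for wi, ri in zip(w, rs))   # marginalized fluxes
print(np.round(x_marg, 2))
```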
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
Multipole Vector Anomalies in the First-Year WMAP Data: A Cut-Sky Analysis
NASA Astrophysics Data System (ADS)
Bielewicz, P.; Eriksen, H. K.; Banday, A. J.; Górski, K. M.; Lilje, P. B.
2005-12-01
We apply the recently defined multipole vector framework to the frequency-specific first-year WMAP sky maps, estimating the low-l multipole coefficients from the high-latitude sky by means of a power equalization filter. While most previous analyses of this type have considered only heavily processed (and foreground-contaminated) full-sky maps, the present approach allows for greater control of residual foregrounds and therefore potentially also for cosmologically important conclusions. The low-l spherical harmonic coefficients and corresponding multipole vectors are tabulated for easy reference. Using this formalism, we reassess a set of earlier claims of both cosmological and noncosmological low-l correlations on the basis of multipole vectors. First, we show that the apparent l=3 and 8 correlation claimed by Copi and coworkers is present only in the heavily processed map produced by Tegmark and coworkers and must therefore be considered an artifact of that map. Second, the well-known quadrupole-octopole correlation is confirmed at the 99% significance level and shown to be robust with respect to frequency and sky cut. Previous claims are thus supported by our analysis. Finally, the low-l alignment with respect to the ecliptic claimed by Schwarz and coworkers is nominally confirmed in this analysis, but also shown to be very dependent on severe a posteriori choices. Indeed, we show that given the peculiar quadrupole-octopole arrangement, finding such a strong alignment with the ecliptic is not unusual.
Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.
Houpt, Joseph W; Bittner, Jennifer L
2018-07-01
Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences.
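A stripped-down sketch of a hierarchical Bayesian logistic (psychometric) model of this general kind, in PyMC with fabricated subject data; the link function, priors, and group structure are illustrative choices only, not the authors' published model:

```python
import numpy as np
import pymc as pm

# Fabricated accuracy-vs-intensity data for several subjects
n_subj, n_levels, n_trials = 8, 6, 40
rng = np.random.default_rng(1)
x = np.tile(np.linspace(-2, 2, n_levels), n_subj)     # stimulus intensity
subj = np.repeat(np.arange(n_subj), n_levels)
true_thresh = rng.normal(0.0, 0.5, n_subj)
p = 1.0 / (1.0 + np.exp(-(x - true_thresh[subj]) / 0.5))
k = rng.binomial(n_trials, p)                          # correct responses

with pm.Model() as m:
    mu_t = pm.Normal("mu_t", 0.0, 1.0)                 # group-level threshold
    sd_t = pm.HalfNormal("sd_t", 1.0)
    thresh = pm.Normal("thresh", mu_t, sd_t, shape=n_subj)  # partial pooling
    slope = pm.HalfNormal("slope", 2.0)
    theta = pm.math.sigmoid((x - thresh[subj]) * slope)
    pm.Binomial("k", n=n_trials, p=theta, observed=k)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)
```

The same hierarchical structure can be applied to efficiency by dividing human performance estimates by the corresponding ideal observer performance before modeling.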
Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong
2011-06-01
Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video, which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes at both the observation-modeling level and the tracking-strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally, by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, and hockey, demonstrate the effectiveness and efficiency of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-square fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-square fitting provides little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. The temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than using runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
Galactic foreground contributions to the 5-year Wilkinson Microwave Anisotropy Probe maps
NASA Astrophysics Data System (ADS)
Macellari, N.; Pierpaoli, E.; Dickinson, C.; Vaillancourt, J. E.
2011-12-01
We compute the cross-correlation between intensity and polarization from the 5-year Wilkinson Microwave Anisotropy Probe (WMAP5) data in different sky regions with respect to template maps for synchrotron, dust and free-free emission. We derive the frequency dependence and polarization fraction for all three components in 48 different sky regions of HEALPIX (Nside= 2) pixelization. The anomalous emission associated with dust is clearly detected in intensity over the entire sky at the K (23-GHz) and Ka (33-GHz) WMAP bands, and is found to be the dominant foreground at low Galactic latitudes, between b =-40° and +10°. The synchrotron spectral index obtained from the K and Ka WMAP bands from an all-sky analysis is βs=-3.32 ± 0.12 for intensity and βs=-3.01 ± 0.03 for polarized intensity. The polarization fraction of the synchrotron emission is constant in frequency and increases with latitude from ≈5 per cent near the Galactic plane up to ≈40 per cent in some regions at high latitudes; the average value for |b| < 20° is 8.6 ± 1.7 (stat) ± 0.5 (sys) per cent, while for |b| > 20°, it is 19.3 ± 0.8 (stat) ± 0.5 (sys) per cent. Anomalous dust and free-free emissions appear to be relatively unpolarized. Monte Carlo simulations showed that there were biases of the method due to cross-talk between the components, at up to ≈5 per cent in any given pixel, and ≈1.5 per cent on average, when the true polarization fraction is low (a few per cent or less). Nevertheless, the average polarization fraction of dust-correlated emission at the K band is 3.2 ± 0.9 (stat) ± 1.5 (sys) per cent or less than 5 per cent at 95 per cent confidence. When comparing real data with simulations, eight regions show a detected polarization above the 99th percentile of the distribution from simulations with no input foreground polarization, six of which are detected at above 2σ and display polarization fractions between 2.6 and 7.2 per cent, except for one anomalous region, which has 32 ± 12 per cent. The dust polarization values are consistent with the expectation from spinning dust emission, but polarized dust emission from magnetic-dipole radiation cannot be ruled out. Free-free emission was found to be unpolarized with an upper limit of 3.4 per cent at 95 per cent confidence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Destri, C.; Vega, H. J. de; Observatoire de Paris, LERMA, Laboratoire Associe au CNRS UMR 8112, 61, Avenue de l'Observatoire, 75014 Paris
Generically, the classical evolution of the inflaton has a brief fast-roll stage that precedes the slow-roll regime. The fast-roll stage leads to a purely attractive potential in the wave equations of curvature and tensor perturbations (while the potential is purely repulsive in the slow-roll stage). This attractive potential leads to a depression of the CMB quadrupole moment for the curvature and B-mode angular power spectra. A single new parameter emerges in this way in the early universe model: the comoving wave number k_1, the characteristic scale of this attractive potential. This mode k_1 happens to exit the horizon precisely at the transition from the fast-roll to the slow-roll stage. The fast-roll stage dynamically modifies the initial power spectrum by a transfer function D(k). We compute D(k) by solving the inflaton evolution equations. D(k) effectively suppresses the primordial power for k
Probabilistic estimation of dune retreat on the Gold Coast, Australia
Palmsten, Margaret L.; Splinter, Kristen D.; Plant, Nathaniel G.; Stockdon, Hilary F.
2014-01-01
Sand dunes are an important natural buffer between storm impacts and the development backing the beach on the Gold Coast of Queensland, Australia. The ability to forecast dune erosion at a prediction horizon of days to a week would allow efficient and timely responses to dune erosion in this highly populated area. Towards this goal, we modified an existing probabilistic dune erosion model for use on the Gold Coast. The original model was trained using observations of dune response from Hurricane Ivan on Santa Rosa Island, Florida, USA (Plant and Stockdon 2012. Probabilistic prediction of barrier-island response to hurricanes, Journal of Geophysical Research, 117(F3), F03015). The model relates dune position change to pre-storm dune elevations, dune widths, and beach widths, along with storm surge and run-up, using a Bayesian network. The Bayesian approach captures the uncertainty of inputs and predictions through the conditional probabilities between variables. Three versions of the barrier-island response Bayesian network were tested for use on the Gold Coast. One network has the same structure as the original and was trained with the Santa Rosa Island data. The second network has a modified design and was trained using only pre- and post-storm data from 1988-2009 for the Gold Coast. The third version of the network has the same design as the second and was trained with the combined data from the Gold Coast and Santa Rosa Island. The two networks modified for use on the Gold Coast hindcast dune retreat with equal accuracy. Both networks explained 60% of the observed dune retreat variance, which is comparable to the skill observed by Plant and Stockdon (2012) in the initial Bayesian network application at Santa Rosa Island. The new networks improved predictions relative to application of the original network on the Gold Coast. Dune width was the most important morphologic variable in hindcasting dune retreat, while the hydrodynamic variables, surge and run-up elevation, were also important.
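The conditional-probability machinery of such a Bayesian network can be illustrated with a toy discrete version that learns one conditional probability table by counting; the variables, bins, and synthetic data below are hypothetical and are not the published network:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
width = rng.integers(0, 3, n)                  # 0=narrow, 1=medium, 2=wide dune
surge = rng.integers(0, 2, n)                  # 0=low, 1=high storm surge
p_retreat = 0.15 + 0.35 * surge - 0.05 * width # synthetic "truth"
retreat = rng.random(n) < p_retreat

# Conditional probability table P(retreat | width, surge) by counting,
# with Laplace smoothing so empty bins do not produce 0/0
cpt = np.zeros((3, 2))
for w in range(3):
    for s in range(2):
        sel = (width == w) & (surge == s)
        cpt[w, s] = (retreat[sel].sum() + 1) / (sel.sum() + 2)

print("P(retreat | narrow dune, high surge) =", round(cpt[0, 1], 2))
```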
Rabelo, Cleverton Correa; Feres, Magda; Gonçalves, Cristiane; Figueiredo, Luciene C; Faveri, Marcelo; Tu, Yu-Kang; Chambrone, Leandro
2015-07-01
The aim of this study was to assess the effect of systemic antibiotic therapy on the treatment of aggressive periodontitis (AgP). This study was conducted and reported in accordance with the PRISMA statement. The MEDLINE, EMBASE and CENTRAL databases were searched up to June 2014 for randomized clinical trials comparing the treatment of subjects with AgP with either scaling and root planing (SRP) alone or associated with systemic antibiotics. A Bayesian network meta-analysis was prepared using Bayesian random-effects hierarchical models and the outcomes reported at 6 months post-treatment. Out of 350 papers identified, 14 studies were eligible. Greater gain in clinical attachment (CA) (mean difference [MD]: 1.08 mm; p < 0.0001) and reduction in probing depth (PD) (MD: 1.05 mm; p < 0.00001) were observed for SRP + metronidazole (Mtz), and for SRP + Mtz + amoxicillin (Amx) (MD: 0.45 mm and MD: 0.53 mm, respectively; p < 0.00001), than for SRP alone/placebo. The Bayesian network meta-analysis showed additional benefits in CA gain and PD reduction when SRP was associated with systemic antibiotics. SRP plus systemic antibiotics led to an additional clinical effect compared with SRP alone in the treatment of AgP. Of the antibiotic protocols available for inclusion in the Bayesian network meta-analysis, Mtz and Mtz/Amx provided the most beneficial outcomes.
NASA Astrophysics Data System (ADS)
Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli
2018-01-01
Considerable uncertainty in modal parameter estimation arises in the structural health monitoring (SHM) practice of civil engineering, due to environmental influences and modeling errors, and sound methodologies are needed to handle it. Bayesian inference can provide a promising and feasible identification solution for SHM. However, there has been relatively little research on the application of Bayesian spectral methods to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of lower modes. On the other hand, the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and the modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibit significant correlation with the spectral intensities of the corresponding modal forces.
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Weiland, J. L.; Hill, R. S.; Odegard, N.; Larson, D.; Bennett, C. L.; Dunkley, J.; Gold, B.; Greason, M. R.; Jarosik, N.;
2010-01-01
We present new full-sky temperature and polarization maps in five frequency bands from 23 to 94 GHz, based on data from the first five years of the Wilkinson Microwave Anisotropy Probe (WMAP) sky survey. The new maps are consistent with previous maps and are more sensitive. The five-year maps incorporate several improvements in data processing made possible by the additional years of data and by a more complete analysis of the instrument calibration and in-flight beam response. We present several new tests for systematic errors in the polarization data and conclude that W-band polarization data is not yet suitable for cosmological studies, but we suggest directions for further study. We do find that Ka-band data is suitable for use; in conjunction with the additional years of data, the addition of Ka band to the previously used Q- and V-band channels significantly reduces the uncertainty in the optical depth parameter, τ. Further scientific results from the five-year data analysis are presented in six companion papers and are summarized in Section 7 of this paper. With the five-year WMAP data, we detect no convincing deviations from the minimal six-parameter ΛCDM model: a flat universe dominated by a cosmological constant, with adiabatic and nearly scale-invariant Gaussian fluctuations. Using WMAP data combined with measurements of Type Ia supernovae and Baryon Acoustic Oscillations in the galaxy distribution, we find (68% CL uncertainties): Ω_b h² = 0.02267 (+0.00058, −0.00059), Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10⁻⁹ at k = 0.002 Mpc⁻¹. From these we derive σ_8 = 0.812 ± 0.026, H_0 = 70.5 ± 1.3 km s⁻¹ Mpc⁻¹, Ω_b = 0.0456 ± 0.0015, Ω_c = 0.228 ± 0.013, Ω_m h² = 0.1358 (+0.0037, −0.0036), z_reion = 10.9 ± 1.4, and t_0 = 13.72 ± 0.12 Gyr. The new limit on the tensor-to-scalar ratio is r < 0.22 (95% CL), while the evidence for a running spectral index is insignificant, dn_s/d ln k = −0.028 ± 0.020 (68% CL). We obtain tight, simultaneous limits on the (constant) dark energy equation of state and the spatial curvature of the universe: −0.14 < 1 + w < 0.12 (95% CL) and −0.0179 < Ω_k < 0.0081 (95% CL). The number of relativistic degrees of freedom, expressed in units of the effective number of neutrino species, is found to be N_eff = 4.4 ± 1.5 (68% CL), consistent with the standard value of 3.04. Models with N_eff = 0 are disfavored at greater than 99% confidence. Finally, new limits on physically motivated primordial non-Gaussianity parameters are −9 < f_NL^local < 111 (95% CL) and −151 < f_NL^equil < 253 (95% CL) for the local and equilateral models, respectively.
NASA Astrophysics Data System (ADS)
Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.
2017-12-01
Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes from observed methane concentrations, a chemical transport model (CTM), and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting the information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine the sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes obtained using the GLOBALVIEW-CH4 data product against those obtained using the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. By contrast, the event-level NOAA data record field measurements of methane mixing ratios from 102 sites, with irregular sampling frequencies and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship, and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower, and only fixed surface sites are performed and compared. We also investigate the sensitivity of the error reduction to the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher resolution grid decreased the model-data error at most sites, thereby increasing the information content at those sites. These different inversions (event-level vs. interpolated data, higher vs. lower resolution) are compared using an ensemble of descriptive and comparative statistics. Analyzing the sensitivity of the inverse model leads to more accurate estimates of the methane source-category uncertainty.
A Bayesian approach for convex combination of two Gumbel-Barnett copulas
NASA Astrophysics Data System (ADS)
Fernández, M.; González-López, V. A.
2013-10-01
In this paper, a new Bayesian approach was applied to model the dependence between two variables of interest in public policy, "Gonorrhea Rates per 100,000 Population" and "400% Federal Poverty Level and over", with a small number of paired observations (one pair for each U.S. state). We use a mixture of Gumbel-Barnett copulas, suitable for representing situations with weak and negative dependence, which is the case treated here. The methodology even allows the dependence between the variables to be predicted from one year to the next, showing whether the dependence has changed.
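For reference, the Gumbel-Barnett copula is C(u,v) = uv exp(−θ ln u ln v) with θ in (0,1], and differentiating twice gives its density in closed form. The sketch below evaluates a two-component mixture density and a crude grid posterior over the mixing weight; the fixed θ values, uniform synthetic ranks, and flat prior are all illustrative, not the paper's fitted model:

```python
import numpy as np

def gumbel_barnett_pdf(u, v, theta):
    """Density of C(u,v) = u*v*exp(-theta*ln(u)*ln(v)):
    c(u,v) = exp(-theta*ln(u)*ln(v)) * ((1-theta*ln(u))*(1-theta*ln(v)) - theta)."""
    lu, lv = np.log(u), np.log(v)
    return np.exp(-theta * lu * lv) * ((1 - theta * lu) * (1 - theta * lv) - theta)

def mixture_pdf(u, v, w, th1, th2):
    return w * gumbel_barnett_pdf(u, v, th1) + (1 - w) * gumbel_barnett_pdf(u, v, th2)

# Hypothetical ranks of the two indicators, one pair per state
rng = np.random.default_rng(0)
u = rng.uniform(0.01, 0.99, 51)
v = rng.uniform(0.01, 0.99, 51)

# Grid posterior over the mixture weight w (flat prior; th1, th2 held fixed)
w_grid = np.linspace(0.01, 0.99, 99)
loglik = np.array([np.log(mixture_pdf(u, v, w, 0.2, 0.9)).sum() for w in w_grid])
post = np.exp(loglik - loglik.max()); post /= post.sum()
print("posterior mean of w:", (w_grid * post).sum())
```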
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
Bayesian data analysis for newcomers.
Kruschke, John K; Liddell, Torrin M
2018-02-01
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
Microwave Sky image from the WMAP Mission
NASA Technical Reports Server (NTRS)
2005-01-01
A detailed full-sky map of the oldest light in the universe. It is a 'baby picture' of the universe. Colors indicate 'warmer' (red) and 'cooler' (blue) spots. The oval shape is a projection to display the whole sky; similar to the way the globe of the earth can be projected as an oval. The microwave light captured in this picture is from 379,000 years after the Big Bang, over 13 billion years ago. For more information, see http://map.gsfc.nasa.gov/m_mm/mr_whatsthat.html
Anisotropic universe with magnetized dark energy
NASA Astrophysics Data System (ADS)
Goswami, G. K.; Dewangan, R. N.; Yadav, Anil Kumar
2016-04-01
In the present work we investigate the existence of late-time acceleration of the Universe filled with cosmic fluid and a uniform magnetic field as the source of matter in an anisotropic Heckmann-Schucking space-time. The observed acceleration of the universe is explained by introducing a positive cosmological constant Λ in the Einstein field equations, which is mathematically equivalent to vacuum energy with equation-of-state (EOS) parameter equal to -1. The present values of the matter and dark energy parameters, (Ω_m)_0 and (Ω_Λ)_0, are estimated using the latest 287 high-redshift (0.3 ≤ z ≤ 1.4) Type Ia supernova data, observed apparent magnitudes with their errors, taken from the Union 2.1 compilation. The best-fit values of (Ω_m)_0 and (Ω_Λ)_0 are found to be 0.2820 and 0.7177, respectively, in good agreement with recent astrophysical observations from surveys such as WMAP [2001-2013], Planck [2015] and BOSS. Various physical parameters, such as the matter and dark energy densities, the present age of the universe, and the deceleration parameter, have been obtained from the values of (Ω_m)_0 and (Ω_Λ)_0. We also estimate that the acceleration began at z = 0.71131, about 6.2334 Gyr before the present.
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique on a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random errors of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
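At a single station, the BLR coefficient update has a closed conjugate form when the noise variance is treated as known; the sketch below (synthetic model-observation pairs and illustrative priors, not the NCAR system's configuration) computes the posterior coefficients and a corrected forecast with predictive uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 92                                          # e.g., one pair per training storm
raw = rng.normal(12.0, 4.0, n)                  # raw forecast wind speed (m/s)
obs = 0.8 * raw + 1.5 + rng.normal(0, 1.0, n)   # synthetic "observed" values

X = np.column_stack([np.ones(n), raw])          # design: intercept + raw forecast
sigma2 = 1.0                                     # assumed observation-error variance
m0, S0 = np.zeros(2), 10.0 * np.eye(2)           # weak Gaussian prior on coefficients

# Conjugate posterior: Sn = (S0^-1 + X'X/sigma2)^-1, mn = Sn(S0^-1 m0 + X'y/sigma2)
Sn = np.linalg.inv(np.linalg.inv(S0) + X.T @ X / sigma2)
mn = Sn @ (np.linalg.inv(S0) @ m0 + X.T @ obs / sigma2)
print("posterior mean coefficients:", np.round(mn, 3))

# Correcting a new raw forecast of 15 m/s, with predictive standard deviation
x_new = np.array([1.0, 15.0])
print("corrected forecast:", x_new @ mn, "+/-",
      np.sqrt(x_new @ Sn @ x_new + sigma2))
```

In the gridded version, coefficients estimated at stations like these are interpolated back to the model grid.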
Nonlinear dynamical modes of climate variability: from curves to manifolds
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2016-04-01
The necessity of efficient dimensionality-reduction methods capturing the dynamical properties of a system from observed data is evident. A recent study showed that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (like an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e., a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is also optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples. Results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model
NASA Astrophysics Data System (ADS)
Wawrzynczak, A.; Kopka, P.
2017-12-01
Realistic modeling of a phenomenon as complicated as a Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time, and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. The correctness of a proposed model can be assessed only by comparing the model output with experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for complex dynamic problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is used to create a model of the Forbush decrease observed by neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
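The accept/reject step common to all ABC variants, including the Sequential Monte Carlo version used here, fits in a few lines; the toy exponential-recovery "simulator" below merely stands in for the five-dimensional Fokker-Planck solver, and all priors and tolerances are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 60)

def simulate(amplitude, recovery):
    """Toy intensity-decrease profile: sharp drop, exponential recovery."""
    return -amplitude * np.exp(-t / recovery)

observed = simulate(4.0, 3.0) + rng.normal(0, 0.1, t.size)

accepted = []
for _ in range(20000):
    a = rng.uniform(0, 10)           # prior on the decrease amplitude
    r = rng.uniform(0.5, 8)          # prior on the recovery time
    dist = np.sqrt(np.mean((simulate(a, r) - observed) ** 2))
    if dist < 0.15:                   # tolerance epsilon: "close enough"
        accepted.append((a, r))

post = np.array(accepted)
print(len(post), "accepted; posterior means:", post.mean(axis=0).round(2))
```

SMC-ABC refines this rejection scheme by propagating a weighted particle population through a sequence of decreasing tolerances, which greatly reduces the number of wasted simulations.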
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu Yiping; Bolton, Adam S.; Dawson, Kyle S.
2012-04-15
We present a hierarchical Bayesian determination of the velocity-dispersion function of approximately 430,000 massive luminous red galaxies observed at relatively low spectroscopic signal-to-noise ratio (S/N ≈ 3-5 per 69 km s⁻¹) by the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey III. We marginalize over spectroscopic redshift errors, and use the full velocity-dispersion likelihood function for each galaxy to make a self-consistent determination of the velocity-dispersion distribution parameters as a function of absolute magnitude and redshift, correcting as well for the effects of broadband magnitude errors on our binning. Parameterizing the distribution at each point in the luminosity-redshift plane with a log-normal form, we detect significant evolution in the width of the distribution toward higher intrinsic scatter at higher redshifts. Using a subset of deep re-observations of BOSS galaxies, we demonstrate that our distribution-parameter estimates are unbiased regardless of spectroscopic S/N. We also show through simulation that our method introduces no systematic parameter bias with redshift. We highlight the advantage of the hierarchical Bayesian method over frequentist 'stacking' of spectra, and illustrate how our measured distribution parameters can be adopted as informative priors for velocity-dispersion measurements from individual noisy spectra.
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year and the statistical inferences are validated using recovery data for another year. Compared with an essentially two-step regression-and-optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient ...
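In its simplest form, the Bayesian Monte Carlo half of BMCML draws parameters from the prior and weights each draw by its likelihood given the observations; the recovery curve, noise level, and prior below are invented placeholders for the lake-oxygen model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0)                        # days since aeration began

def model(k):
    """Toy analytical recovery curve standing in for the derived solution."""
    return 9.0 * (1.0 - np.exp(-k * t))

k_true = 0.15
obs = model(k_true) + rng.normal(0, 0.4, t.size)  # synthetic DO observations

k_draws = rng.uniform(0.01, 0.5, 5000)            # samples from the prior
loglik = np.array([-0.5 * np.sum((model(k) - obs) ** 2) / 0.4**2 for k in k_draws])
w = np.exp(loglik - loglik.max()); w /= w.sum()   # likelihood weights

k_mean = np.sum(w * k_draws)
k_sd = np.sqrt(np.sum(w * (k_draws - k_mean) ** 2))
print(f"posterior k = {k_mean:.3f} +/- {k_sd:.3f} (true {k_true})")
```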
A Bayesian blind survey for cold molecular gas in the Universe
NASA Astrophysics Data System (ADS)
Lentati, L.; Carilli, C.; Alexander, P.; Walter, F.; Decarli, R.
2014-10-01
A new Bayesian method for performing an image domain search for line-emitting galaxies is presented. The method uses both spatial and spectral information to robustly determine the source properties, employing either simple Gaussian, or other physically motivated models whilst using the evidence to determine the probability that the source is real. In this paper, we describe the method, and its application to both a simulated data set, and a blind survey for cold molecular gas using observations of the Hubble Deep Field-North taken with the Plateau de Bure Interferometer. We make a total of six robust detections in the survey, five of which have counterparts in other observing bands. We identify the most secure detections found in a previous investigation, while finding one new probable line source with an optical ID not seen in the previous analysis. This study acts as a pilot application of Bayesian statistics to future searches to be carried out both for low-J CO transitions of high-redshift galaxies using the Jansky Very Large Array (JVLA), and at millimetre wavelengths with Atacama Large Millimeter/submillimeter Array (ALMA), enabling the inference of robust scientific conclusions about the history of the molecular gas properties of star-forming galaxies in the Universe through cosmic time.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
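The local decomposition is easiest to see in code: compute the deviance pointwise from posterior draws, then apply the usual DIC formula observation by observation. A toy normal-mean model is used below; the paper's actual application involves spatial regression models:

```python
import numpy as np
from scipy import stats

# DIC from posterior draws: D = -2 log p(y|theta); pD = Dbar - D(thetabar); DIC = Dbar + pD
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, 40)                                   # toy data
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(y.size), 4000)   # posterior of the mean

ll = stats.norm.logpdf(y[None, :], loc=mu_draws[:, None], scale=1.0)  # (draws, obs)
D_bar = -2.0 * ll.sum(axis=1).mean()
D_hat = -2.0 * stats.norm.logpdf(y, loc=mu_draws.mean(), scale=1.0).sum()
pD = D_bar - D_hat
print("DIC =", D_bar + pD)

# Local DIC: the same decomposition applied to each observation separately
local_Dbar = -2.0 * ll.mean(axis=0)
local_Dhat = -2.0 * stats.norm.logpdf(y, loc=mu_draws.mean(), scale=1.0)
local_dic = local_Dbar + (local_Dbar - local_Dhat)
print("largest local DIC at observation", np.argmax(local_dic))
```

Mapping these per-observation values, or their differences between candidate models, gives the kind of visualization used in the paper.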
NASA Astrophysics Data System (ADS)
Goodlet, Brent R.; Mills, Leah; Bales, Ben; Charpagne, Marie-Agathe; Murray, Sean P.; Lenthe, William C.; Petzold, Linda; Pollock, Tresa M.
2018-06-01
Bayesian inference is employed to precisely evaluate the single-crystal elastic properties of novel γ-γ′ Co- and CoNi-based superalloys from simple and non-destructive resonant ultrasound spectroscopy (RUS) measurements. Nine alloys from three Co-, CoNi-, and Ni-based alloy classes were evaluated in the fully aged condition, with one alloy per class also evaluated in the solution heat-treated condition. Comparisons are made between the elastic properties of the three alloy classes and among the alloys of a single class, with the following trends observed. A monotonic rise in the c_44 (shear) elastic constant, by a total of 12 pct, is observed across the three alloy classes as Co is substituted for Ni. Elastic anisotropy (A) is also increased, with a large majority of the nearly 13 pct increase occurring after Co becomes the dominant constituent. Together the five CoNi alloys, with Co:Ni ratios from 1:1 to 1.5:1, exhibited remarkably similar properties, with an average A 1.8 pct greater than that of the Ni-based alloy CMSX-4. Custom code demonstrating a substantial advance over previously reported methods for RUS inversion is also reported here for the first time. CmdStan-RUS is built upon the open-source probabilistic programming language Stan and formulates the inverse problem using Bayesian methods. Bayesian posterior distributions are efficiently computed with Hamiltonian Monte Carlo (HMC), while initial parameterizations are randomly generated from weakly informative prior distributions. Remarkably robust convergence behavior is demonstrated across multiple independent HMC chains, in spite of initial parameterizations often very far from the actual parameter values. Experimental procedures are substantially simplified by allowing an arbitrary misorientation between the specimen and crystal axes, as elastic properties and misorientation are estimated simultaneously.
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation in determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of the data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by rigorous mathematical means, and modeling all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. The learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach with ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
Potential of SNP markers for the characterization of Brazilian cassava germplasm.
de Oliveira, Eder Jorge; Ferreira, Cláudia Fortes; da Silva Santos, Vanderlei; de Jesus, Onildo Nunes; Oliveira, Gilmara Alvarenga Fachardo; da Silva, Maiane Suzarte
2014-06-01
High-throughput markers, such as SNPs, along with different methodologies were used to evaluate the applicability of the Bayesian approach and of multivariate analysis in structuring the genetic diversity of cassava. The objective of the present work was to evaluate the diversity and genetic structure of the largest cassava germplasm bank in Brazil. Complementary methodological approaches such as discriminant analysis of principal components (DAPC), Bayesian analysis and analysis of molecular variance (AMOVA) were used to understand the structure and diversity of 1,280 accessions genotyped using 402 single nucleotide polymorphism markers. The genetic diversity (0.327) and the average observed heterozygosity (0.322) were high considering the bi-allelic markers. At the population level, a complex genetic structure was observed, indicating the formation of 30 clusters by DAPC and 34 clusters by Bayesian analysis. Both methodologies presented difficulties and controversies in allocating some accessions to specific clusters. However, the clusters suggested by the DAPC analysis appeared more consistent, showing a higher probability of allocation of the accessions within the clusters. Prior information related to breeding patterns and geographic origins of the accessions was not sufficient to provide clear differentiation between the clusters according to the AMOVA. In contrast, the F_ST was maximized when considering the clusters suggested by the Bayesian and DAPC analyses. The high frequency of germplasm exchange between producers and the subsequent renaming of the same material may be one of the causes of the low association between genetic diversity and geographic origin. The results of this study may benefit cassava germplasm conservation programs and contribute to the maximization of genetic gains in breeding programs.
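The headline diversity statistics quoted above can be computed directly from a genotype matrix; a minimal sketch, assuming genotypes coded as minor-allele counts (0/1/2) and using random data in place of the real SNP calls:

    import numpy as np

    rng = np.random.default_rng(2)

    # Genotypes coded as minor-allele counts (0, 1, 2); rows = accessions,
    # columns = bi-allelic SNP loci; random data stands in for real calls
    G = rng.integers(0, 3, size=(1280, 402))

    p = G.mean(axis=0) / 2.0                 # allele frequency per locus
    he = (2.0 * p * (1.0 - p)).mean()        # gene diversity (expected het.)
    ho = (G == 1).mean()                     # observed heterozygosity
    print(f"He = {he:.3f}, Ho = {ho:.3f}")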
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and over periods spanning both intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period (1998-2011). We demonstrate an application of the Bayesian hierarchical modeling framework for five river basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-second) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and are 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
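As a sketch of how the SPEL can be used in a Bayesian estimate, consider the illustrative form E = K P^m S^n with lognormal errors: on a grid over the exponents, the posterior follows from the log-linear likelihood. The synthetic data, the fixed error scale, and the flat priors below are assumptions for illustration; the paper's hierarchical model is considerably richer.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic sub-basin data standing in for TRMM precipitation (P),
    # DEM-derived channel slope (S), and erosion-rate estimates (E)
    P = rng.uniform(0.5, 3.0, 50)
    S = rng.uniform(0.05, 0.4, 50)
    K, m, n = 0.02, 1.0, 1.5                     # "true" SPEL parameters
    E = K * P ** m * S ** n * rng.lognormal(0.0, 0.3, 50)

    # Grid posterior over the SPEL exponents with flat priors, profiling
    # log K by its conditional MLE; error scale assumed known (0.3)
    m_grid = np.linspace(0.2, 2.0, 80)
    n_grid = np.linspace(0.5, 2.5, 80)
    logp = np.empty((m_grid.size, n_grid.size))
    for i, mi in enumerate(m_grid):
        for j, nj in enumerate(n_grid):
            r = np.log(E) - mi * np.log(P) - nj * np.log(S)
            logp[i, j] = -0.5 * ((r - r.mean()) ** 2).sum() / 0.3 ** 2
    i, j = np.unravel_index(logp.argmax(), logp.shape)
    print("posterior mode: m =", m_grid[i], " n =", n_grid[j])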
Walters, Kevin
2012-08-07
In this paper we use approximate Bayesian computation to estimate the parameters in an immortal model of colonic stem cell division. We base the inferences on the observed DNA methylation patterns of cells sampled from the human colon. Utilising DNA methylation patterns as a form of molecular clock is an emerging area of research and has been used in several studies investigating colonic stem cell turnover. There is much debate concerning the two competing models of stem cell turnover: the symmetric (immortal) and asymmetric models. Early simulation studies concluded that the observed methylation data were not consistent with the immortal model. A later modified version of the immortal model that included preferential strand segregation was subsequently shown to be consistent with the same methylation data. Most of this earlier work assumes site independent methylation models that do not take account of the known processivity of methyltransferases whilst other work does not take into account the methylation errors that occur in differentiated cells. This paper addresses both of these issues for the immortal model and demonstrates that approximate Bayesian computation provides accurate estimates of the parameters in this neighbour-dependent model of methylation error rates. The results indicate that if colonic stem cells divide asymmetrically then colon stem cell niches are maintained by more than 8 stem cells. Results also indicate the possibility of preferential strand segregation and provide clear evidence against a site-independent model for methylation errors. In addition, algebraic expressions for some of the summary statistics used in the approximate Bayesian computation (that allow for the additional variation arising from cell division in differentiated cells) are derived and their utility discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
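The ABC rejection scheme underlying such inferences is compact. In the sketch below, the methylation simulator, summary statistics, priors, and tolerance are all hypothetical stand-ins for those in the paper; only the algorithmic skeleton is the point: simulate from the prior, and keep parameters whose simulated summaries land near the observed ones.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate(mu, rho, n_cells=50, n_sites=100):
        # Hypothetical stand-in for the neighbour-dependent methylation
        # error simulator: mu = base error rate, rho = neighbour coupling
        base = rng.random((n_cells, n_sites)) < mu
        neighbour = np.roll(base, 1, axis=1)
        extra = neighbour & (rng.random((n_cells, n_sites)) < rho)
        return base | extra

    def summaries(patterns):
        # Methylation density plus adjacent-site co-methylation, which
        # is sensitive to the processivity of the methyltransferases
        dens = patterns.mean()
        adj = (patterns[:, :-1] & patterns[:, 1:]).mean()
        return np.array([dens, adj])

    s_obs = summaries(simulate(0.02, 0.5))   # pretend these are real data

    accepted = []
    for _ in range(20000):
        mu, rho = rng.uniform(0.0, 0.1), rng.uniform(0.0, 1.0)  # priors
        if np.linalg.norm(summaries(simulate(mu, rho)) - s_obs) < 0.01:
            accepted.append((mu, rho))
    accepted = np.array(accepted)
    print(accepted.mean(axis=0), len(accepted))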
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, with the neuroscientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential in the natural sciences to extract nonlinear dynamics from time-series data as an inverse problem. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving the conjugation of multiple phases, and their dynamics are intrinsically nonlinear owing to surface-area effects between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial-observation problem, in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The belief propagation step is performed with a sequential Monte Carlo algorithm in order to estimate the nonlinear dynamical system. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, can be successfully estimated from only the observable temporal changes in the concentration of the dissolved intermediate product.
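The sequential Monte Carlo component can be sketched as a bootstrap particle filter. The toy dissolution dynamics, noise levels, and fixed rate constant below are assumptions; in the paper's EM scheme, the rate constants would be re-estimated from the filtered states at each iteration.

    import numpy as np

    rng = np.random.default_rng(5)

    def step(x, k, dt=0.1):
        # Toy dissolution dynamics: the rate scales with a surface-area
        # term x**(2/3), mimicking the nonlinearity described above
        return np.maximum(x - dt * k * x ** (2.0 / 3.0), 0.0)

    # Simulate a "true" trajectory and noisy observations of the
    # dissolved intermediate product (concentration ~ 1 - solid)
    k_true, x = 0.5, 1.0
    obs = []
    for _ in range(100):
        x = step(x, k_true)
        obs.append(1.0 - x + rng.normal(0.0, 0.05))

    # Bootstrap particle filter over the hidden solid fraction; the rate
    # constant is fixed here, whereas the EM step would re-estimate it
    n = 2000
    particles = rng.uniform(0.5, 1.5, n)
    for y in obs:
        particles = step(particles, k_true) + rng.normal(0.0, 0.01, n)
        particles = np.maximum(particles, 0.0)
        w = np.exp(-0.5 * ((y - (1.0 - particles)) / 0.05) ** 2)
        particles = particles[rng.choice(n, size=n, p=w / w.sum())]
    print("filtered solid fraction:", particles.mean(), "truth:", x)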
Bayesian Estimates of Autocorrelations in Single-Case Designs
ERIC Educational Resources Information Center
Shadish, William R.; Rindskopf, David M.; Hedges, Larry V.; Sullivan, Kristynn J.
2012-01-01
Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to…
The Development of Bayesian Theory and Its Applications in Business and Bioinformatics
NASA Astrophysics Data System (ADS)
Zhang, Yifei
2018-03-01
Bayesian Theory originated from an Essay of a British mathematician named Thomas Bayes in 1763, and after its development in 20th century, Bayesian Statistics has been taking a significant part in statistical study of all fields. Due to the recent breakthrough of high-dimensional integral, Bayesian Statistics has been improved and perfected, and now it can be used to solve problems that Classical Statistics failed to solve. This paper summarizes Bayesian Statistics’ history, concepts and applications, which are illustrated in five parts: the history of Bayesian Statistics, the weakness of Classical Statistics, Bayesian Theory and its development and applications. The first two parts make a comparison between Bayesian Statistics and Classical Statistics in a macroscopic aspect. And the last three parts focus on Bayesian Theory in specific -- from introducing some particular Bayesian Statistics’ concepts to listing their development and finally their applications.
Bayesian demography 250 years after Bayes
Bijak, Jakub; Bryant, John
2016-01-01
Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow additional (prior) information to be included alongside the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889
A Bayesian framework for knowledge attribution: evidence from semantic integration.
Powell, Derek; Horne, Zachary; Pinillos, N Ángel; Holyoak, Keith J
2015-06-01
We propose a Bayesian framework for the attribution of knowledge, and apply this framework to generate novel predictions about knowledge attribution for different types of "Gettier cases", in which an agent is led to a justified true belief yet has made erroneous assumptions. We tested these predictions using a paradigm based on semantic integration. We coded the frequencies with which participants falsely recalled the word "thought" as "knew" (or a near synonym), yielding an implicit measure of conceptual activation. Our experiments confirmed the predictions of our Bayesian account of knowledge attribution across three experiments. We found that Gettier cases due to counterfeit objects were not treated as knowledge (Experiment 1), but those due to intentionally-replaced evidence were (Experiment 2). Our findings are not well explained by an alternative account focused only on luck, because accidentally-replaced evidence activated the knowledge concept more strongly than did similar false belief cases (Experiment 3). We observed a consistent pattern of results across a number of different vignettes that varied the quality and type of evidence available to agents, the relative stakes involved, and surface details of content. Accordingly, the present findings establish basic phenomena surrounding people's knowledge attributions in Gettier cases, and provide explanations of these phenomena within a Bayesian framework. Copyright © 2015 Elsevier B.V. All rights reserved.
Baldacchino, Tara; Jacobs, William R; Anderson, Sean R; Worden, Keith; Rowson, Jennifer
2018-01-01
This contribution presents a novel methodology for myoelectric control using surface electromyographic (sEMG) signals recorded during finger movements. A multivariate Bayesian mixture of experts (MoE) model is introduced which provides a powerful method for modeling force regression at the fingertips, while also performing finger movement classification as a by-product of the modeling algorithm. Bayesian inference of the model allows uncertainties to be naturally incorporated into the model structure. The method is tested using data from the publicly released NinaPro database, which consists of sEMG recordings of 6 degree-of-freedom force activations for 40 intact subjects. The results demonstrate that the MoE model achieves performance similar to the benchmark set by the authors of NinaPro for finger force regression. Additionally, inherent to the Bayesian framework is the inclusion of uncertainty in the model parameters, naturally providing confidence bounds on the force regression predictions. Furthermore, the integrated clustering step allows a detailed investigation into classification of the finger movements, without incurring any extra computational effort. Subsequently, a systematic approach to assessing the number of electrodes needed for accurate control is performed via sensitivity analysis techniques. A slight degradation in regression performance is observed for a reduced number of electrodes, while classification performance is unaffected.
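The dual regression/classification output of a mixture of experts is easy to see in a forward pass. This sketch uses random point-estimate weights and hypothetical feature dimensions in place of the Bayesian posterior over parameters:

    import numpy as np

    rng = np.random.default_rng(6)
    D, K = 16, 6        # sEMG feature dimension, number of movement classes

    # Gating network (softmax) and per-expert linear regressors; random
    # weights stand in for parameters learned by Bayesian inference
    V = rng.normal(0.0, 0.1, (K, D))    # gating weights
    W = rng.normal(0.0, 0.1, (K, D))    # expert regression weights
    b = rng.normal(0.0, 0.1, K)         # expert intercepts

    def moe_predict(x):
        logits = V @ x
        gate = np.exp(logits - logits.max())
        gate /= gate.sum()              # responsibilities -> classification
        experts = W @ x + b             # per-expert force predictions
        force = gate @ experts          # regression: gate-weighted mixture
        return force, int(gate.argmax())

    x = rng.normal(size=D)              # one window of sEMG features
    force, movement = moe_predict(x)
    print(f"fingertip force {force:.3f}, movement class {movement}")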
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.
2009-05-01
Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors, such as antecedent streamflows, El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability mean that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
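Reduced to one site and one predictor, the core of the BJP idea is a Box-Cox transform followed by a conditional multivariate normal forecast. This sketch uses plug-in parameter estimates instead of MCMC, and synthetic flows; only the structure mirrors the model described above.

    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(7)

    # Synthetic records: predictor = antecedent flow, predictand = next
    # season's flow at one site (the BJP model treats many sites jointly)
    q_ante = rng.lognormal(2.0, 0.5, 200)
    q_next = np.exp(0.8 * np.log(q_ante) + rng.normal(0.0, 0.3, 200))

    # Box-Cox transform each margin toward normality
    y1, lam1 = stats.boxcox(q_ante)
    y2, lam2 = stats.boxcox(q_next)

    # Plug-in fit of a bivariate normal in transformed space (the paper
    # infers these parameters and their uncertainty by MCMC)
    mu = np.array([y1.mean(), y2.mean()])
    cov = np.cov(np.vstack([y1, y2]))

    # Condition on a new predictor value -> probabilistic forecast
    x_new = stats.boxcox(np.array([12.0]), lmbda=lam1)[0]
    m = mu[1] + cov[1, 0] / cov[0, 0] * (x_new - mu[0])
    v = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
    draws = inv_boxcox(rng.normal(m, np.sqrt(v), 5000), lam2)
    print(np.percentile(draws, [10, 50, 90]))  # forecast quantiles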
Stoffenmanager exposure model: company-specific exposure assessments using a Bayesian methodology.
van de Ven, Peter; Fransman, Wouter; Schinkel, Jody; Rubingh, Carina; Warren, Nicholas; Tielemans, Erik
2010-04-01
The web-based tool "Stoffenmanager" was initially developed to assist small- and medium-sized enterprises in the Netherlands to make qualitative risk assessments and to provide advice on control at the workplace. The tool uses a mechanistic model to arrive at a "Stoffenmanager score" for exposure. In a recent study it was shown that variability in exposure measurements given a certain Stoffenmanager score is still substantial. This article discusses an extension to the tool that uses a Bayesian methodology for quantitative workplace/scenario-specific exposure assessment. This methodology allows real exposure data observed in the company of interest to be combined with the prior estimate (based on the Stoffenmanager model). The output of the tool is a company-specific assessment of exposure levels for a scenario for which data is available. The Bayesian approach provides a transparent way of synthesizing different types of information and is especially preferred in situations where available data is sparse, as is often the case in small- and medium-sized enterprises. Real-world examples as well as simulation studies were used to assess how different parameters such as sample size, difference between prior and data, uncertainty in prior, and variance in the data affect the eventual posterior distribution of a Bayesian exposure assessment.
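Combining a model-based prior with company-specific measurements can be sketched as a conjugate normal update on log-exposure, assuming a known within-scenario variance; the prior location, prior variance, and measurements below are illustrative numbers, not Stoffenmanager outputs.

    import numpy as np

    # Prior on log exposure from a mechanistic score (illustrative),
    # combined with company measurements via a conjugate normal update
    prior_mean, prior_var = np.log(0.8), 1.0 ** 2      # broad prior
    data = np.log(np.array([0.35, 0.60, 0.42, 0.55]))  # measured mg/m^3
    sigma2 = 0.5 ** 2                 # within-scenario variance (known)

    n = data.size
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / sigma2)

    gm = np.exp(post_mean)            # posterior geometric-mean exposure
    lo, hi = np.exp(post_mean + np.array([-1.96, 1.96]) * np.sqrt(post_var))
    print(f"exposure GM {gm:.2f} mg/m^3 (95% interval {lo:.2f}-{hi:.2f})")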
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires a stage that identifies the most appropriate number of mixture components, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm must be developed in accordance with the case under study. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to correctly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of components is not known with certainty.
Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D
2013-01-01
Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or with Alzheimer disease and related disorders) received a total of 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
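For a single patient, the signal detection core of such a model reduces to the standard equal-variance estimates of discriminability and bias; the hierarchical Bayesian layer of the paper shrinks these estimates across patients and FAST groups. The counts below are illustrative, not data from the study.

    import numpy as np
    from scipy.stats import norm

    # Equal-variance signal detection estimates for one patient:
    # d' indexes the memory process, criterion c the response bias
    hits, misses = 38, 12             # "old" responses to studied items
    fas, crs = 9, 41                  # "old" responses to new items

    h = hits / (hits + misses)
    f = fas / (fas + crs)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")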
Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.
Patri, Jean-François; Diard, Julien; Perrier, Pascal
2015-12-01
The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
Fermi LAT Observation of Centaurus A Radio Galaxy
NASA Astrophysics Data System (ADS)
Sahakyan, N. V.
2013-01-01
The results of an analysis of approximately 3 years of gamma-ray observations (August 2008-July 2011) of the core of the radio galaxy Centaurus A with the Fermi Large Area Telescope (Fermi LAT) are presented. A binned likelihood analysis applied to the data shows that below several GeV the spectrum can be described by a single power law with photon index Γ = 2.73 ± 0.06. However, at higher energies the new data show a significant excess above the extrapolation of the energy spectrum from low energies. The comparison of the corresponding spectral energy distribution (SED) at GeV energies with the SED in the TeV energy band reported by the H.E.S.S. collaboration shows that we are dealing with two, or perhaps even three, components of gamma radiation originating from different regions located within the central 10 kpc of Centaurus A. The analysis of gamma-ray data of the Centaurus A lobes, accumulated from the beginning of operations until November 14, 2011, shows an extension of the HE gamma-ray emission beyond the WMAP radio image in the case of the northern lobe [9]. The possible origins of gamma rays from the giant radio lobes of Centaurus A are discussed in the context of hadronic and leptonic scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yang; Li, Si-Yu; Li, Yong-Ping
The study of reionization history plays an important role in understanding the evolution of our universe. It is commonly believed that the intergalactic medium (IGM) in our universe is fully ionized today; however, the reionization process remains mysterious. A simple instantaneous reionization process is usually adopted in modern cosmology without direct observational evidence. However, the history of the ionization fraction, x_e(z), will influence CMB observables and constraints on the optical depth τ. With mock future data sets based on a featured reionization model, we find that the bias on τ introduced by the instantaneous model cannot be neglected. In this paper, we study the cosmic reionization history in a model-independent way, the so-called principal component analysis (PCA) method, and reconstruct x_e(z) at different redshifts z with the data sets of Planck, WMAP 9-year temperature and polarization power spectra, combined with the baryon acoustic oscillation (BAO) measurements from galaxy surveys and the type Ia supernovae (SN) Union 2.1 sample, respectively. The results show that the reconstructed x_e(z) is consistent with instantaneous behavior; however, slight deviations from this behavior exist at some epochs. With the PCA method, after abandoning the noisy modes, we obtain stronger constraints, and the hints of a featured x_e(z) evolution become a little more obvious.
Bayesian Inversion of 2D Models from Airborne Transient EM Data
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Key, K.; Ray, A.
2016-12-01
The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
Morales, Dinora Araceli; Bengoetxea, Endika; Larrañaga, Pedro; García, Miguel; Franco, Yosu; Fresnada, Mónica; Merino, Marisa
2008-05-01
In vitro fertilization (IVF) is a medically assisted reproduction technique that enables infertile couples to achieve successful pregnancy. Given the uncertainty of the treatment, we propose an intelligent decision support system, based on supervised classification by Bayesian classifiers, to aid the selection of the most promising embryos that will form the batch to be transferred to the woman's uterus. The aim of the supervised classification system is to improve the overall success rate of each IVF treatment in which a batch of embryos is transferred each time, where success is achieved when implantation (i.e. pregnancy) is obtained. For ethical reasons, different countries impose different legislative restrictions on this technique. In Spain, legislation allows a maximum of three embryos to form each transfer batch. As a result, clinicians prefer to select the embryos by non-invasive embryo examination based on simple methods and observation focused on the morphology and dynamics of embryo development after fertilization. This paper proposes the application of Bayesian classifiers to this embryo selection problem in order to provide a decision support system that allows a more accurate selection than the current procedures, which rely fully on the expertise and experience of embryologists. For this, we propose to take into consideration a reduced subset of feature variables related to embryo morphology and clinical data of patients, and from these data to induce Bayesian classification models. Results obtained applying a filter technique to choose the subset of variables, and the performance of Bayesian classifiers using them, are presented.
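A minimal sketch of the filter-then-classify pipeline described above, using a naive Bayes classifier (a simple member of the Bayesian classifier family studied in the paper) on synthetic embryo/clinical features; the feature matrix, labels, and number of retained variables are all hypothetical.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)

    # Hypothetical features: embryo morphology scores plus clinical
    # variables; y = 1 when the transfer led to implantation
    X = rng.normal(size=(300, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 1.0, 300) > 0).astype(int)

    # Filter-based variable selection followed by a (naive) Bayesian
    # classifier, mirroring the two-stage approach described above
    clf = make_pipeline(SelectKBest(f_classif, k=5), GaussianNB())
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))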
Tipping point analysis of atmospheric oxygen concentration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livina, V. N.; Forbes, A. B.; Vaz Martins, T. M.
2015-03-15
We apply tipping point analysis to nine observational oxygen concentration records around the globe, analyse their dynamics and perform projections under possible future scenarios that could lead to oxygen deficiency in the atmosphere. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the observed data using Bayesian and wavelet techniques.
Predicting ICU mortality: a comparison of stationary and nonstationary temporal models.
Kayaalp, M.; Cooper, G. F.; Clermont, G.
2000-01-01
OBJECTIVE: This study evaluates the effectiveness of the stationarity assumption in predicting the mortality of intensive care unit (ICU) patients at the ICU discharge. DESIGN: This is a comparative study. A stationary temporal Bayesian network learned from data was compared to a set of (33) nonstationary temporal Bayesian networks learned from data. A process observed as a sequence of events is stationary if its stochastic properties stay the same when the sequence is shifted in a positive or negative direction by a constant time parameter. The temporal Bayesian networks forecast mortalities of patients, where each patient has one record per day. The predictive performance of the stationary model is compared with nonstationary models using the area under the receiver operating characteristics (ROC) curves. RESULTS: The stationary model usually performed best. However, one nonstationary model using large data sets performed significantly better than the stationary model. CONCLUSION: Results suggest that using a combination of stationary and nonstationary models may predict better than using either alone. PMID:11079917
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., the bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other types of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
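The entropy bookkeeping described above is direct to compute once states and observations are discretized; a sketch with an illustrative joint distribution:

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # Discretized joint distribution of model state (rows) and
    # observation (columns); the numbers are illustrative
    pxy = np.array([[0.20, 0.05, 0.00],
                    [0.05, 0.30, 0.05],
                    [0.00, 0.05, 0.30]])
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    h_prior = entropy(px)                         # state uncertainty
    mi = h_prior + entropy(py) - entropy(pxy.ravel())
    print(f"H(state) = {h_prior:.3f} bits, I(state; obs) = {mi:.3f} bits")
    # An approximate filter's realized entropy reduction can be compared
    # to `mi`; the shortfall measures the inefficiency of its assumptions.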
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
NASA Technical Reports Server (NTRS)
Chuss, David
2010-01-01
The Cosmic Microwave Background (CMB) has provided a wealth of information about the history and physics of the early Universe. Much progress has been made in uncovering the emerging Standard Model of Cosmology by such experiments as COBE and WMAP, and ESA's Planck Surveyor will likely increase our knowledge even more. Despite the success of this model, mysteries remain. Currently understood physics does not offer a compelling explanation for the homogeneity, flatness, and origin of structure in the Universe. Cosmic Inflation, a brief epoch of exponential expansion, has been posited to explain these observations. If inflation is a reality, it is expected to produce a background spectrum of gravitational waves that will leave a small polarized imprint on the CMB. Discovery of this signal would give the first direct evidence for inflation and provide a window into physics at scales beyond those accessible to terrestrial particle accelerators. I will briefly review aspects of the Standard Model of Cosmology and discuss our current efforts to design and deploy experiments to measure the polarization of the CMB with the precision required to test inflation.
The Cosmic Microwave Background Radiation - A Unique Window on the Early Universe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2009-01-01
The cosmic microwave background radiation is the remnant heat from the Big Bang. It provides us with a unique probe of conditions in the early universe, long before any organized structures had yet formed. The anisotropy in the radiation's brightness yields important clues about primordial structure and additionally provides a wealth of information about the physics of the early universe. Within the framework of inflationary dark matter models, observations of the anisotropy on sub-degree angular scales reveals the signatures of acoustic oscillations of the photon-baryon fluid at a redshift of approx. 1100. Data from the first five years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature and polarization anisotropy. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time.
Goddard's Astrophysics Science Division Annual Report 2011
NASA Technical Reports Server (NTRS)
Centrella, Joan; Reddy, Francis; Tyler, Pat
2012-01-01
The Astrophysics Science Division (ASD) at Goddard Space Flight Center (GSFC) is one of the largest and most diverse astrophysical organizations in the world, with activities spanning a broad range of topics in theory, observation, and mission and technology development. Scientific research is carried out over the entire electromagnetic spectrum, from gamma rays to radio wavelengths, as well as particle physics and gravitational radiation. Members of ASD also provide the scientific operations for three orbiting astrophysics missions, WMAP, RXTE, and Swift, as well as the Science Support Center for the Fermi Gamma-ray Space Telescope. A number of key technologies for future missions are also under development in the Division, including X-ray mirrors, space-based interferometry, high contrast imaging techniques to search for exoplanets, and new detectors operating at gamma-ray, X-ray, ultraviolet, infrared, and radio wavelengths. The overriding goals of ASD are to carry out cutting-edge scientific research, provide Project Scientist support for spaceflight missions, implement the goals of the NASA Strategic Plan, serve and support the astronomical community, and enable future missions by conceiving new concepts and inventing new technologies.
The Astrophysics Science Division Annual Report 2009
NASA Technical Reports Server (NTRS)
Oegerle, William (Editor); Reddy, Francis (Editor); Tyler, Pat (Editor)
2010-01-01
The Astrophysics Science Division (ASD) at Goddard Space Flight Center (GSFC) is one of the largest and most diverse astrophysical organizations in the world, with activities spanning a broad range of topics in theory, observation, and mission and technology development. Scientific research is carried out over the entire electromagnetic spectrum - from gamma rays to radio wavelengths - as well as particle physics and gravitational radiation. Members of ASD also provide the scientific operations for three orbiting astrophysics missions - WMAP, RXTE, and Swift, as well as the Science Support Center for the Fermi Gamma-ray Space Telescope. A number of key technologies for future missions are also under development in the Division, including X-ray mirrors, space-based interferometry, high contrast imaging techniques to search for exoplanets, and new detectors operating at gamma-ray, X-ray, ultraviolet, infrared, and radio wavelengths. The overriding goals of ASD are to carry out cutting-edge scientific research, provide Project Scientist support for spaceflight missions, implement the goals of the NASA Strategic Plan, serve and support the astronomical community, and enable future missions by conceiving new concepts and inventing new technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, Gavin; Contaldi, Carlo R., E-mail: gavin.nicholson05@imperial.ac.uk, E-mail: c.contaldi@imperial.ac.uk
2009-07-01
We develop a method to reconstruct the primordial power spectrum, P(k), using both temperature and polarisation data from the joint analysis of a number of Cosmic Microwave Background (CMB) observations. The method is an extension of the Richardson-Lucy algorithm, first applied in this context by Shafieloo and Souradeep [1]. We show how the inclusion of polarisation measurements can decrease the uncertainty in the reconstructed power spectrum. In particular, the polarisation data can constrain oscillations in the spectrum more effectively than total intensity only measurements. We apply the estimator to a compilation of current CMB results. The reconstructed spectrum is consistent with the best-fit power spectrum, although we find evidence for a 'dip' in the power on scales k ≈ 0.002 Mpc^-1. This feature appears to be associated with the WMAP power in the region 18 ≤ l ≤ 26, which is consistently below best-fit models. We also forecast the reconstruction for a simulated, Planck-like [2] survey including sample variance limited polarisation data.
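The Richardson-Lucy extension builds on a simple multiplicative update. This sketch deconvolves a toy smearing kernel standing in for the transfer from primordial power to the observed spectra; the kernel, spectrum, and noise are illustrative.

    import numpy as np

    rng = np.random.default_rng(9)

    # Toy linear smearing C = G @ P, with a broad row-normalized kernel G
    # standing in for the transfer from primordial power to CMB spectra
    n = 60
    x = np.arange(n)
    G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
    G /= G.sum(axis=1, keepdims=True)

    P_true = 1.0 + 0.3 * np.sin(x / 5.0)      # spectrum with a feature
    C_obs = (G @ P_true) * rng.normal(1.0, 0.01, n)

    # Richardson-Lucy iterations: multiplicative updates that keep the
    # recovered spectrum positive
    P = np.ones(n)
    for _ in range(200):
        P *= (G.T @ (C_obs / (G @ P))) / G.sum(axis=0)
    print("max reconstruction error:", np.abs(P - P_true).max())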
Investigating the possibility of a turning point in the dark energy equation of state
NASA Astrophysics Data System (ADS)
Hu, YaZhou; Li, Miao; Li, XiaoDong; Zhang, ZhenHui
2014-08-01
We investigate a second-order parabolic parametrization, w(a) = w_t + w_a(a_t - a)^2, which directly characterizes a possible turning in w. The cosmological consequences of this parametrization are explored using the observational data of the SNLS3 type Ia supernovae sample, the CMB measurements from WMAP9 and Planck, the Hubble parameter measurement from HST, and the baryon acoustic oscillation (BAO) measurements from 6dFGS, BOSS DR11 and improved WiggleZ. We find that the existence of a turning point in w at a ≈ 0.7 is favored at the 1σ CL. In the epoch 0.55 < a < 0.9, w < -1 is favored at the 1σ CL, and this significance increases near a = 0.8, reaching the 2σ CL. The parabolic parametrization achieves performance equivalent to the ΛCDM and Chevallier-Polarski-Linder (CPL) models when the Akaike information criterion is used to assess them. Our analysis shows the value of considering higher-order parametrizations when studying the cosmological constraints on w.
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments, allowing quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate the posterior probability that each member best represents the physical system it seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
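A sketch of the posterior calculation: the likelihood of the observed summary statistic under each model is estimated from that model's output treated as a stochastic ensemble, then combined with equal priors. The model names, the statistic, and the normal likelihood approximation are all assumptions for illustration.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(10)

    # Observed summary statistic (e.g. a regional variance computed from
    # AIRS retrievals) and, for each climate model, an ensemble of the
    # same statistic computed from its output; all numbers illustrative
    s_obs = 2.4
    ensembles = {"modelA": rng.normal(2.2, 0.3, 500),
                 "modelB": rng.normal(3.1, 0.4, 500)}

    # Likelihood of s_obs under each model via a normal fit to its
    # ensemble (a kernel density estimate would serve equally well)
    logL = np.array([norm.logpdf(s_obs, e.mean(), e.std())
                     for e in ensembles.values()])

    # Equal prior model probabilities -> posterior via Bayes' rule
    post = np.exp(logL - logL.max())
    post /= post.sum()
    for name, p in zip(ensembles, post):
        print(name, round(float(p), 3))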
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Spectral Analysis of B Stars: An Application of Bayesian Statistics
NASA Astrophysics Data System (ADS)
Mugnes, J.-M.; Robert, C.
2012-12-01
To better understand the processes involved in stellar physics, it is necessary to obtain accurate stellar parameters (effective temperature, surface gravity, abundances…). Spectral analysis is a powerful tool for investigating stars, but it is also vital to reduce uncertainties at a decent computational cost. Here we present a spectral analysis method based on a combination of Bayesian statistics and grids of synthetic spectra obtained with TLUSTY. This method simultaneously constrains the stellar parameters by using all the lines accessible in observed spectra and thus greatly reduces uncertainties and improves the overall spectrum fitting. Preliminary results are shown using spectra from the Observatoire du Mont-Mégantic.
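Grid-based Bayesian spectral fitting can be sketched compactly: evaluate a Gaussian likelihood of the observed spectrum at every grid node and normalize. The analytic "synthetic spectrum" below is a hypothetical stand-in for a TLUSTY grid, and the noise level and grid ranges are illustrative.

    import numpy as np

    rng = np.random.default_rng(12)

    # Hypothetical grid of synthetic line profiles indexed by (Teff, log g);
    # in practice these would be pre-computed TLUSTY spectra
    teff = np.linspace(15000.0, 30000.0, 31)
    logg = np.linspace(3.0, 4.5, 16)
    wave = np.linspace(-5.0, 5.0, 200)

    def synth(t, g):
        # Analytic stand-in: line width grows with Teff and log g
        width = 1.0 + (t - 15000.0) / 15000.0 + 0.3 * (g - 3.0)
        return 1.0 - 0.5 * np.exp(-0.5 * (wave / width) ** 2)

    obs = synth(22000.0, 3.9) + rng.normal(0.0, 0.01, wave.size)

    # Log-posterior on the grid: Gaussian likelihood over all pixels of
    # all usable lines, with flat priors inside the grid bounds
    logp = np.array([[-0.5 * np.sum((obs - synth(t, g)) ** 2) / 0.01 ** 2
                      for g in logg] for t in teff])
    post = np.exp(logp - logp.max())
    post /= post.sum()
    print("Teff =", post.sum(axis=1) @ teff, "log g =", post.sum(axis=0) @ logg)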
Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.
Salama, Mhd Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R^2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variability of the reduced-resolution MERIS image is derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.
Fortunato, Laura; Holden, Clare; Mace, Ruth
2006-12-01
Significant amounts of wealth have been exchanged as part of marriage settlements throughout history. Although various models have been proposed for interpreting these practices, their development over time has not been investigated systematically. In this paper we use a Bayesian MCMC phylogenetic comparative approach to reconstruct the evolution of two forms of wealth transfers at marriage, dowry and bridewealth, for 51 Indo-European cultural groups. Results indicate that dowry is more likely to have been the ancestral practice, and that a minimum of four changes to bridewealth is necessary to explain the observed distribution of the two states across the cultural groups.
IMAGINE: Interstellar MAGnetic field INference Engine
NASA Astrophysics Data System (ADS)
Steininger, Theo
2018-03-01
IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.
NASA Astrophysics Data System (ADS)
Pietrobon, Davide; Balbi, Amedeo; Marinucci, Domenico
2006-08-01
We cross correlate the new 3-year Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background data with the NRAO VLA Sky Survey radio galaxy data and find further evidence of the integrated Sachs-Wolfe (ISW) effect taking place at late times in cosmic history. Our detection makes use of a novel statistical method (P. Baldi, G. Kerkyacharian, D. Marinucci, and D. Picard, math.ST/0606154 and P. Baldi, G. Kerkyacharian, D. Marinucci, D. Picard, math.ST/0606599) based on a new construction of spherical wavelets, called needlets. The null hypothesis (no ISW) is excluded at more than 99.7% confidence. When we compare the measured cross correlation with the theoretical predictions of standard, flat cosmological models with a generalized dark energy component parameterized by its density Ω_DE, equation of state w and speed of sound c_s^2, we find 0.3 ≤ Ω_DE ≤ 0.8 at 95% C.L., independently of c_s^2 and w. If dark energy is assumed to be a cosmological constant (w = -1), the bound on the density shrinks to 0.41 ≤ Ω_DE ≤ 0.79. Models without dark energy are excluded at more than 4σ. The bounds on w depend rather strongly on the assumed value of c_s^2. We find that models with a more negative equation of state (such as phantom models) are a worse fit to the data in the case c_s^2 = 1 than in the case c_s^2 = 0.
The Correlation Function of Galaxy Clusters and Detection of Baryon Acoustic Oscillations
NASA Astrophysics Data System (ADS)
Hong, T.; Han, J. L.; Wen, Z. L.; Sun, L.; Zhan, H.
2012-04-01
We calculate the correlation function of 13,904 galaxy clusters of z ≤ 0.4 selected from the cluster catalog of Wen et al. The correlation function can be fitted with a power-law model ξ(r) = (r/R_0)^-γ on the scales 10 h^-1 Mpc ≤ r ≤ 50 h^-1 Mpc, with a larger correlation length of R_0 = 18.84 ± 0.27 h^-1 Mpc for clusters with a richness of R ≥ 15 and a smaller length of R_0 = 16.15 ± 0.13 h^-1 Mpc for clusters with a richness of R ≥ 5. The power-law index of γ = 2.1 is found to be almost the same for all cluster subsamples. A pronounced baryon acoustic oscillations (BAO) peak is detected at r ≈ 110 h^-1 Mpc with a significance of ~1.9σ. By analyzing the correlation function in the range 20 h^-1 Mpc ≤ r ≤ 200 h^-1 Mpc, we find that the constraints on distance parameters are D_V(z_m = 0.276) = 1077 ± 55 (1σ) Mpc and h = 0.73 ± 0.039 (1σ), which are consistent with the cosmology derived from Wilkinson Microwave Anisotropy Probe (WMAP) seven-year data. However, the BAO signal from the cluster sample is stronger than expected and leads to a rather low matter density Ω_m h^2 = 0.093 ± 0.0077 (1σ), which deviates from the WMAP7 result by more than 3σ. The correlation function of the GMBCG cluster sample is also calculated and our detection of the BAO feature is confirmed.
Monopole and dipole estimation for multi-frequency sky maps by linear regression
NASA Astrophysics Data System (ADS)
Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.
2017-01-01
We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz, and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selections, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10–15 μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l, b) = (308°, −36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
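The computational core described above, linear regression between pairs of frequency maps, is simple enough to sketch. The toy below regresses one synthetic band against another within a single sky patch; the intercept recovers the relative offset. Map values, noise levels, and the spectral scaling are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 2000                                   # pixels in one sky patch
foreground = rng.lognormal(1.0, 0.5, n_pix)    # common foreground template

scale = 0.7          # assumed spectral scaling between the two bands
offset_true = 8.9    # relative monopole offset to recover
map_a = foreground + rng.normal(0.0, 0.1, n_pix)
map_b = scale * foreground + offset_true + rng.normal(0.0, 0.1, n_pix)

# Ordinary least squares y = m*x + c on the pixel pairs (the T-T plot):
# the slope traces the foreground spectrum, the intercept the relative offset.
m, c = np.polyfit(map_a, map_b, 1)
print(f"slope = {m:.3f} (true {scale}), offset = {c:.2f} (true {offset_true})")
```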
Karabatsos, George
2017-02-01
Most applied statistics involves regression analysis of data. In practice, it is important to specify a regression model whose assumptions are minimal and not violated by the data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone, menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, built with the MATLAB Compiler. Currently, this package gives the user a choice of 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random-effects models (HLMs), including normal linear models. Each of the 78 regression models handles a continuous, binary, or ordinal dependent variable and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis) and of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and its predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options, including MCMC convergence analyses and estimates of the model's posterior predictive distribution for selected functionals and values of covariates. The software is illustrated through a BNP regression analysis of real data.
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
Sensitivity analyses for sparse-data problems using weakly informative Bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
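A minimal numerical illustration of the shrinkage behaviour described above, under normal approximations to both the likelihood and the prior; the estimate, variances, and prior scale are illustrative, not taken from the alcohol and head-and-neck-cancer example.

```python
import numpy as np

def posterior_normal(b, v, mu0=0.0, tau2=0.5):
    """Posterior mean/variance for a normal likelihood times a normal prior.

    b, v: approximate log odds ratio estimate and its variance.
    mu0, tau2: weakly informative prior mean and variance (assumed values).
    """
    post_var = 1.0 / (1.0 / v + 1.0 / tau2)
    post_mean = post_var * (b / v + mu0 / tau2)
    return post_mean, post_var

for b, v, label in [(1.6, 1.0, "sparse data (weak information)"),
                    (1.6, 0.01, "ample data (strong information)")]:
    m, s2 = posterior_normal(b, v)
    print(f"{label}: estimate {b:.2f} -> posterior mean {m:.2f} "
          f"(sd {np.sqrt(s2):.2f})")
```

With sparse data (large variance) the estimate is pulled strongly toward the prior mean; with ample data it barely moves, matching the behaviour the abstract describes.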
Zhou, Yinghui; Whitehead, John; Korhonen, Pasi; Mustonen, Mika
2008-03-01
Bayesian decision procedures have recently been developed for dose escalation in phase I clinical trials concerning pharmacokinetic responses observed in healthy volunteers. This article describes how that general methodology was extended and evaluated for implementation in a specific phase I trial of a novel compound. At the time of writing, the study is ongoing, and it will be some time before the sponsor will wish to put the results into the public domain. This article is an account of how the study was designed in a way that should prove to be safe, accurate, and efficient whatever the true nature of the compound. The study involves the observation of two pharmacokinetic endpoints relating to the plasma concentration of the compound itself and of a metabolite as well as a safety endpoint relating to the occurrence of adverse events. Construction of the design and its evaluation via simulation are presented.
Optical+Near-IR Bayesian Classification of Quasars
NASA Astrophysics Data System (ADS)
Mehta, Sajjan S.; Richards, G. T.; Myers, A. D.
2011-05-01
We describe the details of an optimal Bayesian classification of quasars with combined optical+near-IR photometry from the SDSS and UKIDSS LAS surveys. Using only deep co-added SDSS photometry from the "Stripe 82" region and requiring full four-band UKIDSS detections, we reliably identify 2665 quasar candidates with a computed efficiency in excess of 99%. Relaxing the data constraints to combinations of two-band detections yields up to 6424 candidates with minimal trade-off in completeness and efficiency. The completeness and efficiency of the sample are investigated with existing spectra from the SDSS, 2SLAQ, and AUS surveys in addition to recent single-slit observations from Palomar Observatory, which revealed 22 quasars from a subsample of 29 high-z candidates. SDSS-III/BOSS observations will allow further exploration of the completeness/efficiency of the sample over 2.2
Combination of dynamic Bayesian network classifiers for the recognition of degraded characters
NASA Astrophysics Data System (ADS)
Likforman-Sulem, Laurence; Sigelle, Marc
2009-01-01
In this paper we investigate the combination of DBN (dynamic Bayesian network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are, respectively, the image columns and the image rows. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. They also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.
Determining open cluster membership. A Bayesian framework for quantitative member classification
NASA Astrophysics Data System (ADS)
Stott, Jonathan J.
2018-01-01
Aims: My goal is to develop a quantitative algorithm for assessing open cluster membership probabilities. The algorithm is designed to work with single-epoch observations. In its simplest form, only one set of program images and one set of reference images are required. Methods: The algorithm is based on a two-stage joint astrometric and photometric assessment of cluster membership probabilities. The probabilities were computed within a Bayesian framework using any available prior information. Where possible, the algorithm emphasizes simplicity over mathematical sophistication. Results: The algorithm was implemented and tested against three observational fields using published survey data. M 67 and NGC 654 were selected as cluster examples while a third, cluster-free, field was used for the final test data set. The algorithm shows good quantitative agreement with the existing surveys and has a false-positive rate significantly lower than the astrometric or photometric methods used individually.
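As a hedged sketch of such a two-stage membership calculation (the mixture components, widths, and prior fraction below are assumptions, not the paper's calibrated values), combining an astrometric and a photometric likelihood under Bayes' rule might look like:

```python
import numpy as np
from scipy.stats import norm

def membership_prob(pm, color_offset, prior_member=0.3):
    """Toy cluster-membership probability from two independent likelihoods.

    pm: proper motion relative to the cluster mean (mas/yr, assumed units).
    color_offset: distance from the cluster sequence in the CMD (mag).
    """
    # Cluster members: tight proper-motion and CMD sequences (assumed widths).
    like_m = norm.pdf(pm, 0.0, 0.5) * norm.pdf(color_offset, 0.0, 0.05)
    # Field stars: broad distributions (assumed widths).
    like_f = norm.pdf(pm, 0.0, 5.0) * norm.pdf(color_offset, 0.0, 0.5)
    num = prior_member * like_m
    return num / (num + (1.0 - prior_member) * like_f)

print(membership_prob(pm=0.3, color_offset=0.02))   # likely member
print(membership_prob(pm=4.0, color_offset=0.40))   # likely field star
```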
The development of a probabilistic approach to forecast coastal change
Lentz, Erika E.; Hapke, Cheryl J.; Rosati, Julie D.; Wang, Ping; Roberts, Tiffany M.
2011-01-01
This study demonstrates the applicability of a Bayesian probabilistic model as an effective tool in predicting post-storm beach changes along sandy coastlines. Volume change and net shoreline movement are modeled for two study sites at Fire Island, New York in response to two extratropical storms in 2007 and 2009. Both study areas include modified areas adjacent to unmodified areas in morphologically different segments of coast. Predicted outcomes are evaluated against observed changes to test model accuracy and uncertainty along 163 cross-shore transects. Results show strong agreement in the cross validation of predictions vs. observations, with 70-82% accuracies reported. Although no consistent spatial pattern in inaccurate predictions could be determined, the highest prediction uncertainties appeared in locations that had been recently replenished. Further testing and model refinement are needed; however, these initial results show that Bayesian networks have the potential to serve as important decision-support tools in forecasting coastal change.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Fermi's paradox, extraterrestrial life and the future of humanity: a Bayesian analysis
NASA Astrophysics Data System (ADS)
Verendel, Vilhelm; Häggström, Olle
2017-01-01
The Great Filter interpretation of Fermi's great silence asserts that Npq is not a very large number, where N is the number of potentially life-supporting planets in the observable universe, p is the probability that a randomly chosen such planet develops intelligent life to the level of present-day human civilization, and q is the conditional probability that it then goes on to develop a technological supercivilization visible all over the observable universe. Evidence suggests that N is huge, which implies that pq is very small. Hanson (1998) and Bostrom (2008) have argued that the discovery of extraterrestrial life would point towards p not being small and therefore a very small q, which can be seen as bad news for humanity's prospects of colonizing the universe. Here we investigate whether a Bayesian analysis supports their argument, and the answer turns out to depend critically on the choice of prior distribution.
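A toy version of that prior-sensitivity analysis can be set up on a grid: place a prior on (p, q), impose the Great Filter constraint through the likelihood of observing no visible supercivilization among N planets, then condition on a life discovery (likelihood proportional to p). The value of N and both priors below are assumptions chosen only to show that the conclusion about q moves with the prior.

```python
import numpy as np

N = 1e22                                   # potentially life-bearing planets (assumed)
logp = np.linspace(-30, 0, 301)
logq = np.linspace(-30, 0, 301)
P, Q = np.meshgrid(10.0 ** logp, 10.0 ** logq)

def posterior_mean_q(log_prior):
    """E[q] under the silence constraint, before/after conditioning on life."""
    w = np.exp(log_prior) * np.exp(-N * P * Q)   # silence: Poisson(0) likelihood
    w_life = w * P                               # condition on discovering life
    return (Q * w_life).sum() / w_life.sum(), (Q * w).sum() / w.sum()

flat_in_log = np.zeros_like(P)   # log-uniform prior on both p and q
tilted = np.log(P)               # alternative prior favouring larger p
for name, lp in [("log-uniform", flat_in_log), ("tilted toward large p", tilted)]:
    after, before = posterior_mean_q(lp)
    print(f"{name}: E[q] before life discovery {before:.3e}, after {after:.3e}")
```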
Context Effects in Multi-Alternative Decision Making: Empirical Data and a Bayesian Model
ERIC Educational Resources Information Center
Hawkins, Guy; Brown, Scott D.; Steyvers, Mark; Wagenmakers, Eric-Jan
2012-01-01
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates--sometimes error rates increase with the number of choice alternatives, and…
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
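For orientation, the AUC of a generic model observer can be estimated from its scores on signal-present and signal-absent images. The sketch below uses the nonparametric Mann-Whitney estimate and, for comparison, the binormal shortcut AUC = Φ(d′/√2); the scores are simulated, and this is a generic recipe rather than the paper's Bayesian estimator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
scores_absent = rng.normal(0.0, 1.0, 15)    # e.g. 15 repeated phantom images
scores_present = rng.normal(1.2, 1.0, 15)

# Nonparametric AUC: fraction of (present, absent) pairs ranked correctly.
auc_mw = (scores_present[:, None] > scores_absent[None, :]).mean()

# Binormal estimate from the score separation d'.
dprime = (scores_present.mean() - scores_absent.mean()) / np.sqrt(
    0.5 * (scores_present.var(ddof=1) + scores_absent.var(ddof=1)))
auc_binormal = norm.cdf(dprime / np.sqrt(2))
print(f"Mann-Whitney AUC = {auc_mw:.3f}, binormal AUC = {auc_binormal:.3f}")
```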
Model Diagnostics for Bayesian Networks
ERIC Educational Resources Information Center
Sinharay, Sandip
2006-01-01
Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…
A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research
van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG
2014-01-01
Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space, whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test); moreover, these methods often apply only under specific assumptions on the models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named the Bayes factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But the BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations extend to two well-known penalized likelihood methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), since both can be shown to approximate −2 log BF. In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating the expected out-of-sample prediction error using a bias-correction adjustment of the within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of the parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given the parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by the Ando and Tsay criterion, where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The criteria mentioned above are global summary measures of model performance, but a more detailed analysis may be required to discover the reasons for poor global performance; in that case, a retrospective predictive analysis is performed on each individual observation. In this study we perform a Bayesian analysis of Italian data sets using four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015), and we illustrate their performance as evaluated by the Bayes factor, predictive information criteria, and retrospective predictive analysis.
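As a concrete anchor for the predictive criteria discussed above, the following sketch computes WAIC from an S × n matrix of pointwise log-likelihoods over posterior draws, following the lppd and p_WAIC definitions of Gelman et al. (2014); the draws here are synthetic placeholders for a real posterior sample.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(42)
S, n = 4000, 25                            # posterior draws, observations
loglik = rng.normal(-1.0, 0.3, (S, n))     # placeholder log p(y_i | theta_s)

lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))   # log pointwise pred. density
p_waic = np.sum(np.var(loglik, axis=0, ddof=1))        # effective number of params
waic = -2.0 * (lppd - p_waic)
print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.2f}, WAIC = {waic:.1f}")
```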
NASA Astrophysics Data System (ADS)
Gomes, Guilherme J. C.; Vrugt, Jasper A.; Vargas, Eurípedes A.
2016-04-01
The depth to bedrock controls a myriad of processes by influencing subsurface flow paths, erosion rates, soil moisture, and water uptake by plant roots. As hillslope interiors are very difficult and costly to illuminate and access, the topography of the bedrock surface is largely unknown. This paper is concerned with the prediction of spatial patterns in the depth to bedrock (DTB) using high-resolution topographic data, numerical modeling, and Bayesian analysis. Our DTB model builds on the bottom-up control on fresh-bedrock topography hypothesis of Rempe and Dietrich (2014) and includes a mass movement and bedrock-valley morphology term to extend the usefulness and general applicability of the model. We reconcile the DTB model with field observations using Bayesian analysis with the DREAM algorithm. We investigate explicitly the benefits of using spatially distributed parameter values to account implicitly, and in a relatively simple way, for rock mass heterogeneities that are very difficult, if not impossible, to characterize adequately in the field. We illustrate our method using an artificial data set of bedrock depth observations and then evaluate our DTB model with real-world data collected at the Papagaio river basin in Rio de Janeiro, Brazil. Our results demonstrate that the DTB model accurately predicts the observed bedrock depth data, and the posterior mean DTB simulation is in good agreement with the measured data. The posterior prediction uncertainty of the DTB model can be propagated forward through hydromechanical models to derive probabilistic estimates of factors of safety.
Bartlett, Jonathan W; Keogh, Ruth H
2018-06-01
Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
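For readers unfamiliar with the comparator, here is a minimal sketch of regression calibration under classical measurement error with a known error variance (all parameters are invented): the error-prone covariate is replaced by an estimate of E[X | W] before fitting the outcome model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(0.0, 1.0, n)               # true covariate (unobserved)
w = x + rng.normal(0.0, 0.7, n)           # error-prone measurement of x
y = 0.5 * x + rng.normal(0.0, 1.0, n)     # outcome model with true slope 0.5

naive_slope = np.polyfit(w, y, 1)[0]      # attenuated by measurement error

# Regression calibration: E[X|W] = mu + lambda * (W - mu), with reliability
# lambda = var(X) / var(W); here var(X) = 1 and the error variance is known.
lam = 1.0 / (1.0 + 0.7 ** 2)
x_hat = w.mean() + lam * (w - w.mean())
rc_slope = np.polyfit(x_hat, y, 1)[0]
print(f"naive slope {naive_slope:.3f}, calibrated slope {rc_slope:.3f} (true 0.5)")
```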
Eckstein, Miguel P; Mack, Stephen C; Liston, Dorion B; Bogush, Lisa; Menzel, Randolf; Krauzlis, Richard J
2013-06-07
Visual attention is commonly studied by using visuo-spatial cues that indicate probable locations of a target and assessing the effect of cue validity on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm, in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more realistic conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other, suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys, and bees show the smallest effect. The cueing effect and overall performance of the honeybees allow rejection of the models in which the bees ignore the cue, follow the cue while disregarding the stimuli to be discriminated, or adopt a probability matching strategy. Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength for an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer in the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings for the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans.
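The ideal-observer benchmark has a compact form for a two-location task with equal-variance Gaussian responses: the log posterior odds equal the log prior odds set by the cue validity plus the log likelihood ratio of the two sensory responses. The sketch below (signal level, validity, and trial count are assumed values) reproduces the qualitative cueing effect.

```python
import numpy as np

rng = np.random.default_rng(11)
validity, signal, n_trials = 0.8, 1.5, 20000    # assumed task parameters

cue_valid = rng.random(n_trials) < validity     # does the cue point at the target?
s_cued = np.where(cue_valid, signal, 0.0)
x_cued = rng.normal(s_cued, 1.0)                # response at the cued location
x_uncued = rng.normal(signal - s_cued, 1.0)     # target sits at the other location

# Log posterior odds for "target at cued location": log prior odds (from cue
# validity) plus the log likelihood ratio; quadratic terms cancel for
# equal-variance Gaussians, leaving signal * (x_cued - x_uncued).
log_odds = np.log(validity / (1.0 - validity)) + signal * (x_cued - x_uncued)
choose_cued = log_odds > 0.0
correct = np.where(cue_valid, choose_cued, ~choose_cued)
print(f"valid-cue accuracy:   {correct[cue_valid].mean():.3f}")
print(f"invalid-cue accuracy: {correct[~cue_valid].mean():.3f}")
```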
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.
Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta
2012-01-01
We have analysed the efficiency of all mitochondrial protein-coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining, and UPGMA were also evaluated by assessing the number of correctly and incorrectly recovered groupings. In addition, we compared support values from the conservative bootstrap test and Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly, even UPGMA, which in some cases outperformed the most extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes alike. Our results also suggest that nuclear markers do not necessarily perform better than mitochondrial genes, and the so-called dependency among mitochondrial markers was not observed when comparing genome performances. Finally, the amniote groups with the lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing, and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
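A conceptual sketch of that pipeline, with an analytic stand-in for the dispersion model and a random forest as the surrogate (both are assumptions for illustration; the study's actual models and samplers may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def forward_model(theta):
    """Stand-in for an expensive dispersion run: parameters -> 3 observations."""
    x, q = theta                                  # e.g. location and quantity
    return q * np.exp(-0.5 * (np.array([1.0, 2.0, 3.0]) - x) ** 2)

# 1. Ensemble of forward runs sampling a uniform prior box.
thetas = rng.uniform([0.0, 0.1], [4.0, 5.0], (2000, 2))
sims = np.array([forward_model(t) for t in thetas])

# 2. Train a cheap surrogate mapping parameters to predicted observations.
surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(thetas, sims)

# 3. Metropolis sampling of the posterior, using the surrogate as the forward map.
y_obs = forward_model(np.array([2.2, 1.7])) + rng.normal(0.0, 0.02, 3)

def log_post(theta):
    if not (0.0 <= theta[0] <= 4.0 and 0.1 <= theta[1] <= 5.0):
        return -np.inf                            # outside the flat prior support
    resid = y_obs - surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum(resid ** 2) / 0.02 ** 2

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0.0, 0.1, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[500:], axis=0), "(truth: [2.2, 1.7])")
```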
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Koshak, William
2008-01-01
Continuous monitoring of the ratio of cloud flashes to ground flashes may provide a better understanding of thunderstorm dynamics, intensification, and evolution, and it may be useful in severe weather warning. The National Lightning Detection Network™ (NLDN) senses ground flashes with exceptional detection efficiency and accuracy over most of the continental United States. A proposed Geostationary Lightning Mapper (GLM) aboard the Geostationary Operational Environmental Satellite (GOES-R) will look at the western hemisphere, and among the lightning data products to be made available will be the fundamental optical flash parameters for both cloud and ground flashes: radiance, area, duration, number of optical groups, and number of optical events. Previous studies have demonstrated that the optical flash parameter statistics of ground and cloud lightning, which are observable from space, are significantly different. This study investigates a Bayesian network methodology for discriminating lightning flash type (ground or cloud) using the lightning optical data and ancillary GOES-R data. A Directed Acyclic Graph (DAG) is set up with lightning as a "root" and data observed by GLM as the "leaves." This allows for a direct calculation of the joint probability distribution function for the lightning type and radiance, area, etc. Initially, the conditional probabilities that will be required can be estimated from the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) together with NLDN data. Directly manipulating the joint distribution will yield the conditional probability that a lightning flash is a ground flash given the evidence, which consists of the observed lightning optical data [and possibly cloud data retrieved from the GOES-R Advanced Baseline Imager (ABI) in a more mature Bayesian network configuration]. Later, actual GLM and NLDN data can be used to refine the estimates of the conditional probabilities used in the model; i.e., the Bayesian network is a learning network. Methods for efficient calculation of the conditional probabilities (e.g., an algorithm using junction trees), finding data conflicts, goodness of fit, and dealing with missing data will also be addressed.
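In its simplest, naive-Bayes form, the root-and-leaves structure described above reduces to multiplying a prior by conditional probability tables and normalizing. The prior and CPTs below are invented placeholders, not LIS/OTD/NLDN-derived values.

```python
prior = {"ground": 0.25, "cloud": 0.75}   # assumed climatological prior

# P(discretized observable | flash type): invented CPTs for two "leaf" nodes.
cpt_radiance = {"ground": {"low": 0.2, "high": 0.8},
                "cloud":  {"low": 0.6, "high": 0.4}}
cpt_area = {"ground": {"small": 0.3, "large": 0.7},
            "cloud":  {"small": 0.7, "large": 0.3}}

def posterior_ground(radiance, area):
    """P(ground | evidence) by Bayes' rule, leaves independent given the type."""
    joint = {t: prior[t] * cpt_radiance[t][radiance] * cpt_area[t][area]
             for t in prior}
    return joint["ground"] / sum(joint.values())

print(f"P(ground | high radiance, large area) = {posterior_ground('high', 'large'):.3f}")
print(f"P(ground | low radiance, small area)  = {posterior_ground('low', 'small'):.3f}")
```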
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
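A minimal sketch of one parametric (synthetic) likelihood step of the kind described, with a stand-in stochastic simulator rather than FORMIND: simulate replicates at θ, fit a normal distribution to a summary statistic, score the observed summary under it, and embed the result in a Metropolis update.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def stochastic_model(theta, n_rep=50):
    """Stand-in simulator: returns one summary statistic per replicate run."""
    return rng.normal(theta, 1.0 + 0.5 * abs(theta), n_rep)

s_obs = 2.0                                   # observed summary statistic

def synthetic_loglik(theta):
    sims = stochastic_model(theta)
    mu, sd = sims.mean(), sims.std(ddof=1)    # parametric likelihood approximation
    return norm.logpdf(s_obs, mu, sd)

# Metropolis over theta with a flat prior on [-10, 10].
theta, ll, chain = 0.0, synthetic_loglik(0.0), []
for _ in range(3000):
    prop = theta + rng.normal(0.0, 0.5)
    if abs(prop) <= 10.0:
        ll_prop = synthetic_loglik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)
print(f"posterior mean of theta: {np.mean(chain[500:]):.2f}")
```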
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability of observations deviating from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC sampler, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences between this approach and approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Mapping the CMB with the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary
2007-01-01
The data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission results will be discussed and commented on.
Chaotic hybrid inflation with a gauged B − L
NASA Astrophysics Data System (ADS)
Carpenter, Linda M.; Raby, Stuart
2014-11-01
In this paper we present a novel formulation of chaotic hybrid inflation in supergravity. The model includes a waterfall field which spontaneously breaks a gauged U(1)_{B−L} at a GUT scale. This allows for the possibility of future model building that includes the standard formulation of baryogenesis via leptogenesis, with the waterfall field decaying into right-handed neutrinos. In this short paper we have not considered supersymmetry breaking, dark matter, or the gravitino and moduli problems. Our focus is on showing the compatibility of the present model with Planck, WMAP, and BICEP2 data.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ), that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations, is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas. 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for the construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions; another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest; this is a central challenge in UQ, especially for large-scale models, and we developed the mathematical tools to address it in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest; this opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods, and we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus was on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Missing Link: Bayesian detection and measurement of intermediate-mass black-hole binaries
NASA Astrophysics Data System (ADS)
Graff, Philip B.; Buonanno, Alessandra; Sathyaprakash, B. S.
2015-07-01
We perform Bayesian analysis of gravitational-wave signals from nonspinning, intermediate-mass black-hole binaries (IMBHBs) with observed total mass M^obs from 50 M⊙ to 500 M⊙ and mass ratios 1–4, using advanced LIGO and Virgo detectors. We employ inspiral-merger-ringdown waveform models based on the effective-one-body formalism and include subleading modes of radiation beyond the leading (2,2) mode. The presence of subleading modes increases signal power for inclined binaries and allows for improved accuracy and precision in measurements of the masses, as well as breaking of degeneracies in distance, orientation, and polarization. For low total masses, M^obs ≲ 50 M⊙, for which the inspiral signal dominates, the observed chirp mass ℳ^obs = M^obs η^(3/5) (η being the symmetric mass ratio) is better measured. In contrast, as increasing power comes from merger and ringdown, we find that the total mass M^obs has better relative precision than ℳ^obs. Indeed, at high M^obs (≥ 300 M⊙), the signal resembles a burst and the measurement thus extracts the dominant frequency of the signal, which depends on M^obs. Depending on the binary's inclination, at a signal-to-noise ratio (SNR) of 12, uncertainties in M^obs can be as large as ~20–25% while uncertainties in ℳ^obs are ~50–60% in binaries with unequal masses (those numbers become ~17% vs. ~22% in more symmetric mass-ratio binaries). Although large, those uncertainties in M^obs will establish the existence of IMBHs. We find that effective-one-body waveforms with subleading modes are essential to confirm a signal's presence in the data, with calculated Bayesian evidences yielding a false-alarm probability below 10⁻⁵ for SNR ≳ 9 in Gaussian noise. Our results show that gravitational-wave observations can offer a unique tool to observe and understand the formation, evolution, and demographics of IMBHs, which are difficult to observe in the electromagnetic window.
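For reference, the chirp-mass relation quoted above is easy to evaluate numerically (a quick check, not part of the paper's analysis):

```python
def chirp_mass(m1, m2):
    """Chirp mass Mc = M * eta**(3/5), with eta = m1*m2 / (m1 + m2)**2."""
    eta = m1 * m2 / (m1 + m2) ** 2
    return (m1 + m2) * eta ** 0.6

for m1, m2 in [(100.0, 100.0), (200.0, 50.0)]:   # masses in solar masses
    print(f"m1={m1}, m2={m2}: M = {m1 + m2}, Mc = {chirp_mass(m1, m2):.1f}")
```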
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry, and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge, and although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries, and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information, and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.