Science.gov

Sample records for 21-cm intensity mapping

  1. 21-cm Intensity Mapping

    NASA Astrophysics Data System (ADS)

    Chang, Tzu-Ching; GBT-HIM Team

    2016-01-01

The redshifted 21-cm emission from neutral hydrogen has emerged as a powerful probe of large-scale structure; a significant fraction of the observable universe can be mapped in the intensity mapping regime out to high redshifts. At redshifts around unity, the 21-cm emission traces the matter distribution and can be used to measure the Baryon Acoustic Oscillation (BAO) signature and constrain dark energy properties. I will describe our HI intensity mapping program at the Green Bank Telescope (GBT), aimed at measuring the 21-cm power spectrum at z = 0.8. An 800-MHz multi-beam focal-plane array for the GBT is currently under construction to enable a large-scale survey for BAO and redshift-space distortion measurements for cosmological constraints.

  2. Advancing precision cosmology with 21 cm intensity mapping

    NASA Astrophysics Data System (ADS)

    Masui, Kiyoshi Wesley

In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient method for mapping the large-scale structure of the Universe. In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternatives to dark energy as explanations for the Universe's accelerated expansion. We also consider how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravitational waves, thus providing a window into cosmological inflation. Finally, we show that scientifically interesting measurements could in principle be performed using intensity mapping in the near term, using existing telescopes in pilot surveys or prototypes for larger dedicated surveys. Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single-dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude, and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.

  3. Intensity Mapping During Reionization: 21 cm and Cross-correlations

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; HERA Collaboration

    2016-01-01

The first generation of 21 cm epoch of reionization (EoR) experiments are now reaching the sensitivities necessary for a detection of the power spectrum of plausible reionization models, and with the advent of next-generation capabilities (e.g. the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometer Array Phase I Low) will move beyond the power spectrum to imaging of the EoR intergalactic medium. Such datasets provide context for galaxy evolution studies of the earliest galaxies on scales of tens of Mpc, but at present wide, deep galaxy surveys are lacking, and attaining the depth to survey the bulk of galaxies responsible for reionization will be challenging even for JWST. Thus we seek useful cross-correlations with other more direct tracers of the galaxy population. I review near-term prospects for cross-correlation studies of 21 cm with CO and C II emission, as well as future far-infrared missions such as CALISTO.

  4. Prospects of probing quintessence with HI 21-cm intensity mapping survey

    NASA Astrophysics Data System (ADS)

    Hussain, Azam; Thakur, Shruti; Sarkar, Tapomoy Guha; Sen, Anjan A.

    2016-09-01

We investigate the prospect of constraining scalar field dark energy models using HI 21-cm intensity mapping surveys. We consider a wide class of coupled scalar field dark energy models whose predictions for the background cosmological evolution differ from the ΛCDM predictions by a few percent. We find that these models can be statistically distinguished from ΛCDM through their imprint on the 21-cm angular power spectrum. At the fiducial redshift z = 1.5, corresponding to post-reionization HI 21-cm observations at a frequency of 568 MHz, these models can in fact be distinguished from the ΛCDM model at the SNR > 3σ level using a 10,000 hr radio observation distributed over 40 pointings of an SKA1-mid-like radio telescope. We also show that tracker models are more likely to be ruled out in comparison with ΛCDM than the thawing models. Future radio observations can be instrumental in obtaining tighter constraints on the parameter space of dark energy models and can supplement the bounds obtained from background studies.
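
The quoted 568 MHz ↔ z = 1.5 correspondence follows directly from the rest frequency of the hydrogen hyperfine transition; a minimal sketch (the function names are ours, not from the paper):

```python
# Observed frequency of the redshifted 21-cm line: the 1420.406 MHz rest
# frequency of the HI hyperfine transition is stretched by a factor (1 + z).

NU_REST_MHZ = 1420.406  # rest frequency of the HI hyperfine line (MHz)

def observed_frequency(z: float) -> float:
    """Observed frequency (MHz) of 21-cm emission from redshift z."""
    return NU_REST_MHZ / (1.0 + z)

def redshift_of(nu_mhz: float) -> float:
    """Redshift at which the 21-cm line appears at frequency nu_mhz."""
    return NU_REST_MHZ / nu_mhz - 1.0

print(f"{observed_frequency(1.5):.1f} MHz")  # 568.2 MHz, matching the quoted frequency
```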

  5. An intensity map of hydrogen 21-cm emission at redshift z ≈ 0.8.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Bandura, Kevin; Peterson, Jeffrey B

    2010-07-22

Observations of 21-cm radio emission by neutral hydrogen at redshifts z ≈ 0.5 to ≈ 2.5 are expected to provide a sensitive probe of cosmic dark energy. This is particularly true around the onset of acceleration at z ≈ 1, where traditional optical cosmology becomes very difficult because of the infrared opacity of the atmosphere. Hitherto, 21-cm emission has been detected only to z = 0.24. More distant galaxies generally are too faint for individual detections, but it is possible to measure the aggregate emission from many unresolved galaxies in the 'cosmic web'. Here we report a three-dimensional 21-cm intensity field at z = 0.53 to 1.12. We then co-add neutral-hydrogen (H I) emission from the volumes surrounding about 10,000 galaxies (from the DEEP2 optical galaxy redshift survey). We detect the aggregate 21-cm glow at a significance of ≈4σ. PMID:20651685
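
The co-adding step described above can be sketched in a few lines of Python; the cube size, galaxy count, and signal amplitude below are illustrative assumptions, not the survey's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intensity cube dominated by thermal noise (unit variance, arbitrary mK
# units), with a faint 21-cm signal injected at the voxels of ~10,000 galaxies
# with known redshifts (standing in for the DEEP2 positions).
cube = rng.normal(0.0, 1.0, size=(64, 64, 64))
galaxies = rng.integers(0, 64, size=(10_000, 3))
per_galaxy_signal = 0.05  # assumed per-galaxy brightness, far below the noise
for x, y, z in galaxies:
    cube[x, y, z] += per_galaxy_signal

# Co-adding: average the cube over the known galaxy positions.  The noise
# averages down as 1/sqrt(N), so the aggregate glow emerges even though no
# single galaxy is individually detectable.
stack = np.mean([cube[x, y, z] for x, y, z in galaxies])
noise_on_stack = 1.0 / np.sqrt(len(galaxies))
print(f"stack = {stack:.3f} mK, expected noise on stack = {noise_on_stack:.3f} mK")
```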

  6. Cross-correlation cosmography with intensity mapping of the neutral hydrogen 21 cm emission

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.

    2015-11-01

The cross-correlation of a foreground density field with two different background convergence fields can be used to measure cosmographic distance ratios and constrain dark energy parameters. We investigate the possibility of performing such measurements using a combination of optical galaxy surveys and neutral hydrogen (HI) intensity mapping surveys, with emphasis on the performance of the planned Square Kilometre Array (SKA). Using HI intensity mapping to probe the foreground density tracer field and/or the background source fields has the advantage of excellent redshift resolution and a longer lever arm achieved by using the lensing signal from high redshift background sources. Our results show that, for our best SKA-optical configuration of surveys, a constant equation of state for dark energy can be constrained to ≃8% for a sky coverage f_sky = 0.5 and assuming a σ(Ω_DE) = 0.03 prior for the dark energy density parameter. We also show that using the cosmic microwave background as the second source plane is not competitive, even when considering a COrE-like satellite.

  7. Cosmology on Ultralarge Scales with Intensity Mapping of the Neutral Hydrogen 21 cm Emission: Limits on Primordial Non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Santos, Mário G.; Ferreira, Pedro G.; Ferramacho, Luís

    2013-10-01

The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ≃ 1-5 are the best hope of accessing the ultralarge-scale information, directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ~ 1.

  8. Mapping Cosmic Structure Using 21-cm Hydrogen Signal at Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; GBT 21-cm Intensity Mapping Group

    2011-05-01

We are using the Green Bank Telescope to make 21-cm intensity maps of cosmic structure in a 0.15 Gpc^3 box at redshift z ≈ 1. The intensity mapping technique combines the flux from many galaxies in each pixel, allowing much greater mapping speed than a traditional redshift survey. Measurements are being made at z ≈ 1 to take advantage of a window in frequency around 700 MHz where terrestrial radio frequency interference (RFI) is currently at a minimum. This minimum is due to the reallocation of this frequency band from analog television to wide-area wireless internet and public-service usage. We will report progress on our attempt to detect the autocorrelation of the 21-cm signal. The ultimate goal of this mapping is to use Baryon Acoustic Oscillations to provide more precise constraints on dark energy models.

  9. Mapping kiloparsec-scale structures in the extended H I disc of the galaxy UGC 00439 by H I 21-cm absorption

    NASA Astrophysics Data System (ADS)

    Dutta, R.; Gupta, N.; Srianand, R.; O'Meara, J. M.

    2016-03-01

We study the properties of H I gas in the outer regions (~2 r_25) of a spiral galaxy, UGC 00439 (z = 0.01769), using H I 21-cm absorption towards different components of an extended background radio source, J0041-0043 (z = 1.679). The radio source exhibits a compact core coincident with the optical quasar and two lobes separated by ~7 kpc, all at an impact parameter of ~25 kpc. The H I 21-cm absorption detected towards the southern lobe is found to extend over ~2 kpc^2. The absorbing gas shows sub-kpc-scale structures, with the line-of-sight velocities dominated by turbulent motions. Much larger optical depth variations on 4-7 kpc scales are revealed by the non-detection of H I 21-cm absorption towards the radio core and the northern lobe, and the detection of Na I and Ca II absorption towards the quasar. This could reflect a patchy distribution of cold gas in the extended H I disc. We also detect H I 21-cm emission from UGC 00439 and from two other galaxies within ~150 kpc of it, which probably form an interacting group. However, no H I 21-cm emission from the absorbing gas is detected. Assuming a linear extent of ~4 kpc, as required to cover both the core and the southern lobe, we constrain the spin temperature to ≲300 K for the absorbing gas. The kinematics of the gas and the lack of signatures of any ongoing in situ star formation are consistent with the absorbing gas lying along the kinematical minor axis and corotating with the galaxy. Deeper H I 21-cm observations would help to map in greater detail both the large- and small-scale structures in the H I gas associated with UGC 00439.

  10. 21cm Cosmology

    NASA Astrophysics Data System (ADS)

    Santos, Mario G.; Alonso, David; Bull, Philip; Camera, Stefano; Ferreira, Pedro G.

    2014-05-01

A new generation of radio telescopes with unprecedented capabilities for astronomy and fundamental physics will be in operation over the next few years. With high sensitivities and large fields of view, they are ideal for cosmological applications. We discuss their uses for cosmology, focusing on the observational technique of HI intensity mapping, in particular at low redshifts (z < 4). This novel observational window promises to bring new insights for cosmology, in particular on ultra-large scales and at a redshift range that can go beyond the dark energy domination epoch. In terms of standard constraints on the dark energy equation of state, telescopes such as Phase I of the SKA should be able to obtain constraints about as good as those from future galaxy redshift surveys. Statistical techniques to deal with foregrounds and calibration issues, as well as possible systematics, are also discussed.

  11. Mapmaking for precision 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Tegmark, Max; Liu, Adrian; Ewall-Wice, Aaron; Hewitt, Jacqueline N.; Morales, Miguel F.; Neben, Abraham R.; Parsons, Aaron R.; Zheng, Haoxuan

    2015-01-01

    In order to study the "Cosmic Dawn" and the Epoch of Reionization with 21 cm tomography, we need to statistically separate the cosmological signal from foregrounds known to be orders of magnitude brighter. Over the last few years, we have learned much about the role our telescopes play in creating a putatively foreground-free region called the "EoR window." In this work, we examine how an interferometer's effects can be taken into account in a way that allows for the rigorous estimation of 21 cm power spectra from interferometric maps while mitigating foreground contamination and thus increasing sensitivity. This requires a precise understanding of the statistical relationship between the maps we make and the underlying true sky. While some of these calculations would be computationally infeasible if performed exactly, we explore several well-controlled approximations that make mapmaking and the calculation of map statistics much faster, especially for compact and highly redundant interferometers designed specifically for 21 cm cosmology. We demonstrate the utility of these methods and the parametrized trade-offs between accuracy and speed using one such telescope, the upcoming Hydrogen Epoch of Reionization Array, as a case study.

  12. Combining galaxy and 21-cm surveys

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.; White, Martin; Chang, Tzu-Ching; Holder, Gil; Padmanabhan, Nikhil; Doré, Olivier

    2016-04-01

Acoustic waves travelling through the early Universe imprint a characteristic scale in the clustering of galaxies, QSOs and intergalactic gas. This scale can be used as a standard ruler to map the expansion history of the Universe, a technique known as baryon acoustic oscillations (BAO). BAO offer a high-precision, low-systematics means of constraining our cosmological model. The statistical power of BAO measurements can be improved if the `smearing' of the acoustic feature by non-linear structure formation is undone in a process known as reconstruction. In this paper, we use low-order Lagrangian perturbation theory to study the ability of 21-cm experiments to perform reconstruction and how augmenting these surveys with galaxy redshift surveys at relatively low number densities can improve performance. We find that the critical number density which must be achieved in order to benefit 21-cm surveys is set by the linear theory power spectrum near its peak, and corresponds to densities achievable by upcoming surveys of emission line galaxies such as eBOSS and DESI. As part of this work, we analyse reconstruction within the framework of Lagrangian perturbation theory with local Lagrangian bias, redshift-space distortions, k-dependent noise and anisotropic filtering schemes.

  13. The foreground wedge and 21-cm BAO surveys

    NASA Astrophysics Data System (ADS)

    Seo, Hee-Jong; Hirata, Christopher M.

    2016-03-01

Redshifted H I 21 cm emission from unresolved low-redshift large-scale structure is a promising window for ground-based baryon acoustic oscillations (BAO) observations. A major challenge for this method is separating the cosmic signal from the foregrounds of Galactic and extra-Galactic origins that are many orders of magnitude stronger than the former. The smooth frequency spectrum expected for the foregrounds would nominally contaminate only very small k∥ modes; however, the chromatic response of the telescope antenna pattern at this wavelength to the foreground introduces non-smooth structure, pervasively contaminating the cosmic signal over the physical scales of our interest. Such contamination defines a wedge-shaped volume in Fourier space around the transverse modes that is inaccessible for the cosmic signal. In this paper, we test the effect of this contaminated wedge on future 21-cm BAO surveys using a Fisher information matrix calculation. We include the signal improvement due to the BAO reconstruction technique that has been used for galaxy surveys, test the effect of this wedge on BAO reconstruction as a function of signal-to-noise, and incorporate the results into the Fisher matrix calculation. We find that the wedge effect expected at z = 1-2 is very detrimental to the angular diameter distances: the errors on angular diameter distances increase by a factor of 3-4.4, while the errors on H(z) increase by a factor of 1.5-1.6. We conclude that calibration techniques that clean out the foreground `wedge' would be extremely valuable for constraining angular diameter distances from intensity-mapping 21-cm surveys.
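
The kind of Fisher information matrix calculation referred to above reduces, in its simplest form, to summing products of parameter derivatives over measurement bins; a schematic sketch with an assumed power-law spectrum and error bars (not the paper's actual survey model):

```python
import numpy as np

# Toy forecast: model P(k) = A * k**n with fiducial (A, n), assumed 5% errors
# per k-bin, and the Fisher matrix F_ij = sum_k (dP/dp_i)(dP/dp_j) / sigma_k^2.
k = np.linspace(0.05, 0.3, 20)          # wavenumber bins (h/Mpc), assumed
A_fid, n_fid = 1.0, -1.5                # fiducial amplitude and slope
sigma = 0.05 * A_fid * k**n_fid         # assumed 5% fractional errors

# Analytic derivatives of P(k) with respect to the parameters (A, n)
dP_dA = k**n_fid
dP_dn = A_fid * k**n_fid * np.log(k)

derivs = np.vstack([dP_dA, dP_dn])      # shape (2, nbins)
F = (derivs / sigma**2) @ derivs.T      # 2x2 Fisher matrix
cov = np.linalg.inv(F)                  # parameter covariance forecast
errors = np.sqrt(np.diag(cov))         # marginalized 1-sigma errors on (A, n)
print(errors)
```

Inflating sigma for wedge-contaminated modes, then repeating this calculation, is how the error degradation factors quoted above are obtained.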

  14. Bayesian Semi-blind Component Separation for Foreground Removal in Interferometric 21 cm Observations

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Bunn, Emory F.; Karakci, Ata; Korotkov, Andrei; Sutter, P. M.; Timbie, Peter T.; Tucker, Gregory S.; Wandelt, Benjamin D.

    2016-01-01

In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H I Expectation-Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions about instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
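
For contrast, the Principal Component Analysis baseline that HIEMICA is compared against can be sketched in a few lines of NumPy; the toy foreground and signal amplitudes here are our assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
nfreq, npix = 32, 500
freqs = np.linspace(0.0, 1.0, nfreq)

# Toy data: a bright, spectrally smooth foreground (one power-law template
# scaled independently per pixel) plus a faint uncorrelated "cosmological"
# signal, orders of magnitude below the foreground.
foreground = 100.0 * (1.0 + freqs)[:, None] ** -2.7 * rng.lognormal(size=(1, npix))
signal = rng.normal(0.0, 0.01, size=(nfreq, npix))
data = foreground + signal

# PCA cleaning: diagonalize the frequency-frequency covariance and project
# out the leading eigenmode(s), which absorb the smooth foreground.
cov = data @ data.T / npix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
modes = eigvecs[:, -1:]                      # keep the single leading mode
cleaned = data - modes @ (modes.T @ data)

print(f"data std {np.std(data):.2f} -> cleaned std {np.std(cleaned):.4f}")
```

The cost of this blind approach, which motivates the semi-blind alternative above, is that the removed modes also carry some cosmological signal.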

  15. Modelling the cosmic neutral hydrogen from DLAs and 21-cm observations

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Hamsa; Choudhury, T. Roy; Refregier, Alexandre

    2016-05-01

    We review the analytical prescriptions in the literature to model the 21-cm (emission line surveys/intensity mapping experiments) and Damped Lyman-Alpha (DLA) observations of neutral hydrogen (H I) in the post-reionization universe. While these two sets of prescriptions have typically been applied separately for the two probes, we attempt to connect these approaches to explore the consequences for the distribution and evolution of H I across redshifts. We find that a physically motivated, 21-cm-based prescription, extended to account for the DLA observables provides a good fit to the majority of the available data, but cannot accommodate the recent measurement of the clustering of DLAs at z ˜ 2.3. This highlights a tension between the DLA bias and the 21-cm measurements, unless there is a very significant change in the nature of H I-bearing systems across redshifts 0-3. We discuss the implications of our findings for the characteristic host halo masses of the DLAs and the power spectrum of 21-cm intensity fluctuations.

  16. Probing lepton asymmetry with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo

    2014-09-01

We investigate how accurately we can constrain the lepton number asymmetry ξ_ν = μ_ν/T_ν in the Universe by using future observations of 21 cm line fluctuations and the cosmic microwave background (CMB). We find that combinations of the 21 cm line and CMB observations can constrain the lepton asymmetry better than big-bang nucleosynthesis (BBN). Additionally, we discuss constraints on ξ_ν in the presence of some extra radiation, and show that the 21 cm line observations can substantially improve the constraints obtained from the CMB alone, and allow us to distinguish the effects of the lepton asymmetry from those of extra radiation.

  17. A Large-Scale Radio Polarization Survey of the Southern Sky at 21cm

    NASA Astrophysics Data System (ADS)

    Testori, J. C.; Reich, P.; Reich, W.

    2004-02-01

We have successfully reduced the polarization data from the recently published 21 cm continuum survey of the southern sky carried out with a 30-m antenna at Villa Elisa (Argentina). We describe the reduction and calibration methods of the survey. The result is a fully sampled survey, which covers declinations from -90 degrees to -10 degrees with a typical rms noise of 15 mK T_B. The map of polarized intensity shows large regions with smooth low-level emission, but also a number of enhanced high-latitude features. Most of these regions have no counterpart in total intensity and indicate Faraday active regions.

  18. Constraining dark matter through 21-cm observations

    NASA Astrophysics Data System (ADS)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to place constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδT_b = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and by future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.

  19. Baryon Acoustic Oscillation Intensity Mapping of Dark Energy

    NASA Astrophysics Data System (ADS)

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B.; McDonald, Patrick

    2008-03-01

The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called “dark energy.” To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.

  20. Baryon acoustic oscillation intensity mapping of dark energy.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B; McDonald, Patrick

    2008-03-01

The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called "dark energy." To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy. PMID:18352692

  1. The impact of foregrounds on redshift space distortion measurements with the highly redshifted 21-cm line

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan C.

    2015-02-01

The highly redshifted 21-cm line of neutral hydrogen has become recognized as a unique probe of cosmology from relatively low redshifts (z ~ 1) up through the Epoch of Reionization (EoR) (z ~ 8) and even beyond. To date, most work has focused on recovering the spherically averaged power spectrum of the 21-cm signal, since this approach maximizes the signal-to-noise in the initial measurement. However, like galaxy surveys, the 21-cm signal is affected by redshift space distortions, and is inherently anisotropic between the line of sight and transverse directions. A measurement of this anisotropy can yield unique cosmological information, potentially even isolating the matter power spectrum from astrophysical effects. However, in interferometric measurements, foregrounds also have an anisotropic footprint between the line of sight and transverse directions: the so-called foreground `wedge'. Although foreground subtraction techniques are actively being developed, a `foreground avoidance' approach of simply ignoring contaminated modes has arguably proven most successful to date. In this work, we analyse the effect of this foreground anisotropy in recovering the redshift space distortion signature in 21-cm measurements at both high and intermediate redshifts. We find the foreground wedge corrupts nearly all of the redshift space signal for even the largest proposed EoR experiments (the Hydrogen Epoch of Reionization Array and the Square Kilometre Array), making cosmological information unrecoverable without foreground subtraction. The situation is somewhat improved at lower redshifts, where the redshift-dependent mapping from observed coordinates to cosmological coordinates significantly reduces the size of the wedge. Using only foreground avoidance, we find that a large experiment like the Canadian Hydrogen Intensity Mapping Experiment can place non-trivial constraints on cosmological parameters.
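
The redshift dependence of the wedge invoked above can be made concrete: for foregrounds entering at the horizon limit, modes with k_par/k_perp below roughly D_c(z) H(z) / [c (1 + z)] are contaminated. A numerical sketch, with illustrative flat-ΛCDM parameters (our assumptions, not the paper's):

```python
import numpy as np

C_KM_S = 299_792.458            # speed of light (km/s)
H0, OMEGA_M = 67.7, 0.31        # assumed flat-LCDM parameters (km/s/Mpc)

def hubble(z):
    """H(z) in km/s/Mpc for flat LCDM."""
    return H0 * np.sqrt(OMEGA_M * (1.0 + z) ** 3 + 1.0 - OMEGA_M)

def comoving_distance(z, nstep=20_000):
    """Comoving distance in Mpc: trapezoidal integration of c/H(z')."""
    zz = np.linspace(0.0, z, nstep)
    f = C_KM_S / hubble(zz)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))

def wedge_slope(z):
    """Horizon-limit wedge boundary: max contaminated k_par / k_perp."""
    return comoving_distance(z) * hubble(z) / (C_KM_S * (1.0 + z))

print(f"wedge slope at z=8: {wedge_slope(8.0):.2f}, at z=1: {wedge_slope(1.0):.2f}")
```

Under these assumptions the slope at z = 8 is several times that at z = 1, which is why foreground avoidance is far less costly at intensity-mapping redshifts than during the EoR.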

  2. Detailed modelling of the 21-cm forest

    NASA Astrophysics Data System (ADS)

    Semelin, B.

    2016-01-01

The 21-cm forest is a promising probe of the Epoch of Reionization. The local state of the intergalactic medium (IGM) is encoded in the spectrum of a background source (radio-loud quasars or gamma-ray burst afterglows) by absorption at the local 21-cm wavelength, resulting in a continuous and fluctuating absorption level. Small-scale structures (filaments and minihaloes) in the IGM are responsible for the strongest absorption features. The absorption can also be modulated on large scales by inhomogeneous heating and Wouthuysen-Field coupling. We present the results from a simulation that attempts to preserve the cosmological environment while resolving some of the small-scale structures (a few kpc resolution in a 50 h^-1 Mpc box). The simulation couples the dynamics and the ionizing radiative transfer and includes X-ray and Lyman line radiative transfer for detailed physical modelling. As a result, we find that soft X-ray self-shielding, Lyα self-shielding and shock heating all have an impact on the predicted values of the 21-cm optical depth of moderately overdense structures like filaments. A correct treatment of the peculiar velocities is also critical. Modelling these processes seems necessary for accurate predictions and can be done only at high enough resolution. Based on our fiducial model, we estimate that LOFAR should be able to detect a few (strong) absorption features in a frequency range of a few tens of MHz for a 20 mJy source located at z = 10, while the SKA would extract a large fraction of the absorption information for the same source.

  3. Lensing of 21-cm fluctuations by primordial gravitational waves.

    PubMed

    Book, Laura; Kamionkowski, Marc; Schmidt, Fabian

    2012-05-25

Weak-gravitational-lensing distortions to the intensity pattern of 21-cm radiation from the dark ages can be decomposed geometrically into curl and curl-free components. Lensing by primordial gravitational waves induces a curl component, while the contribution from lensing by density fluctuations is strongly suppressed. Angular fluctuations in the 21-cm background extend to very small angular scales, and measurements at different frequencies probe different shells in redshift space. There is thus a huge trove of information with which to reconstruct the curl component of the lensing field, allowing tensor-to-scalar ratios conceivably as small as r ~ 10^-9, far smaller than those currently accessible, to be probed. PMID:23003237

  4. Lensing of 21-cm Fluctuations by Primordial Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Book, Laura; Kamionkowski, Marc; Schmidt, Fabian

    2012-05-01

Weak-gravitational-lensing distortions to the intensity pattern of 21-cm radiation from the dark ages can be decomposed geometrically into curl and curl-free components. Lensing by primordial gravitational waves induces a curl component, while the contribution from lensing by density fluctuations is strongly suppressed. Angular fluctuations in the 21-cm background extend to very small angular scales, and measurements at different frequencies probe different shells in redshift space. There is thus a huge trove of information with which to reconstruct the curl component of the lensing field, allowing tensor-to-scalar ratios conceivably as small as r ~ 10^-9, far smaller than those currently accessible, to be probed.

  5. Angular 21 cm power spectrum of a scaling distribution of cosmic string wakes

    SciTech Connect

    Hernández, Oscar F.; Wang, Yi; Brandenberger, Robert; Fong, José

    2011-08-01

    Cosmic string wakes lead to a large signal in 21 cm redshift maps at redshifts larger than that corresponding to reionization. Here, we compute the angular power spectrum of 21 cm radiation as predicted by a scaling distribution of cosmic strings whose wakes have undergone shock heating.

  6. Redundant Array Configurations for 21 cm Cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Parsons, Aaron R.

    2016-08-01

    Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.

  7. Detecting the 21 cm forest in the 21 cm power spectrum

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Mesinger, Andrei; Hewitt, Jacqueline

    2014-07-01

    We describe a new technique for constraining the radio-loud population of active galactic nuclei at high redshift by measuring the imprint of 21 cm spectral absorption features (the 21 cm forest) on the 21 cm power spectrum. Using semi-numerical simulations of the intergalactic medium and a semi-empirical source population, we show that the 21 cm forest dominates a distinctive region of k-space, k ≳ 0.5 Mpc⁻¹. By simulating foregrounds and noise for current and potential radio arrays, we find that a next-generation instrument with a collecting area of order ∼0.1 km² (such as the Hydrogen Epoch of Reionization Array) may separately constrain the X-ray heating history at large spatial scales and radio-loud active galactic nuclei of the model we study at small ones. We extrapolate our detectability predictions for a single radio-loud active galactic nucleus population to arbitrary source scenarios by analytically relating the 21 cm forest power spectrum to the optical depth power spectrum and an integral over the radio luminosity function.

  8. Identifying Ionized Regions in Noisy Redshifted 21 cm Data Sets

    NASA Astrophysics Data System (ADS)

    Malloy, Matthew; Lidz, Adam

    2013-04-01

    One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here, we show that it may nonetheless be possible to directly identify ionized regions in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signal during reionization with thermal noise at the level expected for a 500 antenna tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise. In a plausible fiducial model in which ~20% of the volume of the universe is neutral at z ~ 7, we find that a 500-tile MWA may directly identify as many as ~150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.
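
    The matched-filter step described above can be illustrated with a toy one-dimensional sightline. This is a minimal sketch, not the authors' pipeline: the map size, bubble location, emission level, and noise amplitude are all invented, and for white noise the matched filter reduces to a convolution with the (assumed top-hat) bubble template.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 21 cm sightline: an ionized bubble shows up as a trough of zero
# signal in an otherwise emitting medium, buried in thermal noise.
n_pix = 1024
signal = np.full(n_pix, 20.0)                   # mK, mean 21 cm emission
signal[400:460] = 0.0                           # hypothetical ionized region
data = signal + rng.normal(0.0, 20.0, n_pix)    # per-pixel noise ~ signal

# For white noise the matched filter is the template itself; convolving
# with a normalized top-hat of the assumed bubble width suppresses the
# noise by sqrt(width) while preserving the trough.
width = 60
template = np.ones(width) / width
filtered = np.convolve(data - data.mean(), template, mode="same")

# The most prominent minimum of the filtered sightline marks the bubble.
recovered_center = int(np.argmin(filtered))
```

    Even though the per-pixel noise is as large as the signal amplitude, the filtered minimum falls in or near the true bubble (centered at pixel 430), mirroring how the MWA could identify ionized regions it cannot image directly.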

  9. The 21 cm signature of cosmic string wakes

    SciTech Connect

    Brandenberger, Robert H.; Danos, Rebecca J.; Hernández, Oscar F.; Holder, Gilbert P. E-mail: rjdanos@physics.mcgill.ca E-mail: holder@physics.mcgill.ca

    2010-12-01

    We discuss the signature of a cosmic string wake in 21cm redshift surveys. Since 21cm surveys probe higher redshifts than optical large-scale structure surveys, the signatures of cosmic strings are more manifest in 21cm maps than they are in optical galaxy surveys. We find that, provided the tension of the cosmic string exceeds a critical value (which depends on both the redshift when the string wake is created and the redshift of observation), a cosmic string wake will generate an emission signal with a brightness temperature that approaches a limiting value which, at a redshift of z + 1 = 30, is close to 400 mK in the limit of large string tension. The signal will have a specific signature in position space: the excess 21cm radiation will be confined to a wedge-shaped region whose tip corresponds to the position of the string, whose planar dimensions are set by the planar dimensions of the string wake, and whose thickness (in the redshift direction) depends on the string tension. For wakes created at z_i + 1 = 10³, the critical value of the string tension μ at a redshift of z + 1 = 30 is Gμ = 6 × 10⁻⁷, and it decreases linearly with redshift (for wakes created at the time of equal matter and radiation, the critical value is a factor of two lower at the same redshift). For smaller tensions, cosmic strings lead to an observable absorption signal with the same wedge geometry.

  10. Overcoming the Challenges of 21cm Cosmology

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan

    The highly-redshifted 21cm line of neutral hydrogen is one of the most promising and unique probes of cosmology for the next decade and beyond. The past few years have seen a number of dedicated experiments targeting the 21cm signal from the Epoch of Reionization (EoR) begin operation, including the LOw-Frequency ARray (LOFAR), the Murchison Widefield Array (MWA), and the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). For these experiments to yield cosmological results, they require new calibration and analysis algorithms which will need to achieve unprecedented levels of separation between the 21cm signal and contaminating foreground emission. Although much effort has been spent developing these algorithms over the past decade, their success or failure will ultimately depend on their ability to overcome the complications inherent in real-world systems. The work in this dissertation is closely tied to the late-stage commissioning and early observations with PAPER. The first two chapters focus on developing calibration algorithms to overcome unique problems arising in the PAPER system. To test these algorithms, I rely not only on simulations but also on commissioning observations, ultimately tying the success of each algorithm to its performance on actual, celestial data. The first algorithm works to correct gain drifts in the PAPER system caused by the heating and cooling of various components (the amplifiers and above-ground co-axial cables, in particular). It is shown that a simple measurement of the ambient temperature can remove ∼10% gain fluctuations in the observed brightness of calibrator sources. This result is highly encouraging for the ability of PAPER to remove a potentially dominant systematic in its power spectrum and cataloging measurements without resorting to a complicated system overhaul.
The second new algorithm developed in this dissertation solves a major calibration challenge not

  11. Cosmology on the largest scales with intensity mapping

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Santos, Mário G.; Ferreira, Pedro G.; Maartens, Roy

    2014-12-01

    We review the state of the art of the study of cosmic structure on ultra-large scales as is forecast to be achievable by the upcoming generation of intensity mapping experiments. We focus on intensity maps of the redshifted 21 cm line radiation of neutral hydrogen (HI) in the post-reionisation Universe. Such measurements will be performed by future radio telescopes such as the Square Kilometre Array and will allow for surveying the largest volume of cosmic structure ever. After showing why it is valuable to scrutinise such extremely large cosmic scales - they will supply crucial information about the physical processes at play at early times - we concentrate on primordial non-Gaussianity as a working example. We illustrate that HI intensity mapping experiments can place tight bounds on different inflationary scenarios by constraining the non-Gaussianity parameter, fNL, with an error close to 1.

  12. The cross correlation between the 21-cm radiation and the CMB lensing field: a new cosmological signal

    SciTech Connect

    Vallinotto, Alberto

    2011-01-01

    The measurement of Baryon Acoustic Oscillations through the 21-cm intensity mapping technique at redshift z ≤ 4 has the potential to tightly constrain the evolution of dark energy. Crucial to this experimental effort is the determination of the biasing relation connecting fluctuations in the density of neutral hydrogen (HI) with those of the underlying dark matter field. In this work I show how the HI bias relevant to these 21-cm intensity mapping experiments can successfully be measured by cross-correlating their signal with the lensing signal obtained from CMB observations. In particular I show that, by combining CMB lensing maps from Planck with 21-cm field measurements carried out with an instrument similar to the Cylindrical Radio Telescope, this cross-correlation signal can be detected with a signal-to-noise (S/N) ratio of more than 5. Broken down into redshift bins of thickness Δz = 0.1, the signal constrains the large-scale neutral hydrogen bias and its evolution at the 4σ level.

  13. A fully sampled λ21 cm linear polarization survey of the southern sky

    NASA Astrophysics Data System (ADS)

    Testori, J. C.; Reich, P.; Reich, W.

    2008-06-01

    Context: Linear polarization of Galactic synchrotron emission provides valuable information on the Galactic magnetic field and on the properties of the Galactic magneto-ionic medium. Polarized high-latitude Galactic emission is the major foreground for polarization studies of the cosmic microwave background. Aims: We present a new southern-sky λ21 cm linear polarization survey, which complements the recent λ21 cm DRAO northern-sky polarization data. Methods: We used a 30-m telescope located at Villa Elisa/Argentina to map the southern sky simultaneously in continuum and linear polarization. Results: We present a fully sampled map of linearly polarized emission at λ21 cm of the southern sky for declinations between -10° and -90°. The angular resolution of the survey is 36' and its sensitivity is 15 mK (rms-noise) in Stokes U and Q. The survey's zero-level has been adjusted to that of the recent DRAO 1.4 GHz linear polarization survey by comparing data in the region of overlap between -10° and -27°. Conclusions: The polarized southern sky at 1.4 GHz shows large areas with smooth low-level emission almost uncorrelated with total intensities, indicating that Faraday rotation originating in the Galactic interstellar medium along the line of sight is significant at 1.4 GHz. The southern sky is much less contaminated by local foreground features than is the northern sky. Thus high-frequency observations of polarized cosmic microwave emission are expected to be less affected. The percentage polarization of the high-latitude emission is low, which seems to be an intrinsic property of Galactic emission.

  14. Discovery and First Observations of the 21-cm Hydrogen Line

    NASA Astrophysics Data System (ADS)

    Sullivan, W. T.

    2005-08-01

    Unlike most of the great discoveries in the first decade of radio astronomy after World War II, the 21 cm hydrogen line was first predicted theoretically and then purposely sought. The story is familiar of graduate student Henk van de Hulst's prediction in occupied Holland in 1944 and the nearly simultaneous detection of the line by teams at Harvard, Leiden, and Sydney in 1951. But in this paper I will describe various aspects that are little known: (1) In van de Hulst's original paper he not only worked out possible intensities for the 21 cm line, but also for radio hydrogen recombination lines (not detected until the early 1960s), (2) in that same paper he also used Jansky's and Reber's observations of a radio background to make cosmological conclusions, (3) there was no "race" between the Dutch, Americans, and Australians to detect the line, (4) a fire that destroyed the Dutch team's equipment in March 1950 ironically did not hinder their progress, but actually speeded it up (because it led to a change of their chief engineer, bringing in the talented Lex Muller). The scientific and technical styles of the three groups will also be discussed as results of the vastly differing environments in which they operated.

  15. MEASUREMENT OF 21 cm BRIGHTNESS FLUCTUATIONS AT z ≈ 0.8 IN CROSS-CORRELATION

    SciTech Connect

    Masui, K. W.; Switzer, E. R.; Calin, L.-M.; Pen, U.-L.; Shaw, J. R.; Banavar, N.; Bandura, K.; Blake, C.; Chang, T.-C.; Liao, Y.-W.; Chen, X.; Li, Y.-C.; Natarajan, A.; Peterson, J. B.; Voytek, T. C.

    2013-01-20

    In this Letter, 21 cm intensity maps acquired at the Green Bank Telescope are cross-correlated with large-scale structure traced by galaxies in the WiggleZ Dark Energy Survey. The data span the redshift range 0.6 < z < 1 over two fields totaling ≈41 deg² and 190 hr of radio integration time. The cross-correlation constrains Ω_HI b_HI r = [0.43 ± 0.07(stat.) ± 0.04(sys.)] × 10⁻³, where Ω_HI is the neutral hydrogen (H I) fraction, r is the galaxy-hydrogen correlation coefficient, and b_HI is the H I bias parameter. This is the most precise constraint on neutral hydrogen density fluctuations in a challenging redshift range. Our measurement improves the previous 21 cm cross-correlation at z ≈ 0.8 both in its precision and in the range of scales probed.
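
    The reason a cross-correlation can bound the signal from below even when the 21 cm maps are noise dominated is that uncorrelated noise averages out of the cross-product. A minimal sketch follows; all amplitudes and noise levels are illustrative, and real analyses work with power spectra rather than pixel products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Both maps trace one underlying density field delta; each adds its own
# independent noise. The amplitudes and noise levels are invented.
n = 100_000
delta = rng.normal(0.0, 1.0, n)
b_hi, b_g = 0.43e-3, 1.0
map_21cm = b_hi * delta + rng.normal(0.0, 5e-3, n)   # noise >> 21 cm signal
map_gal = b_g * delta + rng.normal(0.0, 1.0, n)

# <(b1*d + n1)(b2*d + n2)> = b1*b2: the noise terms drop out on average.
cross_amp = np.mean(map_21cm * map_gal)
```

    Here cross_amp recovers b_hi × b_g ≈ 0.43 × 10⁻³ despite the 21 cm map being noise dominated, whereas the auto-correlation np.mean(map_21cm**2) is dominated by the noise variance and therefore only yields an upper bound on the signal, just as in the abstract.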

  16. Prospects for clustering and lensing measurements with forthcoming intensity mapping and optical surveys

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.; Metcalf, R. B.

    2016-06-01

    We explore the potential of using intensity mapping surveys (MeerKAT, SKA) and optical galaxy surveys (DES, LSST) to detect H I clustering and weak gravitational lensing of 21 cm emission in auto- and cross-correlation. Our forecasts show that high-precision measurements of the clustering and lensing signals can be made in the near future using the intensity mapping technique. Such studies can be used to test the intensity mapping method, and to constrain parameters such as the H I density Ω_HI, the H I bias b_HI, and the galaxy-H I correlation coefficient r_HI-g.

  17. Cross-correlation of the cosmic 21-cm signal and Lyman α emitters during reionization

    NASA Astrophysics Data System (ADS)

    Sobacchi, Emanuele; Mesinger, Andrei; Greig, Bradley

    2016-07-01

    Interferometry of the cosmic 21-cm signal is set to revolutionize our understanding of the Epoch of Reionization (EoR), eventually providing 3D maps of the early Universe. Initial detections, however, will be low signal to noise, limited by systematics. To confirm a putative 21-cm detection, and check the accuracy of 21-cm data analysis pipelines, it would be very useful to cross-correlate against a genuine cosmological signal. The most promising cosmological signals are wide-field maps of Lyman α emitting galaxies (LAEs), expected from the Subaru Hyper-Suprime Cam ultradeep field (UDF). Here we present estimates of the correlation between LAE maps at z ∼ 7 and the 21-cm signal observed by both the Low Frequency Array (LOFAR) and the planned Square Kilometre Array Phase 1 (SKA1). We adopt a systematic approach, varying both: (i) the prescription for assigning LAEs to host haloes; and (ii) the large-scale structure of neutral and ionized regions (i.e. EoR morphology). We find that the LAE-21 cm cross-correlation is insensitive to (i), thus making it a robust probe of the EoR. A 1000 h observation with LOFAR would be sufficient to discriminate at ≳ 1σ a fully ionized Universe from one with a mean neutral fraction of x̄_HI ≈ 0.50, using the LAE-21 cm cross-correlation function on scales of R ≈ 3-10 Mpc. Unlike LOFAR, whose detection of the LAE-21 cm cross-correlation is limited by noise, SKA1 is mostly limited by ignorance of the EoR morphology. However, the planned 100 h wide-field SKA1-Low survey will be sufficient to discriminate an ionized Universe from one with x̄_HI = 0.25, even with maximally pessimistic assumptions.

  18. RESEARCH PAPER: Foreground removal of 21 cm fluctuation with multifrequency fitting

    NASA Astrophysics Data System (ADS)

    He, Li-Ping

    2009-06-01

    The 21 centimeter (21 cm) line emission from neutral hydrogen in the intergalactic medium (IGM) at high redshifts is strongly contaminated by foreground sources such as the diffuse Galactic synchrotron emission and free-free emission from the Galaxy, as well as emission from extragalactic radio sources, thus making its observation very complicated. However, the 21 cm signal can be recovered through its structure in frequency space, as the power spectrum of the foreground contamination is expected to be smooth over a wide band in frequency space while the 21 cm fluctuations vary significantly. We use a simple polynomial fitting to reconstruct the 21 cm signal around four frequencies 50, 100, 150 and 200 MHz with an especially small channel width of 20 kHz. Our calculations show that this multifrequency fitting approach can effectively recover the 21 cm signal in the frequency range 100-200 MHz. However, this method doesn't work well around 50 MHz because of the low intensity of the 21 cm signal at this frequency. We also show that the fluctuation of detector noise can be suppressed to a very low level by taking long integration times, which means that we can reach a sensitivity of ≈10 mK at 150 MHz with 40 antennas in 120 hours of observations.
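
    The polynomial-fitting approach described above is straightforward to sketch. In the toy below, the 150 MHz band and 20 kHz channels follow the abstract, but the foreground amplitude, its spectral index, and the 10 mK signal level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 512 channels of width 20 kHz around 150 MHz (numbers from the abstract;
# the foreground model and signal level are illustrative).
freq = 150.0 + 0.02 * np.arange(512)              # MHz, ~10 MHz band
foreground = 300.0 * (freq / 150.0) ** -2.7       # K, smooth synchrotron-like
signal_21cm = rng.normal(0.0, 0.010, freq.size)   # K, ~10 mK fluctuations
data = foreground + signal_21cm

# Fit a low-order polynomial across the band: the spectrally smooth
# foreground is absorbed by the fit, while the rapidly fluctuating
# 21 cm component survives in the residual.
x = (freq - freq.mean()) / (freq.max() - freq.min())   # for conditioning
coeffs = np.polyfit(x, data, deg=3)
residual = data - np.polyval(coeffs, x)
```

    The cubic absorbs the smooth foreground (hundreds of K) while leaving the ∼10 mK fluctuations in the residual; at 50 MHz the same procedure fails mainly because, as the abstract notes, the 21 cm signal itself is too weak there.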

  19. Cross-correlation of 21 cm and soft X-ray backgrounds during the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Liang, Jun-Min; Mao, Xiao-Chun; Qin, Bo

    2016-08-01

    The cross-correlation between the high-redshift 21 cm background and the Soft X-ray Background (SXB) of the Universe may provide an additional probe of the Epoch of Reionization. Here we use semi-numerical simulations to create 21 cm and soft X-ray intensity maps and construct their cross power spectra. Our results indicate that the cross power spectra are sensitive to the thermal and ionizing states of the intergalactic medium (IGM). The 21 cm background correlates positively to the SXB on large scales during the early stages of the reionization. However as the reionization develops, these two backgrounds turn out to be anti-correlated with each other when more than ∼ 15% of the IGM is ionized in a warm reionization scenario. The anti-correlated power reaches its maximum when the neutral fraction declines to 0.2–0.5. Hence, the trough in the cross power spectrum might be a useful tool for tracing the growth of HII regions during the middle and late stages of the reionization. We estimate the detectability of the cross power spectrum based on the abilities of the Square Kilometre Array and the Wide Field X-ray Telescope (WFXT), and find that to detect the cross power spectrum, the pixel noise of X-ray images has to be at least 4 orders of magnitude lower than that of the WFXT deep survey.

  20. A comparison of neutral hydrogen 21 cm observations with UV and optical absorption-line measurements

    NASA Technical Reports Server (NTRS)

    Giovanelli, R.; York, D. G.; Shull, J. M.; Haynes, M. P.

    1978-01-01

    Several absorption components detected in visible or UV lines have been identified with emission features in new high-resolution, high signal-to-noise 21 cm observations. Stars for which direct overlap is obtained are HD 28497, lambda Ori, mu Col, HD 50896, rho Leo, HD 93521, and HD 219881. With the use of the inferred H I column densities from 21 cm profiles, rather than the integrated column densities obtained from Lyα, more reliable densities can be derived from the existence of molecular hydrogen. Hence the cloud thicknesses are better determined, and 21 cm emission maps near these stars can be used to obtain dimensions on the plane of the sky. It is now feasible to derive detailed geometries for isolated clumps of gas which produce visual absorption features.

  1. Cosmology from a SKA HI intensity mapping survey

    NASA Astrophysics Data System (ADS)

    Santos, M.; Bull, P.; Alonso, D.; Camera, S.; Ferreira, P.; Bernardi, G.; Maartens, R.; Viel, M.; Villaescusa-Navarro, F.; Abdalla, F. B.; Jarvis, M.; Metcalf, R. B.; Pourtsidou, A.; Wolz, L.

    2015-04-01

    HI intensity mapping (IM) is a novel technique capable of mapping the large-scale structure of the Universe in three dimensions and delivering exquisite constraints on cosmology, by using HI as a biased tracer of the dark matter density field. This is achieved by measuring the intensity of the redshifted 21cm line over the sky in a range of redshifts without the requirement to resolve individual galaxies. In this chapter, we investigate the potential of SKA1 to deliver HI intensity maps over a broad range of frequencies and a substantial fraction of the sky. By pinning down the baryon acoustic oscillation and redshift space distortion features in the matter power spectrum -- thus determining the expansion and growth history of the Universe -- these surveys can provide powerful tests of dark energy models and modifications to General Relativity. They can also be used to probe physics on extremely large scales, where precise measurements of spatial curvature and primordial non-Gaussianity can be used to test inflation; on small scales, by measuring the sum of neutrino masses; and at high redshifts where non-standard evolution models can be probed. We discuss the impact of foregrounds as well as various instrumental and survey design parameters on the achievable constraints. In particular we analyse the feasibility of using the SKA1 autocorrelations to probe the large-scale signal.

  2. Differentiating CDM and baryon isocurvature models with 21 cm fluctuations

    SciTech Connect

    Kawasaki, Masahiro; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: sekiguti@icrr.u-tokyo.ac.jp

    2011-10-01

    We discuss how one can discriminate models with cold dark matter (CDM) and baryon isocurvature fluctuations. Although current observations such as the cosmic microwave background (CMB) can severely constrain the fraction of such isocurvature modes in the total density fluctuations, the CMB cannot differentiate CDM and baryon modes by the shapes of their power spectra. However, the evolution of CDM and baryon density fluctuations differs between the two models, so it should be possible to discriminate these isocurvature modes by extracting information on the fluctuations of the CDM/baryon component itself. We argue that observations of 21 cm fluctuations can in principle differentiate these modes and demonstrate to what extent future 21 cm surveys can distinguish them. We show that, when the isocurvature mode has a large blue-tilted initial spectrum, 21 cm surveys can clearly probe the difference.

  3. Precise measurements of primordial power spectrum with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: oyamayo@post.kek.jp E-mail: tomot@cc.saga-u.ac.jp

    2013-10-01

    We discuss the issue of how precisely we can measure the primordial power spectrum by using future observations of 21 cm fluctuations and the cosmic microwave background (CMB). For this purpose, we investigate projected constraints on the quantities characterizing the primordial power spectrum: the spectral index n_s, its running α_s, and even its higher-order running β_s. We show that future 21 cm observations in combination with the CMB would accurately measure the above-mentioned observables of the primordial power spectrum. We also discuss the implications for some explicit inflationary models.

  4. Cosmological constraints from 21cm surveys after reionization

    SciTech Connect

    Visbal, Eli; Loeb, Abraham; Wyithe, Stuart E-mail: aloeb@cfa.harvard.edu

    2009-10-01

    21cm emission from residual neutral hydrogen after the epoch of reionization can be used to trace the cosmological power spectrum of density fluctuations. Using a Fisher matrix formulation, we provide a detailed forecast of the constraints on cosmological parameters that are achievable with this probe. We consider two designs: a scaled-up version of the MWA observatory as well as a Fast Fourier Transform Telescope. We find that 21cm observations dedicated to post-reionization redshifts may yield significantly better constraints than next-generation Cosmic Microwave Background (CMB) experiments. We find the constraints on Ω_Λ, Ω_m h², and Ω_ν h² to be the strongest, each improved by at least an order of magnitude over the Planck CMB satellite alone for both designs. Our results do not depend as strongly on uncertainties in the astrophysics associated with the ionization of hydrogen as do similar 21cm surveys during the epoch of reionization. However, we find that modulation of the 21cm power spectrum by the ionizing background could potentially degrade constraints on the spectral index of the primordial power spectrum and its running by more than an order of magnitude. Our results also depend strongly on the maximum wavenumber of the power spectrum which can be used due to non-linearities.
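
    The Fisher matrix machinery behind forecasts like this one is compact enough to sketch. The toy below forecasts errors on the amplitude and tilt of a band-power model P(k) = A (k/k0)^n with Gaussian band errors; the band centers, the 5% errors, and the fiducial values are all invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical band powers and fiducial model (placeholder numbers).
k = np.logspace(-2, 0, 20)                 # band centers, Mpc^-1
k0, A_fid, n_fid = 0.1, 1.0, -1.5
P_fid = A_fid * (k / k0) ** n_fid
sigma = 0.05 * P_fid                       # assumed 5% error per band

# Fisher matrix F_ij = sum_k (dP/dtheta_i)(dP/dtheta_j) / sigma_k^2
# for parameters theta = (A, n), using analytic derivatives.
dP_dA = P_fid / A_fid
dP_dn = P_fid * np.log(k / k0)
derivs = np.vstack([dP_dA, dP_dn])
fisher = derivs @ np.diag(1.0 / sigma**2) @ derivs.T

cov = np.linalg.inv(fisher)                # forecast parameter covariance
sigma_A, sigma_n = np.sqrt(np.diag(cov))   # marginalized 1-sigma errors
```

    The marginalized error sqrt(cov[0,0]) is never smaller than the unmarginalized 1/sqrt(F[0,0]); the difference is set by the parameter degeneracy encoded in the off-diagonal Fisher term.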

  5. The 21 cm signature of a cosmic string loop

    SciTech Connect

    Pagano, Michael; Brandenberger, Robert E-mail: rhb@physics.mcgill.ca

    2012-05-01

    Cosmic string loops lead to nonlinear baryon overdensities at early times, even before the time which in the standard ΛCDM model corresponds to the time of reionization. These overdense structures lead to signals in 21 cm redshift surveys at large redshifts. In this paper, we calculate the amplitude and shape of the string loop-induced 21 cm brightness temperature. We find that a string loop leads to a roughly elliptical region in redshift space with extra 21 cm emission. The excess brightness temperature for strings with a tension close to the current upper bound can be as high as 1 deg K for string loops generated at early cosmological times (times comparable to the time of equal matter and radiation) and observed at a redshift of z + 1 = 30. The angular extent of these predicted 'bright spots' is x′. These signals should be detectable in upcoming high-redshift 21 cm surveys. We also discuss the application of our results to global monopoles and primordial black holes.

  6. The rise of the first stars: Supersonic streaming, radiative feedback, and 21-cm cosmology

    NASA Astrophysics Data System (ADS)

    Barkana, Rennan

    2016-07-01

    between the dark matter and gas. This effect enhanced large-scale clustering and, if early 21-cm fluctuations were dominated by small galactic halos, it produced a prominent pattern on 100 Mpc scales. Work in this field, focused on understanding the whole era of reionization and cosmic dawn with analytical models and numerical simulations, is likely to grow in intensity and importance, as the theoretical predictions are finally expected to confront 21-cm observations in the coming years.

  7. Studying the first X-ray sources in our Universe with the redshifted 21-cm line

    NASA Astrophysics Data System (ADS)

    Mesinger, Andrei

    2016-04-01

    The cosmological 21-cm line is sensitive to the thermal and ionization state of the intergalactic medium (IGM). As it is a line transition, a given observed frequency can be associated with a cosmological redshift. Thus upcoming next-generation radio interferometers, such as HERA and SKA, will map out the 3D structure of the early Universe. This 21-cm signal encodes a wealth of information about the first galaxies and IGM structures. In particular, X-ray sources in the first galaxies are thought to have heated the IGM to temperatures above the CMB temperature, well before cosmic reionization. The spatial structure of the 21-cm signal during this epoch of X-ray heating encodes invaluable information about the X-ray luminosity and spectral energy distributions of the first galaxies. I will review this exciting new frontier, highlighting how the 21-cm line will provide us with a unique opportunity to study high-energy processes inside the first galaxies.

  8. INTERPRETING THE GLOBAL 21 cm SIGNAL FROM HIGH REDSHIFTS. I. MODEL-INDEPENDENT CONSTRAINTS

    SciTech Connect

    Mirocha, Jordan; Harker, Geraint J. A.; Burns, Jack O.

    2013-11-10

    The sky-averaged (global) 21 cm signal is a powerful probe of the intergalactic medium (IGM) prior to the completion of reionization. However, so far it has been unclear whether it will provide more than crude estimates of when the universe's first stars and black holes formed, even in the best case scenario in which the signal is accurately extracted from the foregrounds. In contrast to previous work, which has focused on predicting the 21 cm signatures of the first luminous objects, we investigate an arbitrary realization of the signal and attempt to translate its features to the physical properties of the IGM. Within a simplified global framework, the 21 cm signal yields quantitative constraints on the Lyα background intensity, net heat deposition, ionized fraction, and their time derivatives without invoking models for the astrophysical sources themselves. The 21 cm absorption signal is most easily interpreted, setting strong limits on the heating rate density of the universe with a measurement of its redshift alone, independent of the ionization history or details of the Lyα background evolution. In a companion paper, we extend these results, focusing on the confidence with which one can infer source emissivities from IGM properties.

  9. Precision measurement of cosmic magnification from 21 cm emitting galaxies

    SciTech Connect

    Zhang, Pengjie; Pen, Ue-Li (Canadian Inst. Theor. Astrophys.)

    2005-04-01

    We show how precision lensing measurements can be obtained through the lensing magnification effect in high-redshift 21cm emission from galaxies. Normally, cosmic magnification measurements have been seriously complicated by galaxy clustering. With precise redshifts obtained from the 21cm emission line wavelength, one can correlate galaxies at different source planes, or exclude close pairs, to eliminate such contaminations. We provide forecasts for future surveys, specifically the SKA and CLAR. SKA can achieve percent precision on the dark matter power spectrum and the galaxy-dark matter cross-correlation power spectrum, while CLAR can measure an accurate cross-correlation power spectrum. The neutral hydrogen fraction was most likely significantly higher at high redshifts, which significantly increases the number of observed galaxies, so that CLAR can also measure the dark matter lensing power spectrum. SKA can also allow precise measurement of the lensing bispectrum.

  10. The wedge bias in reionization 21-cm power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Jensen, Hannes; Majumdar, Suman; Mellema, Garrelt; Lidz, Adam; Iliev, Ilian T.; Dixon, Keri L.

    2016-02-01

    A proposed method for dealing with foreground emission in upcoming 21-cm observations from the epoch of reionization is to limit observations to an uncontaminated window in Fourier space. Foreground emission can be avoided in this way, since it is limited to a wedge-shaped region in k∥, k⊥ space. However, the power spectrum is anisotropic owing to redshift-space distortions from peculiar velocities. Consequently, the 21-cm power spectrum measured in the foreground avoidance window - which samples only a limited range of angles close to the line-of-sight direction - differs from the full redshift-space spherically averaged power spectrum which requires an average over all angles. In this paper, we calculate the magnitude of this `wedge bias' for the first time. We find that the bias amplifies the difference between the real-space and redshift-space power spectra. The bias is strongest at high redshifts, where measurements using foreground avoidance will overestimate the redshift-space power spectrum by around 100 per cent, possibly obscuring the distinctive rise and fall signature that is anticipated for the spherically averaged 21-cm power spectrum. In the later stages of reionization, the bias becomes negative, and smaller in magnitude (≲20 per cent).
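
    The size of this bias is easy to estimate for the linear (Kaiser) anisotropy P(k, μ) = (1 + βμ²)² P_r(k), where μ = k∥/k: averaging only over the foreground-avoidance window μ ≥ μ_min overweights line-of-sight modes, which redshift-space distortions enhance. The β and μ_min values below are illustrative, not taken from the paper.

```python
import numpy as np

# Angular kernel of the Kaiser redshift-space power spectrum,
# P(k, mu) = (1 + beta * mu^2)^2 * P_real(k), with mu = k_par / k.
beta = 1.0                                # illustrative distortion parameter
mu = np.linspace(0.0, 1.0, 100_001)
kernel = (1.0 + beta * mu**2) ** 2

full_avg = kernel.mean()                  # spherical average over all angles
mu_min = 0.5                              # hypothetical wedge edge
wedge_avg = kernel[mu >= mu_min].mean()   # average over the clean window only

wedge_bias = wedge_avg / full_avg - 1.0   # fractional overestimate
```

    With β = 1 and a wedge edge at μ_min = 0.5, the avoidance-window average overestimates the spherical average by roughly 37%: the same sense as the high-redshift overestimate reported above, though the magnitude depends on β and the wedge geometry.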

  11. The 21-cm emission from the reionization epoch: extended and point source foregrounds

    NASA Astrophysics Data System (ADS)

    Di Matteo, Tiziana; Ciardi, Benedetta; Miniati, Francesco

    2004-12-01

    Fluctuations in the redshifted 21-cm emission from neutral hydrogen probe the epoch of reionization. We examine the observability of this signal and the impact of extragalactic foreground radio sources (both extended and point-like). We use cosmological simulations to predict the angular correlation functions of intensity fluctuations due to unresolved radio galaxies, cluster radio haloes and relics and free-free emission from the interstellar and intergalactic medium at the frequencies and angular scales relevant for the proposed 21-cm tomography. In accord with previous findings, the brightness temperature fluctuations due to foreground sources are much larger than those from the primary 21-cm signal at all scales. In particular, diffuse cluster radio emission, which has been previously neglected, provides the most significant foreground contamination. However, we show that the contribution to the angular fluctuations at scales θ>~ 1 arcmin is dominated by the spatial clustering of bright foreground sources. This excess can be removed if sources above flux levels S>~ 0.1 mJy (out to redshifts of z~ 1 and z~ 2 for diffuse and point sources, respectively) are detected and removed. Hence, efficient source removal may be sufficient to allow the detection of angular fluctuations in the 21-cm emission free of extragalactic foregrounds at θ>~ 1 arcmin. In addition, the removal of sources above S= 0.1 mJy also reduces the foreground fluctuations to roughly the same level as the 21-cm signal at scales θ<~ 1 arcmin. This should allow the subtraction of the foreground components in frequency space, making it possible to observe in detail the topology and history of reionization.

  12. Probing patchy reionization through τ-21 cm correlation statistics

    SciTech Connect

    Meerburg, P. Daniel; Spergel, David N.; Dvorkin, Cora

    2013-12-20

    We consider the cross-correlation between free electrons and neutral hydrogen during the epoch of reionization (EoR). The free electrons are traced by the optical depth to reionization τ, while the neutral hydrogen can be observed through 21 cm photon emission. As expected, this correlation is sensitive to the detailed physics of reionization. Foremost, if reionization occurs through the merger of relatively large halos hosting an ionizing source, the free electrons and neutral hydrogen are anticorrelated for most of the reionization history. A positive contribution to the correlation can occur when the halos that can form an ionizing source are small. A measurement of this sign change in the cross-correlation could help disentangle the bias and the ionization history. We estimate the signal-to-noise ratio of the cross-correlation using the estimator for inhomogeneous reionization τ̂_ℓm proposed by Dvorkin and Smith. We find that with upcoming radio interferometers and cosmic microwave background (CMB) experiments, the cross-correlation is measurable up to multipoles ℓ ∼ 1000. We also derive parameter constraints and conclude that, despite the foregrounds, the cross-correlation provides a complementary measurement of the EoR parameters to the 21 cm and CMB polarization autocorrelations expected to be observed in the coming decade.

  13. Measuring the Cosmological 21 cm Monopole with an Interferometer

    NASA Astrophysics Data System (ADS)

    Presley, Morgan E.; Liu, Adrian; Parsons, Aaron R.

    2015-08-01

    A measurement of the cosmological 21 cm signal remains a promising but as-of-yet unattained ambition of radio astronomy. A positive detection would provide direct observations of key unexplored epochs of our cosmic history, including the cosmic dark ages and reionization. In this paper, we concentrate on measurements of the spatial monopole of the 21 cm brightness temperature as a function of redshift (the “global signal”). Most global experiments to date have been single-element experiments. Here, we show how an interferometer can be designed to be sensitive to the monopole mode of the sky, thus providing an alternate approach to accessing the global signature. We provide simple rules of thumb for designing a global signal interferometer and use numerical simulations to show that a modest array of tightly packed antenna elements with moderately sized primary beams (FWHM of ∼40°) can compete with typical single-element experiments in their ability to constrain phenomenological parameters pertaining to reionization and the pre-reionization era. We also provide a general data analysis framework for extracting the global signal from interferometric measurements (with analysis of single-element experiments arising as a special case) and discuss trade-offs with various data analysis choices. Given that interferometric measurements are able to avoid a number of systematics inherent in single-element experiments, our results suggest that interferometry ought to be explored as a complementary way to probe the global signal.
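The design intuition can be sketched numerically (a flat-sky toy model; the ∼40° FWHM follows the abstract, while the Gaussian beam shape and the baseline lengths are assumptions): a baseline's response to a uniform sky is a fringe-weighted beam integral, which stays large for tightly packed elements and fringes away for long baselines:

```python
import numpy as np

# Sketch: an interferometer baseline's response to the sky monopole (a uniform
# sky of unit brightness) as a fringe-weighted beam integral, in the flat-sky
# approximation. Beam FWHM follows the abstract; baseline lengths are assumed.
def monopole_response(b_wavelengths, fwhm_deg=40.0, n=401):
    sigma = np.radians(fwhm_deg) / 2.3548      # Gaussian beam width from FWHM
    l = np.linspace(-0.5, 0.5, n)              # direction cosines
    L, M = np.meshgrid(l, l)
    beam = np.exp(-(L**2 + M**2) / (2.0 * sigma**2))
    fringe = np.exp(-2j * np.pi * b_wavelengths * L)
    dA = (l[1] - l[0]) ** 2
    return abs((beam * fringe).sum() * dA)

short = monopole_response(0.5)    # tightly packed pair: strong monopole pickup
long_ = monopole_response(10.0)   # long baseline: the monopole fringes away
```

The short baseline retains most of the beam-integrated monopole, while the long baseline suppresses it by orders of magnitude, which is why the paper argues for tightly packed elements.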

  14. The difference PDF of 21-cm fluctuations: a powerful statistical tool for probing cosmic reionization

    NASA Astrophysics Data System (ADS)

    Barkana, Rennan; Loeb, Abraham

    2008-03-01

    A new generation of radio telescopes are currently being built with the goal of tracing the cosmic distribution of atomic hydrogen at redshifts 6-15 through its 21-cm line. The observations will probe the large-scale brightness fluctuations sourced by ionization fluctuations during cosmic reionization. Since detailed maps will be difficult to extract due to noise and foreground emission, efforts have focused on a statistical detection of the 21-cm fluctuations. During cosmic reionization, these fluctuations are highly non-Gaussian and thus more information can be extracted than just the one-dimensional function that is usually considered, i.e. the correlation function. We calculate a two-dimensional function that if measured observationally would allow a more thorough investigation of the properties of the underlying ionizing sources. This function is the probability distribution function (PDF) of the difference in the 21-cm brightness temperature between two points, as a function of the separation between the points. While the standard correlation function is determined by a complicated mixture of contributions from density and ionization fluctuations, we show that the difference PDF holds the key to separately measuring the statistical properties of the ionized regions.
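A minimal sketch of the statistic on a mock field (Gaussian fluctuations with randomly ionized cells standing in for a reionization simulation; the grid size, temperatures, and separation are assumed values):

```python
import numpy as np

# Sketch of the difference-PDF statistic: the distribution of T(x1) - T(x2)
# over all point pairs at a fixed separation. The field here is a toy stand-in
# for a simulated reionization box: Gaussian fluctuations plus ionized (T = 0)
# cells; all numerical choices below are assumptions for illustration.
rng = np.random.default_rng(0)
box = rng.normal(25.0, 5.0, size=(64, 64, 64))   # toy 21-cm brightness, mK
box[rng.random(box.shape) < 0.3] = 0.0           # toy ionized regions

sep = 5                                           # pair separation, grid cells
diff = (box - np.roll(box, sep, axis=0)).ravel()  # all pairs at this separation

pdf, edges = np.histogram(diff, bins=61, range=(-60.0, 60.0), density=True)
# The ionized cells produce non-Gaussian side lobes near +/- 25 mK; their
# dependence on `sep` is the extra information the difference PDF carries
# beyond the correlation function.
```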

  15. A Per-baseline, Delay-spectrum Technique for Accessing the 21 cm Cosmic Reionization Signature

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Pober, Jonathan C.; Aguirre, James E.; Carilli, Christopher L.; Jacobs, Daniel C.; Moore, David F.

    2012-09-01

    A critical challenge in measuring the power spectrum of 21 cm emission from cosmic reionization is compensating for the frequency dependence of an interferometer's sampling pattern, which can cause smooth-spectrum foregrounds to appear unsmooth and degrade the separation between foregrounds and the target signal. In this paper, we present an approach to foreground removal that explicitly accounts for this frequency dependence. We apply the delay transformation introduced in Parsons & Backer to each baseline of an interferometer to concentrate smooth-spectrum foregrounds within the bounds of the maximum geometric delays physically realizable on that baseline. By focusing on delay modes that correspond to image-domain regions beyond the horizon, we show that it is possible to avoid the bulk of smooth-spectrum foregrounds. We map the point-spread function of delay modes to k-space, showing that delay modes that are uncorrupted by foregrounds also represent samples of the three-dimensional power spectrum, and can be used to constrain cosmic reionization. Because it uses only spectral smoothness to differentiate foregrounds from the targeted 21 cm signature, this per-baseline analysis approach relies on spectrally and spatially smooth instrumental responses for foreground removal. For sufficient levels of instrumental smoothness relative to the brightness of interfering foregrounds, this technique substantially reduces the level of calibration previously thought necessary to detect 21 cm reionization. As a result, this approach places fewer constraints on antenna configuration within an array, and in particular, facilitates the adoption of configurations that are optimized for power-spectrum sensitivity. Under these assumptions, we demonstrate the potential for the Precision Array for Probing the Epoch of Reionization (PAPER) to detect 21 cm reionization at an amplitude of 10 mK2 near k ~ 0.2 h Mpc-1 with 132 dipoles in 7 months of observing.
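The delay transform itself is a Fourier transform of a single baseline's visibility spectrum; the sketch below (band, baseline length, and windowing are assumptions, not PAPER specifics) shows a flat-spectrum foreground at the horizon concentrating below the geometric delay b/c:

```python
import numpy as np

# Sketch of the per-baseline delay transform: Fourier transforming one
# baseline's visibility spectrum concentrates smooth-spectrum foregrounds at
# delays below the baseline's maximum geometric delay b/c. The band, baseline
# length, and source below are illustrative assumptions.
c = 299792458.0
b = 30.0                                    # baseline length, meters
freqs = np.linspace(120e6, 180e6, 203)      # observing band, Hz

tau_src = b / c                             # worst case: source at the horizon
vis = 10.0 * np.exp(-2j * np.pi * freqs * tau_src)   # flat-spectrum foreground

window = np.blackman(freqs.size)            # taper to control spectral leakage
delay_spec = np.fft.fftshift(np.fft.fft(vis * window))
delays = np.fft.fftshift(np.fft.fftfreq(freqs.size, freqs[1] - freqs[0]))

# Fraction of foreground power inside the horizon delay plus a mainlobe buffer;
# delay modes beyond this boundary stay clean and can be used for the 21 cm signal.
inside = np.abs(delays) <= b / c + 60e-9
frac = (np.abs(delay_spec[inside]) ** 2).sum() / (np.abs(delay_spec) ** 2).sum()
```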

  17. Global 21 cm signal experiments: A designer's guide

    NASA Astrophysics Data System (ADS)

    Liu, Adrian; Pritchard, Jonathan R.; Tegmark, Max; Loeb, Abraham

    2013-02-01

    The global (i.e., spatially averaged) spectrum of the redshifted 21 cm line has generated much experimental interest lately, thanks to its potential to be a direct probe of the epoch of reionization and the dark ages, during which the first luminous objects formed. Since the cosmological signal in question has a purely spectral signature, most experiments that have been built, designed, or proposed have essentially no angular sensitivity. This can be problematic because with only spectral information, the expected global 21 cm signal can be difficult to distinguish from foreground contaminants such as galactic synchrotron radiation, since both are spectrally smooth and the latter is many orders of magnitude brighter. In this paper, we establish a systematic mathematical framework for global signal data analysis. The framework removes foregrounds in an optimal manner, complementing spectra with angular information. We use our formalism to explore various experimental design trade-offs, and find that (1) with spectral-only methods, it is mathematically impossible to mitigate errors that arise from uncertainties in one’s foreground model; (2) foreground contamination can be significantly reduced for experiments with fine angular resolution; (3) most of the statistical significance in a positive detection during the dark ages comes from a characteristic high-redshift trough in the 21 cm brightness temperature; (4) measurement errors decrease more rapidly with integration time for instruments with fine angular resolution; and (5) better foreground models can help reduce errors, but once a modeling accuracy of a few percent is reached, significant improvements in accuracy will be required to further improve the measurements. We show that if observations and data analysis algorithms are optimized based on these findings, an instrument with a 5° wide beam can achieve highly significant detections (greater than 5σ) of even extended (high Δz) reionization scenarios

  18. Gravitational-wave detection using redshifted 21-cm observations

    SciTech Connect

    Bharadwaj, Somnath; Guha Sarkar, Tapomoy

    2009-06-15

    A gravitational-wave traversing the line of sight to a distant source produces a frequency shift which contributes to redshift space distortion. As a consequence, gravitational waves are imprinted as density fluctuations in redshift space. The gravitational-wave contribution to the redshift space power spectrum has a different {mu} dependence as compared to the dominant contribution from peculiar velocities. This, in principle, allows the two signals to be separated. The prospect of a detection is most favorable at the highest observable redshift z. Observations of redshifted 21-cm radiation from neutral hydrogen hold the possibility of probing very high redshifts. We consider the possibility of detecting primordial gravitational waves using the redshift space neutral hydrogen power spectrum. However, we find that the gravitational-wave signal, though present, will not be detectable on superhorizon scales because of cosmic variance and on subhorizon scales where the signal is highly suppressed.

  19. 21 cm Power Spectrum Upper Limits from PAPER-64

    NASA Astrophysics Data System (ADS)

    Shiraz Ali, Zaki; Parsons, Aaron; Pober, Jonathan; Team PAPER

    2016-01-01

    We present power spectrum results from the 64-antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER-64). We find an upper limit of Δ² ≤ (22.4 mK)² over the range 0.15 < k < 0.5 h Mpc⁻¹ at z = 8.4, among the most sensitive 21 cm power spectrum constraints to date. In addition, we use these results to place lower limits on the spin temperature at a redshift of 8.4. We find that the spin temperature is at least 10 K for a neutral fraction between 15% and 80%. This further suggests that there was heating in the early universe from sources such as X-ray binaries.

  20. Developing an Interferometer to Measure the Global 21cm Monopole

    NASA Astrophysics Data System (ADS)

    Domagalski, Rachel; Patra, Nipanjana; Day, Cherie; Parsons, Aaron

    2016-01-01

    When radio interferometers observe over very small fields of view, they cannot measure the monopole mode of the sky. However, when the field of view extends to a large region of the sky, it becomes possible to measure the monopole with an interferometer. We are currently developing such an interferometer at UC Berkeley's Radio Astronomy Lab (RAL) with the goal of measuring the early stages of the Epoch of Reionization by probing the sky for the global 21cm signal between 50 and 100 MHz, and we have deployed a preliminary version of this experiment in Colorado. We present the current status of the interferometer, the future development plans, and some measurements taken in July of 2015. These measurements demonstrate the performance of the analog signal chain of the interferometer as well as the RFI environment of the deployment site in Colorado.

  1. Cosmic (Super)String Constraints from 21 cm Radiation

    SciTech Connect

    Khatri, Rishi; Wandelt, Benjamin D.

    2008-03-07

    We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of ~1 km² will not provide any useful constraints, future experiments with a collecting area of 10⁴-10⁶ km² covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10⁻¹⁰-10⁻¹² (superstring/phase transition mass scale >10¹³ GeV).

  3. Forecasted 21 cm constraints on compensated isocurvature perturbations

    SciTech Connect

    Gordon, Christopher; Pritchard, Jonathan R.

    2009-09-15

    A 'compensated' isocurvature perturbation consists of an overdensity (or underdensity) in the cold dark matter which is completely cancelled out by a corresponding underdensity (or overdensity) in the baryons. Such a configuration may be generated by a curvaton model of inflation if the cold dark matter is created before curvaton decay and the baryon number is created by the curvaton decay (or vice versa). Compensated isocurvature perturbations, at the level producible by the curvaton model, have no observable effect on cosmic microwave background anisotropies or on galaxy surveys. They can be detected through their effect on the distribution of neutral hydrogen between redshifts 30-300 using 21 cm absorption observations. However, to obtain a good signal-to-noise ratio, very large observing arrays are needed. We estimate that a fast Fourier transform telescope would need a total collecting area of about 20 square kilometers to detect a curvaton-generated compensated isocurvature perturbation at more than 5σ significance.

  4. An H I 21-cm line survey of evolved stars

    NASA Astrophysics Data System (ADS)

    Gérard, E.; Le Bertre, T.; Libert, Y.

    2011-12-01

    The HI line at 21 cm is a tracer of circumstellar matter around AGB stars, and especially of the matter located at large distances (0.1-1 pc) from the central stars. It can give unique information on the kinematics and on the physical conditions in the outer parts of circumstellar shells and in the regions where stellar matter is injected into the interstellar medium. However, this tracer has not been much used up to now, due to the difficulty of separating the genuine circumstellar emission from the interstellar one. With the Nançay Radiotelescope we are carrying out a survey of the HI emission in a large sample of evolved stars. We report on recent progress with this long-term programme, with emphasis on S-type stars.

  5. The Murchison Widefield Array 21 cm Power Spectrum Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Hazelton, B. J.; Trott, C. M.; Dillon, Joshua S.; Pindor, B.; Sullivan, I. S.; Pober, J. C.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Ewall-Wice, A.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Neben, A. R.; Thyagarajan, N.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, S.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Tingay, S. J.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-07-01

    We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to place power spectrum limits on a fiducial night of data. Comparing the outputs of the imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.

  6. HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting

    NASA Astrophysics Data System (ADS)

    Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni

    2016-06-01

    HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include Gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
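The model family can be sketched as follows (a simplified stand-in, not the HIBAYES code: HIBAYES samples the posterior with PyMultiNest, whereas here we only evaluate a Gaussian-plus-polynomial model and its Gaussian log-likelihood at assumed parameter values):

```python
import numpy as np

# Sketch of the kind of model HIBAYES fits: a Gaussian 21-cm trough plus a
# smooth polynomial foreground, compared to data through a Gaussian likelihood.
# The frequency range, noise level, and parameter values are all assumptions.
def model(freqs_mhz, amp, nu0, width, coeffs):
    signal = amp * np.exp(-0.5 * ((freqs_mhz - nu0) / width) ** 2)  # HI signal, K
    foreground = np.polyval(coeffs, freqs_mhz / 100.0)              # smooth foreground, K
    return signal + foreground

def log_likelihood(params, freqs_mhz, data, sigma):
    amp, nu0, width, *coeffs = params
    resid = data - model(freqs_mhz, amp, nu0, width, coeffs)
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2.0 * np.pi * sigma**2))

freqs = np.linspace(50.0, 100.0, 256)         # MHz
truth = (-0.1, 78.0, 5.0, 300.0, -100.0)      # trough amp (K), centre, width, fg coeffs
rng = np.random.default_rng(1)
data = model(freqs, *truth[:3], truth[3:]) + rng.normal(0.0, 0.005, freqs.size)

ll_true = log_likelihood(truth, freqs, data, 0.005)
ll_no_signal = log_likelihood((0.0,) + truth[1:], freqs, data, 0.005)
```

A sampler exploring this posterior strongly prefers the model containing the trough, which is the joint signal-plus-foreground inference the package automates.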

  7. INTENSITY MAPPING OF MOLECULAR GAS DURING COSMIC REIONIZATION

    SciTech Connect

    Carilli, C. L.

    2011-04-01

    I present a simple calculation of the expected mean CO brightness temperature from the large-scale distribution of galaxies during cosmic reionization. The calculation is based on the cosmic star formation rate density required to reionize, and keep ionized, the intergalactic medium, and uses standard relationships between star formation rate, IR luminosity, and CO luminosity derived for star-forming galaxies over a wide range in redshift. I find that the mean CO brightness temperature resulting from the galaxies that could reionize the universe at z = 8 is T_B ≈ 1.1 (C/5)(f_esc/0.1)⁻¹ μK, where f_esc is the escape fraction of ionizing photons from the first galaxies and C is the IGM clumping factor. Intensity mapping of the CO emission from the large-scale structure of the star-forming galaxies during cosmic reionization on scales of order 10² to 10³ deg², in combination with H I 21 cm imaging of the neutral IGM, will provide a comprehensive study of the earliest epoch of galaxy formation.
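The quoted scaling is easy to evaluate directly (the non-fiducial parameter values below are illustrative assumptions):

```python
# Evaluating the abstract's scaling for the mean CO brightness temperature at
# z = 8: T_B ~ 1.1 (C/5)(f_esc/0.1)^-1 microK. Non-fiducial values are
# illustrative assumptions, not from the paper.
def co_brightness_uK(clumping=5.0, f_esc=0.1):
    # Brighter CO for a clumpier IGM (more recombinations to offset), fainter
    # if ionizing photons escape more easily (fewer stars are then needed).
    return 1.1 * (clumping / 5.0) / (f_esc / 0.1)

fiducial = co_brightness_uK()                 # 1.1 microK
high_escape = co_brightness_uK(f_esc=0.2)     # doubling f_esc halves T_B
```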

  8. Searching for signatures of cosmic string wakes in 21cm redshift surveys using Minkowski Functionals

    SciTech Connect

    McDonough, Evan; Brandenberger, Robert H. E-mail: rhb@hep.physics.mcgill.ca

    2013-02-01

    Minkowski Functionals are a powerful tool for analyzing large scale structure, in particular if the distribution of matter is highly non-Gaussian, as it is in models in which cosmic strings contribute to structure formation. Here we apply Minkowski Functionals to 21cm maps which arise if structure is seeded by a scaling distribution of cosmic strings embedded in background fluctuations, and then test for the statistical significance of the cosmic string signals using the Fisher combined probability test. We find that this method allows for detection of cosmic strings with Gμ > 5 × 10⁻⁸, which would be an improvement over current limits by a factor of about 3.
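In 2D, the three Minkowski functionals of a thresholded map can be computed by simple pixel counting (a sketch using 4-connectivity, not the paper's pipeline):

```python
import numpy as np

# Sketch of 2D Minkowski functionals of an excursion set (map thresholded at
# `nu`): area fraction, interior boundary count, and Euler characteristic via
# pixel counting (4-connectivity). On 21-cm maps, their dependence on the
# threshold is what separates Gaussian noise from wake-like non-Gaussianity.
def minkowski_2d(field, nu):
    m = field >= nu
    area = m.mean()
    # interior boundary length (in pixel edges): neighbour mismatches
    boundary = (m[1:, :] ^ m[:-1, :]).sum() + (m[:, 1:] ^ m[:, :-1]).sum()
    # Euler characteristic: pixels - adjacent pairs + fully occupied 2x2 blocks
    pairs = (m[1:, :] & m[:-1, :]).sum() + (m[:, 1:] & m[:, :-1]).sum()
    quads = (m[1:, 1:] & m[1:, :-1] & m[:-1, 1:] & m[:-1, :-1]).sum()
    return area, int(boundary), int(m.sum() - pairs + quads)

# Sanity check: one isolated pixel has 4 boundary edges and Euler number 1.
f = np.zeros((5, 5))
f[2, 2] = 1.0
area, boundary, euler = minkowski_2d(f, 0.5)
```

Sweeping `nu` over a map and comparing the resulting curves against the Gaussian expectation is the kind of test to which a Fisher combined probability statistic can then be applied.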

  9. Cosmic 21 cm delensing of microwave background polarization and the minimum detectable energy scale of inflation.

    PubMed

    Sigurdson, Kris; Cooray, Asantha

    2005-11-18

    We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes. PMID:16384131

  10. Pilot observations at 74 MHz for global 21cm cosmology with the Parkes 64 m

    NASA Astrophysics Data System (ADS)

    Bannister, Keith; McConnell, David; Reynolds, John; Chippendale, Aaron; Landecker, Tom L.; Dunning, Alex

    2013-10-01

    We propose a single pilot observing session using the existing 74 MHz feed at Parkes to evaluate tools and techniques to optimise low-frequency (44-88 MHz) observing. If observing in this band is possible, at least two scientific outputs relevant to global 21cm cosmology (among many others) are put within reach: 1. A continuum map of the diffuse emission in the Southern sky at 74 MHz. Such a map would be of great help to single-dipole 21cm cosmology experiments, whose diffuse Galactic foregrounds are currently poorly constrained (Pritchard & Loeb, 2010b; de Oliveira-Costa et al., 2008). 2. A wideband (44-88 MHz) map of the Southern sky, which can be used as a direct detection of the dark ages global signal. Recent theoretical work has shown that the Parkes aperture of 64 m is the optimal size for such a direct detection, which could be achieved at 25σ in as little as 100 hrs of observing (Liu et al., 2012). After receiving a 4.1 grade in the previous round, our observations were not scheduled due to limited receiver changes. We are therefore re-proposing as a formality. Since the proposal, we have obtained RFI measurements with the feed pointed at zenith. We are confident the dominant source of RFI can be found and removed.

  11. 21 cm absorption by compact hydrogen discs around black holes in radio-loud nuclei of galaxies

    SciTech Connect

    Loeb, Abraham

    2008-05-15

    The clumpy maser discs observed in some galactic nuclei mark the outskirts of the accretion disc that fuels the central black hole and provide a potential site of nuclear star formation. Unfortunately, most of the gas in maser discs is currently not being probed; large maser gains favor paths that are characterized by a small velocity gradient and require rare edge-on orientations of the disc. Here we propose a method for mapping the atomic hydrogen distribution in nuclear discs through its 21 cm absorption against the radio continuum glow around the central black hole. In NGC 4258, the 21 cm optical depth may approach unity for high angular resolution (VLBI) imaging of coherent clumps which are dominated by thermal broadening and have the column density inferred from X-ray absorption data, ~10²³ cm⁻². Spreading the 21 cm absorption over the full rotation velocity width of the material in front of the narrow radio jets gives a mean optical depth of ~0.1. Spectroscopic searches for the 21 cm absorption feature in other galaxies can be used to identify the large population of inclined gaseous discs which are not masing in our direction. Follow-up imaging of 21 cm silhouettes of accelerating clumps within these discs can in turn be used to measure cosmological distances.

  12. A 21-cm Neutral Hydrogen Study of Arp 213

    NASA Astrophysics Data System (ADS)

    Wells, S. J.; Simpson, C. E.

    2002-12-01

    We present 21-cm VLA observations of the Sab galaxy Arp 213. An extended HI disk (≈2.3 R_Holm) was detected, with a bifurcated or extra arm on the west featuring a large HI knot. Based on the kinematics, this knot does not appear to be a dwarf or small companion, but a local enhancement in the arm. Although no unusual kinematics appear in the region of the odd radial dust lanes that attracted Arp's attention to this galaxy, there is a very low level HI cloud just north of the galaxy at the same position angle. The total HI mass for the galaxy was measured to be 2.9 × 10⁹ M☉. Arp 213 has a high rotational velocity (300 km s⁻¹) and a flat rotation curve that rises in the outermost regions. The calculated dynamical mass for the system is quite high at 4.4 × 10¹¹ M☉. The rotation curve and dynamical mass indicate the presence of a large dark matter halo. Further optical data are needed to confirm its mass. This work was supported by NSF grant AST-0097616 and the SARA Consortium REU program.
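As a consistency check on the quoted numbers: for a flat rotation curve, M_dyn ≈ v²R/G, so the quoted mass and velocity imply an outermost measured radius of roughly 21 kpc (a derived value; the abstract does not state the radius):

```python
# Consistency check on the abstract's dynamical mass: for a flat rotation
# curve, M_dyn ~ v^2 R / G. The HI radius is not quoted, so we invert the
# quoted M_dyn = 4.4e11 Msun and v = 300 km/s to find the implied radius.
G = 6.674e-11            # m^3 kg^-1 s^-2
MSUN = 1.989e30          # kg
KPC = 3.086e19           # m

v = 300e3                # rotation velocity, m/s
m_dyn = 4.4e11 * MSUN    # quoted dynamical mass, kg

r_kpc = G * m_dyn / v**2 / KPC   # implied outermost measured radius, ~21 kpc
```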

  13. Enhanced Detectability of Pre-reionization 21 cm Structure

    NASA Astrophysics Data System (ADS)

    Alvarez, Marcelo A.; Pen, Ue-Li; Chang, Tzu-Ching

    2010-11-01

    Before the universe was reionized, it was likely that the spin temperature of intergalactic hydrogen was decoupled from the cosmic microwave background (CMB) by UV radiation from the first stars through the Wouthuysen-Field effect. If the intergalactic medium (IGM) had not yet been heated above the CMB temperature by that time, then the gas would appear in absorption relative to the CMB. Large, rare sources of X-rays could inject sufficient heat into the neutral IGM so that δT_b > 0 at comoving distances of tens to hundreds of Mpc, resulting in large 21 cm fluctuations with δT_b ≈ 250 mK on arcminute to degree angular scales, an order of magnitude larger in amplitude than those caused by ionized bubbles during reionization, δT_b ≈ 25 mK. This signal could therefore be easier to detect and probe higher redshifts than that due to patchy reionization. For the case in which the first objects to heat the IGM are QSOs hosting 10⁷ M☉ black holes with an abundance exceeding ~1 Gpc⁻³ at z ~ 15, observations with either the Arecibo Observatory or the Five-hundred-meter Aperture Spherical Telescope could detect and image their fluctuations at greater than 5σ significance in about a month of dedicated survey time. Additionally, existing facilities such as MWA and LOFAR could detect the statistical fluctuations arising from a population of 10⁵ M☉ black holes with an abundance of ~10⁴ Gpc⁻³ at z ≈ 10-12.

  14. HI Intensity Mapping with FAST

    NASA Astrophysics Data System (ADS)

    Bigot-Sazy, M.-A.; Ma, Y.-Z.; Battye, R. A.; Browne, I. W. A.; Chen, T.; Dickinson, C.; Harper, S.; Maffei, B.; Olivari, L. C.; Wilkinson, P. N.

    2016-02-01

    We discuss the detectability of large-scale HI intensity fluctuations using the FAST telescope. We present forecasts for the accuracy of measuring the Baryonic Acoustic Oscillations and constraining the properties of dark energy. The FAST 19-beam L-band receivers (1.05-1.45 GHz) can provide constraints on the matter power spectrum and dark energy equation of state parameters (w0, wa) that are comparable to the BINGO and CHIME experiments. For one year of integration time we find that the optimal survey area is 6000 deg^2. However, observing with larger frequency coverage at higher redshift (0.95-1.35 GHz) improves the projected error bars on the HI power spectrum at more than the 2σ level. The combined constraints from FAST, CHIME, BINGO and Planck CMB observations can provide reliable, stringent constraints on the dark energy equation of state.
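    The survey redshifts implied by these bands follow directly from the 21 cm rest frequency. A minimal sketch of the band-edge-to-redshift conversion (band edges are taken from the abstract; the helper name is our own):

```python
# Redshift at which the 21 cm line is observed at frequency f_obs:
# z = f_rest / f_obs - 1, with f_rest = 1420.406 MHz.
F_REST_MHZ = 1420.406

def z_21cm(f_obs_mhz):
    """Redshift placing the 21 cm line at the given observed frequency."""
    return F_REST_MHZ / f_obs_mhz - 1.0

# FAST L-band, 1.05-1.45 GHz: z up to ~0.35 (the top edge lies slightly
# above the rest frequency, so its nominal redshift is marginally negative).
z_max_lband = z_21cm(1050.0)   # ~0.35
# Shifted band, 0.95-1.35 GHz: z ~ 0.05 up to ~0.50
z_max_shifted = z_21cm(950.0)  # ~0.50
```

    This is why moving the band down by 100 MHz pushes the survey to noticeably higher redshift and a larger comoving volume.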

  15. INTENSITY MAPPING OF Lyα EMISSION DURING THE EPOCH OF REIONIZATION

    SciTech Connect

    Silva, Marta B.; Santos, Mario G.; Gong, Yan; Cooray, Asantha; Bock, James

    2013-02-15

    We calculate the absolute intensity and anisotropies of the Lyα radiation field present during the epoch of reionization. We consider emission from both galaxies and the intergalactic medium (IGM) and take into account the main contributions to the production of Lyα photons: recombinations, collisions, continuum emission from galaxies, and scattering of Lyn photons in the IGM. We find that the emission from individual galaxies dominates over the IGM with a total Lyα intensity (times frequency) of about (1.43-3.57) × 10^-8 erg s^-1 cm^-2 sr^-1 at a redshift of 7. This intensity level is low, so it is unlikely that the Lyα background during reionization can be established by an experiment aiming at an absolute background light measurement. Instead, we consider Lyα intensity mapping with the aim of measuring the anisotropy power spectrum that has rms fluctuations at the level of 1 × 10^-16 [erg s^-1 cm^-2 sr^-1]^2 at a few Mpc scales. These anisotropies could be measured with a spectrometer at near-IR wavelengths from 0.9 to 1.4 μm with fields of the order of 0.5 to 1 deg^2. We recommend that existing ground-based programs using narrowband filters also pursue intensity fluctuations to study statistics on the spatial distribution of faint Lyα emitters. We also discuss the cross-correlation signal with 21 cm experiments that probe H I in the IGM during reionization. A dedicated sub-orbital or space-based Lyα intensity mapping experiment could provide a viable complementary approach to probe reionization, when compared to 21 cm experiments, and is likely within experimental reach.

  16. Intensity Mapping of Lyα Emission during the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Silva, Marta B.; Santos, Mario G.; Gong, Yan; Cooray, Asantha; Bock, James

    2013-02-01

    We calculate the absolute intensity and anisotropies of the Lyα radiation field present during the epoch of reionization. We consider emission from both galaxies and the intergalactic medium (IGM) and take into account the main contributions to the production of Lyα photons: recombinations, collisions, continuum emission from galaxies, and scattering of Lyn photons in the IGM. We find that the emission from individual galaxies dominates over the IGM with a total Lyα intensity (times frequency) of about (1.43-3.57) × 10^-8 erg s^-1 cm^-2 sr^-1 at a redshift of 7. This intensity level is low, so it is unlikely that the Lyα background during reionization can be established by an experiment aiming at an absolute background light measurement. Instead, we consider Lyα intensity mapping with the aim of measuring the anisotropy power spectrum that has rms fluctuations at the level of 1 × 10^-16 [erg s^-1 cm^-2 sr^-1]^2 at a few Mpc scales. These anisotropies could be measured with a spectrometer at near-IR wavelengths from 0.9 to 1.4 μm with fields of the order of 0.5 to 1 deg^2. We recommend that existing ground-based programs using narrowband filters also pursue intensity fluctuations to study statistics on the spatial distribution of faint Lyα emitters. We also discuss the cross-correlation signal with 21 cm experiments that probe H I in the IGM during reionization. A dedicated sub-orbital or space-based Lyα intensity mapping experiment could provide a viable complementary approach to probe reionization, when compared to 21 cm experiments, and is likely within experimental reach.

  17. Distinctive rings in the 21 cm signal of the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Vonlanthen, P.; Semelin, B.; Baek, S.; Revaz, Y.

    2011-08-01

    Context. It is predicted that sources emitting UV radiation in the Lyman band during the epoch of reionization show a series of discontinuities in their Lyα flux radial profile as a consequence of the thickness of the Lyman-series lines in the primeval intergalactic medium. Through unsaturated Wouthuysen-Field coupling, these spherical discontinuities are also present in the 21 cm emission of the neutral IGM. Aims: We study the effects that these discontinuities have on the differential brightness temperature of the 21 cm signal of neutral hydrogen in a realistic setting that includes all other sources of fluctuations. We focus on the early phases of the epoch of reionization, and we address the question of the detectability by the planned Square Kilometre Array (SKA). Such a detection would be of great interest because these structures could provide an unambiguous diagnostic tool for the cosmological origin of the signal that remains after the foreground cleaning procedure. These structures could also be used as a new type of standard ruler. Methods: We determine the differential brightness temperature of the 21 cm signal in the presence of the inhomogeneous Wouthuysen-Field effect using simulations that include (hydro)dynamics as well as ionizing and Lyman-line 3D radiative transfer with the code LICORICE. We include radiative transfer for the higher-order Lyman-series lines and consider also the effect of backreaction from recoils and spin diffusivity on the Lyα resonance. Results: We find that the Lyman horizons are difficult to identify using the power spectrum of the 21 cm signal but are clearly visible in the maps and radial profiles around the first sources of our simulations, if only for a limited time interval, typically Δz ≈ 2 at z ~ 13. Stacking the profiles of the different sources of the simulation at a given redshift results in extending this interval to Δz ≈ 4. When we take into account the implementation and design planned for the SKA

  18. Tracing the Milky Way Nuclear Wind with 21cm Atomic Hydrogen Emission

    NASA Astrophysics Data System (ADS)

    Lockman, Felix J.; McClure-Griffiths, N. M.

    2016-08-01

    There is evidence in 21 cm H i emission for voids several kiloparsecs in size centered approximately on the Galactic center, both above and below the Galactic plane. These appear to map the boundaries of the Galactic nuclear wind. An analysis of H i at the tangent points, where the distance to the gas can be estimated with reasonable accuracy, shows a sharp transition at Galactic radii R ≲ 2.4 kpc from the extended neutral gas layer characteristic of much of the Galactic disk, to a thin Gaussian layer with FWHM ∼ 125 pc. An anti-correlation between H i and γ-ray emission at latitudes 10° ≤ |b| ≤ 20° suggests that the boundary of the extended H i layer marks the walls of the Fermi Bubbles. With H i, we are able to trace the edges of the voids from |z| > 2 kpc down to z ≈ 0, where they have a radius ∼2 kpc. The extended H i layer likely results from star formation in the disk, which is limited largely to R ≳ 3 kpc, so the wind may be expanding into an area of relatively little H i. Because the H i kinematics can discriminate between gas in the Galactic center and foreground material, 21 cm H i emission may be the best probe of the extent of the nuclear wind near the Galactic plane.

  19. 21 cm Fluctuations of the Cosmic Dawn with the Owens Valley Long Wavelength Array

    NASA Astrophysics Data System (ADS)

    Eastwood, Michael; Hallinan, Gregg; Owens Valley LWA Collaboration

    2016-01-01

    The Owens Valley Long Wavelength Array (OVRO LWA) is a 288-antenna interferometer covering 30 to 80 MHz located at the Owens Valley Radio Observatory (OVRO) near Big Pine, California. I am leading the effort to detect spatial fluctuations of the 21 cm transition from the cosmic dawn (z~20) with the OVRO LWA. These spatial fluctuations are primarily sourced by inhomogeneous X-ray heating from early star formation. The spectral hardness of early X-ray sources, stellar feedback mechanisms, and baryon streaming therefore all play a role in shaping the power spectrum. I will present the application of m-mode analysis (Shaw et al. 2014, Shaw et al. 2015) to OVRO LWA data to (1) compress the data set, (2) create maps of the northern sky that can be fed back into the calibration pipeline, and (3) filter foreground emission. Finally I will present the current status and future prospects of the OVRO LWA for detecting the 21 cm power spectrum at z~20.

  1. Power spectrum extraction for redshifted 21-cm Epoch of Reionization experiments: the LOFAR case

    NASA Astrophysics Data System (ADS)

    Harker, Geraint; Zaroubi, Saleem; Bernardi, Gianni; Brentjens, Michiel A.; de Bruyn, A. G.; Ciardi, Benedetta; Jelić, Vibor; Koopmans, Leon V. E.; Labropoulos, Panagiotis; Mellema, Garrelt; Offringa, André; Pandey, V. N.; Pawlik, Andreas H.; Schaye, Joop; Thomas, Rajat M.; Yatawatta, Sarod

    2010-07-01

    One of the aims of the Low Frequency Array (LOFAR) Epoch of Reionization (EoR) project is to measure the power spectrum of variations in the intensity of redshifted 21-cm radiation from the EoR. The sensitivity with which this power spectrum can be estimated depends on the level of thermal noise and sample variance, and also on the systematic errors arising from the extraction process, in particular from the subtraction of foreground contamination. We model the extraction process using realistic simulations of the cosmological signal, the foregrounds and noise, and so estimate the sensitivity of the LOFAR EoR experiment to the redshifted 21-cm power spectrum. Detection of emission from the EoR should be possible within 360 h of observation with a single station beam. Integrating for longer, and synthesizing multiple station beams within the primary (tile) beam, then enables us to extract progressively more accurate estimates of the power at a greater range of scales and redshifts. We discuss different observational strategies which compromise between depth of observation, sky coverage and frequency coverage. A plan in which lower frequencies receive a larger fraction of the time appears to be promising. We also study the nature of the bias which foreground fitting errors induce on the inferred power spectrum and discuss how to reduce and correct for this bias. The angular and line-of-sight power spectra have different merits in this respect, and we suggest considering them separately in the analysis of LOFAR data.

  2. Dicke’s Superradiance in Astrophysics. I. The 21 cm Line

    NASA Astrophysics Data System (ADS)

    Rajabi, Fereshteh; Houde, Martin

    2016-08-01

    We have applied the concept of superradiance introduced by Dicke in 1954 to astrophysics by extending the corresponding analysis to the magnetic dipole interaction characterizing the atomic hydrogen 21 cm line. Superradiance is unlikely to take place in thermally relaxed regions, and the lack of observational evidence of masers for this transition reduces the probability of detecting it. Nevertheless, in situations where the conditions necessary for superradiance are met (close atomic spacing, high velocity coherence, population inversion, and long dephasing timescales compared to those related to coherent behavior), our results suggest that relatively low levels of population inversion over short astronomical length-scales (e.g., as compared to those required for maser amplification) can lead to the cooperative behavior required for superradiance in the interstellar medium. Given the results of our analysis, we expect the observational properties of 21 cm superradiance to be characterized by the emission of high-intensity, spatially compact, burst-like features potentially taking place over short periods ranging from minutes to days.

  3. Redshift-space distortion of the 21-cm background from the epoch of reionization - I. Methodology re-examined

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Shapiro, Paul R.; Mellema, Garrelt; Iliev, Ilian T.; Koda, Jun; Ahn, Kyungjin

    2012-05-01

    …0.1-1 h Mpc^-1, when strong ionization fluctuations exist (e.g. at the 50 per cent ionized epoch). We derive an alternative, quasi-linear formulation which improves upon the accuracy of the linear theory. (7) We describe and test two numerical schemes to calculate the 21-cm signal from reionization simulations to incorporate peculiar velocity effects in the optically thin approximation accurately, by real- to redshift-space re-mapping of the H I density. One is particle based, the other grid based; while the former is more accurate, we demonstrate that the latter is computationally more efficient and can be optimized so as to achieve sufficient accuracy.

  4. Modeling the neutral hydrogen distribution in the post-reionization Universe: intensity mapping

    SciTech Connect

    Villaescusa-Navarro, Francisco; Viel, Matteo; Datta, Kanan K.; Choudhury, T. Roy

    2014-09-01

    We model the distribution of neutral hydrogen (HI) in the post-reionization era and investigate its detectability in 21 cm intensity mapping with future radio telescopes like the Square Kilometre Array (SKA). We rely on high resolution hydrodynamical N-body simulations that have a state-of-the-art treatment of the low density photoionized gas in the intergalactic medium (IGM). The HI is assigned a posteriori to the gas particles following two different approaches: a halo-based method in which HI is assigned only to gas particles residing within dark matter halos, and a particle-based method that assigns HI to all gas particles using a prescription based on the physical properties of the particles. The HI statistical properties are then compared to the observational properties of Damped Lyman-α Absorbers (DLAs) and of lower column density systems, and reasonably good agreement is found in all cases. Within the halo-based method, we further consider two different schemes that aim at reproducing the observed properties of DLAs by distributing HI inside halos: one of these results in a much higher bias for DLAs, in agreement with recent observations, which boosts the 21 cm power spectrum by a factor ∼4 with respect to the other recipe. Furthermore, we quantify the contribution of HI in the diffuse IGM to both Ω_HI and the HI power spectrum, finding it to be subdominant in both cases. We compute the 21 cm power spectrum from the simulated HI distribution and calculate the expected signal for both SKA1-mid and SKA1-low configurations at 2.4 ≤ z ≤ 4. We find that SKA will be able to detect the 21 cm power spectrum, in the non-linear regime, up to k ∼ 1 h/Mpc for SKA1-mid and k ∼ 5 h/Mpc for SKA1-low with 100 hours of observations. We also investigate the prospects for imaging the HI distribution. Our findings indicate that SKA1-low could detect the most massive HI peaks with a signal to noise ratio (SNR) higher than 5 for an observation time of about 1000

  5. The Murchison Widefield Array 21cm Epoch of Reionization Experiment: Design, Construction, and First Season Results

    NASA Astrophysics Data System (ADS)

    Beardsley, Adam

    The Cosmic Dark Ages and the Epoch of Reionization (EoR) remain largely unexplored chapters in the history and evolution of the Universe. These periods hold the potential to inform our picture of the cosmos similar to what the Cosmic Microwave Background has done over the past several decades. A promising method to probe the neutral hydrogen gas between early galaxies is known as 21cm tomography, which utilizes the ubiquitous hyper-fine transition of HI to create 3D maps of the intergalactic medium. The Murchison Widefield Array (MWA) is an instrument built with a primary science driver to detect and characterize the EoR through 21cm tomography. In this thesis we explore the challenges faced by the MWA from the layout of antennas, to a custom analysis pipeline, to bridging the gap with probes at other wavelengths. We discuss many lessons learned in the course of reducing MWA data with an extremely precise measurement in mind, and conclude with the first deep integration from the array. We present a 2-σ upper limit on the EoR power spectrum of Δ^2(k) < 1.25×10^4 mK^2 at cosmic scale k = 0.236 h Mpc^{-1} and redshift z = 6.8. Our result is a marginal improvement over previous MWA results and consistent with the best published limits from other instruments. This result is the deepest imaging power spectrum to date, and is a major step forward for this type of analysis. While our limit is dominated by systematics, we offer strategies for improvement for future analysis.

  7. Optical mapping at increased illumination intensities

    PubMed Central

    Kanaporis, Giedrius; Martišienė, Irma; Vosyliūtė, Rūta; Navalinskas, Antanas; Treinys, Rimantas; Matiukas, Arvydas; Pertsov, Arkady M.

    2012-01-01

    Voltage-sensitive fluorescent dyes have become a major tool in cardiac and neuro-electrophysiology. Achieving high signal-to-noise ratios requires increased illumination intensities, which may cause photobleaching and phototoxicity. The optimal range of illumination intensities varies for different dyes and must be evaluated individually. We evaluate two dyes: di-4-ANBDQBS (excitation 660 nm) and di-4-ANEPPS (excitation 532 nm) in the guinea pig heart. The light intensity varies from 0.1 to 5 mW/mm^2, with the upper limit at 5 to 10 times above values reported in the literature. The duration of illumination was 60 s, which in guinea pigs corresponds to 300 beats at a normal heart rate. Within the identified duration and intensity range, neither dye shows significant photobleaching or detectable phototoxic effects. However, light absorption at higher intensities causes noticeable tissue heating, which affects the electrophysiological parameters. The most pronounced effect is a shortening of the action potential duration, which, in the case of 532-nm excitation, can reach ∼30%. At 660-nm excitation, the effect is ∼10%. These findings may have important implications for the design of optical mapping protocols in biomedical applications. PMID:23085908

  8. Optical mapping at increased illumination intensities

    NASA Astrophysics Data System (ADS)

    Kanaporis, Giedrius; Martišienė, Irma; Jurevičius, Jonas; Vosyliūtė, Rūta; Navalinskas, Antanas; Treinys, Rimantas; Matiukas, Arvydas; Pertsov, Arkady M.

    2012-09-01

    Voltage-sensitive fluorescent dyes have become a major tool in cardiac and neuro-electrophysiology. Achieving high signal-to-noise ratios requires increased illumination intensities, which may cause photobleaching and phototoxicity. The optimal range of illumination intensities varies for different dyes and must be evaluated individually. We evaluate two dyes: di-4-ANBDQBS (excitation 660 nm) and di-4-ANEPPS (excitation 532 nm) in the guinea pig heart. The light intensity varies from 0.1 to 5 mW/mm^2, with the upper limit at 5 to 10 times above values reported in the literature. The duration of illumination was 60 s, which in guinea pigs corresponds to 300 beats at a normal heart rate. Within the identified duration and intensity range, neither dye shows significant photobleaching or detectable phototoxic effects. However, light absorption at higher intensities causes noticeable tissue heating, which affects the electrophysiological parameters. The most pronounced effect is a shortening of the action potential duration, which, in the case of 532-nm excitation, can reach ∼30%. At 660-nm excitation, the effect is ∼10%. These findings may have important implications for the design of optical mapping protocols in biomedical applications.

  9. High redshift signatures in the 21 cm forest due to cosmic string wakes

    NASA Astrophysics Data System (ADS)

    Tashiro, Hiroyuki; Sekiguchi, Toyokazu; Silk, Joseph

    2014-01-01

    Cosmic strings induce minihalo formation in the early universe. The resultant minihalos cluster in string wakes and create a "21 cm forest" against the cosmic microwave background (CMB) spectrum. Such a 21 cm forest can contribute to angular fluctuations of redshifted 21 cm signals integrated along the line of sight. We calculate the root-mean-square amplitude of the 21 cm fluctuations due to strings and show that these fluctuations can dominate signals from minihalos due to primordial density fluctuations at high redshift (z ≳ 10), even if the string tension is below the current upper bound, Gμ < 1.5 × 10^-7. Our results also predict that the Square Kilometre Array (SKA) can potentially detect the 21 cm fluctuations due to strings with Gμ ≈ 7.5 × 10^-8 for the single frequency band case and 4.0 × 10^-8 for the multi-frequency band case.

  10. High redshift signatures in the 21 cm forest due to cosmic string wakes

    SciTech Connect

    Tashiro, Hiroyuki; Sekiguchi, Toyokazu; Silk, Joseph

    2014-01-01

    Cosmic strings induce minihalo formation in the early universe. The resultant minihalos cluster in string wakes and create a "21 cm forest" against the cosmic microwave background (CMB) spectrum. Such a 21 cm forest can contribute to angular fluctuations of redshifted 21 cm signals integrated along the line of sight. We calculate the root-mean-square amplitude of the 21 cm fluctuations due to strings and show that these fluctuations can dominate signals from minihalos due to primordial density fluctuations at high redshift (z ≳ 10), even if the string tension is below the current upper bound, Gμ < 1.5 × 10^-7. Our results also predict that the Square Kilometre Array (SKA) can potentially detect the 21 cm fluctuations due to strings with Gμ ≈ 7.5 × 10^-8 for the single frequency band case and 4.0 × 10^-8 for the multi-frequency band case.

  11. 21 cm line bispectrum as a method to probe cosmic dawn and epoch of reionization

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Yoshiura, Shintaro; Takahashi, Keitaro; Yokoyama, Shuichiro; Ichiki, Kiyotomo

    2016-05-01

    The redshifted 21 cm signal is a promising tool to investigate the state of the intergalactic medium (IGM) in the cosmic dawn (CD) and epoch of reionization (EoR). In our previous work, we studied the variance and skewness of the 21 cm fluctuations to give a clear interpretation of the 21 cm power spectrum, and found that skewness is a good indicator of the epoch when X-ray heating becomes effective. Thus, the non-Gaussian features of the spatial distribution of the 21 cm signal are expected to be useful for investigating astrophysical effects in the CD and EoR. In this paper, in order to investigate such non-Gaussian features in more detail, we focus on the bispectrum of the 21 cm signal. The 21 cm brightness temperature bispectrum is expected to be produced by non-Gaussianity due to various astrophysical effects such as the Wouthuysen-Field effect, X-ray heating and reionization. We study various properties of the 21 cm bispectrum, such as its scale dependence, shape dependence and redshift evolution, as well as the contribution from each component. We find that the contribution from each component has a characteristic scale-dependent feature. In particular, the bulk of the 21 cm bispectrum at z = 20 comes from the matter fluctuations, while in other epochs it is mainly determined by the spin and/or neutral fraction fluctuations. We therefore expect that future experiments could obtain more detailed information on the IGM in the CD and EoR by using the 21 cm bispectrum, combined with the power spectrum and skewness.
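    The variance and skewness mentioned above are one-point statistics that are simple to compute. A hedged toy sketch (not the authors' simulation; a squared Gaussian merely stands in for a non-Gaussian brightness-temperature field):

```python
# Toy illustration: skewness as a non-Gaussianity indicator.
# A Gaussian field has skewness ~ 0; a squared Gaussian (chi^2 with one
# degree of freedom) has skewness sqrt(8) ~ 2.83.
import random

def moments(xs):
    """Return (mean, variance, skewness) of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    skew = sum((x - mean) ** 3 for x in xs) / n / var ** 1.5
    return mean, var, skew

random.seed(1)
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
skewed = [g * g for g in gauss]        # strongly non-Gaussian field

_, _, s_gauss = moments(gauss)         # close to 0
_, _, s_skewed = moments(skewed)       # close to 2.83
```

    Higher-order statistics such as the bispectrum extend this idea from one-point moments to correlations between triplets of Fourier modes.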

  12. The effect of foreground subtraction on cosmological measurements from intensity mapping

    NASA Astrophysics Data System (ADS)

    Wolz, L.; Abdalla, F. B.; Blake, C.; Shaw, J. R.; Chapman, E.; Rawlings, S.

    2014-07-01

    We model a 21-cm intensity mapping survey in the redshift range 0.01 < z < 1.5, designed to simulate the skies as seen by future radio telescopes such as the Square Kilometre Array, including instrumental noise and Galactic foregrounds. In our pipeline, we remove the Galactic foregrounds with a fast independent component analysis technique. We present the power spectrum of the large-scale matter distribution, C(ℓ), before and after the application of this foreground removal method and calculate the systematic errors. Our simulations show that a certain level of bias remains in the power spectrum at all scales ℓ < 400; at large scales (ℓ < 30) this bias is particularly significant. We measure the impact of these systematics in two ways. First, we fit cosmological parameters to the broad-band shape of the C(ℓ), where we find that the best fit is significantly shifted, at the 2-3σ level, depending on masking and noise levels. Second, we recover cosmic distances without bias at all simulated redshifts by fitting the baryon acoustic oscillations in the C(ℓ). We conclude that further advances in foreground removal are needed in order to recover unbiased information from the broad-band shape of the C(ℓ); however, intensity mapping experiments will be a powerful tool for mapping cosmic distances across a wide redshift range.
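    The paper removes foregrounds with fast independent component analysis; the sketch below swaps in a simpler principal-component (SVD) subtraction to illustrate the shared idea, namely that spectrally smooth foregrounds occupy a few dominant modes along the frequency axis. All numbers are toy values, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_pix = 64, 500
freqs = np.linspace(0.4, 0.8, n_freq)            # GHz, toy band

# Two smooth power-law foreground components, far brighter than the signal
fg = (1e3 * np.outer(freqs ** -2.7, rng.uniform(0.5, 1.5, n_pix))
      + 1e2 * np.outer(freqs ** -2.1, rng.uniform(0.5, 1.5, n_pix)))
signal = rng.normal(0.0, 1.0, (n_freq, n_pix))   # toy 21 cm signal, var ~ 1
data = fg + signal

# Blind cleaning: subtract the N brightest spectral eigenmodes of the data
n_modes = 2
u, s, vt = np.linalg.svd(data, full_matrices=False)
cleaned = data - u[:, :n_modes] @ np.diag(s[:n_modes]) @ vt[:n_modes, :]

resid_var = cleaned.var()   # close to the input signal variance of ~1
```

    Because the removed modes inevitably contain some large-scale signal as well, the cleaned map is slightly biased, which is the effect on C(ℓ) the abstract quantifies.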

  13. Inferring the distances of fast radio bursts through associated 21-cm absorption

    NASA Astrophysics Data System (ADS)

    Margalit, Ben; Loeb, Abraham

    2016-07-01

    The distances of fast radio burst (FRB) sources are currently unknown. We show that the 21-cm absorption line of hydrogen can be used to infer the redshifts of FRB sources, and determine whether they are Galactic or extragalactic. We calculate a probability of ˜10 per cent for the host galaxy of an FRB to exhibit a 21-cm absorption feature of equivalent width ≳10 km s-1. Arecibo, along with several future radio observatories, should be capable of detecting such associated 21-cm absorption signals for strong bursts of ≳several Jy peak flux densities.
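    The ≳10 km s^-1 threshold above is an equivalent width, EW = ∫(1 − F/F_cont) dv. A minimal sketch on a toy Gaussian absorption profile (the line depth and width are illustrative, not values from the paper):

```python
import math

def equivalent_width(velocities, flux, continuum):
    """Trapezoidal EW = integral of (1 - F/F_cont) dv, in km/s."""
    ew = 0.0
    for i in range(len(velocities) - 1):
        d1 = 1.0 - flux[i] / continuum
        d2 = 1.0 - flux[i + 1] / continuum
        ew += 0.5 * (d1 + d2) * (velocities[i + 1] - velocities[i])
    return ew

v = [i - 100.0 for i in range(201)]        # velocity grid, km/s
depth, width = 0.4, 12.0                   # toy line depth and sigma (km/s)
f = [1.0 - depth * math.exp(-0.5 * (x / width) ** 2) for x in v]
ew = equivalent_width(v, f, 1.0)           # ~ depth * width * sqrt(2*pi)
```

    A 40 per cent deep line with a 12 km s^-1 width gives EW ≈ 12 km s^-1, above the quoted detectability threshold.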

  14. Probing reionization with the cross-power spectrum of 21 cm and near-infrared radiation backgrounds

    SciTech Connect

    Mao, Xiao-Chun

    2014-08-01

    The cross-correlation between the 21 cm emission from the high-redshift intergalactic medium and the near-infrared (NIR) background light from high-redshift galaxies promises to be a powerful probe of cosmic reionization. In this paper, we investigate the cross-power spectrum during the epoch of reionization. We employ an improved halo approach to derive the distribution of the density field and consider two stellar populations in the star formation model: metal-free stars and metal-poor stars. The reionization history is further generated to be consistent with the electron-scattering optical depth from cosmic microwave background measurements. Then, the intensity of the NIR background is estimated by collecting emission from stars in first-light galaxies. On large scales, we find that the 21 cm and NIR radiation backgrounds are positively correlated during the very early stages of reionization. However, these two radiation backgrounds quickly become anti-correlated as reionization proceeds. The maximum absolute value of the cross-power spectrum is |Δ^2_{21,NIR}| ∼ 10^-4 mK nW m^-2 sr^-1, reached at ℓ ∼ 1000 when the mean fraction of ionized hydrogen is x̄_i ∼ 0.9. We find that the Square Kilometer Array can measure the 21 cm-NIR cross-power spectrum in conjunction with mild extensions to the existing CIBER survey, provided that the integration times independently add up to 1000 hr and 1 hr for the 21 cm and NIR observations, and that the sky coverage fraction of the CIBER survey is extended from 4 × 10^-4 to 0.1. Measuring the cross-correlation signal as a function of redshift provides valuable information on reionization and helps confirm the origin of the 'missing' NIR background.

  15. MITEoR: a scalable interferometer for precision 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Buza, V.; Dillon, J. S.; Gharibyan, H.; Hickish, J.; Kunz, E.; Liu, A.; Losh, J.; Lutomirski, A.; Morrison, S.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Tribiano, S. M.; Valdez, M.; Yang, H.; Adami, K. Zarb; Zelko, I.; Zheng, K.; Armstrong, R. P.; Bradley, R. F.; Dexter, M. R.; Ewall-Wice, A.; Magro, A.; Matejek, M.; Morgan, E.; Neben, A. R.; Pan, Q.; Penna, R. F.; Peterson, C. M.; Su, M.; Villasenor, J.; Williams, C. L.; Zhu, Y.

    2014-12-01

    We report on the MIT Epoch of Reionization (MITEoR) experiment, a pathfinder low-frequency radio interferometer whose goal is to test technologies that improve the calibration precision and reduce the cost of the high-sensitivity 3D mapping required for 21 cm cosmology. MITEoR accomplishes this by using massive baseline redundancy, which enables both automated precision calibration and correlator cost reduction. We demonstrate and quantify the power and robustness of redundancy for scalability and precision. We find that the calibration parameters precisely describe the effect of the instrument upon our measurements, allowing us to form a model that is consistent with χ² per degree of freedom < 1.2 for as much as 80 per cent of the observations. We use these results to develop an optimal estimator of calibration parameters using Wiener filtering, and explore the question of how often and how finely in frequency visibilities must be reliably measured to solve for calibration coefficients. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious Hydrogen Epoch of Reionization Array project and other next-generation instruments, which would incorporate many identical or similar technologies.
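
    The redundant calibration exploited here can be illustrated in the amplitude-only ("logcal") limit: each measured visibility is the product of two antenna gains and one unknown true visibility shared by its redundant group, so the logarithms satisfy a linear system solvable by least squares. A toy sketch under simplifying assumptions (noiseless data, a 1D array; this is not the MITEoR pipeline itself):

```python
import numpy as np

def logcal_amplitudes(vis, pairs, group_of):
    """Solve ln|v_ij| = ln|g_i| + ln|g_j| + ln|y_g| for per-antenna gain
    amplitudes g and per-redundant-group visibility amplitudes y."""
    n_ant = 1 + max(max(p) for p in pairs)
    n_grp = 1 + max(group_of)
    a = np.zeros((len(pairs) + 1, n_ant + n_grp))
    b = np.zeros(len(pairs) + 1)
    for row, ((i, j), g) in enumerate(zip(pairs, group_of)):
        a[row, i] = 1.0            # ln|g_i|
        a[row, j] = 1.0            # ln|g_j|
        a[row, n_ant + g] = 1.0    # ln|y| for this baseline's redundant group
        b[row] = np.log(np.abs(vis[row]))
    a[-1, :n_ant] = 1.0            # fix the overall-amplitude degeneracy
    x, *_ = np.linalg.lstsq(a, b, rcond=None)
    return np.exp(x[:n_ant]), np.exp(x[n_ant:])
```

    Phases are calibrated by an analogous linear system, with additional degeneracies (an overall phase and phase gradients across the array) that must likewise be constrained.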

  16. The imprint of warm dark matter on the cosmological 21-cm signal

    NASA Astrophysics Data System (ADS)

    Sitwell, Michael; Mesinger, Andrei; Ma, Yin-Zhe; Sigurdson, Kris

    2014-03-01

    We investigate the effects of warm dark matter (WDM) on the cosmic 21-cm signal. If dark matter exists as WDM instead of cold dark matter (CDM), its non-negligible velocities can inhibit the formation of low-mass haloes that normally form first in CDM models, therefore delaying star formation. The absence of early sources delays the build-up of UV and X-ray backgrounds that affect the 21-cm radiation signal produced by neutral hydrogen. With use of the 21CMFAST code, we demonstrate that the pre-reionization 21-cm signal can be changed significantly in WDM models with a free-streaming length equivalent to that of a thermal relic with mass mX of up to ˜10-20 keV. In such a WDM cosmology, the 21-cm signal traces the growth of more massive haloes, resulting in a delay of the 21-cm absorption signature and followed by accelerated X-ray heating. CDM models where astrophysical sources have a suppressed photon-production efficiency can delay the 21-cm signal as well, although its subsequent evolution is not as rapid as compared to WDM. This motivates using the gradient of the global 21-cm signal to differentiate between some CDM and WDM models. Finally, we show that the degeneracy between the astrophysics and mX can be broken with the 21-cm power spectrum, as WDM models should have a bias-induced excess of power on large scales. This boost in power should be detectable with current interferometers for models with mX ≲ 3 keV, while next-generation instruments will easily be able to measure this difference for all relevant WDM models.

  17. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Technical Reports Server (NTRS)

    Switzer, Eric R.; Liu, Adrian

    2014-01-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ∼ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal, and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ∼10⁻⁴. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ∼10⁻² level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.

  18. Erasing the variable: empirical foreground discovery for global 21 cm spectrum experiments

    SciTech Connect

    Switzer, Eric R.; Liu, Adrian

    2014-10-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ∼ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ∼10⁻⁴. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ∼10⁻² level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.
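
    The mode-discovery idea in this paper, finding contaminated spectral modes from the data's own spatial fluctuations and projecting them out of each line of sight, can be sketched with an SVD of the frequency-by-pixel data matrix (a generic illustration, not the authors' exact estimator):

```python
import numpy as np

def project_out_modes(cube, n_modes):
    """Discover the spectral modes with the largest spatial variance and
    project them out of every line of sight.

    cube : (n_freq, n_pix) array, one column per line of sight.
    """
    # Subtract the sky-averaged spectrum so modes describe spatial fluctuations
    resid = cube - cube.mean(axis=1, keepdims=True)
    # Left singular vectors = spectral modes, ordered by variance across the sky
    u, _, _ = np.linalg.svd(resid, full_matrices=False)
    modes = u[:, :n_modes]                    # (n_freq, n_modes)
    return cube - modes @ (modes.T @ cube)    # remove those modes everywhere
```

    Because foregrounds vary across the sky while the cosmological monopole does not, the brightest discovered modes are foreground-dominated, and projecting out only a few of them can suppress the contamination by orders of magnitude.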

  19. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    PubMed

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating. PMID:25815921

  20. A correlation between the H I 21-cm absorption strength and impact parameter in external galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Reeves, S. N.; Allison, J. R.; Sadler, E. M.

    2016-04-01

    By combining the data from surveys for H I 21-cm absorption at various impact parameters in nearby galaxies, we report an anti-correlation between the 21-cm absorption strength (velocity integrated optical depth) and the impact parameter. Also, by combining the 21-cm absorption strength with that of the emission, giving the neutral hydrogen column density, N_{H I}, we find no evidence that the spin temperature of the gas (degenerate with the covering factor) varies significantly across the disk. This is consistent with the uniformity of spin temperature measured across the Galactic disk. Furthermore, comparison with the Galactic N_{H I} distribution suggests that intervening 21-cm absorption preferentially arises in disks of high inclinations (near face-on). We also investigate the hypothesis that 21-cm absorption is favourably detected towards compact radio sources. Although there are insufficient data to determine whether there is a higher detection rate towards quasar, rather than radio galaxy, sight-lines, the 21-cm detections arise towards objects with a mean turnover frequency of ⟨ν_TO⟩ ≈ 5 × 10⁸ Hz, compared to ⟨ν_TO⟩ ≈ 1 × 10⁸ Hz for the non-detections. Since the turnover frequency is anti-correlated with radio source size, this does indicate a preferential bias for detection towards compact background radio sources.

  1. A correlation between the H I 21-cm absorption strength and impact parameter in external galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Reeves, S. N.; Allison, J. R.; Sadler, E. M.

    2016-07-01

    By combining the data from surveys for H I 21-cm absorption at various impact parameters in nearby galaxies, we report an anti-correlation between the 21-cm absorption strength (velocity integrated optical depth) and the impact parameter. Also, by combining the 21-cm absorption strength with that of the emission, giving the neutral hydrogen column density, N_{H I}, we find no evidence that the spin temperature of the gas (degenerate with the covering factor) varies significantly across the disc. This is consistent with the uniformity of spin temperature measured across the Galactic disc. Furthermore, comparison with the Galactic N_{H I} distribution suggests that intervening 21-cm absorption preferentially arises in discs of high inclinations (near face-on). We also investigate the hypothesis that 21-cm absorption is favourably detected towards compact radio sources. Although there are insufficient data to determine whether there is a higher detection rate towards quasar, rather than radio galaxy, sight-lines, the 21-cm detections arise towards objects with a mean turnover frequency of ⟨ν_TO⟩ ≈ 5 × 10⁸ Hz, compared to ⟨ν_TO⟩ ≈ 1 × 10⁸ Hz for the non-detections. Since the turnover frequency is anti-correlated with radio source size, this does indicate a preferential bias for detection towards compact background radio sources.

  2. Constraining cosmology and ionization history with combined 21 cm power spectrum and global signal measurements

    NASA Astrophysics Data System (ADS)

    Liu, Adrian; Parsons, Aaron R.

    2016-04-01

    Improvements in current instruments and the advent of next-generation instruments will soon push observational 21 cm cosmology into a new era, with high significance measurements of both the power spectrum and the mean ('global') signal of the 21 cm brightness temperature. In this paper, we use the recently commenced Hydrogen Epoch of Reionization Array (HERA) as a worked example to provide forecasts on astrophysical and cosmological parameter constraints. In doing so, we improve upon previous forecasts in a number of ways. First, we provide updated forecasts using the latest best-fitting cosmological parameters from the Planck satellite, exploring the impact of different Planck data sets on 21 cm experiments. We also show that despite the exquisite constraints that other probes have placed on cosmological parameters, the remaining uncertainties are still large enough to have a non-negligible impact on upcoming 21 cm data analyses. While this complicates high-precision constraints on reionization models, it provides an avenue for 21 cm reionization measurements to constrain cosmology. We additionally forecast HERA's ability to measure the ionization history using a combination of power spectrum measurements and semi-analytic simulations. Finally, we consider ways in which 21 cm global signal and power spectrum measurements can be combined, and propose a method by which power spectrum results can be used to train a compact parametrization of the global signal. This parametrization reduces the number of parameters needed to describe the global signal, increasing the likelihood of a high significance measurement.

  3. Reconstructing the Nature of the First Cosmic Sources from the Anisotropic 21-cm Signal

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-01

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z ˜15 ), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.

  4. Distinctive 21-cm structures of the first stars, galaxies and quasars

    NASA Astrophysics Data System (ADS)

    Yajima, Hidenobu; Li, Yuexing

    2014-12-01

    Observations of the redshifted 21-cm line with forthcoming radio telescopes promise to transform our understanding of the cosmic reionization. To unravel the underlying physical process, we investigate the 21-cm structures of three different ionizing sources - Population (Pop) III stars, the first galaxies and the first quasars - by using radiative transfer simulations that include both ionization of neutral hydrogen and resonant scattering of Lyα photons. We find that Pop III stars and quasars produce a smooth transition from an ionized and hot state to a neutral and cold state, because of their hard spectral energy distribution with abundant ionizing photons, in contrast to the sharp transition in galaxies. Furthermore, Lyα scattering plays a dominant role in producing the 21-cm signal because it determines the relation between hydrogen spin temperature and gas kinetic temperature. This effect, also called Wouthuysen-Field coupling, depends strongly on the ionizing source. It is strongest around galaxies, where the spin temperature is highly coupled to that of the gas, resulting in extended absorption troughs in the 21-cm brightness temperature. However, in the case of Pop III stars, the 21-cm signal shows both emission and absorption regions around a small H II bubble. For quasars, a large emission region in the 21-cm signal is produced, and the absorption region decreases as the size of the H II bubble becomes large due to the limited travelling time of photons. We predict that future surveys from large radio arrays, such as the Murchison Widefield Array, the Low Frequency Array and the Square Kilometre Array, might be able to detect the 21-cm signals of primordial galaxies and quasars, but possibly not those of Pop III stars, because of their small angular diameters.

  5. Unveiling the nature of dark matter with high redshift 21 cm line experiments

    SciTech Connect

    Evoli, C.; Mesinger, A.; Ferrara, A. E-mail: andrei.mesinger@sns.it

    2014-11-01

    Observations of the redshifted 21 cm line from neutral hydrogen will open a new window on the early Universe. By influencing the thermal and ionization history of the intergalactic medium (IGM), annihilating dark matter (DM) can leave a detectable imprint in the 21 cm signal. Building on the publicly available 21cmFAST code, we compute the 21 cm signal for a 10 GeV WIMP DM candidate. The most pronounced role of DM annihilations is in heating the IGM earlier and more uniformly than astrophysical sources of X-rays. This leaves several unambiguous, qualitative signatures in the redshift evolution of the large-scale (k ≅ 0.1 Mpc⁻¹) 21 cm power amplitude: (i) the local maximum (peak) associated with IGM heating can be lower than the other maxima; (ii) the heating peak can occur while the IGM is in emission against the cosmic microwave background (CMB); (iii) there can be a dramatic drop in power (a global minimum) corresponding to the epoch when the IGM temperature is comparable to the CMB temperature. These signatures are robust to astrophysical uncertainties, and will be easily detectable with second generation interferometers. We also briefly show that decaying warm dark matter has a negligible role in heating the IGM.

  6. Canadian Hydrogen Intensity Mapping Experiment (CHIME) pathfinder

    NASA Astrophysics Data System (ADS)

    Bandura, Kevin; Addison, Graeme E.; Amiri, Mandana; Bond, J. Richard; Campbell-Wilson, Duncan; Connor, Liam; Cliche, Jean-François; Davis, Greg; Deng, Meiling; Denman, Nolan; Dobbs, Matt; Fandino, Mateus; Gibbs, Kenneth; Gilbert, Adam; Halpern, Mark; Hanna, David; Hincks, Adam D.; Hinshaw, Gary; Höfer, Carolin; Klages, Peter; Landecker, Tom L.; Masui, Kiyoshi; Mena Parra, Juan; Newburgh, Laura B.; Pen, Ue-li; Peterson, Jeffrey B.; Recnik, Andre; Shaw, J. Richard; Sigurdson, Kris; Sitwell, Mike; Smecher, Graeme; Smegal, Rick; Vanderlinde, Keith; Wiebe, Don

    2014-07-01

    A pathfinder version of CHIME (the Canadian Hydrogen Intensity Mapping Experiment) is currently being commissioned at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, BC. The instrument is a hybrid cylindrical interferometer designed to measure the large scale neutral hydrogen power spectrum across the redshift range 0.8 to 2.5. The power spectrum will be used to measure the baryon acoustic oscillation (BAO) scale across this poorly probed redshift range where dark energy becomes a significant contributor to the evolution of the Universe. The instrument revives the cylinder design in radio astronomy with a wide field survey as a primary goal. Modern low-noise amplifiers and digital processing remove the necessity for the analog beam forming that characterized previous designs. The Pathfinder consists of two cylinders 37m long by 20m wide oriented north-south for a total collecting area of 1,500 square meters. The cylinders are stationary with no moving parts, and form a transit instrument with an instantaneous field of view of ~100 degrees by 1-2 degrees. Each CHIME Pathfinder cylinder has a feedline with 64 dual polarization feeds placed every ~30 cm which Nyquist sample the north-south sky over much of the frequency band. The signals from each dual-polarization feed are independently amplified, filtered to 400-800 MHz, and directly sampled at 800 MSps using 8 bits. The correlator is an FX design, where the Fourier transform channelization is performed in FPGAs, which are interfaced to a set of GPUs that compute the correlation matrix. The CHIME Pathfinder is a 1/10th scale prototype version of CHIME and is designed to detect the BAO feature and constrain the distance-redshift relation. The lessons learned from its implementation will be used to inform and improve the final CHIME design.
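
    The FX architecture described above, Fourier-transform channelization first and cross-multiplication second, can be captured in a few lines. A simplified sketch using plain FFT blocks (CHIME's real correlator uses a polyphase filter bank in FPGAs and a GPU cross-multiply; this is only the conceptual skeleton):

```python
import numpy as np

def fx_correlate(stream_a, stream_b, n_chan):
    """Toy FX correlator: channelize real-sampled streams with an FFT (F),
    then cross-multiply and time-average (X) into one visibility spectrum."""
    block = 2 * n_chan                       # real samples per output spectrum
    n_spec = len(stream_a) // block
    spec_a = np.fft.rfft(np.reshape(stream_a[:n_spec * block], (n_spec, block)), axis=1)
    spec_b = np.fft.rfft(np.reshape(stream_b[:n_spec * block], (n_spec, block)), axis=1)
    # Visibility = time-averaged cross power in each frequency channel
    return (spec_a * np.conj(spec_b)).mean(axis=0)   # n_chan + 1 channels
```

    The design choice the abstract highlights is exactly this split: the per-element F stage scales linearly with the number of feeds, while the X stage carries the quadratic pairwise cost, which redundancy and careful hardware mapping keep affordable.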

  7. 21-cm Observations with the Morehead Radio Telescope: Involving Undergraduates in Observing Programs

    NASA Astrophysics Data System (ADS)

    Malphrus, B. K.; Combs, M. S.; Kruth, J.

    2000-12-01

    Herein we report astronomical observations made by undergraduate students with the Morehead Radio Telescope (MRT). The MRT, located at Morehead State University, Morehead, Kentucky, is a small-aperture (44-ft.) instrument designed by faculty, students, and industrial partners to provide a research instrument and active laboratory for undergraduate astronomy, physics, pre-engineering, and computer science students. Small aperture telescopes like the MRT have numerous advantages as active laboratories and as research instruments. The benefits to students are based upon a hands-on approach to learning concepts in astrophysics and engineering. Students are provided design and research challenges and are allowed to pursue their own solutions. Problem-solving abilities and research design skills are cultivated by this approach. Additionally, there are still contributions that small aperture centimeter-wave instruments can make. The MRT operates over a 6 MHz bandwidth centered at 1420 MHz (21-cm), which corresponds to the hyperfine transition of atomic hydrogen (HI). The HI spatial distribution and flux density associated with cosmic phenomena can be observed and mapped. The dynamics and kinematics of celestial objects can be investigated by observing over a range of frequencies (up to 2.5 MHz) with a 2048-channel back-end spectrometer, providing up to 1 kHz frequency resolution. The sensitivity and versatility of the telescope design facilitate investigation of a wide variety of cosmic phenomena, including supernova remnants, emission and planetary nebulae, extended HI emission from the Milky Way, quasars, radio galaxies, and the sun. Student observations of galactic sources herein reported include Taurus A, Cygnus X, and the Rosette Nebula. Additionally, we report observations of extragalactic phenomena, including Cygnus A, 3C 147, and 3C 146. These observations serve as a performance and capability test-bed of the MRT. In addition to the astronomical results of these

  8. 21-cm radiation: a new probe of variation in the fine-structure constant.

    PubMed

    Khatri, Rishi; Wandelt, Benjamin D

    2007-03-16

    We investigate the effect of variation in the value of the fine-structure constant (alpha) at high redshifts (recombination > z > 30) on the absorption of the cosmic microwave background (CMB) at the 21 cm hyperfine transition of neutral atomic hydrogen. We find that the 21 cm signal is very sensitive to variations in alpha, and it is so far the only probe of the fine-structure constant in this redshift range. A change in the value of alpha by 1% changes the mean brightness temperature decrement of the CMB due to 21 cm absorption by >5% over the redshift range z < 50. There is an effect of similar magnitude on the amplitude of the fluctuations in the brightness temperature. The redshift of maximum absorption also changes by approximately 5%. PMID:17501040
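
    One way to see why the 21 cm line is so sensitive to alpha: the hyperfine splitting carries a factor of alpha² g_p (m_e/m_p) on top of the Rydberg energy, which is itself proportional to alpha², so the rest-frame line frequency scales as alpha⁴ and every spectral feature shifts four times faster than alpha. The sketch below illustrates only this leading-order frequency scaling; the brightness-temperature sensitivity quoted in the abstract involves additional physics:

```python
NU_21_MHZ = 1420.405751768     # rest-frame 21-cm frequency at the measured alpha

def nu21_for_alpha_shift(d_alpha_over_alpha):
    """Rest-frame 21-cm frequency if alpha differed by the given fraction,
    using the leading-order scaling nu_21 proportional to alpha^4."""
    return NU_21_MHZ * (1.0 + d_alpha_over_alpha) ** 4

def observed_freq_mhz(z, d_alpha_over_alpha=0.0):
    """Frequency at which 21-cm absorption from redshift z is observed today."""
    return nu21_for_alpha_shift(d_alpha_over_alpha) / (1.0 + z)
```

    A 1% change in alpha shifts the rest frequency by about 4%, which remaps every observed frequency to a different redshift, consistent with the ~5% shift in the redshift of maximum absorption noted above.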

  9. Predictions for the 21 cm-galaxy cross-power spectrum observable with LOFAR and Subaru

    NASA Astrophysics Data System (ADS)

    Vrbanec, Dijana; Ciardi, Benedetta; Jelić, Vibor; Jensen, Hannes; Zaroubi, Saleem; Fernandez, Elizabeth R.; Ghosh, Abhik; Iliev, Ilian T.; Kakiichi, Koki; Koopmans, Léon V. E.; Mellema, Garrelt

    2016-03-01

    The 21 cm-galaxy cross-power spectrum is expected to be one of the promising probes of the Epoch of Reionization (EoR), as it could offer information about the progress of reionization and the typical scale of ionized regions at different redshifts. With upcoming observations of 21 cm emission from the EoR with the Low Frequency Array (LOFAR), and of high-redshift Ly α emitters with Subaru's Hyper Suprime-Cam (HSC), we investigate the observability of such a cross-power spectrum with these two instruments, which are both planning to observe the ELAIS-N1 field at z = 6.6. In this paper, we use N-body + radiative transfer (both for continuum and Ly α photons) simulations at redshift 6.68, 7.06 and 7.3 to compute the 3D theoretical 21 cm-galaxy cross-power spectrum and cross-correlation function, as well as to predict the 2D 21 cm-galaxy cross-power spectrum and cross-correlation function expected to be observed by LOFAR and HSC. Once noise and projection effects are accounted for, our predictions of the 21 cm-galaxy cross-power spectrum show clear anti-correlation on scales larger than ˜60 h⁻¹ Mpc (corresponding to k ˜ 0.1 h Mpc⁻¹), with levels of significance p = 0.003 at z = 6.6 and p = 0.08 at z = 7.3. On smaller scales, instead, the signal is completely contaminated. On the other hand, our 21 cm-galaxy cross-correlation function is strongly contaminated by noise on all scales, since the noise is no longer being separated by its k modes.

  10. From Darkness to Light: Signatures of the Universe's First Galaxies in the Cosmic 21-cm Background

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    Within the first billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this Epoch of Reionization -- the emergence of the first stars, black holes, and full-fledged galaxies -- are expected to manifest as spectral "turning points" in the sky-averaged ("global") 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) required to model the signal. In this thesis, I make the first attempt to build the final piece of a global 21-cm data analysis pipeline: an inference tool capable of extracting the properties of the IGM and the Universe's first galaxies from the recovered signal. Such a framework is valuable even prior to a detection of the global 21-cm signal as it enables end-to-end simulations of 21-cm observations that can be used to optimize the design of upcoming instruments, their observing strategies, and their signal extraction algorithms. En route to a complete pipeline, I found that (1) robust limits on the physical properties of the IGM, such as its temperature and ionization state, can be derived analytically from the 21-cm turning points within two-zone models for the IGM, (2) improved constraints on the IGM properties can be obtained through simultaneous fitting of the global 21-cm signal and foregrounds, though biases can emerge depending on the parameterized form of the signal one adopts, (3) a simple four-parameter galaxy formation model can be constrained in only 100 hours of integration provided a stable instrumental response over a broad frequency range (~80 MHz), and (4) frequency-dependent RT solutions in physical models for the global 21-cm signal will be required to properly interpret the 21-cm absorption minimum, as the IGM thermal history is highly sensitive to the

  11. The Importance of Wide-field Foreground Removal for 21 cm Cosmology: A Demonstration with Early MWA Epoch of Reionization Observations

    NASA Astrophysics Data System (ADS)

    Pober, J. C.; Hazelton, B. J.; Beardsley, A. P.; Barry, N. A.; Martinot, Z. E.; Sullivan, I. S.; Morales, M. F.; Bell, M. E.; Bernardi, G.; Bhat, N. D. R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Deshpande, A. A.; Dillon, Joshua. S.; Emrich, D.; Ewall-Wice, A. M.; Feng, L.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, Han-Seek; Kittiwisit, P.; Kratzenberg, E.; Kudryavtseva, N.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, Sourabh; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Sethi, Shiv K.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wyithe, J. S. B.

    2016-03-01

    In this paper we present observations, simulations, and analysis demonstrating the direct connection between the location of foreground emission on the sky and its location in cosmological power spectra from interferometric redshifted 21 cm experiments. We begin with a heuristic formalism for understanding the mapping of sky coordinates into the cylindrically averaged power spectra measurements used by 21 cm experiments, with a focus on the effects of the instrument beam response and the associated sidelobes. We then demonstrate this mapping by analyzing power spectra with both simulated and observed data from the Murchison Widefield Array. We find that removing a foreground model that includes sources in both the main field of view and the first sidelobes reduces the contamination in high k∥ modes by several per cent relative to a model that only includes sources in the main field of view, with the completeness of the foreground model setting the principal limitation on the amount of power removed. While small, a percent-level amount of foreground power is in itself more than enough to prevent recovery of any Epoch of Reionization signal from these modes. This result demonstrates that foreground subtraction for redshifted 21 cm experiments is truly a wide-field problem, and algorithms and simulations must extend beyond the instrument’s main field of view to potentially recover the full 21 cm power spectrum.

  12. Reionization on Large Scales. IV. Predictions for the 21 cm Signal Incorporating the Light Cone Effect

    NASA Astrophysics Data System (ADS)

    La Plante, P.; Battaglia, N.; Natarajan, A.; Peterson, J. B.; Trac, H.; Cen, R.; Loeb, A.

    2014-07-01

    We present predictions for the 21 cm brightness temperature power spectrum during the Epoch of Reionization (EoR). We discuss the implications of the "light cone" effect, which incorporates evolution of the neutral hydrogen fraction and 21 cm brightness temperature along the line of sight. Using a novel method calibrated against radiation-hydrodynamic simulations, we model the neutral hydrogen density field and 21 cm signal in large volumes (L = 2 Gpc h⁻¹). The inclusion of the light cone effect leads to a relative decrease of about 50% in the 21 cm power spectrum on all scales. We also find that the effect is more prominent at the midpoint of reionization and later. The light cone effect can also introduce an anisotropy along the line of sight. By decomposing the 3D power spectrum into components perpendicular to and along the line of sight, we find that in our fiducial reionization model, there is no significant anisotropy. However, parallel modes can contribute up to 40% more power for shorter reionization scenarios. The scales on which the light cone effect is relevant are comparable to scales where one measures the baryon acoustic oscillation. We argue that due to its large comoving scale and introduction of anisotropy, the light cone effect is important when considering redshift space distortions and future application to the Alcock-Paczyński test for the determination of cosmological parameters.
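
    Decomposing a 3D power spectrum into components perpendicular to and along the line of sight, as done here to isolate the light cone anisotropy, amounts to cylindrical binning in (k_perp, k_par). A generic sketch (the grid and binning conventions are assumptions, not the authors' pipeline):

```python
import numpy as np

def cylindrical_power(field, box_size, n_perp=8, n_par=8):
    """Bin the power of a cubic field into (k_perp, k_par) cells, with
    axis 2 of the array taken as the line of sight."""
    n = field.shape[0]
    fk = np.fft.fftn(field - field.mean())
    p3d = (np.abs(fk) ** 2) * box_size**3 / n**6
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kperp = np.sqrt(kx**2 + ky**2).ravel()   # transverse wavenumber
    kpar = np.abs(kz).ravel()                # line-of-sight wavenumber
    ip = np.clip(np.digitize(kperp, np.linspace(0, kperp.max(), n_perp + 1)), 1, n_perp) - 1
    jp = np.clip(np.digitize(kpar, np.linspace(0, kpar.max(), n_par + 1)), 1, n_par) - 1
    flat = ip * n_par + jp
    counts = np.bincount(flat, minlength=n_perp * n_par)
    sums = np.bincount(flat, weights=p3d.ravel(), minlength=n_perp * n_par)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (sums / counts).reshape(n_perp, n_par)  # NaN marks empty cells
```

    An isotropic field yields a P(k_perp, k_par) that depends only on the combined modulus, whereas line-of-sight evolution of the kind discussed above adds excess power along the k_par direction.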

  13. New H I 21-cm absorbers at low and intermediate redshifts

    NASA Astrophysics Data System (ADS)

    Zwaan, M. A.; Liske, J.; Péroux, C.; Murphy, M. T.; Bouché, N.; Curran, S. J.; Biggs, A. D.

    2015-10-01

    We present the results of a survey for intervening H I 21-cm absorbers at intermediate and low redshift (0 < z < 1.2). For our total sample of 24 systems, we obtained high-quality data for 17 systems, the other seven being severely affected by radio frequency interference (RFI). Five of our targets are low-redshift (z < 0.17) optical galaxies with small impact parameters (<20 kpc) towards radio-bright background sources. Two of these were detected in 21-cm absorption, showing narrow, high optical depth absorption profiles, the narrowest having a velocity dispersion of only 1.5 km s^-1, which puts an upper limit on the kinetic temperature of Tk < 270 K. Combining our observations with results from the literature, we measure a weak anticorrelation between impact parameter and integral optical depth in local (z < 0.5) 21-cm absorbers. Of 11 Ca II and Mg II systems searched, two were detected in 21-cm absorption, and six were affected by RFI to a level that precludes a detection. For these two systems at z ˜ 0.6, we measure spin temperatures of Ts = (65 ± 17) K and Ts > 180 K. A subset of our systems was also searched for OH absorption, but no detections were made.

  14. Reionization on large scales. IV. Predictions for the 21 cm signal incorporating the light cone effect

    SciTech Connect

    La Plante, P.; Battaglia, N.; Natarajan, A.; Peterson, J. B.; Trac, H.; Cen, R.; Loeb, A.

    2014-07-01

    We present predictions for the 21 cm brightness temperature power spectrum during the Epoch of Reionization (EoR). We discuss the implications of the 'light cone' effect, which incorporates evolution of the neutral hydrogen fraction and 21 cm brightness temperature along the line of sight. Using a novel method calibrated against radiation-hydrodynamic simulations, we model the neutral hydrogen density field and 21 cm signal in large volumes (L = 2 Gpc h^-1). The inclusion of the light cone effect leads to a relative decrease of about 50% in the 21 cm power spectrum on all scales. We also find that the effect is more prominent at the midpoint of reionization and later. The light cone effect can also introduce an anisotropy along the line of sight. By decomposing the 3D power spectrum into components perpendicular to and along the line of sight, we find that in our fiducial reionization model, there is no significant anisotropy. However, parallel modes can contribute up to 40% more power for shorter reionization scenarios. The scales on which the light cone effect is relevant are comparable to scales where one measures the baryon acoustic oscillation. We argue that due to its large comoving scale and introduction of anisotropy, the light cone effect is important when considering redshift space distortions and future application to the Alcock-Paczyński test for the determination of cosmological parameters.

  15. Bayesian constraints on the global 21-cm signal from the Cosmic Dawn

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; Zwart, J. T. L.; Price, D.; Greenhill, L. J.; Mesinger, A.; Dowell, J.; Eftekhari, T.; Ellingson, S. W.; Kocz, J.; Schinzel, F.

    2016-09-01

    The birth of the first luminous sources and the ensuing epoch of reionization are best studied via the redshifted 21-cm emission line, the signature of the first two imprinting the last. In this work, we present a fully Bayesian method, HIBAYES, for extracting the faint, global (sky-averaged) 21-cm signal from the much brighter foreground emission. We show that a simplified (but plausible) Gaussian model of the 21-cm emission from the Cosmic Dawn epoch (15 ≲ z ≲ 30), parametrized by an amplitude A_{H I}, a frequency peak ν_{H I} and a width σ_{H I}, can be extracted even in the presence of a structured foreground frequency spectrum (parametrized as a seventh-order polynomial), provided there is sufficient signal-to-noise (400 h of observation with a single dipole). We apply our method to an early, 19-min-long observation from the Large aperture Experiment to detect the Dark Ages, constraining the 21-cm signal amplitude and width to be -890 < A_{H I} < 0 mK and σ_{H I} > 6.5 MHz (corresponding to Δz > 1.9 at redshift z ≃ 20), respectively, at the 95 per cent confidence level in the range 13.2 < z < 27.4 (100 > ν > 50 MHz).
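
    The parametrization quoted above (a smooth polynomial foreground plus a Gaussian 21-cm feature with amplitude A_{H I}, centre ν_{H I} and width σ_{H I}) can be written down directly. The sketch below is illustrative only: the log-frequency polynomial convention, the 75 MHz pivot, and the toy coefficient values are assumptions, not HIBAYES internals.

```python
import numpy as np

def model_spectrum(nu, fg_coeffs, a_hi, nu_hi, sigma_hi):
    """Sky-averaged model: polynomial foreground plus a Gaussian 21-cm
    feature with amplitude a_hi (mK, negative for absorption), centre
    nu_hi and width sigma_hi (MHz). The polynomial in log(nu / 75 MHz)
    is an assumed convention, not the paper's exact basis."""
    foreground = np.polyval(fg_coeffs, np.log(nu / 75.0))
    signal = a_hi * np.exp(-0.5 * ((nu - nu_hi) / sigma_hi) ** 2)
    return foreground + signal

nu = np.linspace(50.0, 100.0, 256)       # MHz, the band quoted above
fg_coeffs = [0.0, -2.5, 8.0]             # toy foreground coefficients
spec = model_spectrum(nu, fg_coeffs, a_hi=-100.0, nu_hi=70.0, sigma_hi=8.0)

# Depth of the 21-cm feature relative to the foreground at its centre:
fg_only = model_spectrum(nu, fg_coeffs, 0.0, 70.0, 8.0)
depth = (spec - fg_only)[np.argmin(np.abs(nu - 70.0))]
print(round(depth, 1))                   # → -100.0
```

    In the Bayesian fit described above, the seven polynomial coefficients and the three signal parameters would be sampled jointly, with the posterior on (a_hi, nu_hi, sigma_hi) giving constraints of the kind quoted in the abstract.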

  16. Galaxy-cluster masses via 21st-century measurements of lensing of 21-cm fluctuations

    NASA Astrophysics Data System (ADS)

    Kovetz, Ely D.; Kamionkowski, Marc

    2013-03-01

    We discuss the prospects to measure galaxy-cluster properties via weak lensing of 21-cm fluctuations from the dark ages and the epoch of reionization (EOR). We choose as a figure of merit the smallest cluster mass detectable through such measurements. We construct the minimum-variance quadratic estimator for the cluster mass based on lensing of 21-cm fluctuations at multiple redshifts. We discuss the tradeoff among frequency bandwidth, angular resolution, and the number of redshift shells available for a fixed noise level for the radio detectors. Observations of lensing of the 21-cm background from the dark ages will be capable of detecting M ≳ 10^12 h^-1 M⊙ mass halos, but will require futuristic experiments to overcome the contaminating sources. Next-generation radio measurements of 21-cm fluctuations from the EOR will, however, have the sensitivity to detect galaxy clusters with halo masses M ≳ 10^13 h^-1 M⊙, given enough observation time (for the relevant sky patch) and collecting area to maximize their resolution capabilities.

  17. The TIME-Pilot intensity mapping experiment

    NASA Astrophysics Data System (ADS)

    Crites, A. T.; Bock, J. J.; Bradford, C. M.; Chang, T. C.; Cooray, A. R.; Duband, L.; Gong, Y.; Hailey-Dunsheath, S.; Hunacek, J.; Koch, P. M.; Li, C. T.; O'Brient, R. C.; Prouve, T.; Shirokoff, E.; Silva, M. B.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2014-08-01

    TIME-Pilot is designed to make measurements from the Epoch of Reionization (EoR), when the first stars and galaxies formed and ionized the intergalactic medium. This will be done via measurements of the redshifted 157.7 μm line of singly ionized carbon ([CII]). In particular, TIME-Pilot will produce the first detection of [CII] clustering fluctuations, a signal proportional to the integrated [CII] intensity, summed over all EoR galaxies. TIME-Pilot is thus sensitive to the emission from dwarf galaxies, thought to be responsible for the balance of ionizing UV photons, which will be difficult to detect individually with JWST and ALMA. A detection of [CII] clustering fluctuations would validate current theoretical estimates of the [CII] line as a new cosmological observable, opening the door for a new generation of instruments with advanced technology spectroscopic array focal planes that will map [CII] fluctuations to probe the EoR history of star formation, bubble size, and ionization state. Additionally, TIME-Pilot will produce high signal-to-noise measurements of CO clustering fluctuations, which trace the role of molecular gas in star-forming galaxies at redshifts 0 < z < 2. With its unique atmospheric noise mitigation, TIME-Pilot also significantly improves sensitivity for measuring the kinetic Sunyaev-Zel'dovich (kSZ) effect in galaxy clusters. TIME-Pilot will employ a linear array of spectrometers, each consisting of a parallel-plate diffraction grating. The spectrometer bandwidth covers 185-323 GHz, both to probe the entire redshift range of interest and to include channels at the edges of the band for atmospheric noise mitigation. We illuminate the telescope with f/3 horns, a choice that balances coupling to the sky with the best efficiency per beam against packing a large number of horns into the fixed field of view. Feedhorns couple radiation to the waveguide spectrometer gratings. Each spectrometer grating has 190 facets and provides resolving power

  18. The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes

    NASA Astrophysics Data System (ADS)

    Lopez-Honorez, Laura; Mena, Olga; Moliné, Ángeles; Palomares-Ruiz, Sergio; Vincent, Aaron C.

    2016-08-01

    Future dedicated radio interferometers, including HERA and SKA, are very promising tools that aim to study the epoch of reionization and beyond via measurements of the 21 cm signal from neutral hydrogen. Dark matter (DM) annihilations into charged particles change the thermal history of the Universe and, as a consequence, affect the 21 cm signal. Accurately predicting the effect of DM strongly relies on the modeling of annihilations inside halos. In this work, we use up-to-date computations of the energy deposition rates by the products from DM annihilations, a proper treatment of the contribution from DM annihilations in halos, as well as values of the annihilation cross section allowed by the most recent cosmological measurements from the Planck satellite. Given current uncertainties on the description of the astrophysical processes driving the epochs of reionization, X-ray heating and Lyman-α pumping, we find that disentangling DM signatures from purely astrophysical effects, related to early-time star formation processes or late-time galaxy X-ray emissions, will be a challenging task. We conclude that only annihilations of DM particles with masses of ~100 MeV, could leave an unambiguous imprint on the 21 cm signal and, in particular, on the 21 cm power spectrum. This is in contrast to previous, more optimistic results in the literature, which have claimed that strong signatures might also be present even for much higher DM masses. Additional measurements of the 21 cm signal at different cosmic epochs will be crucial in order to break the strong parameter degeneracies between DM annihilations and astrophysical effects and undoubtedly single out a DM imprint for masses different from ~100 MeV.

  19. THE SIGNATURES OF PARTICLE DECAY IN 21 cm ABSORPTION FROM THE FIRST MINIHALOS

    SciTech Connect

    Vasiliev, Evgenii O.; Shchekinov, Yuri A. E-mail: yus@sfedu.ru

    2013-11-01

    The imprint of decaying dark matter (DM) particles on the characteristics of the "21 cm forest" (absorption at 21 cm from minihalos in the spectra of distant radio-loud sources) is considered within a one-dimensional, self-consistent hydrodynamic description of minihalos from their turnaround point to virialization. The most pronounced influence of decaying DM on the evolution of minihalos is found in the mass range M = 10^5-10^6 M☉, for which unstable DM with a current upper limit on its ionization rate of ξ_L = 0.59 × 10^-25 s^-1 reduces the 21 cm optical depth by an order of magnitude compared with the standard recombination scenario. Even a rather modest ionization, ξ ∼ 0.3ξ_L, practically erases absorption features and results in a considerable decrease (by a factor of more than 2.5) in the number of strong (W_ν^obs ≳ 0.3 kHz at z ≅ 10) absorptions. In such circumstances, broadband observations are more suitable for inferring the physical conditions of the absorbing gas. X-ray photons from stellar activity of the initial episodes of star formation can compete with the contribution from decaying DM only at z < 10. Therefore, observing the 21 cm signal will allow us to follow the evolution of decaying DM particles in the redshift range z = 10-15. On the other hand, a non-detection of the 21 cm signal in the frequency range ν < 140 MHz can establish a lower limit on the ionization rate from decaying DM.

  20. Simulating the 21 cm signal from reionization including non-linear ionizations and inhomogeneous recombinations

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2016-04-01

    We explore the impact of incorporating physically motivated ionization and recombination rates on the history and topology of cosmic reionization and the resulting 21 cm power spectrum, by incorporating inputs from small-volume hydrodynamic simulations into our semi-numerical code, SIMFAST21, that evolves reionization on large scales. We employ radiative hydrodynamic simulations to parametrize the ionization rate Rion and recombination rate Rrec as functions of halo mass, overdensity and redshift. We find that Rion scales superlinearly with halo mass (Rion ∝ M_h^1.41), in contrast to previous assumptions. Implementing these scalings into SIMFAST21, we tune our one free parameter, the escape fraction fesc, to simultaneously reproduce recent observations of the Thomson optical depth, ionizing emissivity and volume-averaged neutral fraction by the end of reionization. This yields fesc = 4^{+7}_{-2} per cent averaged over our 0.375 h^-1 Mpc cells, independent of halo mass or redshift, increasing to 6 per cent if we also constrain to match the observed z = 7 star formation rate function. Introducing superlinear Rion increases the duration of reionization and boosts small-scale 21 cm power by two to three times at intermediate phases of reionization, while inhomogeneous recombinations reduce ionized bubble sizes and suppress large-scale 21 cm power by two to three times. Gas clumping on sub-cell scales has a minimal effect on the 21 cm power. Superlinear Rion also significantly increases the median halo mass scale for ionizing photon output to ˜10^10 M⊙, making the majority of reionizing sources more accessible to next-generation facilities. These results highlight the importance of accurately treating ionizing sources and recombinations for modelling reionization and its 21 cm power spectrum.
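
    The superlinear scaling above is simple to encode. In this sketch only the exponent 1.41 and the best-fitting escape fraction of about 4 per cent are taken from the abstract; the function name, pivot mass, and normalization `r0` are hypothetical placeholders.

```python
def ionizing_photon_rate(m_halo, f_esc=0.04, r0=1.0e40):
    """Escaping ionizing-photon rate for a halo of mass m_halo (solar
    masses): R_ion ∝ M_h^1.41 as found above, scaled by an escape
    fraction. r0 is a hypothetical normalization at the 1e10 Msun pivot."""
    return f_esc * r0 * (m_halo / 1.0e10) ** 1.41

# Superlinear scaling: doubling the halo mass boosts output by 2^1.41 ≈ 2.66,
# which is why high-mass halos dominate the photon budget in this model.
ratio = ionizing_photon_rate(2.0e10) / ionizing_photon_rate(1.0e10)
print(round(ratio, 2))   # → 2.66
```

    With a linear scaling (exponent 1.0) the same doubling would only double the output; the extra factor is what shifts the median ionizing-source mass upward, as the abstract notes.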

  1. Cosmologically probing ultra-light particle dark matter using 21 cm signals

    SciTech Connect

    Kadota, Kenji; Mao, Yi; Silk, Joseph; Ichiki, Kiyomoto E-mail: mao@iap.fr E-mail: j.silk1@physics.ox.ac.uk

    2014-06-01

    There can arise ubiquitous ultra-light scalar fields in the Universe, such as the pseudo-Goldstone bosons from the spontaneous breaking of an approximate symmetry, which can make a partial contribution to the dark matter and affect the large scale structure of the Universe. While the properties of such ultra-light dark matter are heavily model dependent and can vary over a wide range, we develop a model-independent analysis to forecast the constraints on their mass and abundance using futuristic but realistic 21 cm observables as well as CMB fluctuations, including CMB lensing measurements. Avoiding the highly nonlinear regime, the 21 cm emission line spectra are most sensitive to ultra-light dark matter with mass m ∼ 10^-26 eV, for which the precision attainable on mass and abundance bounds can be of the order of a few percent.

  2. THE APPLICATION OF CONTINUOUS WAVELET TRANSFORM BASED FOREGROUND SUBTRACTION METHOD IN 21 cm SKY SURVEYS

    SciTech Connect

    Gu Junhua; Xu Haiguang; Wang Jingying; Chen Wen; An Tao

    2013-08-10

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method rests on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures; their characteristic scales thus differ significantly, so the two can easily be distinguished in wavelet-coefficient space and the foreground subtraction performed there. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has an uncorrected response error, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
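
    The scale-separation idea behind the method can be illustrated with a hand-rolled Haar wavelet pyramid; this is a discrete toy stand-in for the paper's continuous transform, and the power-law foreground, decomposition depth, and thresholding choice are all illustrative assumptions.

```python
import numpy as np

def haar_decompose(x, levels):
    """Haar pyramid: repeatedly split into pairwise averages (coarse part)
    and half-differences (fine-scale detail coefficients)."""
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / 2.0)
        a = pairs.mean(axis=1)
    return a, details

def haar_reconstruct(a, details):
    """Exact inverse of haar_decompose."""
    for det in reversed(details):
        out = np.empty(a.size * 2)
        out[0::2], out[1::2] = a + det, a - det
        a = out
    return a

nu = np.linspace(120.0, 180.0, 256)        # MHz
foreground = 80.0 * (nu / 150.0) ** -2.5   # smooth synchrotron-like law (toy)
a_fg, d_fg = haar_decompose(foreground, levels=4)

# A smooth spectrum concentrates at coarse scales: its finest-detail
# coefficients are far smaller than its coarse averages ...
print(np.abs(d_fg[0]).max() < 0.01 * np.abs(a_fg).max())   # → True

# ... so zeroing fine-scale coefficients before reconstructing gives a
# foreground model, while the saw-tooth 21 cm term survives in the residual.
fg_model = haar_reconstruct(a_fg, [np.zeros_like(d) for d in d_fg])
print(np.allclose(haar_reconstruct(a_fg, d_fg), foreground))  # → True
```

    The pyramid is exactly invertible, so whatever is removed from the coefficient space is removed from the spectrum and nothing else; that property is what makes thresholding in coefficient space a clean subtraction scheme.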

  3. The Effects of Polarized Foregrounds on 21 cm Epoch of Reionization Power Spectrum Measurements

    NASA Astrophysics Data System (ADS)

    Moore, David F.; Aguirre, James E.; Parsons, Aaron R.; Jacobs, Daniel C.; Pober, Jonathan C.

    2013-06-01

    Experiments aimed at detecting highly-redshifted 21 cm emission from the epoch of reionization (EoR) are plagued by the contamination of foreground emission. A potentially important source of contaminating foregrounds may be Faraday-rotated, polarized emission, which leaks into the estimate of the intrinsically unpolarized EoR signal. While these foregrounds' intrinsic polarization may not be problematic, the spectral structure introduced by the Faraday rotation could be. To better understand and characterize these effects, we present a simulation of the polarized sky between 120 and 180 MHz. We compute a single visibility, and estimate the three-dimensional power spectrum from that visibility using the delay spectrum approach presented in Parsons et al. Using the Donald C. Backer Precision Array to Probe the Epoch of Reionization as an example instrument, we show the expected leakage into the unpolarized power spectrum to be several orders of magnitude above the expected 21 cm EoR signal.

  4. THE EFFECTS OF POLARIZED FOREGROUNDS ON 21 cm EPOCH OF REIONIZATION POWER SPECTRUM MEASUREMENTS

    SciTech Connect

    Moore, David F.; Aguirre, James E.; Parsons, Aaron R.; Pober, Jonathan C.; Jacobs, Daniel C.

    2013-06-01

    Experiments aimed at detecting highly-redshifted 21 cm emission from the epoch of reionization (EoR) are plagued by the contamination of foreground emission. A potentially important source of contaminating foregrounds may be Faraday-rotated, polarized emission, which leaks into the estimate of the intrinsically unpolarized EoR signal. While these foregrounds' intrinsic polarization may not be problematic, the spectral structure introduced by the Faraday rotation could be. To better understand and characterize these effects, we present a simulation of the polarized sky between 120 and 180 MHz. We compute a single visibility, and estimate the three-dimensional power spectrum from that visibility using the delay spectrum approach presented in Parsons et al. Using the Donald C. Backer Precision Array to Probe the Epoch of Reionization as an example instrument, we show the expected leakage into the unpolarized power spectrum to be several orders of magnitude above the expected 21 cm EoR signal.

  5. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method rests on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures; their characteristic scales thus differ significantly, so the two can easily be distinguished in wavelet-coefficient space and the foreground subtraction performed there. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has an uncorrected response error, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  6. Numerical simulation of soil brightness temperatures at wavelength of 21 cm

    NASA Technical Reports Server (NTRS)

    Mo, T.; Schmugge, T. J.

    1981-01-01

    A simulation model is applied to reproduce some observed brightness temperatures at a wavelength of 21 cm. The simulated results calculated with two different soil textures are compared directly with observations measured over fields in Arizona and South Dakota. It is found that good agreement is possible by properly adjusting the surface roughness parameter. Correlation analysis and linear regression of the brightness temperatures versus soil moistures are also carried out.

  7. The imprint of the cosmic supermassive black hole growth history on the 21 cm background radiation

    NASA Astrophysics Data System (ADS)

    Tanaka, Takamitsu L.; O'Leary, Ryan M.; Perna, Rosalba

    2016-01-01

    The redshifted 21 cm transition line of hydrogen tracks the thermal evolution of the neutral intergalactic medium (IGM) at `cosmic dawn', during the emergence of the first luminous astrophysical objects (˜100 Myr after the big bang) but before these objects ionized the IGM (˜400-800 Myr after the big bang). Because X-rays, in particular, are likely to be the chief energy courier for heating the IGM, measurements of the 21 cm signature can be used to infer knowledge about the first astrophysical X-ray sources. Using analytic arguments and a numerical population synthesis algorithm, we argue that the progenitors of supermassive black holes (SMBHs) should be the dominant source of hard astrophysical X-rays - and thus the primary driver of IGM heating and the 21 cm signature - at redshifts z ≳ 20, if (i) they grow readily from the remnants of Population III stars and (ii) produce X-rays in quantities comparable to what is observed from active galactic nuclei and high-mass X-ray binaries. We show that models satisfying these assumptions dominate over contributions to IGM heating from stellar populations, and cause the 21 cm brightness temperature to rise at z ≳ 20. An absence of such a signature in the forthcoming observational data would imply that SMBH formation occurred later (e.g. via so-called direct collapse scenarios), that it was not a common occurrence in early galaxies and protogalaxies, or that it produced far fewer X-rays than empirical trends at lower redshifts, either due to intrinsic dimness (radiative inefficiency) or Compton-thick obscuration close to the source.

  8. 21-cm signature of the first sources in the Universe: Prospects of detection with SKA

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.

    2016-04-01

    Currently several low-frequency experiments are being planned to study the nature of the first stars using the redshifted 21-cm signal from the cosmic dawn and epoch of reionization. Using a one-dimensional radiative transfer code, we model the 21-cm signal pattern around the early sources for different source models, i.e., the metal-free Population III (PopIII) stars, primordial galaxies consisting of Population II (PopII) stars, mini-QSOs and high-mass X-ray binaries (HMXBs). We investigate the detectability of these sources by comparing the 21-cm visibility signal with the system noise appropriate for a telescope like the SKA1-low. Upon integrating the visibility around a typical source over all baselines and over a frequency interval of 16 MHz, we find that it will be possible to make a ˜9σ detection of the isolated sources like PopII galaxies, mini-QSOs and HMXBs at z ˜ 15 with the SKA1-low in 1000 hours. The exact value of the signal to noise ratio (SNR) will depend on the source properties, in particular on the mass and age of the source and the escape fraction of ionizing photons. The predicted SNR decreases with increasing redshift. We provide simple scaling laws to estimate the SNR for different values of the parameters which characterize the source and the surrounding medium. We also argue that it will be possible to achieve an SNR ˜9 even in the presence of the astrophysical foregrounds by subtracting out the frequency-independent component of the observed signal. These calculations will be useful in planning 21-cm observations to detect the first sources.

  9. 21-cm signature of the first sources in the Universe: prospects of detection with SKA

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.

    2016-07-01

    Currently several low-frequency experiments are being planned to study the nature of the first stars using the redshifted 21-cm signal from the cosmic dawn and Epoch of Reionization. Using a one-dimensional radiative transfer code, we model the 21-cm signal pattern around the early sources for different source models, i.e. the metal-free Population III (PopIII) stars, primordial galaxies consisting of Population II (PopII) stars, mini-QSOs and high-mass X-ray binaries (HMXBs). We investigate the detectability of these sources by comparing the 21-cm visibility signal with the system noise appropriate for a telescope like the SKA1-low. Upon integrating the visibility around a typical source over all baselines and over a frequency interval of 16 MHz, we find that it will be possible to make a ˜9σ detection of the isolated sources like PopII galaxies, mini-QSOs and HMXBs at z ˜ 15 with the SKA1-low in 1000 h. The exact value of the signal-to-noise ratio (SNR) will depend on the source properties, in particular on the mass and age of the source and the escape fraction of ionizing photons. The predicted SNR decreases with increasing redshift. We provide simple scaling laws to estimate the SNR for different values of the parameters which characterize the source and the surrounding medium. We also argue that it will be possible to achieve an SNR ˜9 even in the presence of the astrophysical foregrounds by subtracting out the frequency-independent component of the observed signal. These calculations will be useful in planning 21-cm observations to detect the first sources.

  10. Cosmic Reionization On Computers. Mean and Fluctuating Redshifted 21 cm Signal

    NASA Astrophysics Data System (ADS)

    Kaurov, Alexander A.; Gnedin, Nickolay Y.

    2016-06-01

    We explore the mean and fluctuating redshifted 21 cm signal in numerical simulations from the Cosmic Reionization On Computers project. We find that the mean signal varies between about ±25 mK. Most significantly, we find that the negative pre-reionization dip at z ˜ 10-15 only extends to ⟨ΔT_B⟩ ˜ -25 mK, requiring substantially higher sensitivity from global signal experiments that operate in this redshift range (EDGES-II, LEDA, SCI-HI, and DARE) than has often been assumed previously. We also explore the role of dense substructure (filaments and embedded galaxies) in the formation of the 21 cm power spectrum. We find that by neglecting the semi-neutral substructure inside ionized bubbles, the power spectrum can be misestimated by 25%-50% at scales k ˜ 0.1-1 h Mpc^-1. This scale range is of particular interest, because the upcoming 21 cm experiments (Murchison Widefield Array, Precision Array for Probing the Epoch of Reionization, Hydrogen Epoch of Reionization Array) are expected to be most sensitive within it.

  11. Statistics of 21-cm fluctuations in cosmic reionization simulations: PDFs and difference PDFs

    NASA Astrophysics Data System (ADS)

    Gluscevic, Vera; Barkana, Rennan

    2010-11-01

    In the coming decade, low-frequency radio arrays will begin to probe the epoch of reionization via the redshifted 21-cm hydrogen line. Successful interpretation of these observations will require effective statistical techniques for analysing the data. Due to the difficulty of these measurements, it is important to develop techniques beyond the standard power-spectrum analysis in order to offer independent confirmation of the reionization history, probe different aspects of the topology of reionization and have different systematic errors. In order to assess the promise of probability distribution functions (PDFs) as statistical analysis tools in 21-cm cosmology, we first measure the 21-cm brightness temperature (one-point) PDFs in six different reionization simulations. We then parametrize their most distinct features by fitting them to a simple model. Using the same simulations, we also present the first measurements of difference PDFs in simulations of reionization. We find that while these statistics probe the properties of the ionizing sources, they are relatively independent of small-scale, subgrid astrophysics. We discuss the additional information that the difference PDF can provide on top of the power spectrum and the one-point PDF.
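
    Both statistics named above are straightforward to estimate from a simulated brightness-temperature box. This sketch uses a Gaussian random box as a stand-in for a reionization simulation; the bin count, pair separation, and field amplitude are arbitrary illustrative choices.

```python
import numpy as np

def one_point_pdf(box, bins=40):
    """One-point PDF of brightness temperature: a normalized histogram
    over all cells of the box."""
    return np.histogram(box.ravel(), bins=bins, density=True)

def difference_pdf(box, shift=4, axis=0, bins=40):
    """Difference PDF: distribution of temperature differences between
    cell pairs separated by `shift` cells along `axis` (periodic box)."""
    diff = box - np.roll(box, shift, axis=axis)
    return np.histogram(diff.ravel(), bins=bins, density=True)

rng = np.random.default_rng(1)
box = 20.0 * rng.normal(size=(64, 64, 64))   # stand-in dT_b field, in mK
pdf, edges = one_point_pdf(box)
dpdf, dedges = difference_pdf(box)

# Both estimates integrate to ~1 by construction (density=True).
print(round(float((pdf * np.diff(edges)).sum()), 6),
      round(float((dpdf * np.diff(dedges)).sum()), 6))   # → 1.0 1.0
```

    For a real reionization box the one-point PDF is strongly non-Gaussian (a spike at 0 mK from ionized cells plus a neutral-gas tail), and scanning `shift` in the difference PDF probes pair separations, which is the separation-dependent information the abstract argues goes beyond the power spectrum.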

  12. OPENING THE 21 cm EPOCH OF REIONIZATION WINDOW: MEASUREMENTS OF FOREGROUND ISOLATION WITH PAPER

    SciTech Connect

    Pober, Jonathan C.; Parsons, Aaron R.; Ali, Zaki; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, Dave; Dexter, Matthew; MacMahon, Dave; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Patricia J.; Manley, Jason; Walbrugh, William P.; Stefan, Irina I.

    2013-05-10

    We present new observations with the Precision Array for Probing the Epoch of Reionization with the aim of measuring the properties of foreground emission for 21 cm epoch of reionization (EoR) experiments at 150 MHz. We focus on the footprint of the foregrounds in cosmological Fourier space to understand which modes of the 21 cm power spectrum will most likely be compromised by foreground emission. These observations confirm predictions that foregrounds can be isolated to a "wedge"-like region of two-dimensional (k⊥, k∥)-space, creating a window for cosmological studies at higher k∥ values. We also find that the emission extends past the nominal edge of this wedge due to spectral structure in the foregrounds, with this feature most prominent on the shortest baselines. Finally, we filter the data to retain only this "unsmooth" emission and image its specific k∥ modes. The resultant images show an excess of power at the lowest modes, but no emission can be clearly localized to any one region of the sky. This image is highly suggestive that the most problematic foregrounds for 21 cm EoR studies will not be easily identifiable bright sources, but rather an aggregate of fainter emission.

  13. Detecting the integrated Sachs-Wolfe effect with high-redshift 21-cm surveys

    NASA Astrophysics Data System (ADS)

    Raccanelli, Alvise; Kovetz, Ely; Dai, Liang; Kamionkowski, Marc

    2016-04-01

    We investigate the possibility of detecting the integrated Sachs-Wolfe (ISW) effect by cross-correlating 21-cm surveys at high redshifts with galaxies in a way similar to the usual CMB-galaxy cross-correlation. The high-redshift 21-cm signal is dominated by CMB photons that travel freely without interacting with the intervening matter, and hence its late-time ISW signature should correlate extremely well with that of the CMB at its peak frequencies. Using the 21-cm temperature brightness instead of the CMB would thus be a further check of the detection of the ISW effect, measured by different instruments at different frequencies and suffering from different systematics. We also study the ISW effect on the photons that are scattered by HI clouds. We show that a detection of the unscattered photons is achievable with planned radio arrays, while one using scattered photons will require advanced radio interferometers, either an extended version of the planned Square Kilometre Array or futuristic experiments such as a lunar radio array.

  14. Cosmic reionization on computers. Mean and fluctuating redshifted 21 CM signal

    DOE PAGES

    Kaurov, Alexander A.; Gnedin, Nickolay Y.

    2016-06-20

    We explore the mean and fluctuating redshifted 21 cm signal in numerical simulations from the Cosmic Reionization On Computers project. We find that the mean signal varies between about ±25 mK. Most significantly, we find that the negative pre-reionization dip at z ~ 10-15 only extends to ⟨ΔT_B⟩ ~ -25 mK, requiring substantially higher sensitivity from global signal experiments that operate in this redshift range (EDGES-II, LEDA, SCI-HI, and DARE) than has often been assumed previously. We also explore the role of dense substructure (filaments and embedded galaxies) in the formation of the 21 cm power spectrum. We find that by neglecting the semi-neutral substructure inside ionized bubbles, the power spectrum can be misestimated by 25%-50% at scales k ~ 0.1-1 h Mpc^-1. Furthermore, this scale range is of particular interest, because the upcoming 21 cm experiments (Murchison Widefield Array, Precision Array for Probing the Epoch of Reionization, Hydrogen Epoch of Reionization Array) are expected to be most sensitive within it.

  15. Signatures of modified gravity on the 21 cm power spectrum at reionisation

    SciTech Connect

    Brax, Philippe

    2013-01-01

    Scalar modifications of gravity have an impact on the growth of structure. Baryon and Cold Dark Matter (CDM) perturbations grow anomalously for scales within the Compton wavelength of the scalar field. In the late-time Universe, when reionisation occurs, the spectrum of the 21 cm brightness temperature is thus affected. We study this effect for chameleon-f(R) models, dilatons and symmetrons. Although the f(R) models are more tightly constrained by solar system bounds, and effects on dilaton models are negligible, we find that symmetrons where the phase transition occurs before z_* ∼ 12 could be detectable for a scalar field range as low as 5 kpc. For all these models, the detection prospects of modified gravity effects are higher when considering modes parallel to the line of sight, where very small scales can be probed. The study of the 21 cm spectrum thus offers a complementary approach to testing modified gravity with large-scale structure surveys. Short scales, which would be highly non-linear in the very late-time Universe when structure forms and where modified gravity effects are screened, appear in the linear spectrum of 21 cm physics, hence deviating from General Relativity in a maximal way.

  16. Opening the 21 cm Epoch of Reionization Window: Measurements of Foreground Isolation with PAPER

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki; Bradley, Richard F.; Carilli, Chris L.; DeBoer, Dave; Dexter, Matthew; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Patricia J.; MacMahon, Dave; Manley, Jason; Moore, David F.; Stefan, Irina I.; Walbrugh, William P.

    2013-05-01

    We present new observations with the Precision Array for Probing the Epoch of Reionization, with the aim of measuring the properties of foreground emission for 21 cm epoch of reionization (EoR) experiments at 150 MHz. We focus on the footprint of the foregrounds in cosmological Fourier space to understand which modes of the 21 cm power spectrum will most likely be compromised by foreground emission. These observations confirm predictions that foregrounds can be isolated to a "wedge"-like region of two-dimensional (k⊥, k∥)-space, creating a window for cosmological studies at higher k∥ values. We also find that the emission extends past the nominal edge of this wedge due to spectral structure in the foregrounds, with this feature most prominent on the shortest baselines. Finally, we filter the data to retain only this "unsmooth" emission and image its specific k∥ modes. The resultant images show an excess of power at the lowest modes, but no emission can be clearly localized to any one region of the sky. These images strongly suggest that the most problematic foregrounds for 21 cm EoR studies will not be easily identifiable bright sources, but rather an aggregate of fainter emission.
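    The wedge geometry described above can be illustrated with a short numerical sketch. This is not the paper's analysis pipeline: it only evaluates the commonly quoted wedge boundary k∥ = sin(θ) · D_c(z) H(z) / [c (1+z)] · k⊥ for an assumed flat ΛCDM cosmology (the H0 and Ωm values below are illustrative round numbers).

    ```python
    import math

    # Assumed flat LambdaCDM parameters (illustrative, not the paper's).
    H0 = 67.7          # km/s/Mpc
    OM = 0.31
    C = 299792.458     # speed of light, km/s

    def hubble(z):
        """H(z) in km/s/Mpc for a flat LambdaCDM cosmology."""
        return H0 * math.sqrt(OM * (1 + z)**3 + (1 - OM))

    def comoving_distance(z, steps=10000):
        """Comoving distance in Mpc via a simple Riemann sum of c/H(z')."""
        dz = z / steps
        return sum(C / hubble(i * dz) * dz for i in range(1, steps + 1))

    def wedge_slope(z, theta_deg=90.0):
        """Ratio k_parallel / k_perp of the foreground wedge boundary.

        theta_deg is the angle from zenith of the contaminating source;
        90 degrees gives the 'horizon wedge' limit.
        """
        return (math.sin(math.radians(theta_deg))
                * comoving_distance(z) * hubble(z) / (C * (1 + z)))

    slope = wedge_slope(8.0)   # z ~ 8 corresponds to ~158 MHz for the 21-cm line
    ```

    Modes with k∥ above `slope * k_perp` lie outside the wedge and form the "EoR window" the abstract refers to.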

  17. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  18. Challenges and opportunities in mapping land use intensity globally☆

    PubMed Central

    Kuemmerle, Tobias; Erb, Karlheinz; Meyfroidt, Patrick; Müller, Daniel; Verburg, Peter H; Estel, Stephan; Haberl, Helmut; Hostert, Patrick; Jepsen, Martin R.; Kastner, Thomas; Levers, Christian; Lindner, Marcus; Plutzar, Christoph; Verkerk, Pieter Johannes; van der Zanden, Emma H; Reenberg, Anette

    2013-01-01

    Future increases in land-based production will need to focus more on sustainably intensifying existing production systems. Unfortunately, our understanding of the global patterns of land use intensity is weak, partly because land use intensity is a complex, multidimensional term, and partly because we lack appropriate datasets to assess land use intensity across broad geographic extents. Here, we review the state of the art regarding approaches for mapping land use intensity and provide a comprehensive overview of available global-scale datasets on land use intensity. We also outline major challenges and opportunities for mapping land use intensity for cropland, grazing, and forestry systems, and identify key issues for future research. PMID:24143157

  19. H I 21 cm ABSORPTION AND UNIFIED SCHEMES OF ACTIVE GALACTIC NUCLEI

    SciTech Connect

    Curran, S. J.; Whiting, M. T.

    2010-03-20

    In a recent study of z >= 0.1 active galactic nuclei (AGNs), we found that 21 cm absorption has never been detected in objects in which the ultraviolet luminosity exceeds L_UV ∼ 10^23 W Hz^-1. In this paper, we further explore the implications that this has for the currently popular consensus that it is the orientation of the circumnuclear obscuring torus, invoked by unified schemes of AGNs, which determines whether absorption is present along our sight line. The fact that at L_UV ≲ 10^23 W Hz^-1 both type-1 and type-2 objects exhibit a 50% probability of detection suggests that this is not the case, and that the bias against detection of H I absorption in type-1 objects is due purely to the inclusion of the L_UV ≳ 10^23 W Hz^-1 sources. Similarly, the ultraviolet luminosities can also explain why the presence of 21 cm absorption shows a preference for radio galaxies over quasars, and the higher detection rate in compact sources, such as compact steep spectrum or gigahertz peaked spectrum sources, may also be biased by the inclusion of high-luminosity sources. Being comprised of all 21 cm searched sources at z >= 0.1, this is a necessarily heterogeneous sample, the constituents of which have been observed by various instruments. By this same token, however, the dependence on the UV luminosity may be an all-encompassing effect, superseding the unified schemes model, although there is the possibility that the exclusive 21 cm non-detections at high UV luminosities could be caused by a bias toward gas-poor ellipticals. Additionally, the high UV fluxes could be sufficiently exciting/ionizing the H I above 21 cm detection thresholds, although the extent to which this is related to the neutral gas deficit in ellipticals is currently unclear. Examining the moderate UV luminosity (L_UV ≲ 10^23 W Hz^-1) sample further, from the profile widths and offsets from the systemic velocities

  20. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M⊙/h < M_halo < 10^13 M⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
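    The halo-to-HI assignment step can be sketched in a few lines. The power-law exponent, normalisation, and cutoff mass below are invented placeholders for illustration, not the phenomenological relation actually calibrated in the paper.

    ```python
    import math

    def hi_mass(m_halo, a=1e8, alpha=0.6, m_cut=1e10):
        """HI mass (Msun/h) assigned to a halo of mass m_halo (Msun/h).

        Power law in halo mass with an exponential suppression below a
        cutoff mass; all constants are illustrative assumptions.
        """
        return a * (m_halo / 1e10)**alpha * math.exp(-m_cut / m_halo)

    def cell_hi_mass(halo_masses):
        """Total HI in one map cell = sum over the halos it contains."""
        return sum(hi_mass(m) for m in halo_masses)

    total = cell_hi_mass([5e9, 2e10, 1e12])
    ```

    The 21-cm brightness temperature of each cell is then taken proportional to its total HI mass, so a halo catalogue plus one such relation is enough to paint a mock intensity map.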

  1. 21CMMC: an MCMC analysis tool enabling astrophysical parameter studies of the cosmic 21 cm signal

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei

    2015-06-01

    We introduce 21CMMC: a parallelized, Monte Carlo Markov Chain analysis tool, incorporating the epoch of reionization (EoR) seminumerical simulation 21CMFAST. 21CMMC estimates astrophysical parameter constraints from 21 cm EoR experiments, accommodating a variety of EoR models, as well as priors on model parameters and the reionization history. To illustrate its utility, we consider two different EoR scenarios, one with a single population of galaxies (with a mass-independent ionizing efficiency) and a second, more general model with two different, feedback-regulated populations (each with mass-dependent ionizing efficiencies). As an example, combining three observations (z = 8, 9 and 10) of the 21 cm power spectrum with a conservative noise estimate and uniform model priors, we find that interferometers with specifications like the Low Frequency Array/Hydrogen Epoch of Reionization Array (HERA)/Square Kilometre Array 1 (SKA1) can constrain common reionization parameters: the ionizing efficiency (or similarly the escape fraction), the mean free path of ionizing photons and the log of the minimum virial temperature of star-forming haloes to within 45.3/22.0/16.7, 33.5/18.4/17.8 and 6.3/3.3/2.4 per cent (∼1σ fractional uncertainty), respectively. Instead, if we optimistically assume that we can perfectly characterize the EoR modelling uncertainties, we can improve on these constraints by up to a factor of ∼ a few. Similarly, the fractional uncertainty on the average neutral fraction can be constrained to within ≲ 10 per cent for HERA and SKA1. By studying the resulting impact on astrophysical constraints, 21CMMC can be used to optimize (i) interferometer designs; (ii) foreground cleaning algorithms; (iii) observing strategies; (iv) alternative statistics characterizing the 21 cm signal; and (v) synergies with other observational programs.
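    The Metropolis-Hastings idea behind such an MCMC parameter-estimation tool can be shown with a toy sketch. This is not 21CMMC itself: a single hypothetical "efficiency" parameter zeta is fit to mock data generated from a linear model, with every number below chosen purely for illustration.

    ```python
    import math
    import random

    random.seed(0)

    def log_like(zeta, data, sigma=1.0):
        """Gaussian log-likelihood for a toy model: observed value = zeta * m."""
        return -0.5 * sum(((d - zeta * m) / sigma)**2 for m, d in data)

    # Mock observations with true zeta = 30 and unit Gaussian noise.
    data = [(m, 30.0 * m + random.gauss(0, 1.0)) for m in (0.5, 1.0, 1.5, 2.0)]

    # Metropolis-Hastings random walk, deliberately started far from the truth.
    zeta, ll = 10.0, log_like(10.0, data)
    chain = []
    for _ in range(20000):
        prop = zeta + random.gauss(0, 0.5)          # symmetric proposal
        ll_prop = log_like(prop, data)
        if math.log(random.random()) < ll_prop - ll:  # accept/reject step
            zeta, ll = prop, ll_prop
        chain.append(zeta)

    # Discard burn-in, then summarize the posterior.
    posterior_mean = sum(chain[5000:]) / len(chain[5000:])
    ```

    Real tools replace the toy likelihood with a comparison of simulated and observed 21 cm power spectra, but the accept/reject loop is the same.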

  2. Primordial non-gaussianity from the bispectrum of 21-cm fluctuations in the dark ages

    NASA Astrophysics Data System (ADS)

    Muñoz, Julian B.; Ali-Haïmoud, Yacine; Kamionkowski, Marc

    2015-10-01

    A measurement of primordial non-Gaussianity will be of paramount importance to distinguish between different models of inflation. Cosmic microwave background (CMB) anisotropy observations have set unprecedented bounds on the non-Gaussianity parameter f_NL, but the interesting regime f_NL ≲ 1 is beyond their reach. Brightness-temperature fluctuations in the 21-cm line during the dark ages (z ∼ 30-100) are a promising successor to CMB studies, giving access to a much larger number of modes. They are, however, intrinsically nonlinear, which results in secondary non-Gaussianities orders of magnitude larger than the sought-after primordial signal. In this paper we carefully compute the primary and secondary bispectra of 21-cm fluctuations on small scales. We use the flat-sky formalism, which greatly simplifies the analysis while still being very accurate on small angular scales. We show that the secondary bispectrum is highly degenerate with the primordial one, and argue that even percent-level uncertainties in the amplitude of the former lead to a bias of order Δf_NL ∼ 10. To tackle this problem we carry out a detailed Fisher analysis, marginalizing over the amplitudes of a few smooth redshift-dependent coefficients characterizing the secondary bispectrum. We find that the signal-to-noise ratio for a single redshift slice is reduced by a factor of ∼5 in comparison to a case without secondary non-Gaussianities. Setting aside foreground contamination, we forecast that a cosmic-variance-limited experiment observing 21-cm fluctuations over 30 ≤ z ≤ 100 with a 0.1-MHz bandwidth and 0.1 arcmin angular resolution could achieve a sensitivity of order f_NL^local ∼ 0.03, f_NL^equil ∼ 0.04 and f_NL^ortho ∼ 0.03.

  3. A comparative study of intervening and associated H I 21-cm absorption profiles in redshifted galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Duchesne, S. W.; Divoli, A.; Allison, J. R.

    2016-08-01

    The star-forming reservoir in the distant Universe can be detected through H I 21-cm absorption arising from either cool gas associated with a radio source or from within a galaxy intervening the sight-line to the continuum source. In order to test whether the nature of the absorber can be predicted from the profile shape, we have compiled and analysed all of the known redshifted (z ≥ 0.1) H I 21-cm absorption profiles. Although between individual spectra there is too much variation to assign a typical spectral profile, we confirm that associated absorption profiles are, on average, wider than their intervening counterparts. It is widely hypothesised that this is due to high velocity nuclear gas feeding the central engine, absent in the more quiescent intervening absorbers. Modelling the column density distribution of the mean associated and intervening spectra, we confirm that the additional low optical depth, wide dispersion component, typical of associated absorbers, arises from gas within the inner parsec. With regard to the potential of predicting the absorber type in the absence of optical spectroscopy, we have implemented machine learning techniques to the 55 associated and 43 intervening spectra, with each of the tested models giving a ≳80% accuracy in the prediction of the absorber type. Given the impracticability of follow-up optical spectroscopy of the large number of 21-cm detections expected from the next generation of large radio telescopes, this could provide a powerful new technique with which to determine the nature of the absorbing galaxy.
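    The classification idea in this abstract can be sketched with a minimal one-feature classifier. The training widths below are invented for illustration; the paper's finding is only that associated profiles are, on average, wider than intervening ones, and its actual models are more sophisticated machine learning techniques.

    ```python
    # Toy nearest-centroid classifier: predict whether a 21-cm absorber is
    # "associated" or "intervening" from its velocity width alone.

    def centroid(values):
        """Mean of a list of profile widths (km/s)."""
        return sum(values) / len(values)

    def train(assoc_widths, inter_widths):
        """Decision threshold at the midpoint between the class centroids."""
        return 0.5 * (centroid(assoc_widths) + centroid(inter_widths))

    def predict(width, threshold):
        """Wider-than-threshold profiles are labelled associated."""
        return "associated" if width > threshold else "intervening"

    # Hypothetical training widths in km/s (not the paper's measurements).
    thr = train([250.0, 300.0, 400.0, 180.0], [40.0, 90.0, 120.0, 60.0])
    ```

    With many detections and several spectral features, the same train/predict split generalizes to the ≳80% accuracies the abstract quotes.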

  4. Extracting Physical Parameters for the First Galaxies from the Cosmic Dawn Global 21-cm Spectrum

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Mirocha, Jordan; Harker, Geraint; Tauscher, Keith; Datta, Abhirup

    2016-01-01

    The all-sky or global redshifted 21-cm HI signal is a potentially powerful probe of the first luminous objects and their environs during the transition from the Dark Ages to Cosmic Dawn (35 > z > 6). The first stars, black holes, and galaxies heat and ionize the surrounding intergalactic medium, composed mainly of neutral hydrogen, so the hyperfine 21-cm transition can be used to indirectly study these early radiation sources. The properties of these objects can be examined via the broad absorption and emission features that are expected in the spectrum. The Dark Ages Radio Explorer (DARE) is proposed to conduct these observations at low radio astronomy frequencies, 40-120 MHz, in a 125 km orbit about the Moon. The Moon occults both the Earth and the Sun as DARE makes observations above the lunar farside, thus eliminating the corrupting effects from Earth's ionosphere, radio frequency interference, and solar nanoflares. The signal is extracted from the galactic/extragalactic foreground employing Bayesian methods, including Markov Chain Monte Carlo (MCMC) techniques. Theory indicates that the 21-cm signal is well described by a model in which the evolution of various physical quantities follows a hyperbolic tangent (tanh) function of redshift. We show that this approach accurately captures degeneracies and covariances between parameters, including those related to the signal, foreground, and the instrument. Furthermore, we also demonstrate that MCMC fits will set meaningful constraints on the Ly-α, ionizing, and X-ray backgrounds along with the minimum virial temperature of the first star-forming halos.

  5. On the Detection of Global 21-cm Signal from Reionization Using Interferometers

    NASA Astrophysics Data System (ADS)

    Singh, Saurabh; Subrahmanyan, Ravi; Udaya Shankar, N.; Raghunathan, A.

    2015-12-01

    Detection of the global redshifted 21-cm signal is an excellent means of deciphering the physical processes during the Dark Ages and subsequent Epoch of Reionization (EoR). However, detection of this faint monopole is challenging due to the high precision required in instrumental calibration and modeling of substantially brighter foregrounds and instrumental systematics. In particular, modeling of receiver noise with mK accuracy and its separation remains a formidable task in experiments aiming to detect the global signal using single-element spectral radiometers. Interferometers do not respond to receiver noise; therefore, here we explore the theory of the response of interferometers to global signals. In other words, we discuss the spatial coherence in the electric field arising from the monopole component of the 21-cm signal and methods for its detection using sensor arrays. We proceed by first deriving the response to uniform sky of two-element interferometers made of unit dipole and resonant loop antennas, then extend the analysis to interferometers made of one-dimensional arrays and also consider two-dimensional aperture antennas. Finally, we describe methods by which the coherence might be enhanced so that the interferometer measurements yield improved sensitivity to the monopole component. We conclude (a) that it is indeed possible to measure the global 21-cm signal from the EoR using interferometers, (b) that a practically useful configuration is with omnidirectional antennas as interferometer elements, and (c) that the spatial coherence may be enhanced using, for example, a space beam splitter between the interferometer elements.

  6. Parametrizations of the 21-cm global signal and parameter estimation from single-dipole experiments

    NASA Astrophysics Data System (ADS)

    Harker, Geraint J. A.; Mirocha, Jordan; Burns, Jack O.; Pritchard, Jonathan R.

    2016-02-01

    One approach to extracting the global 21-cm signal from total-power measurements at low radio frequencies is to parametrize the different contributions to the data and then fit for these parameters. We examine parametrizations of the 21-cm signal itself, and propose one based on modelling the Ly α background, intergalactic medium temperature and hydrogen ionized fraction using tanh functions. This captures the shape of the signal from a physical modelling code better than an earlier parametrization based on interpolating between maxima and minima of the signal, and imposes a greater level of physical plausibility. This allows less biased constraints on the turning points of the signal, even though these are not explicitly fit for. Biases can also be alleviated by discarding information which is less robustly described by the parametrization, for example by ignoring detailed shape information coming from the covariances between turning points or from the high-frequency parts of the signal, or by marginalizing over the high-frequency parts of the signal by fitting a more complex foreground model. The fits are sufficiently accurate to be usable for experiments gathering 1000 h of data, though in this case it may be important to choose observing windows which do not include the brightest areas of the foregrounds. Our assumption of pointed, single-antenna observations and very broad-band fitting makes these results particularly applicable to experiments such as the Dark Ages Radio Explorer, which would study the global 21-cm signal from the clean environment of a low lunar orbit, taking data from the far side.
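    A tanh parametrization like the one this abstract proposes can be written down directly. The quantity modelled here is a generic ionized-fraction-like step; the midpoint redshift and duration below are illustrative values, not fitted constraints from the paper.

    ```python
    import math

    def tanh_step(z, z_mid=8.0, dz=1.0):
        """Smooth step from 1 (low z) to 0 (high z).

        Models the evolution of a global 21-cm quantity, e.g. the hydrogen
        ionized fraction, as a tanh function of redshift; z_mid is the
        midpoint of the transition and dz its duration (both assumed).
        """
        return 0.5 * (1.0 + math.tanh((z_mid - z) / dz))

    x_lo = tanh_step(4.0)    # well after the transition: close to 1
    x_hi = tanh_step(12.0)   # well before the transition: close to 0
    ```

    The full signal model combines three such steps (Ly α background, IGM temperature, ionized fraction), so the global spectrum is described by a handful of midpoints and widths that an MCMC can fit for.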

  7. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ∼200 times larger at k ∼ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
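    The Gaussian term of the error budget discussed above is simple enough to sketch numerically: var[P(k)] = P(k)^2 / N_modes, with the number of independent modes counted in a spherical Fourier shell of a periodic box. The trispectrum contribution the paper quantifies adds to this and is not modelled here; box and bin sizes are illustrative.

    ```python
    import math

    def n_modes(k, dk, box_length):
        """Number of independent Fourier modes in the shell [k, k + dk).

        The factor 1/2 accounts for the reality of the field (modes come
        in conjugate pairs). k, dk in h/Mpc; box_length in Mpc/h.
        """
        v_shell = 4.0 * math.pi * k * k * dk
        v_cell = (2.0 * math.pi / box_length)**3
        return 0.5 * v_shell / v_cell

    def gaussian_error(p_k, k, dk, box_length):
        """Gaussian (cosmic-variance) error on P(k): P / sqrt(N_modes)."""
        return p_k / math.sqrt(n_modes(k, dk, box_length))
    ```

    Doubling the box side multiplies the mode count by 8, so the Gaussian error shrinks by sqrt(8); the paper's point is that the trispectrum term does not shrink this way and can dominate at high k.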

  8. 21-cm lensing and the cold spot in the cosmic microwave background.

    PubMed

    Kovetz, Ely D; Kamionkowski, Marc

    2013-04-26

    An extremely large void and a cosmic texture are two possible explanations for the cold spot seen in the cosmic microwave background. We investigate how well these two hypotheses can be tested with weak lensing of 21-cm fluctuations from the epoch of reionization measured with the Square Kilometer Array. While the void explanation for the cold spot can be tested with the Square Kilometer Array, given enough observation time, the texture scenario requires significantly prolonged observations, at the highest frequencies that correspond to the epoch of reionization, over the field of view containing the cold spot. PMID:23679703

  9. Comparison of 2.8- and 21-cm microwave radiometer observations over soils with emission model calculations

    NASA Technical Reports Server (NTRS)

    Burke, W. J.; Schmugge, T.; Paris, J. F.

    1979-01-01

    An airborne experiment was conducted under NASA auspices to test the feasibility of detecting soil moisture by microwave remote sensing techniques over agricultural fields near Phoenix, Arizona at midday of April 5, 1974 and at dawn of the following day. Extensive ground data were obtained from 96 bare, sixteen hectare fields. Observations made using a scanning (2.8 cm) and a nonscanning (21 cm) radiometer were compared with the predictions of a radiative transfer emission model. It is shown that (1) the emitted intensity at both wavelengths correlates best with the near surface moisture, (2) surface roughness is found to more strongly affect the degree of polarization than the emitted intensity, (3) the slope of the intensity-moisture curves decreases in going from day to dawn, and (4) increased near surface moisture at dawn is characterized by increased polarization of emissions. The results of the experiment indicate that microwave techniques can be used to observe the history of the near surface moisture. The subsurface history must be inferred from soil physics models which use microwave results as boundary conditions.

  10. INTENSITY MAPPING OF THE [C II] FINE STRUCTURE LINE DURING THE EPOCH OF REIONIZATION

    SciTech Connect

    Gong Yan; Cooray, Asantha; Silva, Marta; Santos, Mario G.; Bock, James; Bradford, C. Matt; Zemcov, Michael

    2012-01-20

    The atomic C II fine-structure line is one of the brightest lines in a typical star-forming galaxy spectrum, with a luminosity ∼0.1%-1% of the bolometric luminosity. It is potentially a reliable tracer of the dense gas distribution at high redshifts and could provide an additional probe of the era of reionization. By taking into account the spontaneous, stimulated, and collisional emission of the C II line, we calculate the spin temperature and the mean intensity as a function of redshift. When averaged over a cosmologically large volume, we find that the C II emission from ionized carbon in individual galaxies is larger than the signal generated by carbon in the intergalactic medium. Assuming that the C II luminosity is proportional to the carbon mass in dark matter halos, we also compute the power spectrum of the C II line intensity at various redshifts. In order to avoid the contamination from CO rotational lines at low redshift when targeting a C II survey at high redshifts, we propose the cross-correlation of C II and 21 cm line emission from high redshifts. To explore the detectability of the C II signal from reionization, we also evaluate the expected errors on the C II power spectrum and C II-21 cm cross power spectrum based on the design of future millimeter surveys. We note that the C II-21 cm cross power spectrum contains interesting features that capture physics during reionization, including the ionized bubble sizes and the mean ionization fraction, which are challenging to measure from 21 cm data alone. We propose an instrumental concept for the reionization C II experiment targeting the frequency range of ∼200-300 GHz with 1, 3, and 10 m apertures and a bolometric spectrometer array with 64 independent spectral pixels and about 20,000 bolometers.

  11. Effects of the sources of reionization on 21-cm redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Majumdar, Suman; Jensen, Hannes; Mellema, Garrelt; Chapman, Emma; Abdalla, Filipe B.; Lee, Kai-Yan; Iliev, Ilian T.; Dixon, Keri L.; Datta, Kanan K.; Ciardi, Benedetta; Fernandez, Elizabeth R.; Jelić, Vibor; Koopmans, Léon V. E.; Zaroubi, Saleem

    2016-02-01

    The observed 21 cm signal from the epoch of reionization will be distorted along the line of sight by the peculiar velocities of matter particles. These redshift-space distortions will affect the contrast in the signal and will also make it anisotropic. This anisotropy contains information about the cross-correlation between the matter density field and the neutral hydrogen field, and could thus potentially be used to extract information about the sources of reionization. In this paper, we study a collection of simulated reionization scenarios assuming different models for the sources of reionization. We show that the 21 cm anisotropy is best measured by the quadrupole moment of the power spectrum. We find that, unless the properties of the reionization sources are extreme in some way, the quadrupole moment evolves very predictably as a function of global neutral fraction. This predictability implies that redshift-space distortions are not a very sensitive tool for distinguishing between reionization sources. However, the quadrupole moment can be used as a model-independent probe for constraining the reionization history. We show that such measurements can be done to some extent by first-generation instruments such as LOFAR, while the SKA should be able to measure the reionization history using the quadrupole moment of the power spectrum to great accuracy.

  12. 21 cm signal from cosmic dawn - II. Imprints of the light-cone effects

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Datta, Kanan K.; Choudhury, T. Roy

    2015-11-01

    Details of various unknown physical processes during the cosmic dawn and the epoch of reionization can be extracted from observations of the redshifted 21 cm signal. These observations, however, will be affected by the evolution of the signal along the line of sight, which is known as the `light-cone effect'. We model this effect by post-processing a dark matter N-body simulation with a 1D radiative transfer code. We find that the effect is much stronger and more dramatic in the presence of inhomogeneous heating and Ly α coupling compared to the case where these processes are not accounted for. One finds an increase (decrease) in the spherically averaged power spectrum of up to a factor of 3 (0.6) at large scales (k ∼ 0.05 Mpc^-1) when the light-cone effect is included, though these numbers are highly dependent on the source model. The effect is particularly significant near the peak and dip-like features seen in the power spectrum. The peaks and dips are suppressed, and thus the power spectrum can be smoothed out to a large extent, if the width of the frequency band used in the experiment is large. We argue that it is important to account for the light-cone effect in any 21-cm signal prediction during the cosmic dawn.

  13. A fast method for power spectrum and foreground analysis for 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Liu, Adrian; Tegmark, Max

    2013-02-01

    We develop and demonstrate an acceleration of the Liu and Tegmark quadratic estimator formalism for inverse variance foreground subtraction and power spectrum estimation in 21 cm tomography from O(N^3) to O(N log N), where N is the number of voxels of data. This technique makes feasible the megavoxel-scale analysis necessary for current and upcoming radio interferometers by making only moderately restrictive assumptions about foreground models and survey geometry. We exploit iterative and Monte Carlo techniques and the symmetries of the foreground covariance matrices to quickly estimate the 21 cm brightness temperature power spectrum P(k∥, k⊥), the Fisher information matrix, the error bars, the window functions, and the bias. We also extend the Liu and Tegmark foreground model to include bright point sources with known positions in a way that scales as O[(N log N) × N_point sources] ≤ O(N^5/3). As a first application of our method, we forecast error bars and window functions for the upcoming 128-tile deployment of the Murchison Widefield Array, showing that 1000 hours of observation should prove sufficiently sensitive to detect the power spectrum signal from the Epoch of Reionization.
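    The core trick behind this kind of O(N^3) → O(N log N) speedup is that a translation-invariant covariance acts as a convolution, so C @ x can be applied with FFTs instead of an explicit N × N matrix product. The toy 1D sketch below (not the estimator code itself) applies a circulant matrix via a hand-rolled radix-2 FFT.

    ```python
    import cmath

    def fft(x, inverse=False):
        """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        sign = 1 if inverse else -1
        even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    def circulant_apply(first_col, x):
        """C @ x for the circulant C with C[i][j] = first_col[(i - j) % n].

        Costs three FFTs, O(n log n), instead of the O(n^2) direct product.
        """
        n = len(x)
        prod = [a * b for a, b in zip(fft(first_col), fft(x))]
        return [v.real / n for v in fft(prod, inverse=True)]

    y = circulant_apply([4.0, 1.0, 0.0, 1.0], [1.0, 2.0, 3.0, 4.0])
    ```

    In the estimator itself the same idea is used in 3D, combined with conjugate-gradient iteration, so the inverse covariance is never formed explicitly.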

  14. A Low-cost 21 cm Horn-antenna Radio Telescope for Education and Outreach

    NASA Astrophysics Data System (ADS)

    Patel, Nimesh A.; Patel, Rishi N; Kimberk, Robert S; Test, John H; Krolewski, Alex; Ryan, James; Karkare, Kirit S; Kovac, John M; Dame, Thomas M.

    2014-06-01

    Small radio telescopes (1-3m) for observations of the 21 cm hydrogen line are widely used for education and outreach. A pyramidal horn was used by Ewen & Purcell (1951) to first detect the 21cm line at Harvard. Such a horn is simple to design and build, compared to a parabolic antenna which is usually purchased ready-made. Here we present a design of a horn antenna radio telescope that can be built entirely by students, using simple components costing less than $300. The horn has an aperture of 75 cm along the H-plane, 59 cm along the E-plane, and gain of about 20 dB. The receiver system consists of low noise amplifiers, band-pass filters and a software-defined-radio USB receiver that provides digitized samples for spectral processing in a computer. Starting from construction of the horn antenna, and ending with the measurement of the Galactic rotation curve, took about 6 weeks, as part of an undergraduate course at Harvard University. The project can also grow towards building a two-element interferometer for follow-up studies.
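    The quoted ~20 dB gain can be sanity-checked from the aperture dimensions via the standard relation G = 4π e_a A / λ². The aperture efficiency e_a = 0.6 below is an assumed typical value for a pyramidal horn, not a measured figure from the project.

    ```python
    import math

    WAVELENGTH = 0.2110            # m, 21-cm hydrogen line (1420.4 MHz)

    def horn_gain_db(width, height, efficiency=0.6):
        """Approximate horn gain in dB from its physical aperture.

        width, height: aperture dimensions in metres; efficiency is the
        assumed aperture efficiency of the horn.
        """
        area = width * height                              # m^2
        gain = 4.0 * math.pi * efficiency * area / WAVELENGTH**2
        return 10.0 * math.log10(gain)

    # H-plane x E-plane aperture from the text: 75 cm x 59 cm.
    gain_db = horn_gain_db(0.75, 0.59)
    ```

    With these assumptions the estimate lands near the ~20 dB the abstract reports, which is a useful classroom exercise before measuring the gain directly.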

  15. Cosmological signatures of tilted isocurvature perturbations: reionization and 21cm fluctuations

    SciTech Connect

    Sekiguchi, Toyokazu; Sugiyama, Naoshi; Tashiro, Hiroyuki; Silk, Joseph E-mail: hiroyuki.tashiro@asu.edu E-mail: naoshi@nagoya-u.jp

    2014-03-01

    We investigate cosmological signatures of uncorrelated isocurvature perturbations whose power spectrum is blue-tilted with spectral index ≳ 2, and their imprints on reionization and on 21cm line fluctuations due to neutral hydrogen in minihalos. Combination of measurements of the reionization optical depth and 21cm line fluctuations will provide complementary probes of a highly blue-tilted isocurvature power spectrum.

  16. Violation of statistical isotropy and homogeneity in the 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Shiraishi, Maresuke; Muñoz, Julian B.; Kamionkowski, Marc; Raccanelli, Alvise

    2016-05-01

    Most inflationary models predict primordial perturbations to be statistically isotropic and homogeneous. Cosmic microwave background (CMB) observations, however, indicate a possible departure from statistical isotropy in the form of a dipolar power modulation at large angular scales. Alternative models of inflation, beyond the simplest single-field slow-roll models, can generate a small power asymmetry consistent with these observations. Observations of the clustering of quasars show, however, agreement with statistical isotropy at much smaller angular scales. Here, we propose to use off-diagonal components of the angular power spectrum of the 21-cm fluctuations during the dark ages to test this power asymmetry. We forecast results for the planned SKA radio array, a future radio array, and the cosmic-variance-limited case as a theoretical proof of principle. Our results show that the 21-cm line power spectrum will enable access to information at very small scales and at different redshift slices, thus improving upon the current CMB constraints by ˜2 orders of magnitude for a dipolar asymmetry and by ˜1-3 orders of magnitude for a quadrupolar asymmetry.

  17. The 21-cm BAO signature of enriched low-mass galaxies during cosmic reionization

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan

    2016-06-01

    Studies of the formation of the first stars have established that they formed in small haloes of ˜105-106 M⊙ via molecular hydrogen cooling. Since a low level of ultraviolet radiation from stars suffices to dissociate molecular hydrogen, under the usually assumed scenario this primordial mode of star formation ended by redshift z ˜ 15 and much more massive haloes came to dominate star formation. However, metal enrichment from the first stars may have allowed the smaller haloes to continue to form stars. In this Letter, we explore the possible effect of star formation in metal-rich low-mass haloes on the redshifted 21-cm signal of neutral hydrogen from z = 6 to 40. These haloes are significantly affected by the supersonic streaming velocity, with its characteristic baryon acoustic oscillation (BAO) signature. Thus, enrichment of low-mass galaxies can produce a strong signature in the 21-cm power spectrum over a wide range of redshifts, especially if star formation in the small haloes was more efficient than suggested by current simulations. We show that upcoming radio telescopes can easily distinguish among various possible scenarios.

  18. The existence and detection of optically dark galaxies by 21-cm surveys

    NASA Astrophysics Data System (ADS)

    Davies, J. I.; Disney, M. J.; Minchin, R. F.; Auld, R.; Smith, R.

    2006-05-01

    One explanation for the disparity between cold dark matter (CDM) predictions of galaxy numbers and observations could be that there are numerous dark galaxies in the Universe. These galaxies may still contain baryons, but no stars, and may be detectable in the 21-cm line of atomic hydrogen. The results of surveys for such objects, and of simulations that do or do not predict their existence, are controversial. In this paper, we use an analytical model of galaxy formation, consistent with CDM, to show first that dark galaxies are certainly a prediction of the model. Secondly, we show that objects like VIRGOHI21, a dark galaxy candidate recently discovered by us, while rare, are predicted by the model. Thirdly, we show that previous `blind' HI surveys have placed few constraints on the existence of dark galaxies, because they have either lacked the sensitivity and/or velocity resolution or have not had the required detailed optical follow-up. We look forward to new 21-cm blind surveys [the Arecibo Legacy Fast ALFA (ALFALFA) survey and the Arecibo Galactic Environments Survey (AGES)] using the Arecibo multibeam instrument, which should find large numbers of dark galaxies if they exist.

  19. Signatures of clumpy dark matter in the global 21 cm background signal

    SciTech Connect

    Cumberbatch, Daniel T.; Lattanzi, Massimiliano; Silk, Joseph

    2010-11-15

    We examine the extent to which the self-annihilation of supersymmetric neutralino dark matter, as well as light dark matter, influences the rate of heating, ionization, and Lyman-α pumping of interstellar hydrogen and helium, and the extent to which this is manifested in the 21 cm global background signal. We fully consider the enhancements to the annihilation rate from dark matter halos and substructures within them. We find that the influence of such structures can result in significant changes in the differential brightness temperature, δT_b. The changes at redshifts z < 25 are likely to be undetectable due to the presence of the astrophysical signal; however, in the most favorable cases, deviations in δT_b, relative to its value in the absence of self-annihilating dark matter, of up to ≈20 mK at z = 30 can occur. Thus we conclude that, in order to exclude these models, experiments measuring the global 21 cm signal, such as EDGES and CORE, will need to reduce the systematics at 50 MHz to below 20 mK.

  20. Limits on foreground subtraction from chromatic beam effects in global redshifted 21 cm measurements

    NASA Astrophysics Data System (ADS)

    Mozdzen, T. J.; Bowman, J. D.; Monsalve, R. A.; Rogers, A. E. E.

    2016-02-01

    Foreground subtraction in global redshifted 21 cm measurements is limited by frequency-dependent (chromatic) structure in antenna beam patterns. Chromatic beams couple angular structures in Galactic foreground emission to spectral structures that may not be removed by smooth functional forms. We report results for simulations based on two dipole antennas used by the Experiment to Detect the Global EoR Signature (EDGES). The residual levels in simulated foreground-subtracted spectra are found to differ substantially between the two antennas, suggesting that antenna design must be carefully considered. Residuals are also highly dependent on the right ascension and declination of the antenna pointing, with rms values differing by as much as a factor of 20 across pointings. For EDGES and other ground-based experiments with zenith pointing antennas, right ascension and declination correspond directly to the local sidereal time and the latitude of the deployment site, hence chromatic beam effects should be taken into account when selecting sites. We introduce the `blade' dipole antenna and show, via simulations, that it has better chromatic performance than the `fourpoint' antenna previously used for EDGES. The blade antenna yields 1-5 mK residuals across the entire sky after a 5-term polynomial is removed from simulated spectra, whereas the fourpoint antenna typically requires a 6-term polynomial for comparable residuals. For both antennas, the signal-to-noise ratio of recovered 21 cm input signals peaks for a 5-term polynomial foreground fit given realistic thermal noise levels.
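
    The polynomial foreground fits discussed above can be sketched with a toy model: fit an N-term polynomial in log-frequency to a smooth power-law foreground plus a small absorption feature, and inspect the residuals. The amplitudes and spectral index below are illustrative assumptions, not EDGES measurements:

```python
import numpy as np

freq = np.linspace(50e6, 100e6, 256)          # Hz, low-band sweep
x = np.log(freq / 75e6)                       # centred log-frequency
foreground = 2000.0 * (freq / 75e6) ** -2.5   # K, smooth synchrotron-like power law
signal = -0.1 * np.exp(-0.5 * ((freq - 78e6) / 5e6) ** 2)  # K, toy absorption dip
spectrum = foreground + signal

def remove_foreground(spec, n_terms):
    """Fit an n_terms polynomial (degree n_terms - 1) to log T vs
    log-frequency and subtract it, returning the residual spectrum."""
    coeffs = np.polyfit(x, np.log(spec), n_terms - 1)
    return spec - np.exp(np.polyval(coeffs, x))

res = remove_foreground(spectrum, 5)
print(f"rms residual after 5-term fit: {1e3 * res.std():.1f} mK")
```

    Fitting the pure power law alone requires only 2 terms to remove it to machine precision; on the full spectrum, the 5-term fit leaves residuals dominated by the (partially absorbed) injected feature.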

  1. PAPER-64 Constraints on Reionization: The 21 cm Power Spectrum at z = 8.4

    NASA Astrophysics Data System (ADS)

    Ali, Zaki S.; Parsons, Aaron R.; Zheng, Haoxuan; Pober, Jonathan C.; Liu, Adrian; Aguirre, James E.; Bradley, Richard F.; Bernardi, Gianni; Carilli, Chris L.; Cheng, Carina; DeBoer, David R.; Dexter, Matthew R.; Grobbelaar, Jasper; Horrell, Jasper; Jacobs, Daniel C.; Klima, Pat; MacMahon, David H. E.; Maree, Matthys; Moore, David F.; Razavi, Nima; Stefan, Irina I.; Walbrugh, William P.; Walker, Andre

    2015-08-01

    In this paper, we report new limits on 21 cm emission from cosmic reionization based on a 135 day observing campaign with a 64-element deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. This work extends the work presented in Parsons et al. with more collecting area, a longer observing period, improved redundancy-based calibration, improved fringe-rate filtering, and updated power-spectral analysis using optimal quadratic estimators. The result is a new 2σ upper limit on Δ²(k) of (22.4 mK)² in the range 0.15 < k < 0.5 h Mpc⁻¹ at z = 8.4. This represents a three-fold improvement over the previous best upper limit. As we discuss in more depth in a forthcoming paper, this upper limit supports and extends previous evidence against extremely cold reionization scenarios. We conclude with a discussion of implications for future 21 cm reionization experiments, including the newly funded Hydrogen Epoch of Reionization Array.

  2. The 21cm power spectrum and the shapes of non-Gaussianity

    SciTech Connect

    Chongchitnan, Sirichai

    2013-03-01

    We consider how measurements of the 21cm radiation from the epoch of reionization (z = 8−12) can constrain the amplitudes of various 'shapes' of primordial non-Gaussianity. The limits on these shapes, each parametrized by the non-linear parameter f_NL, can reveal whether the physics of inflation is more complex than the standard single-field, slow-roll scenario. In this work, we quantify the effects of the well-known local, equilateral, orthogonal and folded types of non-Gaussianity on the 21cm power spectrum, which is expected to be measured by upcoming radio arrays such as the Square Kilometre Array (SKA). We also assess the prospects of the SKA in constraining these non-Gaussianities, and find constraints comparable to those from cosmic-microwave-background experiments such as Planck. We show that the limits on the various f_NL can be tightened to O(1) using a radio array with a futuristic but realistic set of specifications.

  3. A WSRT 21 CM deep survey of two fields in Hercules

    NASA Astrophysics Data System (ADS)

    Oort, M. J. A.; van Langevelde, H. J.

    1987-10-01

    A deep 21 cm survey of two fields in the constellation of Hercules, carried out with the Westerbork Synthesis Radio Telescope (WSRT), is presented. These areas were observed previously at 21 cm in the Leiden-Berkeley Deep Survey (LBDS) (Windhorst et al., 1984), but with a factor of three higher noise level. A complete sample is defined, containing 116 radio sources with a peak flux above 5 sigma within the -7 dB attenuation radius (0.464 deg). This complete sample is used to determine the 1412 MHz source counts down to 0.45 mJy. The counts from the current sample show the same small-scale structure at about 1 mJy as was found in previous surveys. A direct comparison is made with the LBDS observations of the same fields. It is shown that the 5 sigma peak-flux cut-off in the complete sample is not stringent enough to sufficiently avoid contamination by spurious sources, especially when strong (S ≥ 100 mJy) sources are present in the field. Finally, a search was made for variable sources.

  4. Intensity Based Seismic Hazard Map of Republic of Macedonia

    NASA Astrophysics Data System (ADS)

    Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta

    2016-04-01

    The territory of the Republic of Macedonia and the bordering terrains are among the most seismically active parts of the Balkan Peninsula, belonging to the Mediterranean-Trans-Asian seismic belt. Seismological data on the R. Macedonia from the past 16 centuries point to the occurrence of very strong catastrophic earthquakes. The hypocenters of these earthquakes are located above the Mohorovicic discontinuity, most frequently at a depth of 10-20 km. Accurate short-term prediction of earthquake occurrence, i.e., simultaneous prediction of the time, place and intensity of an earthquake, is still not possible. However, present methods of seismic zoning have advanced to the point that they enable, with high probability, efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary to define the expected level of seismic hazard for certain time periods. These should be amended from time to time with new data and scientific knowledge. The map does not completely solve all issues related to earthquakes, but it provides the basic empirical data necessary for updating the existing regulations for construction of engineering structures in seismically active areas, regulated by legal regulations and technical norms whose constituent part is the seismic hazard map. The map has been elaborated on the basis of complex seismological and geophysical investigations of the considered area and a synthesis of their results. The map was elaborated in two phases. In the first phase, a map of focal zones characterized by the maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides assessment of the

  5. Neutral hydrogen in galaxy clusters: impact of AGN feedback and implications for intensity mapping

    NASA Astrophysics Data System (ADS)

    Villaescusa-Navarro, Francisco; Planelles, Susana; Borgani, Stefano; Viel, Matteo; Rasia, Elena; Murante, Giuseppe; Dolag, Klaus; Steinborn, Lisa K.; Biffi, Veronica; Beck, Alexander M.; Ragone-Figueroa, Cinthia

    2016-03-01

    By means of zoom-in hydrodynamic simulations, we quantify the amount of neutral hydrogen (H I) hosted by groups and clusters of galaxies. Our simulations, which are based on an improved formulation of smoothed particle hydrodynamics, include radiative cooling, star formation, metal enrichment and supernova feedback, and can be split into two different groups, depending on whether feedback from active galactic nuclei (AGN) is turned on or off. Simulations are analysed to account for H I self-shielding and the presence of molecular hydrogen. We find that the neutral hydrogen mass of dark matter haloes increases monotonically with halo mass and is well described by a power law of the form M_HI(M, z) ∝ M^(3/4). Our results point out that AGN feedback reduces both the total halo mass and its H I mass, although it is more efficient in removing H I. We conclude that AGN feedback reduces the neutral hydrogen mass of a given halo by ˜50 per cent, with a weak dependence on halo mass and redshift. The spatial distribution of neutral hydrogen within haloes is also affected by AGN feedback, whose effect is to decrease the fraction of H I residing in the inner halo regions. By extrapolating our results to haloes not resolved in our simulations, we derive astrophysical implications from measurements of Ω_HI(z): haloes with circular velocities larger than ˜25 km s⁻¹ are needed to host H I in order to reproduce observations. We find that only the model with AGN feedback is capable of reproducing the value of Ω_HI b_HI derived from available 21 cm intensity mapping observations.

  6. A synthetic 21-cm Galactic Plane Survey of a smoothed particle hydrodynamics galaxy simulation

    NASA Astrophysics Data System (ADS)

    Douglas, Kevin A.; Acreman, David M.; Dobbs, Clare L.; Brunt, Christopher M.

    2010-09-01

    We have created synthetic neutral hydrogen (HI) Galactic Plane Survey data cubes covering 90° ≤ l ≤ 180°, using a model spiral galaxy from smoothed particle hydrodynamics (SPH) simulations and the radiative transfer code TORUS. The density, temperature and other physical parameters are fed from the SPH simulation into TORUS, where the HI emissivity and opacity are calculated before the 21-cm line emission profile is determined. Our main focus is the observation of outer Galaxy `Perseus arm' HI, with a view to tracing atomic gas as it encounters shock motions as it enters a spiral arm interface, an early step in the formation of molecular clouds. The observation of HI self-absorption features at these shock sites (in both real observations and our synthetic data) allows us to investigate further the connection between cold atomic gas and the onset of molecular cloud formation.

  7. Method for direct measurement of cosmic acceleration by 21-cm absorption systems.

    PubMed

    Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li

    2014-07-25

    So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively. PMID:25105607
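
    The size of the effect can be estimated from the standard Sandage-Loeb expression, dv/dt = c·H₀·[1 − E(z)/(1+z)] with E(z) = H(z)/H₀. For a flat ΛCDM universe with assumed fiducial parameters (not values from the paper), the drift accumulated over a decade is of order centimetres per second:

```python
import math

# Rough magnitude of the Sandage-Loeb velocity drift in flat LCDM:
# dv/dt = c * H0 * [1 - E(z)/(1+z)], with E(z) = H(z)/H0.
# Cosmological parameters below are assumed fiducial values.
C = 299_792_458.0                     # m/s
H0 = 70 * 1000 / 3.0857e22            # 70 km/s/Mpc in 1/s
OM, OL = 0.3, 0.7

def drift_cm_per_s(z, years):
    """Velocity drift accumulated over `years` for a source at redshift z."""
    E = math.sqrt(OM * (1 + z) ** 3 + OL)
    dvdt = C * H0 * (1 - E / (1 + z))          # m/s per second
    return dvdt * years * 3.156e7 * 100        # cm/s

print(f"z = 1, 10 yr: {drift_cm_per_s(1.0, 10):.2f} cm/s")
```

    At z ≈ 1 this gives a few cm/s per decade; the sign turns negative above z ≈ 2, reflecting the matter-dominated, decelerating era.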

  8. Strong RFI observed in protected 21 cm band at Zurich observatory, Switzerland

    NASA Astrophysics Data System (ADS)

    Monstein, C.

    2014-03-01

    While testing a new antenna-control software tool, the telescope was moved to its most western azimuth position, pointing at our own building. While the telescope was decelerating, the spectrometer showed strong broadband radio frequency interference (RFI) and two single-frequency carriers around 1412 and 1425 MHz, both within the internationally protected band. After lengthy analysis it was found that the AXIS2000 webcam was the source of both the broadband and the single-frequency interference. Switching off the webcam solved the problem immediately. For future observations of 21 cm radiation, all nearby electronics must therefore be switched off: not only the webcam but also all unused PCs, printers, network equipment, monitors, etc.

  9. First Limits on the 21 cm Power Spectrum during the Epoch of X-ray heating.

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, A.; Dillon, Joshua S.; Hewitt, J. N.; Loeb, A.; Mesinger, A.; Neben, A. R.; Offringa, A. R.; Tegmark, M.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Hurley-Walker, N.; Johnston-Hollitt, M.; Jacobs, Daniel C.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Thyagarajan, Nithyanandan; Oberoi, D.; Ord, S. M.; Paul, S.; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-05-01

    We present first results from radio observations with the Murchison Widefield Array seeking to constrain the power spectrum of 21 cm brightness temperature fluctuations between the redshifts of 11.6 and 17.9 (113 and 75 MHz). Three hours of observations were conducted over two nights with significantly different levels of ionospheric activity. We use these data to assess the impact of systematic errors at low frequency, including the ionosphere and radio-frequency interference, on a power spectrum measurement. We find that after the 1-3 hours of integration presented here, our measurements at the Murchison Radio Observatory are not limited by RFI, even within the FM band, and that the ionosphere does not appear to affect the level of power in the modes that we expect to be sensitive to cosmology. Power spectrum detections, inconsistent with noise, due to fine spectral structure imprinted on the foregrounds by reflections in the signal chain, occupy the spatial Fourier modes where we would otherwise be most sensitive to the cosmological signal. We are able to reduce this contamination using calibration solutions derived from autocorrelations, achieving a sensitivity of 10⁴ mK on comoving scales k ≲ 0.5 h Mpc⁻¹. This represents the first upper limit on the 21 cm power spectrum fluctuations at redshifts 12 ≲ z ≲ 18, but it is still limited by calibration systematics. While calibration improvements may allow us to further remove this contamination, our results emphasize that future experiments should carefully consider both the existence of spectral structure within the EoR window and their ability to calibrate it out.

  10. Linear and Circular polarization of CMB and cosmic 21cm radiation

    NASA Astrophysics Data System (ADS)

    De, Soma; Vachaspati, T.; Pogosian, L.; Tashiro, H.

    2014-01-01

    I will discuss the effect of galactic and primordial magnetic fields on the linear polarization of the CMB. Faraday rotation (FR) of the CMB polarization, as measured through mode-coupling correlations of E and B modes, can be a promising probe of a stochastic primordial magnetic field (PMF). We use existing estimates of the Milky Way rotation measure (RM) to forecast its detectability with upcoming and future CMB experiments. We find that a realistic future sub-orbital experiment, covering a patch of the sky near the galactic poles, can detect a scale-invariant PMF of 0.1 nano-Gauss at better than 95% confidence level. Next I will discuss how the galactic magnetic field affects the polarization of the 21 cm signal. Unpolarized 21 cm radiation acquires a degree of linear polarization during the EoR due to Thomson scattering. This linear polarization, if measured, could probe important information about the EoR. We show that 99% accuracy in galactic rotation measure (RM) data is necessary to recover the initial E-mode signal. I will conclude by addressing the question of whether the CMB can become circularly polarized through secondary effects along the line of sight. As the CMB passes through galaxies and galaxy clusters, it can acquire circular polarization through Faraday conversion (FC) (Pacholczyk 1998; Cooray et al. 2002). In particular, explosions of the first stars can induce circular polarization via Faraday conversion, and this signal has no strong local foreground. The unique frequency dependence of the FC signal will allow one to eliminate other possible sources of circular polarization, enabling a probe of the first star explosions.
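
    The λ² scaling that underlies the rotation-measure discussion is worth making concrete. A hedged example (the 10 rad/m² RM is an assumed, Milky-Way-like order of magnitude): the plane of polarization rotates by Δχ = RM·λ², which is tiny at CMB frequencies but large at 21 cm:

```python
import math

# Faraday rotation of the polarization angle: delta_chi = RM * lambda^2.
# RM = 10 rad/m^2 is an assumed, order-of-magnitude Galactic value.
def rotation_deg(rm, freq_hz):
    """Polarization rotation (degrees) for rotation measure rm [rad/m^2]."""
    lam = 299_792_458.0 / freq_hz   # wavelength in metres
    return math.degrees(rm * lam ** 2)

print(f"at 150 GHz (CMB channel): {rotation_deg(10, 150e9):.4f} deg")
print(f"at 1420 MHz (21 cm):      {rotation_deg(10, 1.42e9):.1f} deg")
```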

  11. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    SciTech Connect

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

    2012-09-20

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramér-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that propose to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  12. First limits on the 21 cm power spectrum during the Epoch of X-ray heating

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, A.; Dillon, Joshua S.; Hewitt, J. N.; Loeb, A.; Mesinger, A.; Neben, A. R.; Offringa, A. R.; Tegmark, M.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Hurley-Walker, N.; Johnston-Hollitt, M.; Jacobs, Daniel C.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Thyagarajan, Nithyanandan; Oberoi, D.; Ord, S. M.; Paul, S.; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-08-01

    We present first results from radio observations with the Murchison Widefield Array seeking to constrain the power spectrum of 21 cm brightness temperature fluctuations between the redshifts of 11.6 and 17.9 (113 and 75 MHz). Three hours of observations were conducted over two nights with significantly different levels of ionospheric activity. We use these data to assess the impact of systematic errors at low frequency, including the ionosphere and radio-frequency interference, on a power spectrum measurement. We find that after the 1-3 hours of integration presented here, our measurements at the Murchison Radio Observatory are not limited by RFI, even within the FM band, and that the ionosphere does not appear to affect the level of power in the modes that we expect to be sensitive to cosmology. Power spectrum detections, inconsistent with noise, due to fine spectral structure imprinted on the foregrounds by reflections in the signal chain, occupy the spatial Fourier modes where we would otherwise be most sensitive to the cosmological signal. We are able to reduce this contamination using calibration solutions derived from autocorrelations, achieving a sensitivity of 10⁴ mK on comoving scales k ≲ 0.5 h Mpc⁻¹. This represents the first upper limit on the 21 cm power spectrum fluctuations at redshifts 12 ≲ z ≲ 18, but it is still limited by calibration systematics. While calibration improvements may allow us to further remove this contamination, our results emphasize that future experiments should carefully consider both the existence of spectral structure within the EoR window and their ability to calibrate it out.

  13. Reionization and beyond: detecting the peaks of the cosmological 21 cm signal

    NASA Astrophysics Data System (ADS)

    Mesinger, Andrei; Ewall-Wice, Aaron; Hewitt, Jacqueline

    2014-04-01

    The cosmological 21 cm signal is set to become the most powerful probe of the early Universe, with first-generation interferometers aiming to make statistical detections of reionization. There is increasing interest also in the pre-reionization epoch, when the intergalactic medium (IGM) was heated by an early X-ray background. Here, we perform parameter studies varying the halo masses capable of hosting galaxies and their X-ray production efficiencies. These two fundamental parameters control the timing and relative offset of reionization and IGM heating, making them the most relevant for predicting the signal during both epochs. We also relate these to popular models of warm dark matter cosmologies. For each parameter combination, we compute the signal-to-noise ratio (S/N) of the large-scale (k ˜ 0.1 Mpc⁻¹) 21 cm power for both reionization and X-ray heating, for a 2000 h observation with several instruments: the 128-tile Murchison Wide Field Array (MWA128T), a 256-tile extension (MWA256T), the Low Frequency Array (LOFAR), the 128-element Precision Array for Probing the Epoch of Reionization (PAPER), and the second-generation Square Kilometre Array (SKA). We show that X-ray heating and reionization are in many cases of comparable detectability. For fiducial astrophysical parameters, MWA128T might detect X-ray heating, thanks to its extended bandpass. When it comes to reionization, both MWA128T and PAPER will achieve only marginal detections, unless foregrounds on larger scales can be mitigated. On the other hand, LOFAR should detect plausible models of reionization at S/N > 10. The SKA will easily detect both X-ray heating and reionization.

  15. Methods to Map Cropping Intensity Using MODIS Data (Invited)

    NASA Astrophysics Data System (ADS)

    Jain, M.; Mondal, P.; DeFries, R. S.; Small, C.; Galford, G. L.

    2013-12-01

The food security of smallholder farmers is vulnerable to climate change and climate variability. Cropping intensity, the number of crops planted annually, can be used as a measure of food security for smallholder farmers given that it can greatly affect net production. Remote sensing tools and techniques offer a unique way to map cropping patterns over large spatial and temporal scales, as well as in near real time. Yet current techniques for quantifying cropping intensity using remote sensing may not accurately map smallholder farms, where the size of one agricultural plot is typically smaller than the spatial resolution of readily available satellite data like MODIS (250 m) and sometimes Landsat (30 m). This presentation describes techniques to map cropping intensity by quantifying the amount of cropped area at a 1 x 1 km scale using MODIS satellite data in study regions in India. Specifically, we present two methods to map cropped area, which are validated using higher-resolution QuickBird and Landsat data. The first method uses Landsat data to train MODIS data; while it has fairly high accuracy (R² > 0.80), it is difficult to automate over large spatial and temporal scales. The second method uses only MODIS data to quantify cropped area; it is easy to automate over large spatial and temporal scales but has slightly reduced accuracy. To illustrate the utility of these methods, we present maps of cropping intensity across several regions in India and show how changes in cropped area through time can be related to contemporaneous climate and irrigation data.

  16. Testing gravity at large scales with H I intensity mapping

    NASA Astrophysics Data System (ADS)

    Pourtsidou, Alkistis

    2016-09-01

We investigate the possibility of testing Einstein's general theory of relativity (GR) and the standard cosmological model via the E_G statistic using neutral hydrogen (H I) intensity mapping. We generalize the Fourier-space estimator for E_G to include H I as a biased tracer of matter and forecast statistical errors using H I clustering and lensing surveys that can be performed in the near future, in combination with ongoing and forthcoming optical galaxy and cosmic microwave background (CMB) surveys. We find that fractional errors <1 per cent in the E_G measurement can be achieved in a number of cases and compare the ability of various survey combinations to differentiate between GR and specific modified gravity models. Measuring E_G with intensity mapping and the Square Kilometre Array can provide exquisite tests of gravity at cosmological scales.
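
    As a reference point for the forecasts above (a standard result from the literature, not spelled out in the abstract), the E_G statistic reduces in GR on linear scales to a ratio of the matter density to the growth rate, which is why per-cent-level errors on E_G translate directly into a test of gravity:

```latex
E_G(z) \;\to\; \frac{\Omega_{m,0}}{f(z)} \quad \text{(GR, linear scales)},
\qquad
f(z) \equiv \frac{\mathrm{d}\ln D}{\mathrm{d}\ln a} \simeq \Omega_m(z)^{0.55} .
```

    Modified gravity models generically predict a different, possibly scale-dependent, value of this ratio, which is what the survey combinations above are being compared on.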

  17. Tests of the Tully-Fisher relation. 1: Scatter in infrared magnitude versus 21 cm width

    NASA Technical Reports Server (NTRS)

    Bernstein, Gary M.; Guhathakurta, Puragra; Raychaudhury, Somak; Giovanelli, Riccardo; Haynes, Martha P.; Herter, Terry; Vogt, Nicole P.

    1994-01-01

We examine the precision of the Tully-Fisher relation (TFR) using a sample of galaxies in the Coma region of the sky, and find that it is good to 5% or better in measuring relative distances. Total magnitudes and disk axis ratios are derived from H and I band surface photometry, and Arecibo 21 cm profiles define the rotation speeds of the galaxies. Using 25 galaxies for which the disk inclination and 21 cm width are well defined, we find an rms deviation of 0.10 mag from a linear TFR with dI/d(log W_c) = -5.6. Each galaxy is assumed to be at a distance proportional to its redshift, and an extinction correction of 1.4(1-b/a) mag is applied to the total I magnitude. The measured scatter is less than 0.15 mag using milder extinction laws from the literature. The I band TFR scatter is consistent with measurement error, and the 95% CL limits on the intrinsic scatter are 0-0.10 mag. The rms scatter using H band magnitudes is 0.20 mag (N = 17). The low width galaxies have scatter in H significantly in excess of known measurement error, but the higher width half of the galaxies have scatter consistent with measurement error. The H band TFR slope may be as steep as the I band slope. As the first applications of this tight correlation, we note the following: (1) the data for the particular spirals commonly used to define the TFR distance to the Coma cluster are inconsistent with being at a common distance and are in fact in free Hubble expansion, with an upper limit of 300 km/s on the rms peculiar line-of-sight velocity of these gas-rich spirals; and (2) the gravitational potential in the disks of these galaxies has typical ellipticity less than 5%. The published data for three nearby spiral galaxies with Cepheid distance determinations are inconsistent with our Coma TFR, suggesting that these local calibrators are either ill-measured or peculiar relative to the Coma Supercluster spirals, or that the TFR has a varying form in different locales.
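
    A minimal sketch of the two corrections described above, assuming the abstract's slope dI/d(log W_c) = -5.6 and extinction form 1.4(1 - b/a) mag; the zero point I0 below is a hypothetical placeholder, not a value fitted in the paper:

```python
import math

SLOPE = -5.6   # dI/d(log W_c), from the abstract
I0 = 0.0       # hypothetical zero point (arbitrary placeholder)

def extinction_corrected_mag(i_total, axis_ratio_b_over_a):
    """Apply the internal-extinction correction 1.4*(1 - b/a) mag to I."""
    return i_total - 1.4 * (1.0 - axis_ratio_b_over_a)

def tfr_predicted_mag(width_kms):
    """Predicted I magnitude from the linear TFR at 21 cm width W_c."""
    return I0 + SLOPE * math.log10(width_kms)
```

    A face-on disk (b/a = 1) receives no correction; an inclined disk with b/a = 0.5 is brightened by 0.7 mag before comparison with the TFR prediction.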

  18. Constraining high-redshift X-ray sources with next generation 21-cm power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Hewitt, Jacqueline; Mesinger, Andrei; Dillon, Joshua S.; Liu, Adrian; Pober, Jonathan

    2016-05-01

    We use the Fisher matrix formalism and seminumerical simulations to derive quantitative predictions of the constraints that power spectrum measurements on next-generation interferometers, such as the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array (SKA), will place on the characteristics of the X-ray sources that heated the high-redshift intergalactic medium. Incorporating observations between z = 5 and 25, we find that the proposed 331 element HERA and SKA phase 1 will be capable of placing ≲ 10 per cent constraints on the spectral properties of these first X-ray sources, even if one is unable to perform measurements within the foreground contaminated `wedge' or the FM band. When accounting for the enhancement in power spectrum amplitude from spin temperature fluctuations, we find that the observable signatures of reionization extend well beyond the peak in the power spectrum usually associated with it. We also find that lower redshift degeneracies between the signatures of heating and reionization physics lead to errors on reionization parameters that are significantly greater than previously predicted. Observations over the heating epoch are able to break these degeneracies and improve our constraints considerably. For these two reasons, 21-cm observations during the heating epoch significantly enhance our understanding of reionization as well.
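
    The Fisher-matrix machinery named above can be sketched in a few lines: F_ij sums the products of power spectrum derivatives over measurement bins, weighted by the noise, and marginalized parameter errors come from the inverse. The derivatives and noise values are toy inputs, not the simulated ones used in the paper.

```python
import math

def fisher_2x2(dp_d1, dp_d2, sigma):
    """2-parameter Fisher matrix: F_ij = sum_k dP_k/di * dP_k/dj / sigma_k^2."""
    f11 = sum(a * a / s ** 2 for a, s in zip(dp_d1, sigma))
    f12 = sum(a * b / s ** 2 for a, b, s in zip(dp_d1, dp_d2, sigma))
    f22 = sum(b * b / s ** 2 for b, s in zip(dp_d2, sigma))
    return [[f11, f12], [f12, f22]]

def marginalized_errors(f):
    """1-sigma errors: sqrt of the diagonal of F^-1 (2x2 closed form)."""
    det = f[0][0] * f[1][1] - f[0][1] ** 2
    return math.sqrt(f[1][1] / det), math.sqrt(f[0][0] / det)
```

    Degeneracies like the heating/reionization one discussed above show up as large off-diagonal terms F_12, which inflate the marginalized errors until extra redshift bins break them.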

  19. The 21-cm signature of the first stars during the Lyman-Werner feedback era

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Barkana, Rennan; Visbal, Eli; Tseliakhovich, Dmitriy; Hirata, Christopher M.

    2013-07-01

The formation of the first stars is an exciting frontier area in astronomy. Early redshifts (z ˜ 20) have become observationally promising as a result of a recently recognized effect of a supersonic relative velocity between the dark matter and gas. This effect produces prominent structure on 100 comoving Mpc scales, which makes it much more feasible to detect 21-cm fluctuations from the epoch of first heating. We use semi-numerical hybrid methods to follow for the first time the joint evolution of the X-ray and Lyman-Werner radiative backgrounds, including the effect of the supersonic streaming velocity on the cosmic distribution of stars. We incorporate self-consistently the negative feedback on star formation induced by the Lyman-Werner radiation, which dissociates molecular hydrogen and thus suppresses gas cooling. We find that the feedback delays the X-ray heating transition by Δz ˜ 2, but leaves a promisingly large fluctuation signal over a broad redshift range. The large-scale power spectrum is predicted to reach a maximal signal-to-noise ratio of S/N ˜ 3-4 at z ˜ 18 (for a projected first-generation instrument), with S/N > 1 out to z ˜ 22-23. We hope to stimulate additional numerical simulations as well as observational efforts focused on the epoch prior to cosmic reionization.

  20. Radio frequency interference at Jodrell Bank Observatory within the protected 21 cm band

    NASA Technical Reports Server (NTRS)

    Tarter, J.

    1989-01-01

    Radio frequency interference (RFI) will provide one of the most difficult challenges to systematic Searches for Extraterrestrial Intelligence (SETI) at microwave frequencies. The SETI-specific equipment is being optimized for the detection of signals generated by a technology rather than those generated by natural processes in the universe. If this equipment performs as expected, then it will inevitably detect many signals originating from terrestrial technology. If these terrestrial signals are too numerous and/or strong, the equipment will effectively be blinded to the (presumably) weaker extraterrestrial signals being sought. It is very difficult to assess how much of a problem RFI will actually represent to future observations, without employing the equipment and beginning the search. In 1983 a very high resolution spectrometer was placed at the Nuffield Radio Astronomy Laboratories at Jodrell Bank, England. This equipment permitted an investigation of the interference environment at Jodrell Bank, at that epoch, and at frequencies within the 21 cm band. This band was chosen because it has long been "protected" by international agreement; no transmitters should have been operating at those frequencies. The data collected at Jodrell Bank were expected to serve as a "best case" interference scenario and provide the minimum design requirements for SETI equipment that must function in the real and noisy environment. This paper describes the data collection and analysis along with some preliminary conclusions concerning the nature of the interference environment at Jodrell Bank.

  2. 21 cm Synthesis Observations of VIRGOHI 21-A Possible Dark Galaxy in the Virgo Cluster

    NASA Astrophysics Data System (ADS)

    Minchin, Robert; Davies, Jonathan; Disney, Michael; Grossi, Marco; Sabatini, Sabina; Boyce, Peter; Garcia, Diego; Impey, Chris; Jordan, Christine; Lang, Robert; Marble, Andrew; Roberts, Sarah; van Driel, Wim

    2007-12-01

Many observations indicate that dark matter dominates the extragalactic universe, yet no totally dark structure of galactic proportions has ever been convincingly identified. Previously, we have suggested that VIRGOHI 21, a 21 cm source we found in the Virgo Cluster using Jodrell Bank, was a possible dark galaxy because of its broad line width (~200 km s⁻¹) unaccompanied by any visible gravitational source to account for it. We have now imaged VIRGOHI 21 in the neutral hydrogen line and find what could be a dark, edge-on, spinning disk with the mass and diameter of a typical spiral galaxy. Moreover, VIRGOHI 21 has unquestionably been involved in an interaction with NGC 4254, a luminous spiral with an odd one-armed morphology, but lacking the massive interactor normally linked with such a feature. Numerical models of NGC 4254 call for a close interaction ~10⁸ yr ago with a perturber of ~10¹¹ M⊙. This we take as additional evidence for the massive nature of VIRGOHI 21, as there does not appear to be any other viable candidate. We have also used the Hubble Space Telescope to search for stars associated with the H I and find none down to an I-band surface brightness limit of 31.1 ± 0.2 mag arcsec⁻².

  3. A Dark Galaxy in the Virgo Cluster Imaged at 21-cm

    NASA Astrophysics Data System (ADS)

    Minchin, R.; Disney, M. J.; Davies, J. I.; Marble, A. R.; Impey, C. D.; Boyce, P. J.; Garcia, D. A.; Grossi, M.; Jordan, C. A.; Lang, R. H.; Roberts, S.; Sabatini, S.; van Driel, W.

    Dark Matter supposedly dominates the extragalactic Universe (Peebles 1993; Peacock 1998; Moore et al. 1999; D'Onghi & Lake 2004), yet no dark structure of galactic proportions has ever been convincingly identified. Earlier (Minchin et al. 2005) we suggested that VIRGOHI 21, a 21-cm source we found in the Virgo Cluster at Jodrell Bank using single-dish observations (Davies et al. 2004), was probably such a dark galaxy because of its broad line-width (~200 km s⁻¹) unaccompanied by any visible gravitational source to account for it. We have now imaged VIRGOHI 21 in the neutral-hydrogen line, and have found what appears to be a dark, edge-on, spinning disc with the mass and diameter of a typical spiral galaxy. Moreover the disc has unquestionably interacted with NGC 4254, a luminous spiral with an odd one-armed morphology, but lacking the massive interactor normally linked with such a feature. Published numerical models (Vollmer et al. 2005) of NGC 4254 call for a close interaction ~10⁸ years ago with a perturber of ~10¹¹ solar masses. This we take as further, independent evidence for the massive nature of VIRGOHI 21.

  4. Scintillation noise power spectrum and its impact on high-redshift 21-cm observations

    NASA Astrophysics Data System (ADS)

    Vedantham, H. K.; Koopmans, L. V. E.

    2016-05-01

    Visibility scintillation resulting from wave propagation through the turbulent ionosphere can be an important source of noise at low radio frequencies (ν ≲ 200 MHz). Many low-frequency experiments are underway to detect the power spectrum of brightness temperature fluctuations of the neutral-hydrogen 21-cm signal from the Epoch of Reionization (EoR: 12 ≳ z ≳ 7, 100 ≲ ν ≲ 175 MHz). In this paper, we derive scintillation noise power spectra in such experiments while taking into account the effects of typical data processing operations such as self-calibration and Fourier synthesis. We find that for minimally redundant arrays such as LOFAR and MWA, scintillation noise is of the same order of magnitude as thermal noise, has a spectral coherence dictated by stretching of the snapshot uv-coverage with frequency, and thus is confined to the well-known wedge-like structure in the cylindrical (two-dimensional) power spectrum space. Compact, fully redundant (d_core ≲ r_F ≈ 300 m at 150 MHz) arrays such as HERA and SKA-LOW (core) will be scintillation noise dominated at all baselines, but the spatial and frequency coherence of this noise will allow it to be removed along with spectrally smooth foregrounds.

  5. 21 cm signal from cosmic dawn: imprints of spin temperature fluctuations and peculiar velocities

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.

    2015-02-01

    The 21 cm brightness temperature δTb fluctuations from reionization promise to provide information on the physical processes during that epoch. We present a formalism for generating the δTb distribution using dark matter simulations and a 1D radiative transfer code. Our analysis is able to account for the spin temperature TS fluctuations arising from inhomogeneous X-ray heating and Lyα coupling during cosmic dawn. The δTb power spectrum amplitude at large scales (k ˜ 0.1 Mpc⁻¹) is maximum when ˜10 per cent of the gas (by volume) is heated above the cosmic microwave background temperature. The power spectrum shows a `bump'-like feature during cosmic dawn and its location measures the typical sizes of heated regions. We find that the effect of peculiar velocities on the power spectrum is negligible at large scales for most part of the reionization history. During early stages (when the volume averaged ionization fraction ≲ 0.2) this is because the signal is dominated by fluctuations in TS. For reionization models that are solely driven by stars within high-mass (≳ 10⁹ M⊙) haloes, the peculiar velocity effects are prominent only at smaller scales (k ≳ 0.4 Mpc⁻¹) where patchiness in the neutral hydrogen density dominates the signal. The conclusions are unaffected by changes in the amplitude or steepness in the X-ray spectra of the sources.
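
    For reference, the brightness-temperature field discussed above is commonly written as follows (a standard expression from the 21 cm literature, not quoted in the abstract; the last factor is the line-of-sight peculiar-velocity gradient term whose impact the paper assesses):

```latex
\delta T_b \simeq 27\, x_{\rm HI}\,(1+\delta)
\left(1 - \frac{T_\gamma}{T_S}\right)
\left(\frac{1+z}{10}\,\frac{0.15}{\Omega_m h^2}\right)^{1/2}
\left(\frac{\Omega_b h^2}{0.023}\right)
\left(\frac{H(z)}{H(z) + \mathrm{d}v_\parallel/\mathrm{d}r}\right)\ \mathrm{mK}.
```

    The (1 - T_γ/T_S) factor is what makes the signal sensitive to X-ray heating and Lyα coupling: before heating, T_S < T_γ and the signal appears in absorption.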

  6. Empirical covariance modeling for 21 cm power spectrum estimation: A method demonstration and new limits from early Murchison Widefield Array 128-tile data

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Neben, Abraham R.; Hewitt, Jacqueline N.; Tegmark, Max; Barry, N.; Beardsley, A. P.; Bowman, J. D.; Briggs, F.; Carroll, P.; de Oliveira-Costa, A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hernquist, L.; Hurley-Walker, N.; Jacobs, D. C.; Kim, H. S.; Kittiwisit, P.; Lenc, E.; Line, J.; Loeb, A.; McKinley, B.; Mitchell, D. A.; Morales, M. F.; Offringa, A. R.; Paul, S.; Pindor, B.; Pober, J. C.; Procopio, P.; Riding, J.; Sethi, S.; Shankar, N. Udaya; Subrahmanyan, R.; Sullivan, I.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C.; Wayth, R. B.; Webster, R. L.; Wyithe, S.; Bernardi, G.; Cappallo, R. J.; Deshpande, A. A.; Johnston-Hollitt, M.; Kaplan, D. L.; Lonsdale, C. J.; McWhirter, S. R.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Srivani, K. S.; Williams, A.; Williams, C. L.

    2015-06-01

    The separation of the faint cosmological background signal from bright astrophysical foregrounds remains one of the most daunting challenges of mapping the high-redshift intergalactic medium with the redshifted 21 cm line of neutral hydrogen. Advances in mapping and modeling of diffuse and point source foregrounds have improved subtraction accuracy, but no subtraction scheme is perfect. Precisely quantifying the errors and error correlations due to missubtracted foregrounds allows for both the rigorous analysis of the 21 cm power spectrum and for the maximal isolation of the "EoR window" from foreground contamination. We present a method to infer the covariance of foreground residuals from the data itself in contrast to previous attempts at a priori modeling. We demonstrate our method by setting limits on the power spectrum using a 3 h integration from the 128-tile Murchison Widefield Array. Observing between 167 and 198 MHz, we find at 95% confidence a best limit of Δ²(k) < 3.7 × 10⁴ mK² at comoving scale k = 0.18 h Mpc⁻¹ and at z = 6.8, consistent with existing limits.
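
    The quoted limit uses the dimensionless power spectrum convention Δ²(k) = k³ P(k) / (2π²); a small sketch of the conversion back to P(k) (the convention is standard, the helper name is ours):

```python
import math

def delta2_to_pk(delta2_mk2, k_hmpc):
    """P(k) in mK^2 (h^-1 Mpc)^3 from dimensionless Delta^2(k) in mK^2."""
    return 2.0 * math.pi ** 2 * delta2_mk2 / k_hmpc ** 3

# The 95% confidence limit above, Delta^2 < 3.7e4 mK^2 at k = 0.18 h/Mpc,
# corresponds to P(k) < ~1.25e8 mK^2 (h^-1 Mpc)^3.
```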

  7. EXPLORING THE COSMIC REIONIZATION EPOCH IN FREQUENCY SPACE: AN IMPROVED APPROACH TO REMOVE THE FOREGROUND IN 21 cm TOMOGRAPHY

    SciTech Connect

    Wang, Jingying; Xu, Haiguang; Guo, Xueying; Li, Weitian; Liu, Chengze; An, Tao; Wang, Yu; Gu, Junhua; Martineau-Huynh, Olivier; Wu, Xiang-Ping E-mail: zishi@sjtu.edu.cn

    2013-02-15

    With the intent of correctly restoring the redshifted 21 cm signals emitted by neutral hydrogen during the cosmic reionization processes, we re-examine the separation approaches based on the quadratic polynomial fitting technique in frequency space, in order to investigate whether they work satisfactorily with complex foregrounds, by quantitatively evaluating the quality of restored 21 cm signals in terms of sample statistics. We construct a foreground model that characterizes both spatial and spectral substructures of the real sky, and use it to simulate the observed radio spectra. By comparing different separation approaches through statistical analysis of restored 21 cm spectra and corresponding power spectra, as well as their constraints on the mean halo bias b and average ionization fraction x_e of the reionization processes, at z = 8 and a noise level of 60 mK we find that although the complex foreground can be well approximated with a quadratic polynomial expansion, a significant part of the Mpc-scale components of the 21 cm signals (75% for ≳ 6 h⁻¹ Mpc scales and 34% for ≳ 1 h⁻¹ Mpc scales) is lost because it tends to be misidentified as part of the foreground when the single-narrow-segment separation approach is applied. The best restoration of the 21 cm signals and the tightest determination of b and x_e are obtained with the three-narrow-segment fitting technique proposed in this paper. Similar results are obtained at other redshifts.
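
    A minimal pure-Python sketch of the quadratic-polynomial foreground fit examined above: fit a smooth quadratic along the frequency axis and keep the residual as the candidate 21 cm signal. This is an illustrative normal-equation solver, not the authors' pipeline.

```python
def fit_quadratic(x, y):
    """Least-squares fit y ~ a0 + a1*x + a2*x**2; returns (a0, a1, a2)."""
    # Normal-equation entries: sums of powers of x (Vandermonde moments).
    s = [sum(xi ** k for xi in x) for k in range(5)]
    t = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = list(t)
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, 3))) / A[r][r]
    return tuple(a)

def residual(x, y):
    """Foreground-subtracted spectrum: data minus the smooth quadratic model."""
    a0, a1, a2 = fit_quadratic(x, y)
    return [yi - (a0 + a1 * xi + a2 * xi ** 2) for xi, yi in zip(x, y)]
```

    The paper's point is visible in this setup: any 21 cm fluctuation smooth enough to resemble a quadratic over the fitted segment is absorbed into the fit and lost, which motivates the narrower three-segment fitting the authors propose.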

  8. Sensitive 21cm Observations of Neutral Hydrogen in the Local Group near M31

    NASA Astrophysics Data System (ADS)

    Wolfe, Spencer A.; Lockman, Felix J.; Pisano, D. J.

    2016-01-01

    Very sensitive 21 cm H I measurements have been made at several locations around the Local Group galaxy M31 using the Green Bank Telescope at an angular resolution of 9.′1, with a 5σ detection level of N_HI = 3.9 × 10¹⁷ cm⁻² for a 30 km s⁻¹ line. Most of the H I in a 12 square-degree area almost equidistant between M31 and M33 is contained in nine discrete clouds that have a typical size of a few kpc and an H I mass of 10⁵ M⊙. Their velocities in the Local Group Standard of Rest lie between -100 and +40 km s⁻¹, comparable to the systemic velocities of M31 and M33. The clouds appear to be isolated kinematically and spatially from each other. The total H I mass of all nine clouds is 1.4 × 10⁶ M⊙ for an adopted distance of 800 kpc, with perhaps another 0.2 × 10⁶ M⊙ in smaller clouds or more diffuse emission. The H I mass of each cloud is typically three orders of magnitude less than the dynamical (virial) mass needed to bind the cloud gravitationally. Although they have the size and H I mass of dwarf galaxies, the clouds are unlikely to be part of the satellite system of the Local Group, as they lack stars. To the north of M31, sensitive H I measurements on a coarse grid find emission that may be associated with an extension of the M31 high-velocity cloud (HVC) population to projected distances of ˜100 kpc. An extension of the M31 HVC population at a similar distance to the southeast, toward M33, is not observed.
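
    H I masses like those above follow from the standard optically thin 21 cm relation, M_HI = 2.356 × 10⁵ D² ∫S dv, with D in Mpc and the integrated flux in Jy km s⁻¹ (a textbook formula, not one stated in the abstract; the example flux below is ours):

```python
def hi_mass_msun(distance_mpc, flux_integral_jy_kms):
    """Optically thin H I mass in solar masses: 2.356e5 * D^2 * int(S dv)."""
    return 2.356e5 * distance_mpc ** 2 * flux_integral_jy_kms

# At the adopted 800 kpc distance, a cloud with an integrated flux of
# ~0.66 Jy km/s has M_HI ~ 1e5 M_sun, the typical cloud mass quoted above.
```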

  9. A Practical Theorem on Using Interferometry to Measure the Global 21-cm Signal

    NASA Astrophysics Data System (ADS)

    Venumadhav, Tejaswi; Chang, Tzu-Ching; Doré, Olivier; Hirata, Christopher M.

    2016-08-01

    The sky-averaged, or global, background of redshifted 21 cm radiation is expected to be a rich source of information on cosmological reheating and reionization. However, measuring the signal is technically challenging: one must extract a small, frequency-dependent signal from under much brighter spectrally smooth foregrounds. Traditional approaches to study the global signal have used single antennas, which require one to calibrate out the frequency-dependent structure in the overall system gain (due to internal reflections, for example) as well as remove the noise bias from auto-correlating a single amplifier output. This has motivated proposals to measure the signal using cross-correlations in interferometric setups, where additional calibration techniques are available. In this paper we focus on the general principles driving the sensitivity of the interferometric setups to the global signal. We prove that this sensitivity is directly related to two characteristics of the setup: the cross-talk between readout channels (i.e., the signal picked up at one antenna when the other one is driven) and the correlated noise due to thermal fluctuations of lossy elements (e.g., absorbers or the ground) radiating into both channels. Thus in an interferometric setup, one cannot suppress cross-talk and correlated thermal noise without reducing sensitivity to the global signal by the same factor—instead, the challenge is to characterize these effects and their frequency dependence. We illustrate our general theorem by explicit calculations within toy setups consisting of two short-dipole antennas in free space and above a perfectly reflecting ground surface, as well as two well-separated identical lossless antennas arranged to achieve zero cross-talk.

  10. SPECTRAL POLARIZATION OF THE REDSHIFTED 21 cm ABSORPTION LINE TOWARD 3C 286

    SciTech Connect

    Wolfe, Arthur M.; Jorgenson, Regina A.; Robishaw, Timothy; Heiles, Carl; Xavier Prochaska, J. E-mail: raj@ast.cam.ac.uk E-mail: heiles@astro.berkeley.edu

    2011-05-20

    A reanalysis of the Stokes-parameter spectra obtained of the z = 0.692 21 cm absorption line toward 3C 286 shows that our original claimed detection of Zeeman splitting by a line-of-sight magnetic field, B_los = 87 μG, is incorrect. Because of an insidious software error, what we reported as Stokes V is actually Stokes U: the revised Stokes V spectrum indicates a 3σ upper limit of B_los < 17 μG. The correct analysis reveals an absorption feature in fractional polarization that is offset in velocity from the Stokes I spectrum by -1.9 km s⁻¹. The polarization position-angle spectrum shows a dip that is also significantly offset from the Stokes I feature, but at a velocity that differs slightly from the absorption feature in fractional polarization. We model the absorption feature with three velocity components against the core-jet structure of 3C 286. Our χ² minimization fitting results in components with differing (1) ratios of H I column density to spin temperature, (2) velocity centroids, and (3) velocity dispersions. The change in polarization position angle with frequency implies incomplete coverage of the background jet source by the absorber. It also implies a spatial variation of the polarization position angle across the jet source, which is observed at frequencies higher than the 839.4 MHz absorption frequency. The multi-component structure of the gas is best understood in terms of components with spatial scales of ~100 pc comprised of hundreds of low-temperature (T ≤ 200 K) clouds with linear dimensions of <<100 pc. We conclude that previous attempts to model the foreground gas with a single uniform cloud are incorrect.

  11. Coaxing cosmic 21 cm fluctuations from the polarized sky using m-mode analysis

    NASA Astrophysics Data System (ADS)

    Shaw, J. Richard; Sigurdson, Kris; Sitwell, Michael; Stebbins, Albert; Pen, Ue-Li

    2015-04-01

    In this paper we continue to develop the m-mode formalism, a technique for efficient and optimal analysis of wide-field transit radio telescopes, targeted at 21 cm cosmology. We extend this formalism to give an accurate treatment of the polarized sky, fully accounting for the effects of polarization leakage and cross polarization. We use the geometry of the measured set of visibilities to project down to pure temperature modes on the sky, serving as a significant compression and an effective first filter of polarized contaminants. As in our previous work, we use the m-mode formalism with the Karhunen-Loève transform to give a highly efficient method for foreground cleaning, and demonstrate its success in cleaning realistic polarized skies observed with an instrument suffering from substantial off-axis polarization leakage. We develop an optimal quadratic estimator in the m-mode formalism which can be efficiently calculated using a Monte Carlo technique. This is used to assess the implications of foreground removal for power spectrum constraints, where we find that our method can clean foregrounds well below the foreground wedge, rendering only scales k∥ < 0.02 h Mpc⁻¹ inaccessible. As this approach assumes perfect knowledge of the telescope, we perform a conservative test of how essential this is by simulating and analyzing data sets with deviations about our assumed telescope. Assuming no other techniques to mitigate bias are applied, we find we recover unbiased power spectra when the per-feed beamwidth is measured to 0.1% and amplifier gains are known to 1% within each minute. Finally, as an example application, we extend our forecasts to a wideband 400-800 MHz cosmological observation and consider the implications for probing dark energy, finding a pathfinder-scale medium-sized cylinder telescope improves the Dark Energy Task Force figure of merit by around 70% over Planck and Stage II experiments alone.

  12. Combining Optical and 21 cm Observations: A Study of Baryons in Galaxies

    NASA Astrophysics Data System (ADS)

    Faith Horne, Lisa; Zeh, P.; Rosenberg, J. L.; West, A. A.; ALFALFA Team

    2009-01-01

    This poster presents a first look at combining data from the Arecibo Legacy Fast ALFA survey (ALFALFA), a blind H I 21 cm radio survey, with optical data from the Sloan Digital Sky Survey (SDSS). The goal of the project is to study the state of baryonic mass in galaxies in order to provide a better understanding of the evolution of gas into stars. Optical surveys tend to overlook some gas-rich galaxies, such as low surface brightness galaxies, because these systems are too low-contrast to be easily identified by their starlight, while H I surveys can easily identify such objects by the gas that they contain. However, H I surveys tend to miss elliptical and spheroidal galaxies that have little gas. Therefore, the combination of the ALFALFA and SDSS data allows a wider selection of objects to be detected and studied than would be possible with either survey alone. The data presented here are taken from one region of sky where ALFALFA and SDSS overlap. The environments probed in this region include the Great Wall and the low-density region in front of the Great Wall. This region contains a variety of galaxies, from very dim, gas-deprived ellipticals to extremely bright, gas-rich spirals. We present measurements of H I mass, optical luminosity, and velocity width for galaxies in the sample and examine the relationships between these quantities. ALFALFA, PIs Giovanelli and Haynes, is a legacy survey funded by NAIC and NSF. SDSS is a legacy survey managed by the Astrophysical Research Consortium for the Participating Institutions.

  13. The impact of spin-temperature fluctuations on the 21-cm moments

    NASA Astrophysics Data System (ADS)

    Watkinson, C. A.; Pritchard, J. R.

    2015-12-01

    This paper considers the impact of Lyman α coupling and X-ray heating on the 21-cm brightness-temperature one-point statistics (as predicted by seminumerical simulations). The X-ray production efficiency is varied over four orders of magnitude and the hardness of the X-ray spectrum is varied from that predicted for high-mass X-ray binaries, to the softer spectrum expected from the hot interstellar medium. We find peaks in the redshift evolution of both the variance and skewness associated with the efficiency of X-ray production. The amplitude of the variance is also sensitive to the hardness of the X-ray spectral energy distribution. We find that the relative timing of the coupling and heating phases can be inferred from the redshift extent of a plateau that connects a peak in the variance's evolution associated with Lyman α coupling to the heating peak. Importantly, we find that late X-ray heating would seriously hamper our ability to constrain reionization with the variance. Late X-ray heating also qualitatively alters the evolution of the skewness, providing a clean way to constrain such models. If foregrounds can be removed, we find that LOFAR, MWA and PAPER could constrain reionization and late X-ray heating models with the variance. We find that HERA and SKA (phase 1) will be able to constrain both reionization and heating by measuring the variance using foreground-avoidance techniques. If foregrounds can be removed they will also be able to constrain the nature of Lyman α coupling.
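
    The one-point statistics tracked above are just the second and third moments of the brightness-temperature field; a minimal sketch, using population (biased) definitions on a flattened list of map pixels:

```python
def variance(samples):
    """Second central moment of the field values."""
    mu = sum(samples) / len(samples)
    return sum((x - mu) ** 2 for x in samples) / len(samples)

def skewness(samples):
    """Standardized third central moment: m3 / variance**1.5."""
    mu = sum(samples) / len(samples)
    var = variance(samples)
    m3 = sum((x - mu) ** 3 for x in samples) / len(samples)
    return m3 / var ** 1.5
```

    A symmetric field has zero skewness; a few rare hot (heated) regions in an otherwise cold map drive it positive, which is why the skewness evolution cleanly flags late X-ray heating models.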

  14. A Flux Scale for Southern Hemisphere 21 cm Epoch of Reionization Experiments

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki; Bowman, Judd; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Klima, Pat; MacMahon, Dave H. E.; Manley, Jason R.; Moore, David F.; Pober, Jonathan C.; Stefan, Irina I.; Walbrugh, William P.

    2013-10-01

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from -46° to -40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of -0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
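The Pictor A model quoted above is a single power law in frequency; a small sketch evaluating it (the function name is ours; the 382 Jy normalization at 150 MHz and the -0.76 spectral index are the values quoted in the abstract):

```python
def flux_density_jy(freq_mhz, s150_jy=382.0, alpha=-0.76):
    """Single power-law flux model S(nu) = S_150 * (nu / 150 MHz)^alpha."""
    return s150_jy * (freq_mhz / 150.0) ** alpha

print(flux_density_jy(150.0))   # → 382.0 Jy at the reference frequency
print(flux_density_jy(100.0))   # brighter at lower frequency (alpha < 0)
```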

  15. A Knowledge Intensive Approach to Mapping Clinical Narrative to LOINC

    PubMed Central

    Fiszman, Marcelo; Shin, Dongwook; Sneiderman, Charles A.; Jin, Honglan; Rindflesch, Thomas C.

    2010-01-01

    Many natural language processing systems are being applied to clinical text, yet clinically useful results are obtained only by honing a system to a particular context. We suggest that concentration on the information needed for this processing is crucial and present a knowledge intensive methodology for mapping clinical text to LOINC. The system takes published case reports as input and maps vital signs and body measurements and reports of diagnostic procedures to fully specified LOINC codes. Three kinds of knowledge are exploited: textual, ontological, and pragmatic (including information about physiology and the clinical process). Evaluation on 4809 sentences yielded precision of 89% and recall of 93% (F-score 0.91). Our method could form the basis for a system to provide semi-automated help to human coders. PMID:21346974

  16. Constraining the unexplored period between the dark ages and reionization with observations of the global 21 cm signal

    SciTech Connect

    Pritchard, Jonathan R.; Loeb, Abraham

    2010-07-15

    Observations of the frequency dependence of the global brightness temperature of the redshifted 21 cm line of neutral hydrogen may be possible with single dipole experiments. In this paper, we develop a Fisher matrix formalism for calculating the sensitivity of such instruments to the 21 cm signal from reionization and the dark ages. We show that rapid reionization histories with duration Δz ≲ 2 can be constrained, provided that local foregrounds can be well modeled by low order polynomials. It is then shown that observations in the range ν = 50-100 MHz can feasibly constrain the Lyα and X-ray emissivity of the first stars forming at z ≈ 15-25, provided that systematic temperature residuals can be controlled to less than 1 mK. Finally, we demonstrate the difficulty of detecting the 21 cm signal from the dark ages before star formation.
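A Fisher matrix of the kind described combines per-channel model derivatives with the channel noise; a hedged sketch under simple assumptions (uncorrelated per-channel noise and a toy log-polynomial foreground model; the band, polynomial order, and noise level are illustrative, not the paper's configuration):

```python
import numpy as np

def fisher_matrix(derivs, sigma):
    """F_ij = sum_k (dT_k/dp_i)(dT_k/dp_j) / sigma_k^2.

    derivs: (n_params, n_channels) array of model derivatives per channel.
    sigma:  per-channel noise, same length as the channel axis.
    """
    d = np.asarray(derivs, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return np.einsum('ik,jk,k->ij', d, d, w)

# Toy setup: a 50-100 MHz band, a 3-term polynomial in log-frequency,
# and 1 mK per-channel noise (all numbers illustrative).
nu = np.linspace(50.0, 100.0, 51)
x = np.log(nu / 75.0)
derivs = np.array([x**0, x**1, x**2])     # dT/da_n for T = sum a_n x^n
F = fisher_matrix(derivs, sigma=np.full(nu.size, 1e-3))
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # forecast 1-sigma errors (K)
print(errors)
```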

  17. Intensity Mapping of Molecular Gas at High Redshift

    NASA Astrophysics Data System (ADS)

    Bower, Geoffrey; Keating, Garrett; Marrone, Dan; DeBoer, David; Chang, Tzu-Ching; Chen, Ming-Tang; Jiang, Homin; Koch, Patrick; Kubo, Derek; Li, Chao-Te; Lin, K. Y.; Srinivasan, Ranjani; Darling, Jeremy

    2015-08-01

    The origin and evolution of structure in the Universe is one of the major challenges of observational astronomy. How and when did the first stars and galaxies form? How does baryonic structure trace the underlying dark matter? A multi-wavelength, multi-tool approach is necessary to provide the complete story of the evolution of structure in the Universe. Intensity mapping, which relies on the ability to detect many objects at once through their integrated emission rather than direct detection of individual objects, is a critical part of this mosaic. Intensity mapping provides a window on lower luminosity objects that cannot be detected individually but that collectively drive important processes. In particular, our understanding of the molecular gas component of massive galaxies is being revolutionized by ALMA and the EVLA, but the population of smaller, star-forming galaxies, which provides the bulk of star formation, cannot be individually probed by these instruments. In this talk, I will summarize two intensity mapping experiments to detect molecular gas through the carbon monoxide (CO) rotational transition. We are currently completing sensitive observations with the Sunyaev-Zel'dovich Array (SZA) telescope at a wavelength of 1 cm that are sensitive to emission at redshifts 2.3 to 3.3. The SZA experiment sets strong limits on models for the CO emission and demonstrates the ability to reject foregrounds and telescope systematics in very deep integrations. I will also describe the development of an intensity mapping capability for the Y.T. Lee Array, a 13-element interferometer located on Mauna Loa. In its first phase, this project focuses on detection of CO at redshifts 2.3-3.3, with detection via the power spectrum and cross-correlation with other surveys. The project includes a major technical upgrade: a new digital correlator and IF electronics to be deployed in 2015/2016. The Y.T. Lee Array observations will be more sensitive and extend to larger angular scales.

  18. The Evolution Of 21 cm Structure (EOS): public, large-scale simulations of Cosmic Dawn and reionization

    NASA Astrophysics Data System (ADS)

    Mesinger, Andrei; Greig, Bradley; Sobacchi, Emanuele

    2016-07-01

    We introduce the Evolution Of 21 cm Structure (EOS) project: providing periodic, public releases of the latest cosmological 21 cm simulations. 21 cm interferometry is set to revolutionize studies of the Cosmic Dawn (CD) and Epoch of Reionization (EoR). Progress will depend on sophisticated data analysis pipelines, initially tested on large-scale mock observations. Here we present the 2016 EOS release: 1024³, 1.6 Gpc, 21 cm simulations of the CD and EoR, calibrated to the Planck 2015 measurements. We include calibrated, sub-grid prescriptions for inhomogeneous recombinations and photoheating suppression of star formation in small-mass galaxies. Leaving the efficiency of supernova feedback as a free parameter, we present two runs which bracket the contribution from faint unseen galaxies. From these two extremes, we predict that the duration of reionization (defined as the change in the mean neutral fraction from 0.9 to 0.1) should lie between 2.7 ≲ Δz_re ≲ 5.7. The large-scale 21 cm power during the advanced EoR stages can differ by up to a factor of ∼10, depending on the model. This difference has comparable contributions from (i) the typical bias of sources and (ii) a more efficient negative feedback in models with an extended EoR driven by faint galaxies. We also present detectability forecasts. With a 1000 h integration, the Hydrogen Epoch of Reionization Array (HERA) and Square Kilometre Array phase 1 (SKA1) should achieve a signal-to-noise of ∼ a few to hundreds throughout the EoR/CD. We caution that our ability to clean foregrounds determines the relative performance of narrow/deep versus wide/shallow surveys expected with SKA1. Our 21-cm power spectra, simulation outputs and visualizations are publicly available.

  19. Models of the Cosmological 21 cm Signal from the Epoch of Reionization Calibrated with Lyα and CMB Data

    NASA Astrophysics Data System (ADS)

    Kulkarni, Girish; Choudhury, Tirthankar Roy; Puchwein, Ewald; Haehnelt, Martin G.

    2016-08-01

    We present here 21 cm predictions from high dynamic range simulations for a range of reionization histories that have been tested against available Lyα and CMB data. We assess the observability of the predicted spatial 21 cm fluctuations by ongoing and upcoming experiments in the late stages of reionization in the limit in which the hydrogen spin temperature is significantly larger than the CMB temperature. Models consistent with the available Lyα data and CMB measurement of the Thomson optical depth predict typical values of 10-20 mK² for the variance of the 21 cm brightness temperature at redshifts z = 7-10 at scales accessible to ongoing and upcoming experiments (k ≲ 1 h cMpc⁻¹). This is within a factor of a few of the sensitivity claimed to have already been reached by ongoing experiments in the signal rms value. Our different models for the reionization history make markedly different predictions for the redshift evolution, and thus frequency dependence, of the 21 cm power spectrum and should be easily discernible by LOFAR (and later HERA and SKA1) at their design sensitivity. Our simulations have sufficient resolution to assess the effect of high-density Lyman limit systems that can self-shield against ionizing radiation and stay 21 cm bright even if the hydrogen in their surroundings is highly ionized. Our simulations predict that including the effect of the self-shielded gas in highly ionized regions reduces the large scale 21 cm power by about 30%.

  20. Possibility of precise measurement of the cosmological power spectrum with a dedicated survey of 21 cm emission after reionization.

    PubMed

    Loeb, Abraham; Wyithe, J Stuart B

    2008-04-25

    Measurements of the 21 cm line emission by residual cosmic hydrogen after reionization can be used to trace the power spectrum of density perturbations through a significant fraction of the observable volume of the Universe. We show that a dedicated 21 cm observatory could probe a number of independent modes that is 2 orders of magnitude larger than currently available, and enable a cosmic-variance limited detection of the signature of a neutrino mass of approximately 0.05 eV. The evolution of the linear growth factor with redshift could also constrain exotic theories of gravity or dark energy to an unprecedented precision. PMID:18518181

  1. New 21 cm Power Spectrum Upper Limits From PAPER II: Constraints on IGM Properties at z = 7.7

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan; Ali, Zaki; Parsons, Aaron; Paper Team

    2015-01-01

    Using a simulation-based framework, we interpret the power spectrum measurements from PAPER of Ali et al. in the context of IGM physics at z = 7.7. A cold IGM will result in strong 21 cm absorption relative to the CMB and leads to a 21 cm fluctuation power spectrum that can exceed 3000 mK^2. The new PAPER measurements allow us to rule out extreme cold IGM models, placing a lower limit on the physical temperature of the IGM. We also compare this limit with a calculation for the predicted heating from the currently observed galaxy population at z = 8.

  2. Probing primordial non-Gaussianity: the 3D Bispectrum of Ly-α forest and the redshifted 21-cm signal from the post reionization epoch

    SciTech Connect

    Sarkar, Tapomoy Guha; Hazra, Dhiraj Kumar

    2013-04-01

    We explore the possibility of using the three dimensional bispectra of the Ly-α forest and the redshifted 21-cm signal from the post-reionization epoch to constrain primordial non-Gaussianity. Both these fields map out the large scale distribution of neutral hydrogen and may be treated as tracers of the underlying dark matter field. We first present the general formalism for the auto and cross bispectrum of two arbitrary three dimensional biased tracers and then apply it to the specific case. We have modeled the 3D Ly-α transmitted flux field as a continuous tracer sampled along 1D skewers which correspond to quasar sight lines. For the post-reionization 21-cm signal we have used a linear bias model. We use a Fisher matrix analysis to present the first prediction for bounds on f_NL and the other bias parameters using the three dimensional 21-cm bispectrum and other cross bispectra. The bounds on f_NL depend on the survey volume and the various observational noises. We have considered a BOSS-like Ly-α survey where the average number density of quasars is n̄ = 10⁻³ Mpc⁻² and the spectra are measured at a 2σ level. For the 21-cm signal we have considered a 4000 hr observation with a futuristic SKA-like radio array. We find that the bounds on f_NL obtained in our analysis (6 ≤ Δf_NL ≤ 65) are competitive with CMBR and galaxy surveys and may prove to be an important alternative approach towards constraining primordial physics using future data sets. Further, we have presented a hierarchy of power of the bispectrum estimators towards detecting f_NL. Given the quality of the data sets, one may use this method to optimally choose the right estimator and thereby provide better constraints on f_NL. We also find that by combining the various cross-bispectrum estimators it is possible to constrain f_NL at a level Δf_NL ∼ 4.7. For the equilateral and orthogonal templates we obtain Δf_NL^equ ∼ 17 and

  3. Seismic Intensity Maps for North Anatolian Fault Zone (Turkey) using Local Felt Intensity and Strong Motion Datasets

    NASA Astrophysics Data System (ADS)

    Askan, A.

    2014-12-01

    Seismic intensity maps indicate the spatial distribution of ground shaking levels in the meizoseismal area affected by an earthquake. Intensity maps provide guidance for the rapid assessment of shaking intensity and, consequently, of the physical damage involved in an earthquake. Local correlations between instrumental ground motion parameters and shaking intensity values are used to prepare these maps. There are several correlations derived using data from different regions of the world. However, since the local damage characteristics of the built environment directly affect felt-intensity values, different felt-intensity values may be reported in two different regions subjected to ground motions with similar amplitudes and frequency contents. Thus, such relationships should be derived from regional strong motion and intensity datasets. Despite the intense seismic activity, as of now there are no such local correlations for the North Anatolian Fault Zone. In this study, we use the recently compiled Turkish strong motion dataset along with the corresponding felt intensity data from past earthquakes to derive local relationships between MMI and a selected ground motion parameter (PGA, PGV, and SA at selected periods). We provide two sets of predictive equations: the first expresses the intensity values as a function of a selected ground motion parameter, while the second is more refined, involving the event magnitude, distance and site class terms as independent variables. We present intensity maps of selected past events against the observed maps. We conclude that regional data from seismic networks are crucial for preparing realistic maps for use in disaster management.
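A correlation of the first kind described (intensity as a function of a single ground motion parameter) is typically fit by least squares against the logarithm of PGA; a sketch on synthetic data with made-up coefficients, not the study's actual regression:

```python
import numpy as np

# Synthetic "dataset": felt intensities generated from an assumed linear
# relation MMI = c0 + c1 * log10(PGA).  The coefficients and PGA range
# are invented for illustration; a real study would fit compiled
# strong-motion / felt-intensity pairs.
rng = np.random.default_rng(0)
pga = 10 ** rng.uniform(0.5, 3.0, 200)               # PGA in gal (cm/s^2)
mmi = 1.5 + 3.0 * np.log10(pga) + rng.normal(0.0, 0.3, 200)

# Least-squares fit of the predictive equation's simplest form:
c1, c0 = np.polyfit(np.log10(pga), mmi, 1)
print(c0, c1)  # recovered close to the input (1.5, 3.0)
```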

  4. Characterizing foreground for redshifted 21 cm radiation: 150 MHz Giant Metrewave Radio Telescope observations

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhik; Prasad, Jayanti; Bharadwaj, Somnath; Ali, Sk. Saiyad; Chengalur, Jayaram N.

    2012-11-01

    Foreground removal is a major challenge for detecting the redshifted 21 cm neutral hydrogen (H I) signal from the Epoch of Reionization. We have used 150 MHz Giant Metrewave Radio Telescope observations to characterize the statistical properties of the foregrounds in four different fields of view. The measured multifrequency angular power spectrum Cℓ(Δν) is found to have values in the range 10⁴-2 × 10⁴ mK² across 700 ≤ ℓ ≤ 2 × 10⁴ and Δν ≤ 2.5 MHz, which is consistent with model predictions where point sources are the most dominant foreground component. The measured Cℓ(Δν) does not show a smooth Δν dependence, which poses a severe difficulty for foreground removal using polynomial fitting. The observational data were used to assess point source subtraction. Considering the brightest source (∼1 Jy) in each field, we find that the residual artefacts are less than 1.5 per cent in the most sensitive field (FIELD I). Considering all the sources in the fields, we find that the bulk of the image is free of artefacts, the artefacts being localized to the vicinity of the brightest sources. We have used FIELD I, which has an rms noise of 1.3 mJy beam⁻¹, to study the properties of the radio source population down to a limiting flux of 9 mJy. The differential source count is well fitted with a single power law of slope -1.6. We find no evidence for a flattening of the source counts towards lower flux densities, which suggests that the source population is dominated by classical radio-loud active galactic nuclei. The diffuse Galactic emission is revealed after the point sources are subtracted out from FIELD I. We find Cℓ ∝ ℓ^-2.34 for 253 ≤ ℓ ≤ 800, which is characteristic of the Galactic synchrotron radiation measured at higher frequencies and larger angular scales. We estimate the fluctuations in the Galactic synchrotron emission to be √[ℓ(ℓ+1)Cℓ/2π] ≃ 10 K at ℓ = 800 (θ > 10 arcmin). The measured Cℓ is dominated by
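The synchrotron fluctuation amplitude quoted above combines a power-law angular power spectrum with the conventional ℓ(ℓ+1)Cℓ/2π normalization; a sketch that pins the power law to the quoted amplitude and slope (function names are ours; only the 10 K at ℓ = 800 and the -2.34 slope come from the abstract):

```python
import math

def cl_powerlaw(ell, amp_k=10.0, ell0=800, slope=-2.34):
    """C_ell for a power law normalized so the fluctuation amplitude
    sqrt(ell (ell+1) C_ell / 2 pi) equals amp_k at ell = ell0."""
    c0 = 2.0 * math.pi * amp_k**2 / (ell0 * (ell0 + 1))
    return c0 * (ell / ell0) ** slope

def fluctuation_k(ell, **kw):
    """Fluctuation amplitude sqrt(ell (ell+1) C_ell / 2 pi) in kelvin."""
    return math.sqrt(ell * (ell + 1) * cl_powerlaw(ell, **kw) / (2.0 * math.pi))

print(fluctuation_k(800))  # → 10.0 K by construction
print(fluctuation_k(400))  # larger on bigger angular scales (slope < -2)
```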

  5. Light-cone anisotropy in the 21 cm signal from the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Zawada, Karolina; Semelin, Benoît; Vonlanthen, Patrick; Baek, Sunghye; Revaz, Yves

    2014-04-01

    Using a suite of detailed numerical simulations, we estimate the level of anisotropy generated by the time evolution along the light cone of the 21 cm signal from the epoch of reionization. Our simulations include the physics necessary to model the signal during both the late emission regime and the early absorption regime, namely X-ray and Lyman band 3D radiative transfer in addition to the usual dynamics and ionizing UV transfer. The signal is analysed using correlation functions perpendicular and parallel to the line of sight. We reproduce general findings from previous theoretical studies: the overall amplitude of the correlations and the fact that the light-cone anisotropy is visible only on large scales (100 comoving Mpc). However, the detailed behaviour is different. We find that, at three different epochs, the amplitudes of the correlations along and perpendicular to the line of sight differ from each other, indicating anisotropy. We show that these three epochs are associated with three events of the global reionization history: the overlap of ionized bubbles, the onset of mild heating by X-rays in regions around the sources, and the onset of efficient Lyman α coupling in regions around the sources. We find that a 20 × 20 deg² survey area may be necessary to mitigate sample variance when we use the directional correlation functions. On a 100 Mpc (comoving) scale, we show that the light-cone anisotropy dominates over the anisotropy generated by peculiar velocity gradients computed in the linear regime. By modelling instrumental noise and limited resolution, we find that the anisotropy should be easily detectable by the Square Kilometre Array, assuming perfect foreground removal, the limiting factor being a large enough survey size. In the case of the Low-Frequency Array for radio astronomy, it is likely that only one anisotropy episode (ionized bubble overlap) will fall in the observing frequency range. This episode will be detectable only if sample
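Directional correlation functions of the kind used here compare lags taken parallel and perpendicular to the line of sight; a toy sketch on a deliberately anisotropic periodic cube (the field, grid size, and lag are illustrative, not the paper's analysis):

```python
import numpy as np

def directional_corr(field, lag):
    """Normalized two-point correlation at an integer lag along each
    axis of a periodic cube: returns (xi_x, xi_y, xi_z)."""
    d = field - field.mean()
    norm = (d ** 2).mean()
    return tuple((d * np.roll(d, lag, axis=ax)).mean() / norm
                 for ax in range(3))

# Toy anisotropic field: a plane wave along the last ("line of sight")
# axis, constant in the transverse directions.
n = 32
cube = np.broadcast_to(np.cos(2 * np.pi * np.arange(n) / n), (n, n, n))
xi_x, xi_y, xi_z = directional_corr(cube, lag=n // 4)
print(xi_x, xi_y, xi_z)  # transverse ~1, line of sight ~0: anisotropy
```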

  6. A FLUX SCALE FOR SOUTHERN HEMISPHERE 21 cm EPOCH OF REIONIZATION EXPERIMENTS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Parsons, Aaron R.; Ali, Zaki; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, Dave H. E.; Gugliucci, Nicole E.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2013-10-20

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.

  7. Empowering line intensity mapping to study early galaxies

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Ferrara, A.

    2016-09-01

    Line intensity mapping is a superb tool to study the collective radiation from early galaxies. However, the method is hampered by the presence of strong foregrounds, mostly produced by low-redshift interloping lines. We present here a general method to overcome this problem which is robust against foreground residual noise and based on the cross-correlation function ψαL(r) between diffuse line emission and Lyα emitters (LAEs). We compute the diffuse line (Lyα is used as an example) emission from galaxies in an (800 Mpc)³ box at z = 5.7 and 6.6. We divide the box into slices and populate them with 14 000 (5500) LAEs at z = 5.7 (6.6), considering duty cycles from 10⁻³ to 1. Both the LAE number density and slice volume are consistent with the expected outcome of the Subaru HSC survey. We add Gaussian random noise with variance σN up to 100 times the variance of the Lyα emission, σα, to simulate residual foregrounds and compute ψαL(r). We find that the signal-to-noise of the observed ψαL(r) does not change significantly if σN ≤ 10σα and show that in these conditions the mean line intensity, ILyα, can be precisely recovered independently of the LAE duty cycle. Even if σN = 100σα, ILyα can be constrained within a factor of 2. The method works equally well for any other line (e.g. [CII], HeII) used for the intensity mapping experiment.

  8. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to them. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, calibration errors of the cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.
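The post-process coloring described above projects each 3D point into a camera image; a minimal pinhole-camera sketch of that projection step (the intrinsics are made up, and the paper's actual calibration model, which also handles distortion and extrinsics, may differ):

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto pixel coordinates
    using a simple pinhole model (no lens distortion)."""
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return fx * X / Z + cx, fy * Y / Z + cy

# A point 10 m ahead and 1 m to the right, with illustrative intrinsics:
u, v = project_point(1.0, 0.0, 10.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
print(u, v)  # → 740.0 360.0
```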

  9. On Removing Interloper Contamination from Intensity Mapping Power Spectrum Measurements

    NASA Astrophysics Data System (ADS)

    Lidz, Adam; Taylor, Jessie

    2016-07-01

    Line intensity mapping experiments seek to trace large-scale structures by measuring the spatial fluctuations in the combined emission, in some convenient spectral line, from individually unresolved galaxies. An important systematic concern for these surveys is line confusion from foreground or background galaxies emitting in other lines that happen to lie at the same observed frequency as the “target” emission line of interest. We develop an approach to separate this “interloper” emission at the power spectrum level. If one adopts the redshift of the target emission line in mapping from observed frequency and angle on the sky to co-moving units, the interloper emission is mapped to the wrong co-moving coordinates. Because the mapping is different in the line of sight and transverse directions, the interloper contribution to the power spectrum becomes anisotropic, especially if the interloper and target emission are at widely separated redshifts. This distortion is analogous to the Alcock–Paczynski test, but here the warping arises from assuming the wrong redshift rather than an incorrect cosmological model. We apply this to the case of a hypothetical [C ii] emission survey at z∼ 7 and find that the distinctive interloper anisotropy can, in principle, be used to separate strong foreground CO emission fluctuations. In our models, however, a significantly more sensitive instrument than currently planned is required, although there are large uncertainties in forecasting the high-redshift [C ii] emission signal. With upcoming surveys, it may nevertheless be useful to apply this approach after first masking pixels suspected of containing strong interloper contamination.
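The interloper distortion described above arises because the wrong redshift is assumed in the frequency-to-comoving-distance mapping, so transverse and line-of-sight scales are rescaled by different factors; a hedged sketch of those two factors under one common convention, with a simple flat ΛCDM distance integral (the cosmological parameters and the redshift pair are illustrative):

```python
import math

H0 = 67.7            # km/s/Mpc (illustrative Planck-like values)
OM, OL = 0.31, 0.69  # flat LCDM density parameters
C = 299792.458       # speed of light, km/s

def hubble(z):
    """H(z) in km/s/Mpc for flat LCDM."""
    return H0 * math.sqrt(OM * (1 + z) ** 3 + OL)

def comoving_dist(z, n=2000):
    """Comoving distance in Mpc via trapezoidal integration of c dz / H."""
    dz = z / n
    f = [C / hubble(i * dz) for i in range(n + 1)]
    return dz * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

# Interloper at z_i mapped assuming the target redshift z_t (values
# chosen only to illustrate widely separated redshifts):
z_t, z_i = 7.0, 0.5
perp = comoving_dist(z_t) / comoving_dist(z_i)           # transverse rescaling
para = (hubble(z_i) / (1 + z_i)) / (hubble(z_t) / (1 + z_t))  # line-of-sight
print(perp, para)  # unequal factors -> anisotropic interloper power
```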

  11. The Impact of Peculiar Velocity and Reionization Patchiness on 21cm Cosmology from the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Shapiro, P. R.; Iliev, I. T.; Mellema, G.; Ahn, K.; Datta, K.

    2012-01-01

    Neutral hydrogen atoms in the intergalactic medium at high redshift contribute a diffuse background of redshifted 21cm radiation which encodes information about the physical conditions in the early universe at z>6 during and before the epoch of reionization (EOR). Tomography of this 21cm background has emerged as a promising cosmological probe. The assumption that cosmological information in the 21cm signal can be separated from astrophysical information (i.e. that fluctuations in the total matter density can be measured separately from the dependence on patchy reionization and spin temperature) is based on linear perturbation theory and the anisotropy introduced by peculiar velocity. While it is true that fluctuations in the matter density at such high redshift are likely to be of linear amplitude on the large scales which correspond to the beam- and bandwidths of upcoming experiments, the nonlinearity of smaller scale structure in density, velocity and reionization patchiness can leave its imprint on the signal, which might then spoil the linear separation scheme. We have built a robust and efficient computational scheme to predict the 21cm background in observer redshift space, given real-space simulation data, which accounts for peculiar velocity in every detail. We apply this to the results of new state-of-the-art large-scale reionization simulations which combine large-box, high-resolution N-body simulations of the ΛCDM universe (with up to 165 billion particles in comoving boxes up to 607 Mpc on a side in present units) with radiative transfer simulations of reionization, to test the validity of using 21cm background measurements for cosmology and characterize the predicted signal for upcoming radio surveys. This work was supported in part by NSF grants AST-0708176 and AST-1009799, NASA grants NNX07AH09G, NNG04G177G and NNX11AE09G, and Chandra grant SAO TM8-9009X.
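The real-to-redshift-space mapping behind the peculiar-velocity effect shifts each mass element along the line of sight by v_los/(aH); a one-dimensional toy sketch of that coordinate shift (all numbers illustrative, and a full scheme like the paper's would also regrid the signal):

```python
import numpy as np

def to_redshift_space(pos, v_los, a_h, box):
    """Shift comoving positions by the line-of-sight peculiar velocity:
    s = x + v_los / (a H), with periodic wrapping in a box of size `box`.

    a_h is the product a*H in km/s/Mpc, so v_los/a_h is a length in Mpc.
    """
    return (np.asarray(pos, dtype=float) + np.asarray(v_los, dtype=float) / a_h) % box

# Toy particles in a 100 Mpc periodic box with aH = 100 km/s/Mpc:
x = np.array([10.0, 50.0, 90.0])
v = np.array([0.0, 300.0, -300.0])  # km/s along the line of sight
s = to_redshift_space(x, v, a_h=100.0, box=100.0)
print(s)  # → [10. 53. 87.]; zero velocity leaves a particle in place
```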

  12. Observational challenges in Lyα intensity mapping

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Yue, B.; Ferrara, A.

    2016-09-01

Intensity mapping (IM) is sensitive to the cumulative line emission of galaxies. As such it represents a promising technique for statistical studies of galaxies fainter than the limiting magnitude of traditional galaxy surveys. The strong hydrogen Lyα line is the primary target for such an experiment, as its intensity is linked to star formation activity and the physical state of the interstellar medium (ISM) and intergalactic medium (IGM). However, to extract meaningful information one has to solve the confusion problem caused by interloping lines from foreground galaxies. We discuss here the challenges for a Lyα IM experiment targeting z > 4 sources. We find that the Lyα power spectrum can in principle be easily (marginally) obtained with a 40 cm space telescope in a few days of observing time up to z ≲ 8 (z ˜ 10), assuming that the interloping lines (e.g. Hα, [O II], [O III]) can be efficiently removed. We show that interlopers can be removed by using an ancillary photometric galaxy survey with limiting AB mag ˜26 in the NIR bands (Y, J, H, or K). This would enable detection of the Lyα signal from faint 5 < z < 9 sources. However, if a [C II] IM experiment is feasible, cross-correlating the Lyα with the [C II] signal decreases the required depth of the galaxy survey to AB mag ˜24. This would bring the detection within reach of future facilities working in close synergy.

  13. X-rays and hard ultraviolet radiation from the first galaxies: ionization bubbles and 21-cm observations

    NASA Astrophysics Data System (ADS)

    Venkatesan, Aparna; Benson, Andrew

    2011-11-01

The first stars and quasars are known sources of hard ionizing radiation in the first billion years of the Universe. We examine the joint effects of X-rays and hard ultraviolet (UV) radiation from such first-light sources on the hydrogen and helium reionization of the intergalactic medium (IGM) at early times, and the associated heating. We study the growth and evolution of individual H II, He II and He III regions around early galaxies with first stars and/or quasi-stellar object populations. We find that in the presence of helium-ionizing radiation, X-rays may not dominate the ionization and thermal history of the IGM at z ˜ 10-20, contributing relatively modest increases to IGM ionization and heating up to ˜10³-10⁵ K in IGM temperatures. We also calculate the 21-cm signal expected from a number of scenarios with metal-free starbursts and quasars in varying combinations and masses at these redshifts. The peak values for the spin temperature reach ˜10⁴-10⁵ K in such cases. The maximum values for the 21-cm brightness temperature are around 30-40 mK in emission, while the net values of the 21-cm absorption signal range from ˜ a few to 60 mK on scales of 0.01-1 Mpc. We find that the 21-cm signature of X-ray versus UV ionization could be distinct, with the emission signal expected from X-rays alone occurring at smaller scales than that from UV radiation, resulting from the inherently different spatial scales at which X-ray and UV ionization/heating manifests. This difference is time-dependent and becomes harder to distinguish with an increasing X-ray contribution to the total ionizing photon production. Such differing scale-dependent contributions from X-ray and UV photons may therefore 'blur' the 21-cm signature of the percolation of ionized bubbles around early haloes (depending on whether a cosmic X-ray or UV background is built up first) and affect the interpretation of 21-cm data constraints on reionization.

  14. Hydrogen and the First Stars: First Results from the SCI-HI 21-cm all-sky spectrum experiment

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; Peterson, Jeffrey; Lopez-Cruz, Omar; Jauregui-Garcia, Jose-Miguel; SCI-HI Experiment Team

    2015-01-01

The 'Sonda Cosmologica de las Islas para la Deteccion de Hidrogeno Neutro' (SCI-HI) experiment is an all-sky 21-cm brightness temperature spectrum experiment studying the cosmic dawn (z~15-35). The experiment is a collaboration between Carnegie Mellon University (CMU) and Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) in Mexico. Initial deployment of the SCI-HI experiment occurred in June 2013 on Guadalupe, a small island about 250 km off the Pacific coast of Baja California in Mexico. Preliminary measurements from this deployment have placed the first observational constraints on the 21-cm all-sky spectrum around 70 MHz (z~20); see Voytek et al. (2014). Neutral hydrogen (HI) is found throughout the universe in the cold gas that makes up the intergalactic medium (IGM). HI can be observed through its 21 cm (1.4 GHz) hyperfine spectral line. Expansion of the universe stretches the wavelength of this spectral line by a factor set by the redshift z, yielding a signal that can be followed through cosmic time. The strength of the 21-cm signal in the IGM depends on only a small number of variables: the temperature and density of the IGM, the amount of HI in the IGM, the UV energy density in the IGM, and the redshift. This means that 21-cm measurements teach us about the history and structure of the IGM. The SCI-HI experiment focuses on the spatially averaged 21-cm spectrum, tracking the temporal evolution of the IGM during the cosmic dawn before reionization. Although the SCI-HI experiment placed first constraints with preliminary data, this data was limited to a narrow frequency regime around 60-85 MHz. This limitation was caused by instrumental difficulties and the presence of residual radio frequency interference (RFI) in the FM radio band (~88-108 MHz). The SCI-HI experiment is currently undergoing improvements and we plan to have another deployment soon. This deployment would be to Socorro and Clarion, two
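The frequency-redshift correspondence the SCI-HI abstract relies on is a one-line relation; a minimal sketch using the standard 21 cm rest frequency:

```python
# Rest frequency of the neutral hydrogen hyperfine (21 cm) line.
NU_REST_MHZ = 1420.405751768

def observed_frequency(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return NU_REST_MHZ / (1.0 + z)

def redshift_of(nu_obs_mhz):
    """Redshift at which the 21 cm line is seen at nu_obs_mhz."""
    return NU_REST_MHZ / nu_obs_mhz - 1.0

# The 60-85 MHz band corresponds to roughly z ~ 16-23, and 70 MHz to z ~ 20:
print(round(redshift_of(70.0), 1))  # 19.3
print(round(redshift_of(85.0), 1))  # 15.7
```

This is why avoiding the FM radio band (~88-108 MHz) cuts off access to redshifts below z ~ 15 for such an experiment.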

  15. Comparison of the thermal and nonthermal radiation characteristics of Jupiter at 6, 11, and 21 cm with model calculations

    NASA Technical Reports Server (NTRS)

    De Pater, I.; Kenderdine, S.; Dickel, J. R.

    1982-01-01

Four different data sets on Jupiter, one at 6, one at 11, and two at 21 cm, are compared to each other and with the synchrotron radiation model of the magnetosphere developed by de Pater (1981). The model agrees with all these data sets, and hence was used to derive and interpret the characteristics of the thermal radiation component at all three wavelengths. The disk temperatures are 233 ± 17, 280 ± 20, and 340 ± 26 K at 6, 11, and 21 cm, respectively. A comparison of the data with atmospheric model calculations strongly suggests that the disk is uniform at 6 and 11 cm near the center of the disk, where μ > 0.6-0.7. This may indicate a nonuniform distribution of ammonia in layers at and above the visible cloud layers.

  16. A record breaking sightline: Five DLA-strength 21 cm absorbers towards the quasar MG J0414+0534

    NASA Astrophysics Data System (ADS)

    Tanna, Anant; Whiting, Matthew; Curran, Steve

    2013-10-01

High-redshift absorption in the HI 21 cm transition is a powerful probe of star-forming gas and hence of the evolution of structure in the Universe at large lookback times. Although such absorbers are typically rare, we have detected an unprecedented number of 21 cm absorbers along a single sightline to the red QSO J0414+0534, suggesting a population of galaxies missed by optical surveys. Extreme RFI in the spectrum of the strongest absorber requires ATCA observations to fully parameterise the system and understand the nature of the absorbing gas. We aim to confirm whether this unique sightline truly has so many dense absorbers, and to use these features toward measuring the cosmic acceleration.

  17. Factor analysis as a tool for spectral line component separation: 21cm emission in the direction of L1780

    NASA Technical Reports Server (NTRS)

    Toth, L. V.; Mattila, K.; Haikala, L.; Balazs, L. G.

    1992-01-01

The spectra of the 21cm HI radiation from the direction of L1780, a small high-galactic latitude dark/molecular cloud, were analyzed by multivariate methods. Factor analysis was performed on HI (21cm) spectra in order to separate the different components responsible for the spectral features. The rotated, orthogonal factors explain the spectra as a sum of radiation from the background (an extended HI emission layer) and from the L1780 dark cloud. The coefficients of the cloud-indicator factors were used to locate the HI 'halo' of the molecular cloud. Our statistically derived 'background' and 'cloud' spectral profiles, as well as the spatial distribution of the HI halo emission, were compared to the results of a previous study that analyzed nearly the same data set with conventional methods.
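The separation described above can be sketched with principal factors extracted from the channel covariance of synthetic spectra (the study additionally applies an orthogonal factor rotation; every number below is fabricated for illustration, not L1780 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HI spectra: every sightline shares a broad background profile,
# plus a narrower "cloud" profile whose strength varies across positions.
channels = np.linspace(-30.0, 30.0, 128)                  # velocity, km/s
background = np.exp(-0.5 * (channels / 12.0) ** 2)
cloud = 0.6 * np.exp(-0.5 * ((channels - 4.0) / 2.5) ** 2)
n_pos = 200
cloud_weight = rng.uniform(0.0, 1.0, n_pos)               # strength per position
spectra = (np.outer(np.ones(n_pos), background)
           + np.outer(cloud_weight, cloud)
           + 0.02 * rng.standard_normal((n_pos, 128)))

# Principal-factor extraction: eigendecomposition of the channel covariance.
resid = spectra - spectra.mean(axis=0)
cov = resid.T @ resid / (n_pos - 1)
leading = np.linalg.eigh(cov)[1][:, -1]                   # dominant loading

# Scores on the leading factor trace the cloud contribution per sightline,
# which is how the cloud's HI halo can be mapped.
scores = resid @ leading
corr = np.corrcoef(scores, cloud_weight)[0, 1]
print(abs(corr) > 0.9)  # True: scores recover the cloud-strength map
```

The factor scores play the role of the paper's cloud-indicator coefficients: plotted over sky position they trace where the cloud component contributes.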

  18. Measuring the 21 cm Power Spectrum from the Epoch of Reionization with the Giant Metrewave Radio Telescope

    NASA Astrophysics Data System (ADS)

    Paciga, Gregory

The Epoch of Reionization (EoR) is the transitional period in the universe's evolution which starts when the first luminous sources begin to ionize the intergalactic medium for the first time since recombination, and ends when most of the hydrogen is ionized, by about a redshift of 6. Observations of the 21cm emission from hyperfine splitting of the hydrogen atom can carry a wealth of cosmological information from this epoch, since the redshifted line can probe the entire volume. The GMRT-EoR experiment is an ongoing effort to make a statistical detection of the power spectrum of 21cm neutral hydrogen emission due to the patchwork of neutral and ionized regions present during the transition. In this work we detail approximately five years of observations at the GMRT, comprising over 900 hours, and an in-depth analysis of about 50 hours which has led to the first upper limits on the 21cm power spectrum in the range z = 8.1 to 9.2. This includes a concentrated radio frequency interference (RFI) mitigation campaign around the GMRT area, a novel method for removing broadband RFI with a singular value decomposition, and calibration with a pulsar as both a phase and polarization calibrator. Preliminary results from 2011 showed a 2-sigma upper limit to the power spectrum of (70 mK)². However, we find that foreground removal strategies tend to reduce the cosmological signal significantly, and modeling this signal loss is crucial for interpretation of power spectrum measurements. Using a simulated signal to estimate the transfer function of the real 21cm signal through the foreground removal procedure, we are able to find the optimal level of foreground removal and correct for the signal loss. Using this correction, we report a 2-sigma upper limit of (248 mK)² at k = 0.5 h Mpc⁻¹.

  19. GIANT METREWAVE RADIO TELESCOPE DETECTION OF TWO NEW H I 21 cm ABSORBERS AT z ≈ 2

    SciTech Connect

    Kanekar, N.

    2014-12-20

I report the detection of H I 21 cm absorption in two high column density damped Lyα absorbers (DLAs) at z ≈ 2 using new wide-band 250-500 MHz receivers on board the Giant Metrewave Radio Telescope. The integrated H I 21 cm optical depths are 0.85 ± 0.16 km s⁻¹ (TXS1755+578) and 2.95 ± 0.15 km s⁻¹ (TXS1850+402). For the z = 1.9698 DLA toward TXS1755+578, the difference in H I 21 cm and C I profiles and the weakness of the radio core suggest that the H I 21 cm absorption arises toward radio components in the jet, and that the optical and radio sightlines are not the same. This precludes an estimate of the DLA spin temperature. For the z = 1.9888 DLA toward TXS1850+402, the absorber covering factor is likely to be close to unity, as the background source is extremely compact, with the entire 5 GHz emission arising from a region ≤ 1.4 mas in size. This yields a DLA spin temperature of T_s = (372 ± 18) × (f/1.0) K, lower than typical T_s values in high-z DLAs. This low spin temperature and the relatively high metallicity of the z = 1.9888 DLA ([Zn/H] = -0.68 ± 0.04) are consistent with the anti-correlation between metallicity and spin temperature found earlier in damped Lyα systems.
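The spin temperature quoted above follows from the standard relation N_HI = 1.823e18 (T_s/f) ∫τ dv. A minimal sketch; the column density used here is an assumed value chosen for illustration, not one quoted in the abstract:

```python
# Standard HI 21 cm absorption relation:
#   N_HI [cm^-2] = 1.823e18 * (T_s [K] / f) * integral(tau dv) [km/s]
def spin_temperature(n_hi_cm2, tau_dv_kms, f=1.0):
    """Spin temperature (K) implied by a column density, covering factor f,
    and integrated 21 cm optical depth."""
    return n_hi_cm2 * f / (1.823e18 * tau_dv_kms)

# tau_dv is the reported 2.95 km/s for TXS1850+402; N_HI below is an
# assumed DLA column density for illustration, not a value from the abstract.
N_HI = 2.0e21  # cm^-2 (assumption)
print(round(spin_temperature(N_HI, 2.95)))  # 372
```

The linear scaling with the covering factor f is why the abstract writes T_s = (372 ± 18) × (f/1.0) K.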

  20. LOFAR insights into the epoch of reionization from the cross-power spectrum of 21 cm emission and galaxies

    NASA Astrophysics Data System (ADS)

    Wiersma, R. P. C.; Ciardi, B.; Thomas, R. M.; Harker, G. J. A.; Zaroubi, S.; Bernardi, G.; Brentjens, M.; de Bruyn, A. G.; Daiboo, S.; Jelic, V.; Kazemi, S.; Koopmans, L. V. E.; Labropoulos, P.; Martinez, O.; Mellema, G.; Offringa, A.; Pandey, V. N.; Schaye, J.; Veligatla, V.; Vedantham, H.; Yatawatta, S.

    2013-07-01

Using a combination of N-body simulations, semi-analytic models and radiative transfer calculations, we have estimated the theoretical cross-power spectrum between galaxies and the 21 cm emission from neutral hydrogen during the epoch of reionization. In accordance with previous studies, we find that the 21 cm emission is initially correlated with haloes on large scales (≳30 Mpc), anticorrelated on intermediate (˜5 Mpc) and uncorrelated on small (≲3 Mpc) scales. This picture quickly changes as reionization proceeds and the two fields become anticorrelated on large scales. The normalization of the cross-power spectrum can be used to set constraints on the average neutral fraction in the intergalactic medium and its shape can be a powerful tool to study the topology of reionization. When we apply a drop-out technique to select galaxies and add to the 21 cm signal the noise expected from the LOw Frequency ARray (LOFAR) telescope, we find that while the normalization of the cross-power spectrum remains a useful tool for probing reionization, its shape becomes too noisy to be informative. On the other hand, for a Lyα Emitter (LAE) survey both the normalization and the shape of the cross-power spectrum are suitable probes of reionization. A closer look at a specific planned LAE observing program using Subaru Hyper-Suprime Cam reveals concerns about the strength of the 21 cm signal at the planned redshifts. If the ionized fraction at z ˜ 7 is lower than the one estimated here, then using the cross-power spectrum may be a useful exercise given that at higher redshifts and neutral fractions it is able to distinguish between two toy models with different topologies.

  1. Large scale maps of cropping intensity in Asia from MODIS

    NASA Astrophysics Data System (ADS)

    Gray, J. M.; Friedl, M. A.; Frolking, S. E.; Ramankutty, N.; Nelson, A.

    2013-12-01

for linear regressions estimated for local windows, and constrained by the EVI amplitude and length of crop cycles that are identified. The procedure can be used to map seasonal or long-term average cropping strategies, and to characterize changes in cropping intensity over longer time periods. The datasets produced using this method therefore provide information related to global cropping systems, and more broadly, provide important information that is required to ensure sustainable management of Earth's resources and ensure food security. To test our algorithm, we applied it to time series of MODIS EVI images over Asia from 2000-2012. Our results demonstrate the utility of multi-temporal remote sensing for characterizing multi-cropping practices in some of the most important and intensely agricultural regions in the world. To evaluate our approach, we compared results from MODIS to field-scale survey data at the pixel scale, and agricultural inventory statistics at sub-national scales. We then mapped changes in multi-cropped area in Asia from the early MODIS period (2001-2004) to present (2009-2012), and characterized the magnitude and location of changes in cropping intensity over the last 12 years. We conclude with a discussion of the challenges, future improvements, and broader impacts of this work.

  2. e-MERLIN 21cm constraints on the mass-loss rates of OB stars in Cyg OB2

    NASA Astrophysics Data System (ADS)

    Morford, J. C.; Fenech, D. M.; Prinja, R. K.; Blomme, R.; Yates, J. A.

    2016-08-01

We present e-MERLIN 21 cm (L-band) observations of single luminous OB stars in the Cygnus OB2 association, from the COBRaS Legacy programme. The radio observations potentially offer the most straightforward, least model-dependent, determinations of mass-loss rates, and can be used to help resolve current discrepancies in mass-loss rates via clumped and structured hot star winds. We report here that the 21 cm flux densities of O3 to O6 supergiant and giant stars are less than ˜ 70 μJy. These fluxes may be translated to `smooth' wind mass-loss upper limits of ˜ 4.4-4.8 × 10⁻⁶ M⊙ yr⁻¹ for O3 supergiants and ≲ 2.9 × 10⁻⁶ M⊙ yr⁻¹ for B0 to B1 supergiants. The first ever resolved 21 cm detections of the hypergiant (and LBV candidate) Cyg OB2 #12 are discussed; for multiple observations separated by 14 days, we detect a ˜ 69% increase in its flux density. Our constraints on the upper limits for the mass-loss rates of evolved OB stars in Cyg OB2 support the model that the inner wind region close to the stellar surface (where Hα forms) is more clumped than the very extended geometric region sampled by our radio observations.

  3. Precise Measurement of the Reionization Optical Depth from the Global 21 cm Signal Accounting for Cosmic Heating

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Loeb, Abraham

    2016-04-01

    As a result of our limited data on reionization, the total optical depth for electron scattering, τ, limits precision measurements of cosmological parameters from the Cosmic Microwave Background (CMB). It was recently shown that the predicted 21 cm signal of neutral hydrogen contains enough information to reconstruct τ with sub-percent accuracy, assuming that the neutral gas was much hotter than the CMB throughout the entire epoch of reionization (EoR). Here we relax this assumption and use the global 21 cm signal alone to extract τ for realistic X-ray heating scenarios. We test our model-independent approach using mock data for a wide range of ionization and heating histories and show that an accurate measurement of the reionization optical depth at a sub-percent level is possible in most of the considered scenarios even when heating is not saturated during the EoR, assuming that the foregrounds are mitigated. However, we find that in cases where heating sources had hard X-ray spectra and their luminosity was close to or lower than what is predicted based on low-redshift observations, the global 21 cm signal alone is not a good tracer of the reionization history.

  4. DEEP 21 cm H I OBSERVATIONS AT z {approx} 0.1: THE PRECURSOR TO THE ARECIBO ULTRA DEEP SURVEY

    SciTech Connect

    Freudling, Wolfram; Zwaan, Martin; Staveley-Smith, Lister; Meyer, Martin; Catinella, Barbara; Minchin, Robert; Calabretta, Mark; Momjian, Emmanuel; O'Neil, Karen

    2011-01-20

The 'ALFA Ultra Deep Survey' (AUDS) is an ongoing 21 cm spectral survey with the Arecibo 305 m telescope. AUDS will be the most sensitive blind survey undertaken with Arecibo's 300 MHz Mock spectrometer. The survey searches for 21 cm H I line emission at redshifts between 0 and 0.16. The main goals of the survey are to investigate the H I content and probe the evolution of H I gas within that redshift region. In this paper, we report on a set of precursor observations with a total integration time of 53 hr. The survey detected a total of eighteen 21 cm emission lines at redshifts between 0.07 and 0.15 in a region centered around α₂₀₀₀ ≈ 0ʰ, δ ≈ 15°42′. The rate of detection is consistent with that expected from the local H I mass function. The derived relative H I density at the median redshift of the survey is ρ_HI[z = 0.125] = (1.0 ± 0.3)ρ₀, where ρ₀ is the H I density at zero redshift.
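For context, individual detections in such a survey are converted to H I masses with the standard integrated-flux relation; the numbers below are illustrative, not AUDS measurements:

```python
def hi_mass_msun(flux_jy_kms, dist_lum_mpc, z=0.0):
    """HI mass in solar masses from an integrated 21 cm line flux, using the
    standard relation M_HI = 2.356e5 * D_L**2 * S_int / (1 + z),
    with D_L the luminosity distance in Mpc and S_int in Jy km/s."""
    return 2.356e5 * dist_lum_mpc ** 2 * flux_jy_kms / (1.0 + z)

# Illustrative numbers only (not AUDS detections): a 0.01 Jy km/s line at an
# assumed luminosity distance of 600 Mpc, roughly the survey's redshift range.
print(f"{hi_mass_msun(0.01, 600.0, z=0.13):.2e}")  # 7.51e+08
```

Summing such masses over detections (with completeness corrections) is what yields the ρ_HI density estimate quoted in the abstract.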

  5. Simulations for single-dish intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Bigot-Sazy, M.-A.; Dickinson, C.; Battye, R. A.; Browne, I. W. A.; Ma, Y.-Z.; Maffei, B.; Noviello, F.; Remazeilles, M.; Wilkinson, P. N.

    2015-12-01

    H I intensity mapping is an emerging tool to probe dark energy. Observations of the redshifted H I signal will be contaminated by instrumental noise, atmospheric and Galactic foregrounds. The latter is expected to be four orders of magnitude brighter than the H I emission we wish to detect. We present a simulation of single-dish observations including an instrumental noise model with 1/f and white noise, and sky emission with a diffuse Galactic foreground and H I emission. We consider two foreground cleaning methods: spectral parametric fitting and principal component analysis. For a smooth frequency spectrum of the foreground and instrumental effects, we find that the parametric fitting method provides residuals that are still contaminated by foreground and 1/f noise, but the principal component analysis can remove this contamination down to the thermal noise level. This method is robust for a range of different models of foreground and noise, and so constitutes a promising way to recover the H I signal from the data. However, it induces a leakage of the cosmological signal into the subtracted foreground of around 5 per cent. The efficiency of the component separation methods depends heavily on the smoothness of the frequency spectrum of the foreground and the 1/f noise. We find that as long as the spectral variations over the band are slow compared to the channel width, the foreground cleaning method still works.
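The principal component cleaning described above exploits the spectral smoothness of the foreground: the brightest few frequency-frequency eigenmodes absorb the foreground while the H I fluctuations survive. A toy sketch with fabricated data (band, amplitudes and mode count are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data cube flattened to (pixels, channels): a spectrally smooth
# foreground several orders of magnitude brighter than unit-level HI signal.
freqs = np.linspace(0.95, 1.05, 64)                  # GHz, assumed band
n_pix = 500
fg_amp = rng.uniform(5e3, 2e4, n_pix)
foreground = np.outer(fg_amp, freqs ** -2.7)         # smooth power law
hi_signal = rng.standard_normal((n_pix, 64))         # fluctuating signal
data = foreground + hi_signal

# PCA cleaning: project out the leading frequency-frequency eigenmodes,
# which absorb the smooth foreground (mode count is an assumption here).
n_modes = 2
centered = data - data.mean(axis=0)
cov = centered.T @ centered / n_pix
modes = np.linalg.eigh(cov)[1][:, -n_modes:]         # brightest modes
cleaned = centered - centered @ modes @ modes.T

print(data.std() > 1e3)      # True: raw map dominated by foreground
print(cleaned.std() < 2.0)   # True: residuals near the signal level
```

The projection also removes any part of the cosmological signal that lies in the discarded modes, which is the few-per-cent signal leakage the abstract mentions.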

  6. From Darkness to Light: Observing the First Stars and Galaxies with the Redshifted 21-cm Line using the Dark Ages Radio Explorer

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Lazio, Joseph; Bowman, Judd D.; Bradley, Richard F.; Datta, Abhirup; Furlanetto, Steven; Jones, Dayton L.; Kasper, Justin; Loeb, Abraham; Harker, Geraint

    2015-01-01

The Dark Ages Radio Explorer (DARE) will reveal when the first stars, black holes, and galaxies formed in the early Universe and will define their characteristics, from the Dark Ages (z=35) to the Cosmic Dawn (z=11). This epoch of the Universe has never been directly observed. The DARE science instrument is composed of electrically-short bi-conical dipole antennas, a correlation receiver, and a digital spectrometer that measures the sky-averaged, low frequency (40-120 MHz) spectral features from the highly redshifted 21-cm HI line that surrounds the first objects. These observations are possible because DARE will orbit the Moon at an altitude of 125 km, taking data only when it is above the radio-quiet, ionosphere-free, solar-shielded lunar farside. DARE executes the small-scale mission described in the NASA Astrophysics Roadmap (p. 83): 'mapping the Universe's hydrogen clouds using 21-cm radio wavelengths via lunar orbiter from the farside of the Moon'. This mission will address four key science questions: (1) When did the first stars form and what were their characteristics? (2) When did the first accreting black holes form and what was their characteristic mass? (3) When did reionization begin? (4) What surprises emerged from the Dark Ages (e.g., Dark Matter decay)? DARE uniquely complements other major telescopes including Planck, JWST, and ALMA by bridging the gap between the smooth Universe seen via the CMB and the rich web of galaxy structures seen with optical/IR/mm telescopes. Support for the development of this mission concept was provided by the Office of the Director, NASA Ames Research Center and by JPL/Caltech.

  7. Quick seismic intensity map investigation and evaluation based on cloud monitoring method using smart mobile phone

    NASA Astrophysics Data System (ADS)

    Zhao, Xuefeng; Peng, Deli; Hu, Weitong; Guan, Quanhua; Yu, Yan; Li, Mingchu; Ou, Jinping

    2015-04-01

A seismic intensity map reflects the actual distribution of destruction in an area after an earthquake, and it is of great significance in guiding relief work and assessing damage and loss. Based on the proposed cloud monitoring method, we developed software that can quickly survey the seismic intensity distribution and draw the intensity map after an earthquake, using the big data collected through individual smartphone questionnaires in the earthquake zone. According to the seismic attenuation law, we generated seismic intensity values to test our system and successfully drew the corresponding seismic intensity map.

  8. The TMS Map Scales with Increased Stimulation Intensity and Muscle Activation.

    PubMed

    van de Ruit, Mark; Grey, Michael J

    2016-01-01

One way to study cortical organisation, or its reorganisation, is to use transcranial magnetic stimulation (TMS) to construct a map of corticospinal excitability. TMS maps are reported to be acquired with a wide variety of stimulation intensities and levels of muscle activation. Whilst motor evoked potentials (MEPs) are known to increase with both stimulation intensity and muscle activation, it remains to be established what the effect of these factors is on the map's centre of gravity (COG), area, volume and shape. Therefore, the objective of this study was to systematically examine the effect of stimulation intensity and muscle activation on these four key map outcome measures. In a first experiment, maps were acquired with a stimulation intensity of 110, 120 and 130% of resting threshold. In a second experiment, maps were acquired at rest and at 5, 10, 20 and 40% of maximum voluntary contraction. Map area and map volume increased with both stimulation intensity (P < 0.01) and muscle activation (P < 0.01). Neither the COG nor the map shape changed with either stimulation intensity or muscle activation (P > 0.09 in all cases). This result indicates the map simply scales with stimulation intensity and muscle activation. PMID:26337508
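The map centre of gravity used above is conventionally the MEP-amplitude-weighted mean of the stimulation coordinates; a minimal sketch with hypothetical sites and amplitudes:

```python
def centre_of_gravity(sites, meps):
    """MEP-amplitude-weighted mean of stimulation coordinates:
    COG = sum(MEP_i * x_i) / sum(MEP_i), computed per axis."""
    total = sum(meps)
    x = sum(m * s[0] for s, m in zip(sites, meps)) / total
    y = sum(m * s[1] for s, m in zip(sites, meps)) / total
    return (x, y)

# Hypothetical 2x2 grid of scalp sites (cm) and MEP amplitudes (mV).
sites = [(0, 0), (1, 0), (0, 1), (1, 1)]
meps = [0.2, 0.8, 0.2, 0.8]
print(centre_of_gravity(sites, meps))  # (0.8, 0.5)
```

Multiplying every MEP by a common factor cancels in the ratio, so the COG is unchanged when the whole map scales with stimulation intensity, consistent with the study's finding.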

  9. Constraints on the temperature of the intergalactic medium at z = 8.4 with 21-cm observations

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei; Pober, Jonathan C.

    2016-02-01

We compute robust lower limits on the spin temperature, T_S, of the z = 8.4 intergalactic medium (IGM), implied by the upper limits on the 21-cm power spectrum recently measured by PAPER-64. Unlike previous studies which used a single epoch of reionization (EoR) model, our approach samples a large parameter space of EoR models: the dominant uncertainty when estimating constraints on T_S. Allowing T_S to be a free parameter and marginalizing over EoR parameters in our Markov Chain Monte Carlo code 21CMMC, we infer T_S ≥ 3 K (corresponding approximately to 1σ) for a mean IGM neutral fraction of x̄_HI ≳ 0.1. We further improve on these limits by folding in additional EoR constraints based on: (i) the dark fraction in QSO spectra, which implies a strict upper limit of x̄_HI[z = 5.9] ≤ 0.06 + 0.05 (1σ); and (ii) the electron scattering optical depth, τ_e = 0.066 ± 0.016 (1σ), measured by the Planck satellite. By restricting the allowed EoR models, these additional observations tighten the approximate 1σ lower limits on the spin temperature to T_S ≥ 6 K. Thus, even such preliminary 21-cm observations begin to rule out extreme scenarios such as `cold reionization', implying at least some prior heating of the IGM. The analysis framework developed here can be applied to upcoming 21-cm observations, thereby providing unique insights into the sources which heated and subsequently reionized the very early Universe.

  10. Interpreting the Global 21-cm Signal from High Redshifts. II. Parameter Estimation for Models of Galaxy Formation

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan; Harker, Geraint J. A.; Burns, Jack O.

    2015-11-01

    Following our previous work, which related generic features in the sky-averaged (global) 21-cm signal to properties of the intergalactic medium, we now investigate the prospects for constraining a simple galaxy formation model with current and near-future experiments. Markov-Chain Monte Carlo fits to our synthetic data set, which includes a realistic galactic foreground, a plausible model for the signal, and noise consistent with 100 hr of integration by an ideal instrument, suggest that a simple four-parameter model that links the production rate of Lyα, Lyman-continuum, and X-ray photons to the growth rate of dark matter halos can be well-constrained (to ˜0.1 dex in each dimension) so long as all three spectral features expected to occur between 40 ≲ ν/MHz ≲ 120 are detected. Several important conclusions follow naturally from this basic numerical result, namely that measurements of the global 21-cm signal can in principle (i) identify the characteristic halo mass threshold for star formation at all redshifts z ≳ 15, (ii) extend z ≲ 4 upper limits on the normalization of the X-ray luminosity star formation rate (LX-SFR) relation out to z ˜ 20, and (iii) provide joint constraints on stellar spectra and the escape fraction of ionizing radiation at z ˜ 12. Though our approach is general, the importance of a broadband measurement renders our findings most relevant to the proposed Dark Ages Radio Explorer, which will have a clean view of the global 21-cm signal from ˜40 to 120 MHz from its vantage point above the radio-quiet, ionosphere-free lunar far-side.

  11. A SENSITIVITY AND ARRAY-CONFIGURATION STUDY FOR MEASURING THE POWER SPECTRUM OF 21 cm EMISSION FROM REIONIZATION

    SciTech Connect

    Parsons, Aaron; Pober, Jonathan; McQuinn, Matthew; Jacobs, Daniel; Aguirre, James

    2012-07-01

Telescopes aiming to measure 21 cm emission from the Epoch of Reionization must toe a careful line, balancing the need for raw sensitivity against the stringent calibration requirements for removing bright foregrounds. It is unclear what the optimal design is for achieving both of these goals. Via a pedagogical derivation of an interferometer's response to the power spectrum of 21 cm reionization fluctuations, we show that even under optimistic scenarios first-generation arrays will yield low-signal-to-noise detections, and that different compact array configurations can substantially alter sensitivity. We explore the sensitivity gains of array configurations that yield high redundancy in the uv-plane, configurations that have been largely ignored since the advent of self-calibration for high-dynamic-range imaging. We first introduce a mathematical framework to generate optimal minimum-redundancy configurations for imaging. We contrast the sensitivity of such configurations with high-redundancy configurations, finding that high-redundancy configurations can improve power-spectrum sensitivity by more than an order of magnitude. We explore how high-redundancy array configurations can be tuned to various angular scales, enabling array sensitivity to be directed away from regions of the uv-plane (such as the origin) where foregrounds are brighter and instrumental systematics are more problematic. We demonstrate that a 132-antenna deployment of the Precision Array for Probing the Epoch of Reionization observing for 120 days in a high-redundancy configuration will, under ideal conditions, have the requisite sensitivity to detect the power spectrum of the 21 cm signal from reionization at a 3σ level at k < 0.25 h Mpc⁻¹ in a bin of Δln k = 1. We discuss the tradeoffs of low- versus high-redundancy configurations.
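Redundancy here means many antenna pairs sampling the same baseline vector, so their visibilities can be averaged coherently before squaring. A sketch counting redundancy for a hypothetical square grid (not the actual PAPER layout):

```python
from collections import Counter
from itertools import combinations

def baseline_redundancy(positions):
    """Count how many antenna pairs share each baseline vector."""
    counts = Counter()
    for (x1, y1), (x2, y2) in combinations(positions, 2):
        dx, dy = x2 - x1, y2 - y1
        if dx < 0 or (dx == 0 and dy < 0):
            dx, dy = -dx, -dy        # (dx,dy) and (-dx,-dy) are one baseline
        counts[(dx, dy)] += 1
    return counts

# A hypothetical 8x8 grid of 64 antennas: of 2016 pairs, only 112 distinct
# baselines occur, and the shortest spacings repeat 56 times each.
grid = [(i, j) for i in range(8) for j in range(8)]
counts = baseline_redundancy(grid)
print(len(counts))           # 112
print(max(counts.values()))  # 56
```

Concentrating many pairs on few baselines is what buys the power-spectrum sensitivity gain, at the cost of sparser uv coverage for imaging.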

  12. Invisible Active Galactic Nuclei. II. Radio Morphologies and Five New H i 21cm Absorption Line Detectors

    NASA Astrophysics Data System (ADS)

    Yan, Ting; Stocke, John T.; Darling, Jeremy; Momjian, Emmanuel; Sharma, Soniya; Kanekar, Nissim

    2016-03-01

This is the second paper directed toward finding new highly redshifted atomic and molecular absorption lines at radio frequencies. To this end, we selected a sample of 80 candidates for obscured radio-loud active galactic nuclei (AGNs) and presented their basic optical/near-infrared (NIR) properties in Paper I. In this paper, we present both high-resolution radio continuum images for all of these sources and H i 21 cm absorption spectroscopy for a few selected sources in this sample. A-configuration 4.9 and 8.5 GHz Very Large Array continuum observations find that 52 sources are compact or have substantial compact components with size <0.″5 and flux densities >0.1 Jy at 4.9 GHz. The 36 most compact sources were then observed with the Very Long Baseline Array at 1.4 GHz. One definite and 10 candidate Compact Symmetric Objects (CSOs) are newly identified, a CSO detection rate ∼ three times higher than that previously found in purely flux-limited samples. Based on possessing compact components with high flux densities, 60 of these sources are good candidates for absorption-line searches. Twenty-seven sources were observed for H i 21 cm absorption at their photometric or spectroscopic redshifts, with only six detections (five definite and one tentative). However, five of these were from a small subset of six CSOs with pure galaxy optical/NIR spectra (i.e., any AGN emission is obscured) and for which accurate spectroscopic redshifts place the redshifted 21 cm line in a radio frequency interference (RFI)-free spectral “window” (i.e., the percentage of H i 21 cm absorption-line detections could be as high as ∼90% in this sample). It is likely that the presence of ubiquitous RFI and the absence of accurate spectroscopic redshifts preclude H i detections in similar sources (only 1 detection out of the remaining 22 sources observed, 13 of which have only photometric redshifts); that is, H i absorption may well be present but is masked by

  13. Multi-redshift limits on the 21cm power spectrum from PAPER 64: X-rays in the early universe

    NASA Astrophysics Data System (ADS)

    Kolopanis, Matthew; Jacobs, Danny; PAPER Collaboration

    2016-06-01

    Here we present new constraints on 21cm emission from cosmic reionization from the 64-element deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). These results extend the single-redshift (z = 8.4) result presented in Ali et al. (2015) to include redshifts from 7.3 to 10.9. These new limits offer as much as a factor of 4 improvement in sensitivity compared to previous 32-element PAPER results by Jacobs et al. (2015). Using these limits we place constraints on a parameterized model of heating due to X-rays emitted by early collapsed objects.

  14. A 21-cm line study of NGC 5963, an Sc galaxy with a low-surface brightness disk

    NASA Astrophysics Data System (ADS)

    Bosma, A.; Athanassoula, E.; van der Hulst, J. M.

    1988-06-01

    Results are presented from a detailed 21-cm line study of the Sc galaxy NGC 5963. The extent of the H I emission is found to be roughly coincident with the optical image, the latter being of much lower surface brightness than normal for Sc galaxies. The velocity field shows little deviation from axial symmetry, and the derived rotation curve is typical for Sc galaxies about twice as bright as NGC 5963. A composite mass model is presented using the observed light distribution to calculate a rotation curve for the luminous part of the galaxy (assuming a constant M/L-ratio with radius); this calculated rotation curve is compared to the observed one to derive a rotation law for a dark halo. Comparison with Sc galaxies having normal disk surface brightnesses suggests that the halo in NGC 5963 is more concentrated than in normal Scs with similar rotation curves. The origin of the low surface brightness of the disk is discussed.
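
    The mass-model decomposition described above — subtracting the rotation curve calculated for the luminous component from the observed one, in quadrature, to infer a dark-halo rotation law — can be sketched as follows. The velocity values are hypothetical placeholders, not the NGC 5963 measurements:

```python
import math

# Hypothetical rotation speeds (km/s) at increasing radii: the observed total
# and the curve calculated from the light distribution (constant M/L disk).
v_obs = [60.0, 95.0, 110.0, 115.0, 117.0]
v_lum = [55.0, 80.0, 70.0, 55.0, 45.0]

# Circular velocities add in quadrature: v_obs^2 = v_lum^2 + v_halo^2,
# so the halo rotation law follows from the difference.
v_halo = [math.sqrt(vo**2 - vl**2) for vo, vl in zip(v_obs, v_lum)]
```

In this toy example the halo contribution grows outward while the luminous contribution declines, the qualitative behavior used to argue for a concentrated dark halo.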

  15. Surveys of the Milky Way and Magellanic System in the λ21-cm line of atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Dickey, J. M.

    2012-02-01

    In the next three years, surveys of the Northern and Southern skies using focal plane arrays on aperture synthesis radio telescopes will lead to a breakthrough in our knowledge of the warm and cool atomic phases of the interstellar medium and their relationship with the diffuse molecular gas. The sensitivity and resolution of these surveys will give an order of magnitude or more improvement over existing interstellar medium data. The GASKAP (South) and GAMES (North) projects together constitute a complete survey of the Milky Way plane and the Magellanic Clouds and Stream in both emission and absorption in the H I 21-cm line and the OH 18-cm lines. The overall goal of this project is to understand the mechanism of galaxy evolution, through a detailed tracing of the astrophysical processes that drive the cycle of star formation in very different environments. Comparison of 21-cm emission and absorption highlights the transition from the warm, diffuse medium to cool clouds. Tracing turbulence in the Magellanic Stream shows how extra-galactic gas makes the difficult passage through the halo to replenish the disk. Finally, high resolution images of OH masers trace outflows from evolved stars that enrich the medium with heavy elements. To understand how the Milky Way was assembled and how it has evolved since, the speed and efficiency of these processes must be measured, as functions of Galactic radius and height above the plane. Observations of similar processes in the Magellanic Clouds show how differently they might have worked in conditions typical of the early universe.

  16. What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization

    SciTech Connect

    Pober, Jonathan C.; Morales, Miguel F.; Liu, Adrian; McQuinn, Matthew; Parsons, Aaron R.; Dillon, Joshua S.; Hewitt, Jacqueline N.; Tegmark, Max; Aguirre, James E.; Bowman, Judd D.; Jacobs, Daniel C.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Werthimer, Dan J.

    2014-02-20

    A number of experiments are currently working toward a measurement of the 21 cm signal from the epoch of reionization (EoR). Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by the next generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We focus primarily on an instrument modeled after the ∼0.1 km² collecting area Hydrogen Epoch of Reionization Array concept design and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described 'wedge' footprint in k space. Uncertainties in the reionization history are accounted for using a series of simulations that vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the intergalactic medium. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that 0.1 km² of collecting area is enough to ensure a very high significance (≳ 30σ) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented.

  17. Mapping tillage intensity by integrating multiple remote sensing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tillage practices play an important role in the sustainable agriculture system. Conservative tillage practice can help to reduce soil erosion, increase soil fertility and improve water quality. Tillage practices could be applied at different times with different intensity depending on the local weat...

  18. Data-Intensive Memory-Map simulator and runtime

    2012-05-01

    DI-MMAP is a simulator for modeling the performance of next generation non-volatile random access memory technologies (NVRAM) and a high-performance memory-map runtime for the Linux operating system. It is implemented as a device driver for the Linux operating system. It will be used by algorithm designers to understand the impact of future NVRAM on their algorithms and will be used by application developers for high-performance access to NVRAM storage.

  19. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA - 4/07

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  20. Mapping of laser diode radiation intensity by atomic-force microscopy

    NASA Astrophysics Data System (ADS)

    Alekseev, P. A.; Dunaevskii, M. S.; Slipchenko, S. O.; Podoskin, A. A.; Tarasov, I. S.

    2015-09-01

    The distribution of the intensity of laser diode radiation has been studied using an original method based on atomic-force microscopy (AFM). It is shown that the laser radiation intensity in both the near field and transition zone of a high-power semiconductor laser under room-temperature conditions can be mapped by AFM at a subwavelength resolution. The obtained patterns of radiation intensity distribution agree with the data of modeling and the results of near-field optical microscopy measurements.

  1. A Giant Metrewave Radio Telescope search for associated H I 21 cm absorption in high-redshift flat-spectrum sources

    NASA Astrophysics Data System (ADS)

    Aditya, J. N. H. S.; Kanekar, Nissim; Kurapati, Sushma

    2016-02-01

    We report results from a Giant Metrewave Radio Telescope search for `associated' redshifted H I 21 cm absorption from 24 active galactic nuclei (AGNs), at 1.1 < z < 3.6, selected from the Caltech-Jodrell Bank Flat-spectrum (CJF) sample. 22 out of 23 sources with usable data showed no evidence of absorption, with typical 3σ optical depth detection limits of ≈0.01 at a velocity resolution of ≈30 km s-1. A single tentative absorption detection was obtained at z ≈ 3.530 towards TXS 0604+728. If confirmed, this would be the highest redshift at which H I 21 cm absorption has ever been detected. Including 29 CJF sources with searches for redshifted H I 21 cm absorption in the literature, mostly at z < 1, we construct a sample of 52 uniformly selected flat-spectrum sources. A Peto-Prentice two-sample test for censored data finds (at ≈3σ significance) that the strength of H I 21 cm absorption is weaker in the high-z sample than in the low-z sample; this is the first statistically significant evidence for redshift evolution in the strength of H I 21 cm absorption in a uniformly selected AGN sample. However, the two-sample test also finds that the H I 21 cm absorption strength is higher in AGNs with low ultraviolet or radio luminosities, at ≈3.4σ significance. The fact that the higher luminosity AGNs of the sample typically lie at high redshifts implies that it is currently not possible to break the degeneracy between AGN luminosity and redshift evolution as the primary cause of the low H I 21 cm opacities in high-redshift, high-luminosity AGNs.
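
    The quoted 3σ optical depth limits follow directly from the spectral rms noise and the continuum flux density of the background source. A minimal sketch, with hypothetical numbers roughly chosen to reproduce a ≈0.01 limit:

```python
import math

def optical_depth_limit(rms_mjy, s_cont_mjy, nsigma=3.0, covering_factor=1.0):
    """3-sigma optical depth sensitivity: tau = -ln(1 - nsigma*rms / (f*S))."""
    depth = nsigma * rms_mjy / (covering_factor * s_cont_mjy)
    return -math.log(1.0 - depth)

# Hypothetical: 1 mJy rms per channel against a 300 mJy flat-spectrum source.
tau_3sig = optical_depth_limit(1.0, 300.0)
```

For small optical depths the logarithm is nearly linear, so the limit is essentially 3 × rms / S_cont at the stated velocity resolution.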

  2. Effects of Antenna Beam Chromaticity on Redshifted 21 cm Power Spectrum and Implications for Hydrogen Epoch of Reionization Array

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Parsons, Aaron R.; DeBoer, David R.; Bowman, Judd D.; Ewall-Wice, Aaron M.; Neben, Abraham R.; Patra, Nipanjana

    2016-07-01

    Unaccounted for systematics from foregrounds and instruments can severely limit the sensitivity of current experiments from detecting redshifted 21 cm signals from the Epoch of Reionization (EoR). Upcoming experiments are faced with a challenge to deliver more collecting area per antenna element without degrading the data with systematics. This paper and its companions show that dishes are viable for achieving this balance using the Hydrogen Epoch of Reionization Array (HERA) as an example. Here, we specifically identify spectral systematics associated with the antenna power pattern as a significant detriment to all EoR experiments which causes the already bright foreground power to leak well beyond ideal limits and contaminate the otherwise clean EoR signal modes. A primary source of this chromaticity is reflections in the antenna-feed assembly and between structures in neighboring antennas. Using precise foreground simulations taking wide-field effects into account, we provide a generic framework to set cosmologically motivated design specifications on these reflections to prevent further EoR signal degradation. We show that HERA will not be impeded by such spectral systematics and demonstrate that even in a conservative scenario that does not perform removal of foregrounds, HERA will detect the EoR signal in line-of-sight k-modes, k∥ ≳ 0.2 h Mpc-1, with high significance. Under these conditions, all baselines in a 19-element HERA layout are capable of detecting EoR over a substantial observing window on the sky.

  3. Calibration requirements for detecting the 21 cm epoch of reionization power spectrum and implications for the SKA

    NASA Astrophysics Data System (ADS)

    Barry, N.; Hazelton, B.; Sullivan, I.; Morales, M. F.; Pober, J. C.

    2016-09-01

    21 cm epoch of reionization (EoR) observations promise to transform our understanding of galaxy formation, but these observations are impossible without unprecedented levels of instrument calibration. We present end-to-end simulations of a full EoR power spectrum (PS) analysis including all of the major components of a real data processing pipeline: models of astrophysical foregrounds and EoR signal, frequency-dependent instrument effects, sky-based antenna calibration, and the full PS analysis. This study reveals that traditional sky-based per-frequency antenna calibration can only be implemented in EoR measurement analyses if the calibration model is unrealistically accurate. For reasonable levels of catalogue completeness, the calibration introduces contamination in otherwise foreground-free PS modes, precluding a PS measurement. We explore the origin of this contamination and potential mitigation techniques. We show that there is a strong joint constraint on the precision of the calibration catalogue and the inherent spectral smoothness of antennas, and that this has significant implications for the instrumental design of the SKA (Square Kilometre Array) and other future EoR observatories.

  4. Calibration Requirements for Detecting the 21 cm Epoch of Reionization Power Spectrum and Implications for the SKA

    NASA Astrophysics Data System (ADS)

    Barry, N.; Hazelton, B.; Sullivan, I.; Morales, M. F.; Pober, J. C.

    2016-06-01

    21 cm Epoch of Reionization observations promise to transform our understanding of galaxy formation, but these observations are impossible without unprecedented levels of instrument calibration. We present end-to-end simulations of a full EoR power spectrum analysis including all of the major components of a real data processing pipeline: models of astrophysical foregrounds and EoR signal, frequency-dependent instrument effects, sky-based antenna calibration, and the full PS analysis. This study reveals that traditional sky-based per-frequency antenna calibration can only be implemented in EoR measurement analyses if the calibration model is unrealistically accurate. For reasonable levels of catalogue completeness, the calibration introduces contamination in otherwise foreground-free power spectrum modes, precluding a PS measurement. We explore the origin of this contamination and potential mitigation techniques. We show that there is a strong joint constraint on the precision of the calibration catalogue and the inherent spectral smoothness of antennae, and that this has significant implications for the instrumental design of the SKA and other future EoR observatories.

  5. 2MTF III. H I 21 cm observations of 1194 spiral galaxies with the Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Masters, Karen L.; Crook, Aidan; Hong, Tao; Jarrett, T. H.; Koribalski, Bärbel S.; Macri, Lucas; Springob, Christopher M.; Staveley-Smith, Lister

    2014-09-01

    We present H I 21 cm observations of 1194 galaxies out to a redshift of 10 000 km s-1 selected as inclined spirals (i ≳ 60°) from the 2MASS redshift survey. These observations were carried out at the National Radio Astronomy Observatory Robert C. Byrd Green Bank Telescope (GBT). This observing programme is part of the 2MASS Tully-Fisher (2MTF) survey. This project will combine H I widths from these GBT observations with those from further dedicated observing at the Parkes Telescope, from the Arecibo Legacy Fast Arecibo L-band Feed Array survey at Arecibo, and published widths with S/N > 10 and spectral resolution vres < 10 km s-1 from a variety of telescopes. We will use these H I widths along with 2MASS photometry to estimate Tully-Fisher distances to nearby spirals and investigate the peculiar velocity field of the local Universe. In this paper, we report on detections of neutral hydrogen in emission in 727 galaxies, and measure good signal-to-noise, symmetric H I global profiles suitable for use in the Tully-Fisher relation in 484 of them.
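
    The final step described above — combining an H I width with photometry to obtain a Tully-Fisher distance — can be sketched as follows. The slope and zero-point are hypothetical placeholders, not the calibrated 2MTF values:

```python
import math

def tf_distance_mpc(width_kms, app_mag, slope=-9.0, zero_point=-21.0):
    """Distance from a Tully-Fisher relation M = zero_point + slope*(log10(W) - 2.5).

    slope and zero_point are hypothetical placeholder calibration constants;
    a real analysis would fit them to a calibrator sample.
    """
    abs_mag = zero_point + slope * (math.log10(width_kms) - 2.5)
    mu = app_mag - abs_mag            # distance modulus
    return 10 ** (mu / 5.0 - 5.0)     # mu = 5 log10(d / 10 pc)

# Hypothetical galaxy: 400 km/s velocity width, apparent magnitude 12.
d = tf_distance_mpc(width_kms=400.0, app_mag=12.0)
```

Comparing such redshift-independent distances to observed recession velocities is what yields the peculiar velocity field.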

  6. Mapping and analysing cropland use intensity from a NPP perspective

    NASA Astrophysics Data System (ADS)

    Niedertscheider, Maria; Kastner, Thomas; Fetzel, Tamara; Haberl, Helmut; Kroisleitner, Christine; Plutzar, Christoph; Erb, Karl-Heinz

    2016-01-01

    Meeting expected surges in global biomass demand while protecting pristine ecosystems likely requires intensification of current croplands. Yet many uncertainties relate to the potentials for cropland intensification, mainly because conceptualizing and measuring land use intensity is intricate, particularly at the global scale. We present a spatially explicit analysis of global cropland use intensity, following an ecological energy flow perspective. We analyze (a) changes of net primary production (NPP) from the potential system (i.e. assuming undisturbed vegetation) to croplands around 2000 and relate these changes to (b) inputs of nitrogen (N) fertilizer and irrigation and (c) to biomass outputs, allowing for a three-dimensional focus on intensification. Globally, the actual NPP of croplands, expressed as per cent of their potential NPP (NPPact%), amounts to 77%. A mix of socio-economic and natural factors explains the high spatial variation, which ranges from 22.6% to 416.0% within the inner 95 percentiles. NPPact% is well below NPPpot in many developing, (Sub-) Tropical regions, while it massively surpasses NPPpot on irrigated drylands and in many industrialized temperate regions. The interrelations of NPP losses (i.e. the difference between NPPact and NPPpot), agricultural inputs and biomass harvest differ substantially between biogeographical regions. Maintaining NPPpot was particularly N-intensive in forest biomes, as compared to cropland in natural grassland biomes. However, much higher levels of biomass harvest occur in forest biomes. We show that fertilization loads correlate with NPPact% linearly, but the relation gets increasingly blurred beyond a level of 125 kgN ha-1. Thus, large potentials exist to improve N-efficiency at the global scale, as only 10% of global croplands are above this level. Reallocating surplus N could substantially reduce NPP losses by up to 80% below current levels and at the same time increase biomass harvest by almost 30%. However, we
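
    The NPPact% metric and the 125 kgN ha-1 threshold discussed above can be illustrated with a minimal sketch over hypothetical grid cells:

```python
# Hypothetical grid cells: actual cropland NPP and potential-vegetation NPP
# (e.g. gC/m^2/yr), plus nitrogen fertilization (kgN/ha). Values are invented.
npp_act = [300.0, 450.0, 900.0]
npp_pot = [600.0, 500.0, 450.0]
n_input = [40.0, 110.0, 160.0]

# Actual NPP as per cent of potential; values > 100% arise where inputs
# (irrigation, fertilizer) push productivity beyond the natural baseline.
npp_act_pct = [100.0 * a / p for a, p in zip(npp_act, npp_pot)]

# Fraction of cells above the ~125 kgN/ha level beyond which the linear
# fertilization response is reported to blur.
frac_above = sum(n > 125.0 for n in n_input) / len(n_input)
```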

  7. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.
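
    The central mapping step — spatially interpolating recorded ground motions and applying geologically based site corrections — can be illustrated with a simple inverse-distance-weighted sketch. The station values, map point, and amplification factor are hypothetical, and the operational TriNet procedure is considerably more sophisticated:

```python
import math

# Hypothetical stations: ((x, y) position, recorded peak ground acceleration in g).
stations = [((0.0, 0.0), 0.30), ((1.0, 0.0), 0.10), ((0.0, 1.0), 0.20)]

def idw_pga(x, y, stations, power=2.0):
    """Inverse-distance-weighted interpolation of peak ground acceleration."""
    num = den = 0.0
    for (sx, sy), pga in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return pga  # exactly at a station: use its recorded value
        w = 1.0 / d ** power
        num += w * pga
        den += w
    return num / den

# Apply a multiplicative, geology-based site amplification at the map point.
site_amp = 1.4  # hypothetical soft-soil correction factor
pga_site = idw_pga(0.5, 0.5, stations) * site_amp
```

A grid of such corrected values, converted to intensity via ground-motion/intensity relationships, is what the ShakeMap renders.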

  8. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery

    NASA Astrophysics Data System (ADS)

    Gunawardena, Athula D. A.; D'Souza, Warren D.; Goadrich, Laura D.; Meyer, Robert R.; Sorensen, Kelly J.; Naqvi, Shahid A.; Shi, Leyuan

    2006-05-01

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
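
    The segmentation task — decomposing a beamlet intensity matrix into weighted apertures in which each MLC row opens at most one contiguous leaf interval — can be illustrated with a toy greedy sweep. This baseline ignores all mechanical constraints and is emphatically not the metaheuristic of the paper:

```python
def greedy_segment(intensity):
    """Decompose an integer intensity matrix into (weight, aperture) pairs.

    Each aperture opens, per row, the first contiguous run of still-positive
    beamlets; its weight is the minimum residual intensity over the open
    cells. A toy greedy baseline for illustration only.
    """
    residual = [row[:] for row in intensity]
    apertures = []
    while any(v > 0 for row in residual for v in row):
        shape, cells = [], []
        for i, row in enumerate(residual):
            open_cols = []
            for j, v in enumerate(row):
                if v > 0:
                    open_cols.append(j)
                elif open_cols:
                    break  # keep the open interval contiguous
            shape.append(open_cols)
            cells += [(i, j) for j in open_cols]
        weight = min(residual[i][j] for i, j in cells)
        for i, j in cells:
            residual[i][j] -= weight
        apertures.append((weight, shape))
    return apertures

plan = greedy_segment([[2, 3, 1], [0, 2, 2]])
```

Minimizing the number of such apertures (and the total weight, i.e. monitor units) under real MLC constraints is the hard combinatorial problem the metaheuristic addresses.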

  9. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    PubMed Central

    2013-01-01

    Background The myocardium exhibits heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using Texture and Intensity features to describe heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of pixel and the other on underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of pixel and the underlying texture information are presented. We hypothesize that the probability mapping of myocardium offers alternate visualization, possibly showing the details with physiological significance difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
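
    The probability function based on Bayes' rule can be sketched for the mean-intensity feature using Gaussian class likelihoods. All class parameters below are hypothetical placeholders; in practice they would be estimated from labeled scarred and non-scarred myocardium:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def scar_probability(intensity, prior_scar=0.2,
                     mu_scar=180.0, sd_scar=25.0,
                     mu_healthy=90.0, sd_healthy=20.0):
    """Posterior P(scar | intensity) via Bayes' rule (hypothetical parameters)."""
    p_scar = gauss(intensity, mu_scar, sd_scar) * prior_scar
    p_healthy = gauss(intensity, mu_healthy, sd_healthy) * (1.0 - prior_scar)
    return p_scar / (p_scar + p_healthy)

# Dark, intermediate, and LG-enhanced pixel intensities (arbitrary units).
prob_map = [scar_probability(i) for i in (80.0, 135.0, 200.0)]
```

Intermediate posteriors, as in the middle pixel, are exactly the interwoven scar/healthy regions the map is meant to expose.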

  10. Updating Historical Maps of Malaria Transmission Intensity in East Africa Using Remote Sensing

    PubMed Central

    Omumbo, J.A.; Hay, S.I.; Goetz, S.J.; Snow, R.W.; Rogers, D.J.

    2013-01-01

    Remotely sensed imagery has been used to update and improve the spatial resolution of malaria transmission intensity maps in Tanzania, Uganda, and Kenya. Discriminant analysis achieved statistically robust agreements between historical maps of the intensity of malaria transmission and predictions based on multitemporal meteorological satellite sensor data processed using temporal Fourier analysis. The study identified land surface temperature as the best predictor of transmission intensity. Rainfall and moisture availability as inferred by cold cloud duration (ccd) and the normalized difference vegetation index (ndvi), respectively, were identified as secondary predictors of transmission intensity. Information on altitude derived from a digital elevation model significantly improved the predictions. “Malaria-free” areas were predicted with an accuracy of 96 percent while areas where transmission occurs only near water, moderate malaria areas, and intense malaria transmission areas were predicted with accuracies of 90 percent, 72 percent, and 87 percent, respectively. The importance of such maps for rationalizing malaria control is discussed, as is the potential contribution of the next generation of satellite sensors to these mapping efforts. PMID:23814324
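
    The temporal Fourier analysis step — summarizing a multitemporal satellite series such as ndvi by its mean, annual amplitude, and phase — can be sketched on a synthetic monthly series (the series below is invented, with a single annual cycle peaking in month 2):

```python
import math

# Synthetic 12-month ndvi-like series with one annual cycle peaking at m = 2.
series = [0.3 + 0.1 * math.cos(2 * math.pi * (m - 2) / 12.0) for m in range(12)]

n = len(series)
mean = sum(series) / n
# First (annual) harmonic via a discrete Fourier sum.
re = sum(v * math.cos(2 * math.pi * k / n) for k, v in enumerate(series)) * 2 / n
im = sum(v * math.sin(2 * math.pi * k / n) for k, v in enumerate(series)) * 2 / n
amplitude = math.hypot(re, im)
phase_month = math.atan2(im, re) * n / (2 * math.pi)  # month of the seasonal peak
```

These per-pixel Fourier descriptors, rather than the raw time series, are what feed the discriminant analysis.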

  11. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    SciTech Connect

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
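
    The polynomial foreground subtraction can be sketched on a synthetic spectrum. The power-law foreground and Gaussian absorption trough below are toy stand-ins, the fit is performed in log-temperature versus log-frequency (one common convention), and no instrumental response is modeled:

```python
import numpy as np

freq = np.linspace(40e6, 120e6, 200)   # Hz, dark-age/reionization band
x = np.log(freq / 75e6)                # log-frequency coordinate

# Synthetic smooth foreground whose logarithm is quadratic in log(nu),
# plus a toy ~0.1 K absorption trough standing in for the 21 cm signal.
foreground = 3000.0 * np.exp(-2.5 * x + 0.1 * x**2)           # K
signal = -0.1 * np.exp(-0.5 * ((freq - 78e6) / 5e6) ** 2)     # K
sky = foreground + signal

# Fifth-order polynomial fit in log(nu), applied in log-temperature.
fg_fit = np.exp(np.polyval(np.polyfit(x, np.log(foreground), 5), x))
residual = sky - np.exp(np.polyval(np.polyfit(x, np.log(sky), 5), x))
```

The smooth foreground alone is removed almost perfectly, while the narrow trough is too sharp for the low-order polynomial to absorb entirely, so part of it survives in the residual.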

  12. A FOURTH H I 21 cm ABSORPTION SYSTEM IN THE SIGHT LINE OF MG J0414+0534: A RECORD FOR INTERVENING ABSORBERS

    SciTech Connect

    Tanna, A.; Webb, J. K.; Curran, S. J.; Whiting, M. T.; Bignell, C.

    2013-08-01

    We report the detection of a strong H I 21 cm absorption system at z = 0.5344, as well as a candidate system at z = 0.3389, in the sight line toward the z = 2.64 quasar MG J0414+0534. This, in addition to the absorption at the host redshift and the other two intervening absorbers, takes the total to four (possibly five). The previous maximum number of 21 cm absorbers detected along a single sight line is two and so we suspect that this number of gas-rich absorbers is in some way related to the very red color of the background source. Despite this, no molecular gas (through OH absorption) has yet been detected at any of the 21 cm redshifts, although, from the population of 21 cm absorbers as a whole, there is evidence for a weak correlation between the atomic line strength and the optical-near-infrared color. In either case, the fact that so many gas-rich galaxies (likely to be damped Ly{alpha} absorption systems) have been found along a single sight line toward a highly obscured source may have far-reaching implications for the population of faint galaxies not detected in optical surveys, a possibility which could be addressed through future wide-field absorption line surveys with the Square Kilometer Array.

  13. Foreground Model and Antenna Calibration Errors in the Measurement of the Sky-averaged λ21 cm Signal at z~ 20

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-01

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ~ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ~fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.

  14. Effect of sound intensity on tonotopic fMRI maps in the unanesthetized monkey

    PubMed Central

    Tanji, Kazuyo; Leopold, David; Ye, Frank; Zhu, Charles; Malloy, Megan; Saunders, Richard C.; Mishkin, Mortimer

    2009-01-01

    The monkey’s auditory cortex includes a core region on the supratemporal plane (STP) made up of the tonotopically organized areas A1, R, and RT, together with a surrounding belt and a lateral parabelt region. The functional studies that yielded the tonotopic maps and corroborated the anatomical division into core, belt, and parabelt typically used low-amplitude pure tones that were often restricted to threshold-level intensities. Here we used functional magnetic resonance imaging in awake rhesus monkeys to determine whether, and if so how, the tonotopic maps and the pattern of activation in core, belt, and parabelt are affected by systematic changes in sound intensity. Blood oxygenation level-dependent (BOLD) responses to groups of low- and high-frequency pure tones 3-4 octaves apart were measured at multiple sound intensity levels. The results revealed tonotopic maps in the auditory core that reversed at the putative areal boundaries between A1 and R and between R and RT. Although these reversals of the tonotopic representations were present at all intensity levels, the lateral spread of activation depended on sound amplitude, with increasing recruitment of the adjacent belt areas as the intensities increased. Tonotopic organization along the STP was also evident in frequency-specific deactivation (i.e. “negative BOLD”), an effect that was intensity-specific as well. Regions of positive and negative BOLD were spatially interleaved, possibly reflecting lateral inhibition of high frequency areas during activation of adjacent low frequency areas, and vice versa. These results, which demonstrate the strong influence of tonal amplitude on activation levels, identify sound intensity as an important adjunct parameter for mapping the functional architecture of auditory cortex. PMID:19631273

  15. Effect of sound intensity on tonotopic fMRI maps in the unanesthetized monkey.

    PubMed

    Tanji, Kazuyo; Leopold, David A; Ye, Frank Q; Zhu, Charles; Malloy, Megan; Saunders, Richard C; Mishkin, Mortimer

    2010-01-01

    The monkey's auditory cortex includes a core region on the supratemporal plane (STP) made up of the tonotopically organized areas A1, R, and RT, together with a surrounding belt and a lateral parabelt region. The functional studies that yielded the tonotopic maps and corroborated the anatomical division into core, belt, and parabelt typically used low-amplitude pure tones that were often restricted to threshold-level intensities. Here we used functional magnetic resonance imaging in awake rhesus monkeys to determine whether, and if so how, the tonotopic maps and the pattern of activation in core, belt, and parabelt are affected by systematic changes in sound intensity. Blood oxygenation level-dependent (BOLD) responses to groups of low- and high-frequency pure tones 3-4 octaves apart were measured at multiple sound intensity levels. The results revealed tonotopic maps in the auditory core that reversed at the putative areal boundaries between A1 and R and between R and RT. Although these reversals of the tonotopic representations were present at all intensity levels, the lateral spread of activation depended on sound amplitude, with increasing recruitment of the adjacent belt areas as the intensities increased. Tonotopic organization along the STP was also evident in frequency-specific deactivation (i.e. "negative BOLD"), an effect that was intensity-specific as well. Regions of positive and negative BOLD were spatially interleaved, possibly reflecting lateral inhibition of high-frequency areas during activation of adjacent low-frequency areas, and vice versa. These results, which demonstrate the strong influence of tonal amplitude on activation levels, identify sound intensity as an important adjunct parameter for mapping the functional architecture of auditory cortex. PMID:19631273

  16. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  17. Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography

    PubMed Central

    Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A.; White, Thomas A.; Spence, John C. H.; Chapman, Henry N.; Barty, Anton

    2014-01-01

    Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure ‘three-dimensional merging’. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies. PMID:24914160
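
The 'three-dimensional merging' idea described above, in rough outline, amounts to binning each image's diffracted intensities onto a common reciprocal-space voxel grid and averaging by the number of contributions per voxel. A minimal sketch of that accumulation step (the function name, grid size, and coordinate convention are illustrative assumptions, not from the paper):

```python
import numpy as np

def merge_to_3d(coords_list, intensities_list, grid_n=64, q_max=1.0):
    """Accumulate per-image diffraction intensities onto a 3D reciprocal-space
    grid and average by the number of contributions per voxel.

    coords_list     : list of (N_i, 3) arrays of reciprocal-space coordinates
                      (already rotated into the crystal frame for each image)
    intensities_list: list of matching (N_i,) arrays of pixel intensities
    """
    grid = np.zeros((grid_n,) * 3)
    counts = np.zeros((grid_n,) * 3)
    for q, intens in zip(coords_list, intensities_list):
        # Map continuous coordinates in [-q_max, q_max) to voxel indices.
        idx = np.floor((q + q_max) / (2 * q_max) * grid_n).astype(int)
        inside = np.all((idx >= 0) & (idx < grid_n), axis=1)
        # Unbuffered accumulation handles repeated voxel indices correctly.
        np.add.at(grid, tuple(idx[inside].T), intens[inside])
        np.add.at(counts, tuple(idx[inside].T), 1)
    merged = np.where(counts > 0, grid / np.maximum(counts, 1), 0.0)
    return merged, counts
```

Real pipelines must first index each pattern to rotate detector pixels into the crystal frame; that step is omitted here.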

  18. Intensity Mapping across Cosmic Times with the Lyα Line

    NASA Astrophysics Data System (ADS)

    Pullen, Anthony R.; Doré, Olivier; Bock, Jamie

    2014-05-01

    We present a quantitative model of Lyα emission throughout cosmic history and determine the prospects for intensity mapping spatial fluctuations in the Lyα signal. Since (1) our model assumes at z > 6 the minimum star formation required to sustain reionization and (2) is based at z < 6 on a luminosity function (LF) extrapolated from the few observed bright Lyα emitters, this should be considered a lower limit. Mapping the line emission allows probes of reionization, star formation, and large-scale structure (LSS) as a function of redshift. While Lyα emission during reionization has been studied, we also predict the postreionization signal to test predictions of the intensity and motivate future intensity mapping probes of reionization. We include emission from massive dark matter halos and the intergalactic medium (IGM) in our model. We find agreement with current, measured LFs of Lyα emitters at z < 8. However, diffuse IGM emission, not associated with Lyα emitters, dominates the intensity up to z ~ 10. While our model is applicable for deep-optical or near-infrared observers like the James Webb Space Telescope, only intensity mapping will detect the diffuse IGM emission. We also construct a three-dimensional power spectrum model of the Lyα emission. Finally, we consider the prospects of an intensity mapper for measuring Lyα fluctuations while identifying interloper contamination for removal. Our results suggest that while the reionization signal is challenging, Lyα fluctuations can be an interesting new probe of LSS at late times when used in conjunction with other lines, e.g., Hα, to monitor low-redshift foreground confusion.
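
For context, the three-dimensional power spectrum of a line intensity map is commonly modeled as a biased tracer of matter plus shot noise, P(k) = (I·b)² P_m(k) + P_shot. A minimal sketch of that standard form (a simplification, not the paper's full model, which also includes the diffuse IGM component):

```python
import numpy as np

def lya_power_spectrum(k, p_matter, mean_intensity, bias, shot_noise=0.0):
    """Clustering term plus shot noise for a line intensity map:
        P(k) = (I * b)^2 * P_m(k) + P_shot
    k            : wavenumbers (unused here beyond shaping; kept for clarity)
    p_matter     : matter power spectrum evaluated at k
    mean_intensity, bias, shot_noise : model parameters (placeholders)
    """
    return (mean_intensity * bias) ** 2 * p_matter + shot_noise
```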

  19. Intensity mapping across cosmic times with the Lyα line

    SciTech Connect

    Pullen, Anthony R.; Doré, Olivier; Bock, Jamie

    2014-05-10

    We present a quantitative model of Lyα emission throughout cosmic history and determine the prospects for intensity mapping spatial fluctuations in the Lyα signal. Since (1) our model assumes at z > 6 the minimum star formation required to sustain reionization and (2) is based at z < 6 on a luminosity function (LF) extrapolated from the few observed bright Lyα emitters, this should be considered a lower limit. Mapping the line emission allows probes of reionization, star formation, and large-scale structure (LSS) as a function of redshift. While Lyα emission during reionization has been studied, we also predict the postreionization signal to test predictions of the intensity and motivate future intensity mapping probes of reionization. We include emission from massive dark matter halos and the intergalactic medium (IGM) in our model. We find agreement with current, measured LFs of Lyα emitters at z < 8. However, diffuse IGM emission, not associated with Lyα emitters, dominates the intensity up to z ∼ 10. While our model is applicable for deep-optical or near-infrared observers like the James Webb Space Telescope, only intensity mapping will detect the diffuse IGM emission. We also construct a three-dimensional power spectrum model of the Lyα emission. Finally, we consider the prospects of an intensity mapper for measuring Lyα fluctuations while identifying interloper contamination for removal. Our results suggest that while the reionization signal is challenging, Lyα fluctuations can be an interesting new probe of LSS at late times when used in conjunction with other lines, e.g., Hα, to monitor low-redshift foreground confusion.

  20. Infrared mapping of ultrasound fields generated by medical transducers: Feasibility of determining absolute intensity levels

    PubMed Central

    Khokhlova, Vera A.; Shmeleva, Svetlana M.; Gavrilov, Leonid R.; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam

    2013-01-01

Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm² to at least 50 W/cm². Significantly higher intensities could be measured simply by reducing the duty cycle. PMID:23927199

  1. Infrared mapping of ultrasound fields generated by medical transducers: feasibility of determining absolute intensity levels.

    PubMed

    Khokhlova, Vera A; Shmeleva, Svetlana M; Gavrilov, Leonid R; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam

    2013-08-01

Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm² to at least 50 W/cm². Significantly higher intensities could be measured simply by reducing the duty cycle. PMID:23927199
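
The calibration step described above reduces to scaling the measured temperature-rise map by a single correction factor tied to a hydrophone measurement. A toy sketch, assuming the factor is fixed by matching the map peak to the hydrophone-derived free-field peak intensity (the actual procedure correlates the full spatial distributions, not just the peaks):

```python
import numpy as np

def temperature_to_intensity(temp_map, hydrophone_peak_intensity):
    """Scale an IR temperature-rise map (K) into an intensity map (W/cm^2)
    using one calibration factor, here fixed by matching the map peak to a
    hydrophone measurement of the free-field peak intensity."""
    factor = hydrophone_peak_intensity / temp_map.max()
    return factor * temp_map
```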

  2. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    PubMed

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans. PMID:16675867
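
For context, the segmentation task is to write the beamlet intensity matrix as a weighted sum of binary apertures whose open region in each row is a single contiguous interval (a basic MLC constraint). The sketch below is a deliberately simple greedy heuristic that illustrates the problem; it is not the difference-matrix metaheuristic of the paper:

```python
import numpy as np

def segment_intensity_map(m):
    """Greedy, sweep-style decomposition of an integer intensity map into
    weighted apertures. Each aperture opens at most one contiguous interval
    per row. Returns a list of (weight, binary_aperture) pairs whose
    weighted sum reconstructs the input."""
    m = np.array(m, dtype=int)
    apertures = []
    while m.any():
        shape = np.zeros_like(m)
        weight = None
        for i, row in enumerate(m):
            pos = np.flatnonzero(row > 0)
            if pos.size == 0:
                continue  # this row stays fully closed in this aperture
            # Open the first contiguous run of positive beamlets in the row.
            left = right = pos[0]
            while right + 1 < row.size and row[right + 1] > 0:
                right += 1
            shape[i, left:right + 1] = 1
            run_min = row[left:right + 1].min()
            weight = run_min if weight is None else min(weight, run_min)
        m = m - weight * shape
        apertures.append((weight, shape))
    return apertures
```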

  3. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
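
The multi-key idea behind MRPack can be illustrated outside Hadoop: tag every intermediate key with the id of the algorithm that produced it, so that one map phase and one shuffle serve several map/reduce pairs. A self-contained sketch (the function names and structure are illustrative assumptions; real MRPack runs inside the Hadoop MR framework):

```python
from collections import defaultdict

def multi_algorithm_mapreduce(records, algorithms):
    """Run several map/reduce algorithm pairs in one logical job by prefixing
    every intermediate key with its algorithm id (the multi-key idea).
    `algorithms` maps an id to a (map_fn, reduce_fn) pair."""
    shuffle = defaultdict(list)
    for record in records:                      # single map phase
        for alg_id, (map_fn, _) in algorithms.items():
            for key, value in map_fn(record):
                shuffle[(alg_id, key)].append(value)
    results = {}                                # single reduce phase
    for (alg_id, key), values in shuffle.items():
        reduce_fn = algorithms[alg_id][1]
        results.setdefault(alg_id, {})[key] = reduce_fn(key, values)
    return results
```

For example, a word count and a character count can share one pass over the input because their intermediate keys never collide.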

  4. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  5. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine the intensity level; these can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally-derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near real-time intensity maps and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of a combined intensity level and production of combined intensity maps in near real time. The results compare three types of intensity maps created based on physical, social and combined data sources.

  6. Foreground contamination in Lyα intensity mapping during the epoch of reionization

    SciTech Connect

    Gong, Yan; Cooray, Asantha; Silva, Marta; Santos, Mario G.

    2014-04-10

The intensity mapping of Lyα emission during the epoch of reionization will be contaminated by foreground emission lines from lower redshifts. We calculate the mean intensity and the power spectrum of Lyα emission at z ∼ 7 and estimate the uncertainties according to the relevant astrophysical processes. We find that the low-redshift emission lines from 6563 Å Hα, 5007 Å [O III], and 3727 Å [O II] will be strong contaminants on the observed Lyα power spectrum. We make use of both the star formation rate and luminosity functions to estimate the mean intensity and power spectra of the three foreground lines at z ∼ 0.5 for Hα, z ∼ 0.9 for [O III], and z ∼ 1.6 for [O II], as they will contaminate the Lyα emission at z ∼ 7. The [O II] line is found to be the strongest. We analyze the masking of the bright survey pixels with a foreground line above some line intensity threshold as a way to reduce the contamination in an intensity mapping survey. We find that the foreground contamination can be neglected if we remove pixels with fluxes above 1.4 × 10⁻²⁰ W m⁻².
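
The masking strategy in this abstract is simple to state: discard survey pixels whose foreground-line flux exceeds a threshold, then measure the power spectrum from what remains. A minimal sketch of that pixel cut (the function name and return values are illustrative):

```python
import numpy as np

def mask_bright_pixels(intensity_map, flux_threshold):
    """Return a copy of the map with pixels at or above the flux threshold
    masked (set to NaN), plus the surviving-pixel fraction. This mirrors the
    masking strategy in the abstract, where the quoted threshold is
    1.4e-20 W m^-2."""
    masked = intensity_map.astype(float).copy()
    bright = masked >= flux_threshold
    masked[bright] = np.nan
    return masked, 1.0 - bright.mean()
```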

  7. H I Structure and Topology of the Galaxy Revealed by the I-GALFA H I 21-cm Line Survey

    NASA Astrophysics Data System (ADS)

    Koo, Bon-Chul; Park, G.; Cho, W.; Gibson, S. J.; Kang, J.; Douglas, K. A.; Peek, J. E. G.; Korpela, E. J.; Heiles, C. E.

    2011-05-01

The I-GALFA survey, mapping all the H I in the inner Galactic disk visible to the Arecibo 305 m telescope within 10 degrees of the Galactic plane (longitudes of 32 to 77 degrees at b = 0), completed observations in September 2009 and will soon be made publicly available. The high (3.4 arcmin) resolution and tremendous sensitivity of the survey offer a great opportunity to observe the fine details of H I both in the inner and in the far outer Galaxy. The reduced H I column density maps show that the H I structure is highly filamentary and clumpy, pervaded by shell-like structures, vertical filaments, and small clumps. By inspecting individual maps, we have found 36 shell candidates with angular sizes ranging from 0.4 to 12 degrees, half of which appear to be expanding. In order to characterize the filamentary/clumpy morphology of the H I structure, we have carried out statistical analyses of selected areas representing the spiral arms in the inner and outer Galaxy. Genus statistics, which can distinguish the "meatball" and "swiss-cheese" topologies, show that the H I topology is clump-like in most regions. The two-dimensional Fourier analysis further shows that the H I structures are filamentary and mainly parallel to the plane in the outer Galaxy. We also examine the level-crossing statistics, the results of which are described in detail in an accompanying poster by Park et al.

  8. Constraining the population of radio-loud active galactic nuclei at high redshift with the power spectrum of the 21 cm Forest

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Mesinger, Andrei; Hewitt, Jacqueline N.

    2014-06-01

The 21 cm forest, the absorption by the intergalactic medium (IGM) towards a high-redshift radio-loud source, is a probe of the thermal state of the IGM. To date, the literature has focused on line-of-sight spectral studies of a single quasar known to have a large redshift. We instead examine many sources in a wide field of view, and show that the imprint from the 21 cm forest absorption of these sources is detectable in the power spectrum. The properties of the power spectrum can reveal information on the population of the earliest radio-loud sources that may have existed during the pre-reionization epoch at z > 10. Using semi-numerical simulations of the IGM and a semi-empirical source population, we show that the 21 cm forest dominates, in a distinctive region of Fourier space, the brightness temperature power spectrum that many contemporary experiments aim to measure. In particular, the forest dominates the diffuse emission on smaller spatial scales along the line of sight. Exploiting this separation, one may constrain the IGM thermal history, such as heating by the first X-ray sources, on large spatial scales and the absorption of radio-loud active galactic nuclei on small ones. Using realistic simulations of noise and foregrounds, we show that planned instruments on the scale of the Hydrogen Epoch of Reionization Array (HERA), with a collecting area of one tenth of a square kilometer, can detect the 21 cm forest in this small spatial scale region with high signal-to-noise. We develop an analytic toy model for the signal and explore its detectability over a large range of thermal histories and potential high-redshift source scenarios.

  9. COMPLETE IONIZATION OF THE NEUTRAL GAS: WHY THERE ARE SO FEW DETECTIONS OF 21 cm HYDROGEN IN HIGH-REDSHIFT RADIO GALAXIES AND QUASARS

    SciTech Connect

    Curran, S. J.; Whiting, M. T.

    2012-11-10

From the first published z ≳ 3 survey of 21 cm absorption within the hosts of radio galaxies and quasars, Curran et al. found an apparent dearth of cool neutral gas at high redshift. From a detailed analysis of the photometry, each object is found to have a λ = 1216 Å continuum luminosity in excess of L₁₂₁₆ ≈ 10²³ W Hz⁻¹, a critical value above which 21 cm has never been detected at any redshift. At these wavelengths, and below, hydrogen is excited above the ground state so that it cannot absorb in 21 cm. In order to apply the equation of photoionization equilibrium, we demonstrate that this critical value also applies to the ionizing (λ ≤ 912 Å) radiation. We use this to show, for a variety of gas density distributions, that upon placing a quasar within a galaxy of gas, there is always an ultraviolet luminosity above which all of the large-scale atomic gas is ionized. While in this state, the hydrogen cannot be detected or engage in star formation. Applying the mean ionizing photon rate of all of the sources searched, we find, using canonical values for the gas density and recombination rate coefficient, that the observed critical luminosity gives a scale length (3 kpc) similar to that of the neutral hydrogen (H I) in the Milky Way, a large spiral galaxy. Thus, this simple yet physically motivated model can explain the critical luminosity (L₉₁₂ ≈ L₁₂₁₆ ≈ 10²³ W Hz⁻¹), above which neutral gas is not detected. This indicates that the non-detection of 21 cm absorption is not due to the sensitivity limits of current radio telescopes, but rather that the lines of sight to the quasars, and probably the bulk of the host galaxies, are devoid of neutral gas.
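
The scale-length argument follows from photoionization equilibrium: balancing the quasar's ionizing photon rate Q against recombinations in gas of density n gives a Strömgren-style radius R = (3Q / 4π n² α_B)^(1/3). A sketch of that arithmetic with illustrative input values (placeholders, not the paper's actual numbers):

```python
import math

# Stromgren-style estimate of the radius out to which an ionizing photon
# rate Q can keep hydrogen of density n ionized:
#   R = (3 Q / (4 pi n^2 alpha_B))**(1/3)
# Constants below are standard; the example Q and n are placeholders.

ALPHA_B = 2.6e-13          # case-B recombination coefficient [cm^3 s^-1]
KPC_CM = 3.086e21          # centimetres per kiloparsec

def ionized_radius_kpc(q_ion, n_h):
    """q_ion: ionizing photons per second; n_h: hydrogen density [cm^-3]."""
    r_cm = (3.0 * q_ion / (4.0 * math.pi * n_h**2 * ALPHA_B)) ** (1.0 / 3.0)
    return r_cm / KPC_CM
```

With a density of order 1 cm⁻³, a photon rate near 10⁵⁴ s⁻¹ yields a radius of a few kiloparsecs, the order of the H I scale length quoted in the abstract.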

  10. Lyα intensity mapping: current observational results from SDSS/BOSS and its future potential.

    NASA Astrophysics Data System (ADS)

    Croft, Rupert A.; Miralda-Escudé, Jordi; Zheng, Zheng

    2016-01-01

Over the past 10 years, wide-field optical survey telescopes have taken several million fiber spectra of objects in the night sky. This enormous dataset can be used to carry out optical intensity mapping measurements right now, as well as informing and motivating future dedicated instruments. Using cross-correlation techniques, we have made measurements of the large-scale structure of the Universe in the hydrogen Lyman-alpha line from SDSS/BOSS fiber spectra. We compare our results to the structure expected in the ΛCDM cosmological model, and make the first global estimate of the Lyman-alpha luminosity density of the Universe. We discuss how lessons learned during our analysis can be applied to future experiments, and which observational tracers will be useful for further applications of these techniques. We also show how intensity mapping could dramatically enhance our ability to make measurements of new effects in galaxy clustering, such as general and special relativistic distortions.

  11. Recent Results from Broad-Band Intensity Mapping Measurements of Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael B.; CIBER, Herschel-SPIRE

    2016-01-01

    Intensity mapping integrates the total emission in a given spectral band over the universe's history. Tomographic measurements of cosmic structure can be performed using specific line tracers observed in narrow bands, but a wealth of information is also available from broad-band observations performed by instruments capable of capturing high-fidelity, wide-angle images of extragalactic emission. Sensitive to the continuum emission from faint and diffuse sources, these broad-band measurements provide a view on cosmic structure traced by components not readily detected in point source surveys. After accounting for measurement effects and astrophysical foregrounds, the angular power spectra of such data can be compared to predictions from models to yield powerful insights into the history of cosmic structure formation. This talk will highlight some recent measurements of large scale structure performed using broad-band intensity mapping methods that have given new insights on faint, distant, and diffuse components in the extragalactic background light.

  12. Managing Hardware Configurations and Data Products for the Canadian Hydrogen Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hincks, A. D.; Shaw, J. R.; Chime Collaboration

    2015-09-01

The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is an ambitious new radio telescope project for measuring cosmic expansion and investigating dark energy. Keeping good records both of the physical configuration of its 1280 antennas and their analogue signal chains and of the ∼100 TB of data produced daily by its correlator will be essential to the success of CHIME. In these proceedings we describe the database-driven software we have developed to manage this complexity.

  13. Connecting CO Intensity Mapping to Molecular Gas and Star Formation in the Epoch of Galaxy Assembly

    NASA Astrophysics Data System (ADS)

    Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; Church, Sarah E.

    2016-02-01

Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1-0) line from galaxies at z ∼ 2.4-2.8, based on a parameterized model for the galaxy-halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the L_IR-L_CO relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. By probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.

  14. Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly

    DOE PAGES Beta

    Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; Church, Sarah E.

    2016-01-29

Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at z ∼ 2.4-2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the L_IR-L_CO relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.

  15. Alternative Stimulation Intensities for Mapping Cortical Motor Area with Navigated TMS.

    PubMed

    Kallioniemi, Elisa; Julkunen, Petro

    2016-05-01

    Navigated transcranial magnetic stimulation (nTMS) is becoming a popular tool in pre-operative mapping of functional motor areas. The stimulation intensities used in the mapping are commonly suprathreshold intensities with respect to the patient's resting motor threshold (rMT). There is no consensus on which suprathreshold intensity should be used, nor on the optimal criteria for selecting the appropriate stimulation intensity (SI). In this study, the left motor cortices of 12 right-handed volunteers (8 males, age 24-61 years) were mapped using motor evoked potentials with an SI of 110 and 120 % of rMT and with an upper threshold (UT) estimated by the Mills-Nithi algorithm. The UT was significantly lower than 120 % of rMT (p < 0.001), while no significant difference was observed between UT and 110 % of rMT (p = 0.112). The representation sizes followed a similar trend, i.e. the areas computed based on UT (5.9 cm²) and 110 % of rMT (5.0 cm²) were smaller than that based on 120 % of rMT (8.8 cm²) (p ≤ 0.001). There was no difference in representation sizes between 110 % of rMT and UT. The variance in representation size was significantly lower with UT than with 120 % of rMT (p = 0.048, uncorrected), while there was no difference between 110 % of rMT and either UT or 120 % of rMT. Indications of the lowest inter-individual variation in representation size were observed with UT, possibly because the UT takes into account the individual input-output characteristics of the motor cortex. Therefore, the UT seems to be a good option for the SI in motor mapping applications to outline functional motor areas with nTMS, and it could potentially reduce the inter-individual variation caused by the selection of SI in motor mapping in pre-surgical applications and radiosurgery planning. PMID:26830768

  16. Mapping cropland-use intensity across Europe using MODIS NDVI time series

    NASA Astrophysics Data System (ADS)

    Estel, Stephan; Kuemmerle, Tobias; Levers, Christian; Baumann, Matthias; Hostert, Patrick

    2016-02-01

    Global agricultural production will likely need to increase in the future due to population growth, changing diets, and the rising importance of bioenergy. Intensifying already existing cropland is often considered more sustainable than converting more natural areas. Unfortunately, our understanding of cropping patterns and intensity is weak, especially at broad geographic scales. We characterized and mapped cropping systems in Europe, a region containing diverse cropping systems, using four indicators: (a) cropping frequency (number of cropped years), (b) multi-cropping (number of harvests per year), (c) fallow cycles, and (d) crop duration ratio (actual time under crops), based on the MODIS Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2012. Second, we used these cropping indicators and self-organizing maps to identify typical cropping systems. The resulting six clusters correspond well with other indicators of agricultural intensity (e.g., nitrogen input, yields) and reveal substantial differences in cropping intensity across Europe. Cropping intensity was highest in Germany, Poland, and the eastern European Black Earth regions, characterized by high cropping frequency, multi-cropping and a high crop duration ratio. In contrast, we found the lowest cropping intensity in eastern Europe outside the Black Earth region, characterized by longer fallow cycles. Our approach highlights how satellite image time series can help to characterize spatial patterns in cropping intensity, information that is rarely surveyed on the ground and commonly not included in agricultural statistics; our clustering approach also shows a way forward to reduce complexity when measuring multiple indicators. The four cropping indicators we used could become part of continental-scale agricultural monitoring in order to identify target regions for sustainable intensification, where trade-offs between intensification and the environment should be explored.
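The multi-cropping indicator, harvests per year, can be approximated by counting green-up peaks in a pixel's annual NDVI trajectory. A toy sketch (the 0.5 threshold and the synthetic double-cropping series are assumptions for illustration, not the paper's calibrated method):

```python
import numpy as np

def harvests_per_year(ndvi, thresh=0.5):
    """Count NDVI peaks above `thresh` as proxies for harvests.

    `ndvi` holds one year of evenly spaced composites; a peak is a
    local maximum exceeding the threshold (threshold is illustrative).
    """
    peaks = 0
    for i in range(1, len(ndvi) - 1):
        # >= on the left, > on the right avoids double-counting plateaus
        if ndvi[i] > thresh and ndvi[i] >= ndvi[i - 1] and ndvi[i] > ndvi[i + 1]:
            peaks += 1
    return peaks

# Synthetic double-cropping year: two green-up cycles over ~46
# eight-day composites
t = np.linspace(0, 1, 46)
ndvi = 0.3 + 0.4 * np.maximum(np.sin(2 * np.pi * 2 * t), 0)
print(harvests_per_year(ndvi))  # → 2
```

Cropping frequency, by contrast, would apply a similar test per year across the 2000-2012 stack and count the years in which at least one peak occurs.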

  17. Intensity distribution and isoseismal maps for the Nisqually, Washington, earthquake of 28 February 2001

    USGS Publications Warehouse

    Dewey, James W.; Hopper, Margaret G.; Wald, David J.; Quitoriano, Vincent; Adams, Elizabeth R.

    2002-01-01

    We present isoseismal maps, macroseismic intensities, and community summaries of damage for the MW=6.8 Nisqually, Washington, earthquake of 28 February 2001. For many communities, two types of macroseismic intensity are assigned, the traditional U.S. Geological Survey Modified Mercalli Intensities (USGS MMI) and a type of intensity newly introduced with this paper, the USGS Reviewed Community Internet Intensity (RCII). For most communities, the RCII is a reviewed version of the Community Internet Intensity (CII) of Wald and others (1999). For some communities, RCII is assigned from such non-CII sources as press reports, engineering reports, and field reconnaissance observations. We summarize differences between procedures used to assign RCII and USGS MMI, and we show that the two types of intensity are nonetheless very similar for the Nisqually earthquake. We do not see evidence for systematic differences between RCII and USGS MMI that would approach one intensity unit, at any level of shaking, but we document a tendency for the RCII to be slightly lower than MMI in regions of low intensity and slightly higher than MMI in regions of high intensity. The highest RCII calculated for the Nisqually earthquake is 7.6, calculated for zip code 98134, which includes the "south of downtown" (Sodo) area of Seattle and Harbor Island. By comparison, we assigned a traditional USGS MMI 8 to the Sodo area of Seattle. In all, RCII of 6.5 and higher were assigned to 58 zip-code regions. At the lowest intensities, the Nisqually earthquake was felt over an area of approximately 350,000 square km (approximately 135,000 square miles) in Washington, Oregon, Idaho, Montana, and southern British Columbia, Canada. On the basis of macroseismic effects, we infer that shaking in the southern Puget Sound region was somewhat less for the 2001 Nisqually earthquake than for the Puget Sound earthquake of April 13, 1949, which had nearly the same hypocenter and magnitude.
Allowing for differences

  18. The high-redshift star formation history from carbon-monoxide intensity maps

    NASA Astrophysics Data System (ADS)

    Breysse, Patrick C.; Kovetz, Ely D.; Kamionkowski, Marc

    2016-03-01

    We demonstrate how cosmic star formation history can be measured with one-point statistics of carbon-monoxide intensity maps. Using a P(D) analysis, the luminosity function of CO-emitting sources can be inferred from the measured one-point intensity PDF. The star formation rate density (SFRD) can then be obtained, at several redshifts, from the CO luminosity density. We study the effects of instrumental noise, line foregrounds, and target redshift, and obtain constraints on the CO luminosity density of the order of 10 per cent. We show that the SFRD uncertainty is dominated by that of the model connecting CO luminosity and star formation. For pessimistic estimates of this model uncertainty, we obtain an error of the order of 50 per cent on SFRD for surveys targeting redshifts between two and seven with reasonable noise and foregrounds included. However, comparisons between intensity maps and galaxies could substantially reduce this model uncertainty. In this case, our constraints on SFRD at these redshifts improve to roughly 5-10 per cent, which is highly competitive with current measurements.
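The P(D) idea can be illustrated with a forward Monte Carlo: each map pixel receives a Poisson number of unresolved sources, and the one-point PDF of the summed pixel intensities (plus instrument noise) encodes the source luminosity function. A hedged sketch in arbitrary units (the Pareto flux distribution, source density, and noise level are all assumed, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def pofd_sample(n_pix, mean_sources, flux_dist, noise_sigma):
    """Monte Carlo one-point PDF ("P(D)") of map pixel intensities.

    Each pixel receives a Poisson number of unresolved sources with
    fluxes drawn from `flux_dist`; Gaussian noise is added on top.
    """
    counts = rng.poisson(mean_sources, size=n_pix)
    totals = np.array([flux_dist(k).sum() for k in counts])
    return totals + rng.normal(0.0, noise_sigma, size=n_pix)

# Power-law source fluxes (Pareto shifted to minimum flux 1), arbitrary units
flux = lambda k: rng.pareto(2.5, size=k) + 1.0
pix = pofd_sample(100_000, 3.0, flux, noise_sigma=0.5)

# The histogram of pixel values is the observable one-point PDF
hist, edges = np.histogram(pix, bins=100, density=True)
```

Fitting such simulated PDFs to the measured one, rather than fitting the histogram of individually detected sources, is what lets the method reach below the individual detection threshold.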

  19. H I SHELLS AND SUPERSHELLS IN THE I-GALFA H I 21 cm LINE SURVEY. I. FAST-EXPANDING H I SHELLS ASSOCIATED WITH SUPERNOVA REMNANTS

    SciTech Connect

    Park, G.; Koo, B.-C.; Gibson, S. J.; Newton, J. H.; Kang, J.-H.; Lane, D. C.; Douglas, K. A.; Peek, J. E. G.; Korpela, E. J.; Heiles, C.

    2013-11-01

    We search for fast-expanding H I shells associated with Galactic supernova remnants (SNRs) in the longitude range l ≈ 32° to 77° using 21 cm line data from the Inner-Galaxy Arecibo L-band Feed Array (I-GALFA) H I survey. Among the 39 known Galactic SNRs in this region, we find such H I shells in 4 SNRs: W44, G54.4-0.3, W51C, and CTB 80. All four were previously identified in low-resolution surveys, and three of those (excluding G54.4-0.3) were previously studied with the Arecibo telescope. A remarkable new result, however, is the detection of H I emission at both very high positive and negative velocities in W44 from the receding and approaching parts of the H I expanding shell, respectively. This is the first detection of both sides of an expanding shell associated with an SNR in H I 21 cm emission. The high-resolution I-GALFA survey data also reveal a prominent expanding H I shell with high circular symmetry associated with G54.4-0.3. We explore the physical characteristics of four SNRs and discuss what differentiates them from other SNRs in the survey area. We conclude that these four SNRs are likely the remnants of core-collapse supernovae interacting with a relatively dense (≳1 cm⁻³) ambient medium, and we discuss the visibility of SNRs in the H I 21 cm line.

  20. New limits on 21 cm epoch of reionization from paper-32 consistent with an x-ray heated intergalactic medium at z = 7.7

    SciTech Connect

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, David H. E.; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2014-06-20

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.
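The delay-spectrum approach exploits the fact that spectrally smooth foregrounds occupy only low delays after a Fourier transform of each visibility along frequency, while the EoR signal fluctuates across all delays. A toy illustration (the band, amplitudes, and Blackman taper are assumptions for the sketch, not PAPER's actual pipeline):

```python
import numpy as np

nchan = 203
freqs = np.linspace(100e6, 200e6, nchan)  # Hz

# Toy visibility spectrum: a bright, spectrally smooth "foreground"
# power law plus a weak, rapidly fluctuating "EoR-like" component.
rng = np.random.default_rng(1)
fg = 1e3 * (freqs / 150e6) ** -2.5
eor = 1e-2 * rng.standard_normal(nchan)
vis = fg + eor

# Delay transform: FFT along frequency, with a taper to control
# spectral leakage from the bright smooth component.
window = np.blackman(nchan)
delay_spec = np.fft.fftshift(np.fft.fft(vis * window))
delays = np.fft.fftshift(np.fft.fftfreq(nchan, d=freqs[1] - freqs[0]))

# Smooth foregrounds collapse toward delay ~ 0; the fluctuating
# component spreads across all delays, where it can be measured.
power = np.abs(delay_spec) ** 2
```

In the real analysis the usable delays are those beyond the "horizon limit" set by the baseline length, which is why foreground suppression of many orders of magnitude is possible without subtracting a foreground model.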

  1. H I 21cm emission from the subdamped Lyman-α absorber at z = 0.0063 towards PG 1216+069

    NASA Astrophysics Data System (ADS)

    Chengalur, Jayaram N.; Ghosh, T.; Salter, C. J.; Kanekar, N.; Momjian, E.; Keeney, B. A.; Stocke, J. T.

    2015-11-01

    We present H I 21 cm emission observations of the z ∼ 0.00632 subdamped Lyman-α absorber (sub-DLA) towards PG 1216+069 made using the Arecibo Telescope and the Very Large Array (VLA). The Arecibo H I 21cm spectrum corresponds to an H I mass of ∼3.2 × 10⁷ M⊙, two orders of magnitude smaller than that of a typical spiral galaxy. This is surprising since in the local Universe the cross-section for absorption at high H I column densities is expected to be dominated by spirals. The H I 21cm emission detected in the VLA spectral cube has a low signal-to-noise ratio, and represents only half the total flux seen at Arecibo. Emission from three other sources is detected in the VLA observations, with only one of these sources having an optical counterpart. This group of H I sources appears to be part of complex `W', believed to lie in the background of the Virgo cluster. While several H I cloud complexes have been found in and around the Virgo cluster, it is unclear whether the ram pressure and galaxy harassment processes that are believed to be responsible for the creation of such clouds in a cluster environment are relevant at the location of this cloud complex. The extremely low metallicity of the gas, ∼1/40 solar, also makes it unlikely that the sub-DLA consists of material that has been stripped from a galaxy. Thus, while our results have significantly improved our understanding of the host of this sub-DLA, the origin of the gas cloud remains a mystery.
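For reference, an H I mass like the one quoted here follows from the standard optically thin conversion between integrated 21 cm line flux and mass. A sketch (the flux and distance below are assumed round numbers chosen only to land at the quoted order of magnitude, not the measured values):

```python
def hi_mass(flux_jy_kms, distance_mpc):
    """H I mass (solar masses) from integrated 21 cm line flux.

    Standard optically thin relation:
        M_HI = 2.356e5 * D^2 * Int(S dv)
    with D in Mpc and the integrated flux in Jy km/s.
    """
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

# Order-of-magnitude check: an assumed 0.5 Jy km/s at an assumed
# 17 Mpc gives a few times 10^7 Msun, comparable to the quoted mass.
print(f"{hi_mass(0.5, 17.0):.2e}")  # → 3.40e+07
```

The quadratic distance dependence is why the ambiguous location of complex `W' (foreground vs. background of Virgo) matters so much for the inferred mass.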

  2. New Limits on 21 cm Epoch of Reionization from PAPER-32 Consistent with an X-Ray Heated Intergalactic Medium at z = 7.7

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Aguirre, James E.; Ali, Zaki S.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Pat; MacMahon, David H. E.; Manley, Jason R.; Moore, David F.; Pober, Jonathan C.; Stefan, Irina I.; Walbrugh, William P.

    2014-06-01

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.

  3. Dynamic T2-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    NASA Astrophysics Data System (ADS)

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-01

    Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1°C) and dynamic (<5s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T2, since T2 increases linearly in fat during heating. T2-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T2. Calibration of T2-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T2 and temperature with a thermocouple. A positive T2 temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T2-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
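With the reported linear calibration of roughly 20 ms/°C, converting a measured T2 change in marrow to a temperature change is a one-line map from a known baseline. A sketch (the baseline T2 and temperature values are illustrative, not from the study):

```python
def marrow_temperature(t2_ms, t2_ref_ms, temp_ref_c, dt2_dT=20.0):
    """Convert a measured T2 (ms) to temperature (deg C), assuming the
    linear fat calibration reported here, ~20 ms per deg C.

    `t2_ref_ms` and `temp_ref_c` anchor the line at a known baseline
    (e.g. body temperature before heating); values are illustrative.
    """
    return temp_ref_c + (t2_ms - t2_ref_ms) / dt2_dT

# A 100 ms rise in T2 over an assumed 150 ms baseline at 37 deg C
# corresponds to ~5 deg C of heating.
print(marrow_temperature(250.0, 150.0, 37.0))  # → 42.0
```

Because the slope is steep (20 ms/°C against typical T2 measurement noise of a few ms), sub-degree precision is plausible, which is what thermal-dose monitoring in bone requires.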

  4. Cosmic Structure and Galaxy Evolution through Intensity Mapping of Molecular Gas

    NASA Astrophysics Data System (ADS)

    Bower, Geoffrey C.; Keating, Garrett K.; Marrone, Daniel P.; YT Lee Array Team, SZA Team

    2016-01-01

    The origin and evolution of structure in the Universe is one of the major challenges of observational astronomy. How does baryonic structure trace the underlying dark matter? How have galaxies evolved to produce the present day Universe? A multi-wavelength, multi-tool approach is necessary to provide the complete story of the evolution of structure in the Universe. Intensity mapping, which relies on the ability to detect many objects at once through their integrated emission rather than direct detection of individual objects, is a critical part of this mosaic. In particular, our understanding of the molecular gas component of massive galaxies is being revolutionized by ALMA and EVLA, but the population of smaller, star-forming galaxies, which provide the bulk of star formation, cannot be individually probed by these instruments. In this talk, I will summarize two intensity mapping experiments to detect molecular gas through the carbon monoxide (CO) rotational transition. We have completed sensitive observations with the Sunyaev-Zel'dovich Array (SZA) telescope at a wavelength of 1 cm that are sensitive to emission at redshifts 2.3 to 3.3. The SZA experiment sets strong limits on models for the CO emission and demonstrates the ability to reject foregrounds and telescope systematics in very deep integrations. I also describe the development of an intensity mapping capability for the Y.T. Lee Array, a 13-element interferometer located on Mauna Loa. In its first phase, this project focuses on detection of CO at redshifts 2.4-3.0 with detection via power spectrum and cross-correlation with other surveys. The project includes a major technical upgrade, a new digital correlator and IF electronics component to be deployed in 2015/2016. The Y.T. Lee Array observations will be more sensitive and extend to larger angular scales than the SZA observations.

  5. Coastal and estuarine habitat mapping, using LIDAR height and intensity and multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Chust, Guillem; Galparsoro, Ibon; Borja, Ángel; Franco, Javier; Uriarte, Adolfo

    2008-07-01

    The airborne laser scanning LIDAR (LIght Detection And Ranging) provides high-resolution Digital Terrain Models (DTM) that have been applied recently to the characterization, quantification and monitoring of coastal environments. This study assesses the contribution of LIDAR altimetry and intensity data, topographically-derived features (slope and aspect), and multi-spectral imagery (three visible and a near-infrared band), to map coastal habitats in the Bidasoa estuary and its adjacent coastal area (Basque Country, northern Spain). The performance of high-resolution data sources was individually and jointly tested, with a maximum likelihood classifier, in a rocky shore and a wetland zone; thus, including some of the most extended Cantabrian Sea littoral habitats, within the Bay of Biscay. The results show that the reliability of coastal habitat classification was enhanced more by the LIDAR-based DTM than by the other data sources: slope, aspect, intensity or near-infrared band. The addition of the DTM, to the three visible bands, produced gains of between 10% and 27% in the agreement measures, between the mapped and validation data (i.e. mean producer's and user's accuracy) for the two test sites. Raw LIDAR intensity images are only of limited value here, since they appeared heterogeneous and speckled. However, the enhanced Lee smoothing filter, applied to the LIDAR intensity, improved the overall accuracy measurements of the habitat classification, especially in the wetland zone; here, there were gains up to 7.9% in mean producer's and 11.6% in mean user's accuracy. This suggests that LIDAR can be useful for habitat mapping, when few data sources are available. The synergy between the LIDAR data, with multi-spectral bands, produced highly accurate classifications (mean producer's accuracy: 92% for the 16 rocky habitats and 88% for the 11 wetland habitats). Fusion of the data enabled discrimination of intertidal communities, such as Corallina elongata
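The maximum likelihood classifier used in studies like this assigns each pixel's feature vector (spectral bands plus LIDAR-derived layers) to the class with the highest Gaussian log-likelihood. A self-contained sketch on synthetic data (the class statistics are invented stand-ins for what would normally come from training polygons):

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Gaussian maximum-likelihood classification.

    `pixels` is an (N, d) array of feature vectors (e.g. visible
    bands plus a DTM height band); per-class means and covariances
    would normally be estimated from training areas.
    """
    scores = []
    for mu, cov in zip(means, covs):
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        d = pixels - mu
        mahal = np.einsum("ij,jk,ik->i", d, inv, d)  # Mahalanobis^2
        scores.append(-0.5 * (mahal + np.log(det)))
    return np.argmax(scores, axis=0)

# Two synthetic classes separated mainly by the "DTM height" feature,
# mimicking how elevation discriminates intertidal habitats
rng = np.random.default_rng(0)
mu = [np.array([0.20, 0.30, 1.0]), np.array([0.25, 0.35, 6.0])]
cov = [np.eye(3) * 0.05, np.eye(3) * 0.05]
a = rng.multivariate_normal(mu[0], cov[0], 200)
b = rng.multivariate_normal(mu[1], cov[1], 200)
labels = ml_classify(np.vstack([a, b]), mu, cov)
```

The sketch makes the paper's main finding concrete: when two habitats overlap spectrally but differ in elevation, adding the DTM band to the feature vector is what separates the class likelihoods.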

  6. Mapping the energy distribution of SERRS hot spots from anti-Stokes to Stokes intensity ratios.

    PubMed

    dos Santos, Diego P; Temperini, Marcia L A; Brolo, Alexandre G

    2012-08-15

    The anomalies in the anti-Stokes to Stokes intensity ratios in single-molecule surface-enhanced resonance Raman scattering were investigated. Brilliant green and crystal violet dyes were the molecular probes, and the experiments were carried out on an electrochemically activated Ag surface. The results allowed new insights into the origin of these anomalies and led to a new method to confirm the single-molecule regime in surface-enhanced Raman scattering. Moreover, a methodology to estimate the distribution of resonance energies that contributed to the imbalance in the anti-Stokes to Stokes intensity ratios at the electromagnetic hot spots was proposed. This method allowed the local plasmonic resonance energies on the metallic surface to be spatially mapped. PMID:22804227
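The thermal baseline against which these anomalies are measured is the Boltzmann-weighted anti-Stokes/Stokes intensity ratio. A sketch of that textbook expectation (the vibrational mode and laser wavelength below are assumed examples, not the paper's specific bands):

```python
import math

def thermal_as_to_s_ratio(shift_cm1, laser_nm, temp_k=298.0):
    """Thermally expected anti-Stokes/Stokes intensity ratio:

        ratio = ((nu_L + nu_v) / (nu_L - nu_v))**4 * exp(-h c nu_v / k T)

    Deviations of the measured ratio from this baseline are the
    "anomalies" used to map hot-spot resonance energies.
    """
    h, c, k = 6.62607e-34, 2.99792458e10, 1.380649e-23  # c in cm/s
    nu_l = 1e7 / laser_nm  # laser wavenumber, cm^-1
    nu_v = shift_cm1       # vibrational shift, cm^-1
    boltz = math.exp(-h * c * nu_v / (k * temp_k))
    return ((nu_l + nu_v) / (nu_l - nu_v)) ** 4 * boltz

# e.g. an assumed 1175 cm^-1 mode with 633 nm excitation at room
# temperature gives a ratio of a few times 10^-3
print(f"{thermal_as_to_s_ratio(1175.0, 632.8):.4f}")
```

A measured ratio well above this value indicates extra enhancement of the anti-Stokes branch, which is the signature the mapping method converts into a local plasmon resonance energy.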

  7. Dual gradients of light intensity and nutrient concentration for full-factorial mapping of photosynthetic productivity.

    PubMed

    Nguyen, Brian; Graham, Percival J; Sinton, David

    2016-08-01

    Optimizing bioproduct generation from microalgae is complicated by the myriad of coupled parameters affecting photosynthetic productivity. Quantifying the effect of multiple coupled parameters in full-factorial fashion requires a prohibitively high number of experiments. We present a simple hydrogel-based platform for the rapid, full-factorial mapping of light and nutrient availability on the growth and lipid accumulation of microalgae. We accomplish this without microfabrication using thin sheets of cell-laden hydrogels. By immobilizing the algae in a hydrogel matrix we are able to take full advantage of the continuous spatial chemical gradient produced by a diffusion-based gradient generator while eliminating the need for chambers. We map the effect of light intensities between 0 μmol m⁻² s⁻¹ and 130 μmol m⁻² s⁻¹ (∼28 W m⁻²) coupled with ammonium concentrations between 0 mM and 7 mM on Chlamydomonas reinhardtii. Our data set, verified with bulk experiments, clarifies the role of ammonium availability on the photosynthetic productivity of Chlamydomonas reinhardtii, demonstrating the dependence of ammonium inhibition on light intensity. Specifically, a sharp optimal growth peak emerges at approximately 2 mM only for light intensities between 80 and 100 μmol m⁻² s⁻¹, suggesting that ammonium inhibition is insignificant at lower light intensities. We speculate that this phenomenon is due to the regulation of the high affinity ammonium transport system in Chlamydomonas reinhardtii as well as free ammonia toxicity. The complexity of this photosynthetic biological response highlights the importance of full-factorial data sets as enabled here. PMID:27364571

  8. 3D leaf water content mapping using terrestrial laser scanner backscatter intensity with radiometric correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Wang, Tiejun; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Niemann, K. Olaf

    2015-12-01

    Leaf water content (LWC) plays an important role in agriculture and forestry management. It can be used to assess drought conditions and wildfire susceptibility. Terrestrial laser scanner (TLS) data have been widely used in forested environments for retrieving geometrically-based biophysical parameters. Recent studies have also shown the potential of using radiometric information (backscatter intensity) for estimating LWC. However, the usefulness of backscatter intensity data has been limited by leaf surface characteristics, and incidence angle effects. To explore the idea of using LiDAR intensity data to assess LWC we normalized (for both angular effects and leaf surface properties) shortwave infrared TLS data (1550 nm). A reflectance model describing both diffuse and specular reflectance was applied to remove strong specular backscatter intensity at a perpendicular angle. Leaves with different surface properties were collected from eight broadleaf plant species for modeling the relationship between LWC and backscatter intensity. Reference reflectors (Spectralon from Labsphere, Inc.) were used to build a look-up table to compensate for incidence angle effects. Results showed that before removing the specular influences, there was no significant correlation (R² = 0.01, P > 0.05) between the backscatter intensity at a perpendicular angle and LWC. After the removal of the specular influences, a significant correlation emerged (R² = 0.74, P < 0.05). The agreement between measured and TLS-derived LWC demonstrated a significant reduction of RMSE (root mean square error, from 0.008 to 0.003 g/cm²) after correcting for the incidence angle effect. We show that it is possible to use TLS to estimate LWC for selected broadleaved plants with an R² of 0.76 (significance level α = 0.05) at leaf level. Further investigations of leaf surface and internal structure will likely result in improvements of 3D LWC mapping for studying physiology and ecology in vegetation.

  9. Squidpops: A Simple Tool to Crowdsource a Global Map of Marine Predation Intensity

    PubMed Central

    Duffy, J. Emmett; Ziegler, Shelby L.; Campbell, Justin E.; Bippus, Paige M.; Lefcheck, Jonathan S.

    2015-01-01

    We present a simple, standardized assay, the squidpop, for measuring the relative feeding intensity of generalist predators in aquatic systems. The assay consists of a 1.3-cm diameter disk of dried squid mantle tethered to a rod, which is either inserted in the sediment in soft-bottom habitats or secured to existing structure. Each replicate squidpop is scored as present or absent after 1 and 24 hours, and the data for analysis are proportions of replicate units consumed at each time. Tests in several habitats of the temperate southeastern USA (Virginia and North Carolina) and tropical Central America (Belize) confirmed the assay’s utility for measuring variation in predation intensity among habitats, among seasons, and along environmental gradients. In Belize, predation intensity varied strongly among habitats, with reef > seagrass = mangrove > unvegetated bare sand. Quantitative visual surveys confirmed that assayed feeding intensity increased with abundance and species richness of fishes across sites, with fish abundance and richness explaining up to 45% and 70% of the variation in bait loss respectively. In the southeastern USA, predation intensity varied seasonally, being highest during summer and declining in late autumn. Deployments in marsh habitats generally revealed a decline in mean predation intensity from fully marine to tidal freshwater sites. The simplicity, economy, and standardization of the squidpop assay should facilitate engagement of scientists and citizens alike, with the goal of constructing high-resolution maps of how top-down control varies through space and time in aquatic ecosystems, and addressing a broad array of long-standing hypotheses in macro- and community ecology. PMID:26599815

  10. Squidpops: A Simple Tool to Crowdsource a Global Map of Marine Predation Intensity.

    PubMed

    Duffy, J Emmett; Ziegler, Shelby L; Campbell, Justin E; Bippus, Paige M; Lefcheck, Jonathan S

    2015-01-01

    We present a simple, standardized assay, the squidpop, for measuring the relative feeding intensity of generalist predators in aquatic systems. The assay consists of a 1.3-cm diameter disk of dried squid mantle tethered to a rod, which is either inserted in the sediment in soft-bottom habitats or secured to existing structure. Each replicate squidpop is scored as present or absent after 1 and 24 hours, and the data for analysis are proportions of replicate units consumed at each time. Tests in several habitats of the temperate southeastern USA (Virginia and North Carolina) and tropical Central America (Belize) confirmed the assay's utility for measuring variation in predation intensity among habitats, among seasons, and along environmental gradients. In Belize, predation intensity varied strongly among habitats, with reef > seagrass = mangrove > unvegetated bare sand. Quantitative visual surveys confirmed that assayed feeding intensity increased with abundance and species richness of fishes across sites, with fish abundance and richness explaining up to 45% and 70% of the variation in bait loss respectively. In the southeastern USA, predation intensity varied seasonally, being highest during summer and declining in late autumn. Deployments in marsh habitats generally revealed a decline in mean predation intensity from fully marine to tidal freshwater sites. The simplicity, economy, and standardization of the squidpop assay should facilitate engagement of scientists and citizens alike, with the goal of constructing high-resolution maps of how top-down control varies through space and time in aquatic ecosystems, and addressing a broad array of long-standing hypotheses in macro- and community ecology. PMID:26599815

  11. Measuring Galaxy Clustering and the Evolution of [C II] Mean Intensity with Far-IR Line Intensity Mapping during 0.5 < z < 1.5

    NASA Astrophysics Data System (ADS)

    Uzgil, Bade; Aguirre, James E.; Bradford, Charles; Lidz, Adam

    2016-01-01

    Infrared fine-structure emission lines from trace metals are powerful diagnostics of the interstellar medium in galaxies. We explore the possibility of studying the redshifted far-IR fine-structure line emission using the three-dimensional (3D) power spectra obtained with an imaging spectrometer. The intensity mapping approach measures the spatio-spectral fluctuations due to line emission from all galaxies, including those below the individual detection threshold. The technique provides 3D measurements of galaxy clustering and moments of the galaxy luminosity function. Furthermore, the linear portion of the power spectrum can be used to measure the total line emission intensity including all sources through cosmic time with redshift information naturally encoded. As a case study, we consider measurement of [C II] autocorrelation in the 0.5 < z < 1.5 epoch, where interloper lines are minimized, using far-IR/submillimeter balloon-borne and future space-borne instruments with moderate and high sensitivity, respectively. In this context, we compare the intensity mapping approach to blind galaxy surveys based on individual detections. We find that intensity mapping is nearly always the best way to obtain the total line emission because blind, wide-field galaxy surveys lack sufficient depth and deep pencil beams do not observe enough galaxies in the requisite luminosity and redshift bins. Also, intensity mapping is often the most efficient way to measure the power spectrum shape, depending on the details of the luminosity function and the telescope aperture.

  12. Neural maps of interaural time and intensity differences in the optic tectum of the barn owl.

    PubMed

    Olsen, J F; Knudsen, E I; Esterly, S D

    1989-07-01

This report describes the binaural basis of the auditory space map in the optic tectum of the barn owl (Tyto alba). Single units were recorded extracellularly in ketamine-anesthetized birds. Unit tuning for interaural differences in timing and intensity of wideband noise was measured using digitally synthesized sound presented through earphones. Spatial receptive fields of the same units were measured with a free field sound source. Auditory units in the optic tectum are sharply tuned for both the azimuth and the elevation of a free field sound source. To determine the binaural cues that could be responsible for this spatial tuning, we measured in the ear canals the amplitude and phase spectra produced by a free field noise source and calculated from these measurements the interaural differences in time and intensity associated with each of 178 locations throughout the frontal hemisphere. For all frequencies, interaural time differences (ITDs) varied systematically and most strongly with source azimuth. The pattern of variation of interaural intensity differences (IIDs) depended on frequency. For low frequencies (below 4 kHz) IID varied primarily with source azimuth, whereas for high frequencies (above 5 kHz) IID varied primarily with source elevation. Tectal units were tuned for interaural differences in both time and intensity of dichotic stimuli. Changing either parameter away from the best value for the unit decreased the unit's response. The tuning of units to either parameter was sharp: the width of ITD tuning curves, measured at 50% of the maximum response with IID held constant (50% tuning width), ranged from 18 to 82 μs. The 50% tuning widths of IID tuning curves, measured with ITD held constant, ranged from 8 to 37 dB. For most units, tuning for ITD was largely independent of IID, and vice versa.
A few units exhibited systematic shifts of the best ITD with changes in IID (or shifts of the best IID with changes in ITD); for these units, a change in
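The two binaural cues measured above are easy to estimate from a pair of ear-canal recordings. A minimal sketch (not the authors' procedure), using synthetic wideband noise: ITD from the peak lag of the interaural cross-correlation, IID from the interaural level ratio in dB.

```python
import numpy as np

fs = 100_000                            # sample rate (Hz): 10 us lag resolution
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs // 10)   # 100 ms of wideband noise

shift = 5                               # right ear delayed by 5 samples = 50 us
gain = 0.5                              # right ear quieter by a factor of 2
left = noise
right = gain * np.roll(noise, shift)

# ITD: lag of the cross-correlation peak (positive lag = right ear lags)
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (len(left) - 1)
itd_us = 1e6 * lag / fs

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

# IID: interaural level difference in dB (negative = right ear quieter)
iid_db = 20 * np.log10(rms(right) / rms(left))
```

With these parameters the estimator recovers a 50 μs ITD and a -6 dB IID, well within the 18-82 μs and 8-37 dB tuning widths reported above.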

  13. Use of genetic algorithms in the optimization of patch antennas and patch antenna arrays for the observation of the 21cm H-I line

    NASA Astrophysics Data System (ADS)

    Rispoli, Matthew N.

Radio astronomy allows astrophysicists and astronomers to observe parts of the Universe outside the visible spectrum. Within radio astronomy, the 21cm wavelength is a very popular choice for observation. The 21cm emission/absorption line corresponds to the hyperfine (spin-flip) transition of neutral hydrogen and is a very useful wavelength to observe due to the prevalence of neutral hydrogen gas throughout the Universe. However, because wavelengths in the radio spectrum are physically large, radio telescopes tend to be very large and therefore very expensive. This thesis uses evolutionary optimization algorithms to optimize much cheaper and more rugged micro-patch antennas in a phased array. The evolutionary algorithm optimizes the geometry of the micro-patch antenna and the 2-D phased-array parameters that will culminate in a single radio telescope. The micro-patch antenna parameters to be optimized are the geometry of the top metal patch, dielectric thickness, dielectric constant, and feed point. The array-factor parameters that are optimized are the relative weights of each array element and their relative periodic spacing.
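As a toy illustration of the approach (not the thesis's actual optimizer or antenna model), the sketch below evolves a patch length and substrate permittivity toward the 1420.4 MHz H I line using the simplified cavity-model estimate f ≈ c/(2L√ε_r); real designs add fringing-field corrections and optimize the feed point as well. All bounds and GA settings here are assumptions.

```python
import random

C = 299_792_458.0        # speed of light, m/s
TARGET_HZ = 1.4204e9     # 21 cm neutral-hydrogen line

def resonant_freq(length_m, eps_r):
    """Simplified cavity-model resonance f = c / (2 L sqrt(eps_r)).

    Real patch designs include fringing-field and effective-permittivity
    corrections; this toy model only demonstrates the optimizer.
    """
    return C / (2.0 * length_m * eps_r ** 0.5)

def fitness(ind):
    # Closer to the 21 cm line = fitter (negated absolute error)
    return -abs(resonant_freq(*ind) - TARGET_HZ)

def evolve(pop_size=50, generations=200, seed=1):
    rng = random.Random(seed)
    # genome: (patch length in m, relative permittivity); bounds are arbitrary
    pop = [(rng.uniform(0.02, 0.12), rng.uniform(1.0, 12.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]        # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)     # uniform crossover of two parents
            length = rng.choice((a[0], b[0])) * rng.gauss(1.0, 0.02)
            eps = rng.choice((a[1], b[1])) * rng.gauss(1.0, 0.02)
            children.append((min(max(length, 0.02), 0.12),
                             min(max(eps, 1.0), 12.0)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # best (length, permittivity) found
```

The same select-crossover-mutate loop generalizes to the full genome (patch geometry, feed point, element weights and spacing) once the fitness function is replaced by a proper electromagnetic simulation.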

  14. Expected constraints on models of the epoch of reionization with the variance and skewness in redshifted 21 cm-line fluctuations

    NASA Astrophysics Data System (ADS)

    Kubota, Kenji; Yoshiura, Shintaro; Shimabukuro, Hayato; Takahashi, Keitaro

    2016-06-01

    The redshifted 21 cm-line signal from neutral hydrogen in the intergalactic medium (IGM) gives a direct probe of the epoch of reionization (EoR). In this paper, we investigate the potential of the variance and skewness of the probability distribution function of the 21 cm brightness temperature for constraining EoR models. These statistical quantities are simple, easy to calculate from the observed visibility, and thus suitable for the early exploration of the EoR with current telescopes such as the Murchison Widefield Array (MWA) and LOw Frequency ARray (LOFAR). We show, by performing Fisher analysis, that the variance and skewness at z = 7-9 are complementary to each other to constrain the EoR model parameters such as the minimum virial temperature of halos which host luminous objects, ionizing efficiency, and mean free path of ionizing photons in the IGM. Quantitatively, the constraining power highly depends on the quality of the foreground subtraction and calibration. We give a best case estimate of the constraints on the parameters, neglecting the systematics other than the thermal noise.
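Both statistics are cheap to compute from a map or simulated cube. A minimal sketch of the estimators (not the authors' pipeline), with a Gaussian test field standing in for the 21 cm brightness temperature; the skewness of a Gaussian field is consistent with zero, which is exactly why it is a useful non-Gaussianity probe during reionization.

```python
import numpy as np

def variance_and_skewness(tb):
    """Sample variance and moment skewness <(T-<T>)^3>/sigma^3 of a field."""
    tb = np.asarray(tb, dtype=float).ravel()
    delta = tb - tb.mean()
    var = np.mean(delta ** 2)
    skew = np.mean(delta ** 3) / var ** 1.5
    return var, skew

# Gaussian field: skewness ~ 0.  Reionization drives the 21 cm PDF
# non-Gaussian, so the measured skewness tracks the EoR model parameters.
rng = np.random.default_rng(42)
gauss = rng.standard_normal(100_000)
var_g, skew_g = variance_and_skewness(gauss)
```

A strongly skewed field (e.g. a lognormal transform of the same samples) returns a clearly positive skewness, illustrating the discriminating power of the third moment.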

  15. A GREEN BANK TELESCOPE SURVEY FOR H I 21 cm ABSORPTION IN THE DISKS AND HALOS OF LOW-REDSHIFT GALAXIES

    SciTech Connect

    Borthakur, Sanchayeeta; Tripp, Todd M.; Yun, Min S.; Meiring, Joseph D.; Bowen, David V.; York, Donald G.; Momjian, Emmanuel

    2011-01-20

We present an H I 21 cm absorption survey with the Green Bank Telescope (GBT) of galaxy-quasar pairs selected by combining galaxy data from the Sloan Digital Sky Survey (SDSS) and radio sources from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. Our sample consists of 23 sight lines through 15 low-redshift foreground galaxy-background quasar pairs with impact parameters ranging from 1.7 kpc up to 86.7 kpc. We detected one absorber in the GBT survey from the foreground dwarf galaxy, GQ1042+0747, at an impact parameter of 1.7 kpc and another possible absorber in our follow-up Very Large Array (VLA) imaging of the nearby foreground galaxy UGC 7408. The line widths of both absorbers are narrow (FWHM of 3.6 and 4.8 km s⁻¹). The absorbers have sub-damped Lyα column densities, and most likely originate in the disk gas of the foreground galaxies. We also detected H I emission from three foreground galaxies including UGC 7408. Although our sample contains both blue and red galaxies, the two H I absorbers as well as the H I emissions are associated with blue galaxies. We discuss the physical conditions in the 21 cm absorbers and some drawbacks of the large GBT beam for this type of survey.

  16. Expected constraints on models of the epoch of reionization with the variance and skewness in redshifted 21 cm-line fluctuations

    NASA Astrophysics Data System (ADS)

    Kubota, Kenji; Yoshiura, Shintaro; Shimabukuro, Hayato; Takahashi, Keitaro

    2016-08-01

    The redshifted 21 cm-line signal from neutral hydrogen in the intergalactic medium (IGM) gives a direct probe of the epoch of reionization (EoR). In this paper, we investigate the potential of the variance and skewness of the probability distribution function of the 21 cm brightness temperature for constraining EoR models. These statistical quantities are simple, easy to calculate from the observed visibility, and thus suitable for the early exploration of the EoR with current telescopes such as the Murchison Widefield Array (MWA) and LOw Frequency ARray (LOFAR). We show, by performing Fisher analysis, that the variance and skewness at z = 7-9 are complementary to each other to constrain the EoR model parameters such as the minimum virial temperature of halos which host luminous objects, ionizing efficiency, and mean free path of ionizing photons in the IGM. Quantitatively, the constraining power highly depends on the quality of the foreground subtraction and calibration. We give a best case estimate of the constraints on the parameters, neglecting the systematics other than the thermal noise.

  17. Predictions for BAO distance estimates from the cross-correlation of the Lyman-α forest and redshifted 21-cm emission

    SciTech Connect

Sarkar, Tapomoy Guha; Bharadwaj, Somnath

    2013-08-01

We investigate the possibility of using the cross-correlation of the Lyman-α forest and redshifted 21-cm emission to detect the baryon acoustic oscillation (BAO). The standard Fisher matrix formalism is used to determine the accuracy with which it will be possible to measure cosmological distances using this signal. Earlier predictions [1] indicate that it will be possible to measure the dilation factor D_V with 1.9% accuracy at z = 2.5 from the BOSS Lyman-α forest auto-correlation. In this paper we investigate whether it is possible to improve the accuracy using the cross-correlation. We use a simple parametrization of the Lyman-α forest survey which very loosely matches some properties of BOSS. For the redshifted 21-cm observations we consider a hypothetical radio interferometric array layout. It is assumed that the observations span z = 2 to 3 and cover the 10,000 deg² footprint of BOSS. We find that it is possible to significantly increase the accuracy of the distance estimates by considering the cross-correlation signal.
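For reference, the dilation factor combines the transverse and line-of-sight BAO scales, D_V(z) = [(1+z)² D_A²(z) cz/H(z)]^(1/3). A sketch of its evaluation for flat ΛCDM with illustrative parameters (not the paper's fiducial cosmology):

```python
import numpy as np

def dilation_factor(z, h=0.7, om=0.3, n=10_000):
    """BAO dilation factor D_V(z) = [(1+z)^2 D_A^2 c z / H(z)]^(1/3), flat LCDM.

    h and om are illustrative values, not the paper's fiducial cosmology.
    """
    c = 299_792.458                      # km/s
    H0 = 100.0 * h                       # km/s/Mpc
    zz = np.linspace(0.0, z, n)
    Hz = H0 * np.sqrt(om * (1 + zz) ** 3 + (1 - om))
    integrand = c / Hz
    # comoving distance by trapezoidal rule (Mpc)
    Dc = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz)))
    Da = Dc / (1 + z)                    # angular diameter distance
    return ((1 + z) ** 2 * Da ** 2 * c * z / Hz[-1]) ** (1.0 / 3.0)

dv = dilation_factor(2.5)                # D_V at z = 2.5, in Mpc
```

A 1.9% measurement of this quantity at z = 2.5 is the benchmark the cross-correlation forecast is trying to beat.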

  18. Intensity mapping cross-correlations: connecting the largest scales to galaxy evolution

    NASA Astrophysics Data System (ADS)

    Wolz, L.; Tonini, C.; Blake, C.; Wyithe, J. S. B.

    2016-05-01

Intensity mapping of neutral hydrogen (H I) is a new observational tool for efficiently mapping the large-scale structure over wide redshift ranges. The cross-correlation of intensity maps with galaxy surveys is a robust measure of the cosmological power spectrum and the H I content of galaxies, and it diminishes systematics caused by instrumental effects and foreground removal. We examine the cross-correlation signature at redshift 0.9 using a semi-analytical galaxy formation model in order to model the H I gas of galaxies as well as their optical magnitudes. We determine the scale-dependent clustering of the cross-correlation power for different types of galaxies determined by their colours, which act as a proxy for their star formation activity. We find that the cross-correlation coefficient with H I density for red quiescent galaxies falls off more quickly on smaller scales k > 0.2 h Mpc⁻¹ than for blue star-forming galaxies. Additionally, we create a mock catalogue of highly star-forming galaxies to mimic the WiggleZ Dark Energy Survey, and use this to predict existing and future measurements using data from the Green Bank Telescope and the Parkes telescope. We find that the cross-power of highly star-forming galaxies shows higher clustering on small scales than any other galaxy type and that this significantly alters the power spectrum shape on scales k > 0.2 h Mpc⁻¹. We show that the cross-correlation coefficient is not negligible when interpreting the cosmological cross-power spectrum and additionally contains information about the H I content of the optically selected galaxies.
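The cross-correlation coefficient discussed here is r(k) = P_HI,gal(k)/√(P_HI(k) P_gal(k)). A toy 1D estimator (not the paper's pipeline), with two noisy tracers of a common density field; per mode, |r| ≤ 1 by construction:

```python
import numpy as np

def cross_corr_coefficient(a, b):
    """Per-mode r(k) = P_ab / sqrt(P_aa P_bb) for two 1D fields.

    Toy estimator: a real analysis bins modes in k and subtracts noise biases.
    """
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    p_ab = (fa * np.conj(fb)).real
    p_aa = np.abs(fa) ** 2
    p_bb = np.abs(fb) ** 2
    return p_ab / np.sqrt(p_aa * p_bb)

# Two noisy tracers of one underlying density field
rng = np.random.default_rng(3)
common = rng.standard_normal(4096)                  # shared structure
hi_map = common + 0.5 * rng.standard_normal(4096)   # 'H I' tracer + noise
gal_map = common + 0.5 * rng.standard_normal(4096)  # 'galaxy' tracer + noise
r = cross_corr_coefficient(hi_map, gal_map)
```

Uncorrelated noise and stochasticity pull r below unity, which is the effect the paper quantifies as a function of galaxy type and scale.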

  19. Measuring galaxy clustering and the evolution of [C II] mean intensity with far-IR line intensity mapping during 0.5 < z < 1.5

    SciTech Connect

    Uzgil, B. D.; Aguirre, J. E.; Lidz, A.; Bradford, C. M.

    2014-10-01

    Infrared fine-structure emission lines from trace metals are powerful diagnostics of the interstellar medium in galaxies. We explore the possibility of studying the redshifted far-IR fine-structure line emission using the three-dimensional (3D) power spectra obtained with an imaging spectrometer. The intensity mapping approach measures the spatio-spectral fluctuations due to line emission from all galaxies, including those below the individual detection threshold. The technique provides 3D measurements of galaxy clustering and moments of the galaxy luminosity function. Furthermore, the linear portion of the power spectrum can be used to measure the total line emission intensity including all sources through cosmic time with redshift information naturally encoded. Total line emission, when compared to the total star formation activity and/or other line intensities, reveals evolution of the interstellar conditions of galaxies in aggregate. As a case study, we consider measurement of [C II] autocorrelation in the 0.5 < z < 1.5 epoch, where interloper lines are minimized, using far-IR/submillimeter balloon-borne and future space-borne instruments with moderate and high sensitivity, respectively. In this context, we compare the intensity mapping approach to blind galaxy surveys based on individual detections. We find that intensity mapping is nearly always the best way to obtain the total line emission because blind, wide-field galaxy surveys lack sufficient depth and deep pencil beams do not observe enough galaxies in the requisite luminosity and redshift bins. Also, intensity mapping is often the most efficient way to measure the power spectrum shape, depending on the details of the luminosity function and the telescope aperture.

  20. Dynamics of Hollow Atom Formation in Intense X-Ray Pulses Probed by Partial Covariance Mapping

    NASA Astrophysics Data System (ADS)

    Frasinski, L. J.; Zhaunerchyk, V.; Mucke, M.; Squibb, R. J.; Siano, M.; Eland, J. H. D.; Linusson, P.; v. d. Meulen, P.; Salén, P.; Thomas, R. D.; Larsson, M.; Foucar, L.; Ullrich, J.; Motomura, K.; Mondal, S.; Ueda, K.; Osipov, T.; Fang, L.; Murphy, B. F.; Berrah, N.; Bostedt, C.; Bozek, J. D.; Schorb, S.; Messerschmidt, M.; Glownia, J. M.; Cryan, J. P.; Coffee, R. N.; Takahashi, O.; Wada, S.; Piancastelli, M. N.; Richter, R.; Prince, K. C.; Feifel, R.

    2013-08-01

    When exposed to ultraintense x-radiation sources such as free electron lasers (FELs) the innermost electronic shell can efficiently be emptied, creating a transient hollow atom or molecule. Understanding the femtosecond dynamics of such systems is fundamental to achieving atomic resolution in flash diffraction imaging of noncrystallized complex biological samples. We demonstrate the capacity of a correlation method called “partial covariance mapping” to probe the electron dynamics of neon atoms exposed to intense 8 fs pulses of 1062 eV photons. A complete picture of ionization processes competing in hollow atom formation and decay is visualized with unprecedented ease and the map reveals hitherto unobserved nonlinear sequences of photoionization and Auger events. The technique is particularly well suited to the high counting rate inherent in FEL experiments.
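Partial covariance mapping subtracts the correlations induced by a fluctuating common-mode parameter (here the FEL pulse energy) from the plain covariance map: pcov(x, y; i) = cov(x, y) − cov(x, i) cov(i, y)/var(i). A one-element sketch with synthetic shot data; the variable names are ours, not the authors':

```python
import numpy as np

def partial_covariance(x, y, i):
    """pcov(x, y; i) = cov(x, y) - cov(x, i) cov(i, y) / var(i).

    Removes the part of the x-y correlation driven by the common-mode
    parameter i (e.g. shot-to-shot FEL pulse energy).
    """
    m = np.cov(np.vstack([x, y, i]))
    return m[0, 1] - m[0, 2] * m[2, 1] / m[2, 2]

# Synthetic shots: two ion yields correlated only through pulse intensity
rng = np.random.default_rng(7)
pulse = rng.standard_normal(50_000)
yield_a = 2.0 * pulse + rng.standard_normal(50_000)
yield_b = -1.0 * pulse + rng.standard_normal(50_000)

raw = np.cov(yield_a, yield_b)[0, 1]                     # dominated by jitter
corrected = partial_covariance(yield_a, yield_b, pulse)  # ~ 0 once i is removed
```

In the experiment the same correction is applied pixel-by-pixel over electron spectra, so only correlations from genuine ionization sequences survive in the map.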

  1. Comparing USGS national seismic hazard maps with internet-based macroseismic intensity observations

    NASA Astrophysics Data System (ADS)

    Mak, Sum; Schorlemmer, Danijel

    2016-04-01

Verifying a nationwide seismic hazard assessment using data collected after the assessment has been made (i.e., prospective data) is a direct consistency check of the assessment. We directly compared the predicted rate of ground-motion exceedance from the four available versions of the USGS national seismic hazard map (NSHMP; 1996, 2002, 2008, 2014) with the rate actually observed during 2000-2013. The data were prospective to the two earlier versions of the NSHMP. We used two sets of somewhat independent data, namely 1) the USGS "Did You Feel It?" (DYFI) intensity reports, and 2) instrumental ground-motion records extracted from ShakeMap stations. Although both are observed data, they come with different degrees of accuracy. Our results indicated that for California, the predicted and observed hazards were very comparable. The two sets of data gave consistent results, implying robustness. The consistency also encourages the use of DYFI data for hazard verification in the Central and Eastern US (CEUS), where instrumental records are lacking. The results showed that the observed ground-motion exceedance was also consistent with the predicted rate in the CEUS. The primary value of this study is to demonstrate the usefulness of DYFI data, originally designed for community communication rather than scientific analysis, for the purpose of hazard verification.

  2. Probing high-redshift galaxies with Lyα intensity mapping

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Ferrara, A.

    2016-01-01

We present a study of the cosmological Lyα emission signal at z > 4. Our goal is to predict the power spectrum of the spatial fluctuations that could be observed by an intensity mapping survey. The model uses the latest data from the Hubble Space Telescope (HST) legacy fields and the abundance matching technique to associate UV emission and dust properties with the haloes, computing the emission from the interstellar medium (ISM) of galaxies and the intergalactic medium (IGM), including the effects of reionization, self-consistently. The diffuse IGM emission is 1.3 (2.0) times more intense than the ISM emission at z = 4 (7); both components are fair tracers of the star-forming galaxy distribution. However, the power spectrum is dominated by ISM emission on small scales (k > 0.01 h Mpc⁻¹), with shot noise being significant only above k = 1 h Mpc⁻¹. At very large scales (k < 0.01 h Mpc⁻¹) diffuse IGM emission becomes important. The comoving Lyα luminosity density from the IGM and galaxies, ρ̇_Lyα^IGM = 8.73 (6.51) × 10⁴⁰ erg s⁻¹ Mpc⁻³ and ρ̇_Lyα^ISM = 6.62 (3.21) × 10⁴⁰ erg s⁻¹ Mpc⁻³ at z = 4 (7), is consistent with recent Sloan Digital Sky Survey determinations. We predict a power k³P_Lyα(k, z)/2π² = 9.76 × 10⁻⁴ (2.09 × 10⁻⁵) nW² m⁻⁴ sr⁻² at z = 4 (7) for k = 0.1 h Mpc⁻¹.

  3. The use of multibeam backscatter intensity data as a tool for mapping glacial deposits in the Central North Sea, UK

    NASA Astrophysics Data System (ADS)

    Stewart, Heather; Bradwell, Tom

    2014-05-01

Multibeam backscatter intensity data acquired offshore eastern Scotland and north-eastern England have been used to map drumlin fields, large arcuate moraine ridges, smaller scale moraine ridges, and incised channels on the sea floor. The study area includes the catchments of the previously proposed, but only partly mapped, Strathmore, Forth-Tay, and Tweed palaeo-ice streams. The ice sheet glacial landsystem is extremely well preserved on the sea bed, and comprehensive mapping of the seafloor geomorphology has been undertaken. The authors demonstrate the value of utilising not only digital terrain models (both NEXTMap and multibeam bathymetry derived) in undertaking geomorphological mapping, but also of examining the backscatter intensity data that are often overlooked. Backscatter intensity maps were generated by the British Geological Survey using FM Geocoder. FM Geocoder corrects the backscatter intensities registered by the multibeam echosounder system, and then geometrically corrects and positions each acoustic sample in a backscatter mosaic. The backscatter intensity data were gridded at the best resolution per dataset (between 2 and 5 m). The strength of the backscattering depends upon sediment type, grain size, survey conditions, sea-bed roughness, compaction and slope. A combination of manual interpretation and semi-automated classification of the backscatter intensity data (a predictive method for mapping variations in surficial sea-bed sediments) has been undertaken in the study area. The combination of the two methodologies has produced a robust glacial geomorphological map for the study area. Four separate drumlin fields have been mapped in the study area, indicative of fast-flowing and persistent ice-sheet flow configurations. A number of individual drumlins located outside the fields are also identified. The drumlins show as areas of high backscatter intensity compared to the surrounding sea bed, indicating the drumlins comprise mixed sediments of

  4. Predicting the intensity mapping signal for multi-J CO lines

    NASA Astrophysics Data System (ADS)

    Mashian, Natalie; Sternberg, Amiel; Loeb, Abraham

    2015-11-01

We present a novel approach to estimating the intensity mapping signal of any CO rotational line emitted during the Epoch of Reionization (EoR). Our approach is based on large velocity gradient (LVG) modeling, a radiative transfer modeling technique that generates the full CO spectral line energy distribution (SLED) for a specified gas kinetic temperature, volume density, velocity gradient, molecular abundance, and column density. These parameters, which drive the physics of CO transitions and ultimately dictate the shape and amplitude of the CO SLED, can be linked to the global properties of the host galaxy, mainly the star formation rate (SFR) and the SFR surface density. By further employing an empirically derived SFR-M relation for high redshift galaxies, we can express the LVG parameters, and thus the specific intensity of any CO rotational transition, as functions of the host halo mass M and redshift z. Integrating over the range of halo masses expected to host CO-luminous galaxies, we predict a mean CO(1-0) brightness temperature ranging from ~0.6 μK at z = 6 to ~0.03 μK at z = 10, with brightness temperature fluctuations of Δ²_CO ~ 0.1 and 0.005 μK, respectively, at k = 0.1 Mpc⁻¹. In this model, the CO emission signal remains strong for higher rotational levels at z = 6, with ⟨T_CO⟩ ~ 0.3 and 0.05 μK for the CO J = 6→5 and J = 10→9 transitions, respectively. Including the effects of CO photodissociation in these molecular clouds, especially at low metallicities, results in an overall reduction in the amplitude of the CO signal, with the low- and high-J lines weakening by 2-20% and 10-45%, respectively, over the redshift range 4 < z < 10.

  5. Statistical mapping of maize bundle intensity at the stem scale using spatial normalisation of replicated images.

    PubMed

    Legland, David; Devaux, Marie-Françoise; Guillon, Fabienne

    2014-01-01

    The cellular structure of plant tissues is a key parameter for determining their properties. While the morphology of cells can easily be described, few studies focus on the spatial distribution of different types of tissues within an organ. As plants have various shapes and sizes, the integration of several individuals for statistical analysis of tissues distribution is a difficult problem. The aim of this study is to propose a method that quantifies the average spatial organisation of vascular bundles within maize stems, by integrating information from replicated images. In order to compare observations made on stems of different sizes and shapes, a spatial normalisation strategy was used. A model of average stem contour was computed from the digitisation of several stem slab images. Point patterns obtained from individual stem slices were projected onto the average stem to normalise them. Group-wise analysis of the spatial distribution of vascular bundles was applied on normalised data through the construction of average intensity maps. A quantitative description of average bundle organisation was obtained, via a 3D model of bundle distribution within a typical maize internode. The proposed method is generic and could easily be extended to other plant organs or organisms. PMID:24622152
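A minimal sketch of the normalise-then-average idea (much cruder than the paper's contour model: here each stem is simply treated as a circle of known centre and radius), using synthetic bundle coordinates:

```python
import numpy as np

def normalise_to_unit_disk(points, centre, radius):
    """Map bundle coordinates from one slice onto the unit disk.

    Crude stand-in for the paper's contour-based normalisation.
    """
    return (np.asarray(points, dtype=float)
            - np.asarray(centre, dtype=float)) / radius

def average_intensity_map(slices, bins=20):
    """Average bundle count map over replicated, normalised slices."""
    grid = np.zeros((bins, bins))
    for pts, centre, radius in slices:
        u = normalise_to_unit_disk(pts, centre, radius)
        h, _, _ = np.histogram2d(u[:, 0], u[:, 1], bins=bins,
                                 range=[[-1.0, 1.0], [-1.0, 1.0]])
        grid += h
    return grid / len(slices)

# Two synthetic slices of different stem sizes, bundles biased to the periphery
rng = np.random.default_rng(5)
def fake_slice(radius, n=200):
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    rr = radius * np.sqrt(rng.uniform(0.5, 1.0, n))
    pts = np.column_stack([rr * np.cos(theta), rr * np.sin(theta)])
    return pts, (0.0, 0.0), radius

amap = average_intensity_map([fake_slice(10.0), fake_slice(25.0)])
```

Because every slice is projected onto the same reference shape before averaging, stems of different sizes contribute comparably to the final intensity map, which is the core of the group-wise analysis described above.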

  6. Statistical Mapping of Maize Bundle Intensity at the Stem Scale Using Spatial Normalisation of Replicated Images

    PubMed Central

    Legland, David; Devaux, Marie-Françoise; Guillon, Fabienne

    2014-01-01

    The cellular structure of plant tissues is a key parameter for determining their properties. While the morphology of cells can easily be described, few studies focus on the spatial distribution of different types of tissues within an organ. As plants have various shapes and sizes, the integration of several individuals for statistical analysis of tissues distribution is a difficult problem. The aim of this study is to propose a method that quantifies the average spatial organisation of vascular bundles within maize stems, by integrating information from replicated images. In order to compare observations made on stems of different sizes and shapes, a spatial normalisation strategy was used. A model of average stem contour was computed from the digitisation of several stem slab images. Point patterns obtained from individual stem slices were projected onto the average stem to normalise them. Group-wise analysis of the spatial distribution of vascular bundles was applied on normalised data through the construction of average intensity maps. A quantitative description of average bundle organisation was obtained, via a 3D model of bundle distribution within a typical maize internode. The proposed method is generic and could easily be extended to other plant organs or organisms. PMID:24622152

  7. Clustering of quintessence on horizon scales and its imprint on HI intensity mapping

    SciTech Connect

Duniya, Didam G.A.; Bertacca, Daniele; Maartens, Roy

    2013-10-01

    Quintessence can cluster only on horizon scales. What is the effect on the observed matter distribution? To answer this, we need a relativistic approach that goes beyond the standard Newtonian calculation and deals properly with large scales. Such an approach has recently been developed for the case when dark energy is vacuum energy, which does not cluster at all. We extend this relativistic analysis to deal with dynamical dark energy. Using three quintessence potentials as examples, we compute the angular power spectrum for the case of an HI intensity map survey. Compared to the concordance model with the same small-scale power at z = 0, quintessence boosts the angular power by up to ∼ 15% at high redshifts, while power in the two models converges at low redshifts. The difference is mainly due to the background evolution, driven mostly by the normalization of the power spectrum today. The dark energy perturbations make only a small contribution on the largest scales, and a negligible contribution on smaller scales. Ironically, the dark energy perturbations remove the false boost of large-scale power that arises if we impose the (unphysical) assumption that the dark energy is smooth.

  8. Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2016-08-01

We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ~ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ~ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ~10⁻¹⁷ W/√Hz. This proceedings contribution presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.

  9. Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2015-11-01

We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ~ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ~ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ~10⁻¹⁷ W/√Hz. This proceedings contribution presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.

  10. Global statistical maps of extreme-event magnetic observatory 1 min first differences in horizontal intensity

    NASA Astrophysics Data System (ADS)

    Love, Jeffrey J.; Coïsson, Pierdavide; Pulkkinen, Antti

    2016-05-01

Analysis is made of the long-term statistics of three different measures of ground-level, storm-time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square S of 10 consecutive 1 min differences, and the ramp change R over 10 min. Geomagnetic-latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic latitude 55°, we estimate once-per-century ΔBh, S, and R exceedances, with site-to-site, proportional, 1 standard deviation ranges [1σ, lower and upper], to be, respectively, 1000 [690, 1450]; 500 [350, 720]; and 200 [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1σ ranges to be 200 [140, 290]; 100 [70, 140]; and 40 [30, 60] nT/min.
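The three measures have simple definitions that can be sketched directly from a 1 min time series. The series below is synthetic, and the ramp R is taken here as the absolute 10 min change expressed in nT/min:

```python
import numpy as np

def activity_measures(bh):
    """Three activity measures from a 1 min series of horizontal intensity (nT):

      dbh : instantaneous 1 min first differences (nT/min)
      s   : RMS of 10 consecutive 1 min differences (nT/min)
      r   : ramp change over 10 min, expressed per minute (nT/min)
    """
    bh = np.asarray(bh, dtype=float)
    dbh = np.diff(bh)
    s = np.sqrt([np.mean(dbh[i:i + 10] ** 2) for i in range(len(dbh) - 9)])
    r = np.abs(bh[10:] - bh[:-10]) / 10.0
    return dbh, s, r

# Synthetic storm: a smooth ~600 nT depression over two hours, plus jitter
rng = np.random.default_rng(1)
t = np.arange(120.0)                      # minutes
bh = (20_000.0 - 600.0 * np.exp(-((t - 60.0) / 15.0) ** 2)
      + rng.normal(0.0, 2.0, t.size))
dbh, s, r = activity_measures(bh)
```

Since R over a window equals the absolute mean of the 10 differences in that window, S is always at least as large as R, consistent with the ordering of the exceedance thresholds quoted above.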

  11. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, Alan M.; Edwards, William R.

    1983-01-01

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  12. Detection of magnetic field intensity gradient by homing pigeons (Columba livia) in a novel "virtual magnetic map" conditioning paradigm.

    PubMed

    Mora, Cordula V; Bingman, Verner P

    2013-01-01

    It has long been thought that birds may use the Earth's magnetic field not only as a compass for direction finding, but that it could also provide spatial information for position determination analogous to a map during navigation. Since magnetic field intensity varies systematically with latitude and theoretically could also provide longitudinal information during position determination, birds using a magnetic map should be able to discriminate magnetic field intensity cues in the laboratory. Here we demonstrate a novel behavioural paradigm requiring homing pigeons to identify the direction of a magnetic field intensity gradient in a "virtual magnetic map" during a spatial conditioning task. Not only were the pigeons able to detect the direction of the intensity gradient, but they were even able to discriminate upward versus downward movement on the gradient by differentiating between increasing and decreasing intensity values. Furthermore, the pigeons typically spent more than half of the 15 second sampling period in front of the feeder associated with the rewarded gradient direction indicating that they required only several seconds to make the correct choice. Our results therefore demonstrate for the first time that pigeons not only can detect the presence and absence of magnetic anomalies, as previous studies had shown, but are even able to detect and respond to changes in magnetic field intensity alone, including the directionality of such changes, in the context of spatial orientation within an experimental arena. This opens up the possibility for systematic and detailed studies of how pigeons could use magnetic intensity cues during position determination as well as how intensity is perceived and where it is processed in the brain. PMID:24039812

  13. Five Years of Citizen Science: Macroseismic Data Collection with the USGS Community Internet Intensity Maps (``Did You Feel It?'')

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Dewey, J. W.; Hopper, M.; Tarr, A.

    2003-12-01

The U.S. Geological Survey Community Internet Intensity Map (CIIM) is an automatic Web-based system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. The data collection procedure is fundamentally Citizen Science. The vast majority of data are contributed by non-specialists, describing their own experiences of earthquakes. Internet data contributed by the public have profoundly changed the approach, coverage and usefulness of intensity observation in the U.S. We now typically receive thousands of individual questionnaire responses for widely felt earthquakes. After five years, these total over 350,000 individual entries nationwide, including entries from all 50 States, the District of Columbia, and the territories of Guam, the Virgin Islands and Puerto Rico. The widespread access and use of online felt reports have added unanticipated but welcome capacities to USGS earthquake reporting. We can more easily validate earthquake occurrence in poorly instrumented regions, identify and locate sonic booms, and readily gauge the societal importance of earthquakes by the nature of the response. In some parts of the U.S., CIIM provides constraints on earthquake magnitudes and focal depths beyond those provided by instrumental data, and the data are robust enough to test regionalized models of ground-motion attenuation. CIIM evokes an enthusiastic response from members of the public who contribute to it; it clearly provides an important opportunity for public education and outreach. In this paper we provide background on the advantages and limitations of online data collection and explore recent developments and improvements to the CIIM system, including improved quality assurance using a relational database and greater data availability for scientific and sociological studies.
We also describe a number of post-processing tools and applications that make use

  14. Pressure pain mapping of the wrist extensors after repeated eccentric exercise at high intensity.

    PubMed

    Delfa de la Morena, José M; Samani, Afshin; Fernández-Carnero, Josué; Hansen, Ernst A; Madeleine, Pascal

    2013-11-01

The purpose of this study was to investigate adaptation mechanisms after 2 test rounds consisting of eccentric exercise, using pressure pain imaging of the wrist extensors. Pressure pain thresholds (PPTs) were assessed over 12 points forming a 3 × 4 matrix over the dominant elbow in 12 participants. From the PPT assessments, pressure pain maps were computed. Delayed onset muscle soreness was induced in an initial test round of high-intensity eccentric exercise. The second test round, performed 7 days later, was intended to elicit adaptation. The PPTs were assessed before, immediately after, and 24 hours after the 2 test rounds of eccentric exercise. For the first test round, the mean PPT was significantly lower 24 hours after exercise compared with before exercise (389.5 ± 64.1 vs. 500.5 ± 66.4 kPa, respectively; p = 0.02). For the second test round, the PPT was similar before and 24 hours after (447.7 ± 51.3 vs. 458.0 ± 73.1 kPa, respectively; p = 1.0). This study demonstrated adaptive effects in the wrist extensors monitored by a pain imaging technique in healthy untrained humans. A lack of hyperalgesia, i.e., no decrease in PPT, underlined adaptation after the second test round of eccentric exercise performed 7 days after the initial test round. The present findings showed for the first time that repeated eccentric exercise performed twice over 2 weeks protects the wrist extensor muscles from developing exacerbated pressure pain sensitivity. Thus, the addition of eccentric components to training regimens should be considered to induce protective adaptation. PMID:23442281

  15. 3.5 keV x rays as the "21 cm line" of dark atoms, and a link to light sterile neutrinos

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Liu, Zuowei; Moore, Guy D.; Farzan, Yasaman; Xue, Wei

    2014-06-01

    The recently discovered 3.5 keV x-ray line from extragalactic sources may be evidence of dark matter scatterings or decays. We show that dark atoms can be the source of the emission, through their hyperfine transitions, which would be the analog of 21 cm radiation from a dark sector. We identify two families of dark atom models that match the x-ray observations and are consistent with other constraints. In the first, the hyperfine excited state is long lived compared to the age of the Universe, and the dark atom mass is relatively unconstrained; dark atoms could be strongly self-interacting in this case. In the second, the excited state is short lived, and viable models are parametrized by the value of the dark proton-to-electron mass ratio R: for R =102-104, the dark atom mass is predicted to be in the range 350-1300 GeV, with fine structure constant α'≅0.1-0.6. In either class of models, the dark photon is expected to be massive with mγ'˜1 MeV and decay into e+e-. Evidence for the model could come from direct detection of the dark atoms. In a natural extension of this framework, the dark photon could decay predominantly into invisible particles, for example, ˜0.5 eV sterile neutrinos, explaining the extra radiation degree of freedom recently suggested by data from BICEP2, while remaining compatible with big bang nucleosynthesis.

  16. Mapping the spatial patterns of field traffic and traffic intensity to predict soil compaction risks at the field scale

    NASA Astrophysics Data System (ADS)

    Duttmann, Rainer; Kuhwald, Michael; Nolde, Michael

    2015-04-01

Soil compaction is one of the main present-day threats to cropland soils. In contrast to easily visible phenomena of soil degradation, however, soil compaction is obscured by other signals such as reduced crop yield, delayed crop growth, and the ponding of water, which makes it difficult to recognize and locate areas impacted by soil compaction directly. Although it is known that trafficking intensity is a key factor for soil compaction, only modest work has so far addressed the mapping of the spatially distributed patterns of field traffic and the visual representation of the loads and pressures applied by farm traffic within single fields. A promising method for the spatial detection and mapping of soil compaction risks in individual fields is to process dGPS data collected from vehicle-mounted GPS receivers and to compare the soil stress induced by farm machinery to the load bearing capacity derived from given soil map data. The application of position-based machinery data enables the mapping of vehicle movements over time as well as the assessment of trafficking intensity. It also facilitates the calculation of the trafficked area and the modeling of the loads and pressures applied to the soil by individual vehicles. This paper focuses on the modeling and mapping of the spatial patterns of traffic intensity in silage maize fields during harvest, considering the spatio-temporal changes in wheel load and ground contact pressure along the loading sections. In addition to scenarios calculated for varying mechanical soil strengths, an example of visualizing the three-dimensional stress propagation inside the soil is given, using the Visualization Toolkit (VTK) to construct 2D or 3D maps that support decision making for sustainable field traffic management.
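The ground contact pressure mentioned above can be approximated to first order as wheel load over tire footprint area; a minimal sketch (a deliberate simplification of the stress models used in the study, with illustrative function and parameter names):

```python
def ground_contact_pressure(wheel_load_kn, tire_width_m, contact_length_m):
    """First-order estimate of mean ground contact pressure (kPa) as
    wheel load divided by a rectangular tire footprint.  This is a
    simplification for illustration; the study models the propagation
    of stress inside the soil in far more detail."""
    area_m2 = tire_width_m * contact_length_m
    return wheel_load_kn / area_m2

# A 30 kN wheel load on a 0.6 m x 0.5 m footprint -> 100.0 kPa
print(ground_contact_pressure(30.0, 0.6, 0.5))
```

Comparing such a pressure estimate against a soil's load bearing capacity is the basic test behind the compaction-risk maps the paper describes.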

  17. SRS 2010 Vegetation Inventory GeoStatistical Mapping Results for Custom Reaction Intensity and Total Dead Fuels.

    SciTech Connect

    Edwards, Lloyd A.; Paresol, Bernard

    2014-09-01

This report presents the geostatistical analysis results for the fire fuels response variables custom reaction intensity and total dead fuels; it is one part of the SRS 2010 vegetation inventory project. For a detailed description of the project, theory, and background, including sample design, methods, and results, please refer to the USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report” (Edwards & Parresol 2013).

  18. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1982-03-23

A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision, is discussed. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  19. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1983-10-11

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision is disclosed. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode. 1 fig.

  20. The USGS "Did You Feel It?" Macroseismic Intensity Maps: Lessons Learned from a Decade of Citizen-Empowered Seismology

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, C. B.; Quitoriano, V. R.; Dewey, J. W.

    2012-12-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity (MI) data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2012). The internet-based interface allows for a two-way path of communication between seismic data providers (scientists) and earthquake information recipients (citizens) by swapping roles: users looking for information from the USGS become data providers to the USGS. This role-reversal presents opportunities for data collection, generation of good will, and further communication and education. In addition, online MI collecting systems like DYFI have greatly expanded the range of quantitative analyses possible with MI data and taken the field of MI in important new directions. The maps are made more quickly, usually provide more complete coverage at higher resolution, and allow data collection at rates and quantities never before considered. Scrutiny of the USGS DYFI data indicates that one-decimal precision is warranted, and web-based geocoding services now permit precise locations. The high-quality, high-resolution, densely sampled MI assignments allow for peak ground motion (PGM) versus MI analyses well beyond earlier studies. For instance, Worden et al. (2011) used large volumes of data to confirm low standard deviations for multiple, proximal DYFI reports near a site, and they used the DYFI observations with PGM data to develop bidirectional, ground motion-intensity conversion equations. Likewise, Atkinson and Wald (2007) and Allen et al. (2012) utilized DYFI data to derive intensity prediction equations directly without intermediate conversion of ground-motion prediction equation metrics to intensity. Both types of relations are important for robust historic and real-time ShakeMaps, among other uses. 
In turn, ShakeMap and DYFI afford ample opportunities to
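Ground motion-intensity conversion equations (GMICE) of the kind cited above typically take a log-linear form; a sketch with placeholder coefficients (the published Worden et al. relations are bilinear, with fitted values not reproduced here, so C0 and C1 below are purely illustrative):

```python
import math

# Placeholder coefficients for illustration only; the published GMICE
# have their own fitted values and validity ranges.
C0, C1 = 3.8, 1.5

def mmi_from_pgv(pgv_cm_s):
    """Convert peak ground velocity (cm/s) to macroseismic intensity
    using a generic log-linear GMICE: MMI = c0 + c1 * log10(PGV)."""
    return C0 + C1 * math.log10(pgv_cm_s)

def pgv_from_mmi(mmi):
    """Inverse relation; usable because the mapping is bidirectional,
    which is what makes GMICE useful for both historic and real-time
    ShakeMaps."""
    return 10.0 ** ((mmi - C0) / C1)
```

The bidirectionality is the point: dense DYFI intensities can constrain ground motion where instruments are absent, and instrumental ground motions can predict intensity where no reports exist.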

  1. First dose-map measured with a polycrystalline diamond 2D dosimeter under an intensity modulated radiotherapy beam

    NASA Astrophysics Data System (ADS)

    Scaringella, M.; Zani, M.; Baldi, A.; Bucciolini, M.; Pace, E.; de Sio, A.; Talamonti, C.; Bruzzi, M.

    2015-10-01

A prototype two-dimensional dosimeter, made on a 2.5×2.5 cm2 active-area polycrystalline Chemical Vapour Deposited (pCVD) diamond film equipped with a matrix of 12×12 contacts connected to the read-out electronics, has been used to evaluate dose maps under Intensity Modulated Radiation Therapy (IMRT) fields for a possible application in pre-treatment verification of cancer treatments. Tests have been performed under 6-10 MV x-ray beams with IMRT fields for prostate and breast cancer. Measurements have been taken by reading out the 144 pixels at different positions, obtained by shifting the device along the x/y axes to span a total map of 14.4×10 cm2. Results show that the absorbed doses measured by our pCVD diamond device are consistent with those calculated by the Treatment Planning System (TPS).

  2. Adaptive Analytic Mapping Procedures for Simple and Accurate Calculation of Scattering Lengths and Photoassociation Absorption Intensities

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Meshkov, Vladimir V.; Stolyarov, Andrej V.

    2009-06-01

We have shown that one- and two-parameter analytical mapping functions such as r(y; r̄, α) = r̄[1 + (1/α) tan(πy/2)] and r(y; r̄) = r̄(1 + y)/(1 − y) transform the conventional radial Schrödinger equation into equivalent alternate forms d²φ(y)/dy² = {π²/4 + (2μ/ℏ²) g²(y) [E − U(r(y))
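The quoted algebraic mapping can be sketched numerically; a minimal Python illustration (function names are mine, not the authors') showing that r(y; r̄) = r̄(1 + y)/(1 − y) maps the finite interval y ∈ (−1, 1) onto the full radial range r ∈ (0, ∞):

```python
def r_of_y(y, r_bar):
    """Algebraic mapping r(y; r_bar) = r_bar * (1 + y) / (1 - y),
    sending y in (-1, 1) to r in (0, infinity)."""
    return r_bar * (1.0 + y) / (1.0 - y)

def y_of_r(r, r_bar):
    """Inverse mapping: y = (r - r_bar) / (r + r_bar)."""
    return (r - r_bar) / (r + r_bar)

# The reference distance r_bar maps to y = 0, and the infinite-r
# asymptote is reached at the finite endpoint y = 1, which is what
# lets a finite grid in y cover the whole radial domain.
r_bar = 3.0
print(r_of_y(0.0, r_bar))   # -> 3.0
print(y_of_r(1e9, r_bar))   # -> very close to 1.0
```

Solving on a finite y grid is what makes scattering-length and photoassociation calculations tractable without truncating the long-range potential.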

  3. A special kind of local structure in the CMB intensity maps: duel peak structure

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Ti-Pei

    2009-03-01

We study the local structure of Cosmic Microwave Background (CMB) temperature maps released by the Wilkinson Microwave Anisotropy Probe (WMAP) team, and find a new kind of structure, which can be described as follows: a peak (or valley) of average temperature is often followed by a peak of temperature fluctuation that is 4° away. This structure is important for the following reasons: both the well known cold spot detected by Cruz et al. and the hot spot detected by Vielva et al. with the same technique (the third spot in their article) have such structure; more spots that are similar to them can be found on CMB maps and they also tend to be significant cold/hot spots; if we change the 4° characteristic into an artificial one, such as 3° or 5°, there will be fewer 'similar spots', and the temperature peaks or valleys will be less significant. The presented 'similar spots' have passed a strict consistency test which requires them to be significant on at least three different CMB temperature maps. We hope that this article can arouse some interest in the relationship of average temperature with temperature fluctuation in local areas; meanwhile, we are also trying to find an explanation for it, which might be important to CMB observation and theory.

  4. From Recollisions to the Knee: A Road Map for Double Ionization in Intense Laser Fields

    SciTech Connect

    Mauger, F.; Chandre, C.; Uzer, T.

    2010-01-29

    We examine the nature and statistical properties of electron-electron collisions in the recollision process in a strong laser field. The separation of the double ionization yield into sequential and nonsequential components leads to a bell-shaped curve for the nonsequential probability and a monotonically rising one for the sequential process. We identify key features of the nonsequential process and connect our findings in a simplified model which reproduces the knee shape for the probability of double ionization with laser intensity and associated trends.

  5. Dynamic T2-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    SciTech Connect

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-28

Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1 °C) and dynamic (<5 s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T2, since T2 increases linearly in fat during heating. T2-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T2. Calibration of T2-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T2 and temperature with a thermocouple. A positive T2 temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T2-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
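The reported linear T2-temperature dependence implies a simple calibration of the form T = T_ref + ΔT2 / slope. A hedged Python sketch (the function name and baseline values are illustrative; only the ~20 ms/°C slope comes from the abstract):

```python
def marrow_temperature(t2_ms, t2_ref_ms, temp_ref_c, slope_ms_per_c=20.0):
    """Estimate bone-marrow temperature from a measured T2 change,
    assuming the linear dependence reported for fat during heating
    (~20 ms per degree C).  t2_ref_ms and temp_ref_c are the baseline
    T2 and temperature before the HIFU sonication."""
    return temp_ref_c + (t2_ms - t2_ref_ms) / slope_ms_per_c

# Example: a 100 ms rise in T2 over a 37 C baseline -> 42.0 C
print(marrow_temperature(t2_ms=450.0, t2_ref_ms=350.0, temp_ref_c=37.0))
```

In practice the calibration slope would be taken from the thermocouple experiment described above, and the linearity only holds over the calibrated temperature range.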

  6. Near-Surface Geophysical Mapping of the Hydrological Response to an Intense Rainfall Event at the Field Scale

    NASA Astrophysics Data System (ADS)

    Martínez, G.; Vanderlinden, K.; Giraldez, J. V.; Espejo, A. J.; Muriel, J. L.

    2009-12-01

Soil moisture plays an important role in a wide variety of biogeochemical fluxes in the soil-plant-atmosphere system and governs the (eco)hydrological response of a catchment to an external forcing such as rainfall. Near-surface electromagnetic induction (EMI) sensors that measure the soil apparent electrical conductivity (ECa) provide a fast and non-invasive means for characterizing this response at the field or catchment scale through high-resolution time-lapse mapping. Here we show how ECa maps, obtained before and after an intense rainfall event of 125 mm h-1, elucidate differences in soil moisture patterns and in the hydrologic response of an experimental field as a consequence of differing soil management. The dryland field (Vertisol) was located in SW Spain and cropped with a typical wheat-sunflower-legume rotation. Both near-surface and subsurface ECa (ECas and ECad, respectively) were measured using the EM38-DD EMI sensor in a mobile configuration. Raw ECa measurements and Mean Relative Differences (MRD) provided information on soil moisture patterns, while time-lapse maps were used to evaluate the hydrologic response of the field. ECa maps of the field, measured before and after the rainfall event, showed similar patterns. The field depressions where most of the water and sediments accumulated had the highest ECa and MRD values. The SE-oriented soil, which was deeper and more exposed to sun and wind, showed the lowest ECa and MRD. The largest differences arose in the central part of the field, where a high ECa and MRD area appeared after the rainfall event as a consequence of the smaller soil depth and a possible subsurface flux concentration. Time-lapse maps of both ECa and MRD were also similar. The direct drill plots showed higher increments of ECa and MRD as a result of the smaller runoff production. Time-lapse ECa increments showed a bimodal distribution, clearly differentiating the direct drill plots from the conventional and minimum tillage plots.
However this kind

  7. Correlation mapping: rapid method for retrieving microcirculation morphology from optical coherence tomography intensity images

    NASA Astrophysics Data System (ADS)

    Jonathan, E.; Enfield, J.; Leahy, M. J.

    2011-03-01

The microcirculation plays a critical role in maintaining organ health and function by serving as the vascular bed where trophic exchanges between blood and tissue take place. To facilitate regular assessment in vivo, noninvasive microcirculation imagers are required in clinics. Among this group of clinical devices are those that render microcirculation morphology, such as nailfold capillaroscopy, a common device for early diagnosis and monitoring of microangiopathies. However, depth ambiguity disqualifies this and other similar techniques in medical tomography, where, due to the 3-D nature of biological organs, imagers that support depth-resolved 2-D imaging and 3-D image reconstruction are required. Here, we introduce correlation map OCT (cmOCT), a promising technique for microcirculation morphology imaging that combines standard optical coherence tomography with agile image-analysis software based on a correlation statistic. Promising results are presented for microcirculation morphology images of the brain region of a small animal model, as well as measurements of vessel geometry at bifurcations, such as vessel diameters and branch angles. These data will be useful for obtaining cardiovascular-related characteristics such as volumetric flow, velocity profile, and vessel-wall shear stress for the circulatory and respiratory systems.
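The abstract does not spell out the correlation statistic; a common cmOCT-style choice is a windowed Pearson correlation between two co-located intensity frames, where static tissue correlates strongly and moving blood decorrelates. A hypothetical NumPy sketch (window size and function name are my assumptions):

```python
import numpy as np

def correlation_map(frame_a, frame_b, k=3):
    """Per-pixel correlation map between two co-located OCT intensity
    frames using a k x k sliding window.  Static tissue gives values
    near 1; decorrelation (low values) flags flowing blood, i.e. the
    microvasculature."""
    h, w = frame_a.shape
    r = k // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = frame_a[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = frame_b[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            out[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return out
```

Thresholding such a map and stacking it across depths yields the depth-resolved vessel morphology that single-plane capillaroscopy cannot provide.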

  8. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  9. Temperature trends and Urban Heat Island intensity mapping of the Las Vegas valley area

    NASA Astrophysics Data System (ADS)

    Black, Adam Leland

Modified urban climate regions that are warmer than rural areas at night are referred to as Urban Heat Islands, or UHI. Islands of warmer air over a city can be 12 degrees Celsius warmer than the surrounding cooler air. The exponential growth of Las Vegas over the last two decades provides an opportunity to detect gradual temperature changes influenced by an increasing presence of urban materials. This thesis compares ground-based thermometric observations and satellite-based remote sensing temperature observations to identify temperature trends and UHI areas caused by urban development. Analysis of temperature trends between 2000 and 2010 at ground weather stations has revealed a general cooling trend in the Las Vegas region. Results show that urban development accompanied by increased vegetation has a cooling effect in arid climates. Analysis of long-term temperature trends at the McCarran and Nellis weather stations shows 2.4 K and 1.2 K rises in temperature, respectively, over the last 60 years. The ground weather station temperature data are related to land surface temperature images from the Landsat Thematic Mapper to estimate and evaluate urban heat island intensity for Las Vegas. Results show that spatial and temporal trends of temperature are related to the gradual change in urban land cover. UHI are mainly observed at the airport and in the industrial areas. This research provides useful insight into the temporal behavior of the Las Vegas area.

  10. Maps showing petroleum exploration intensity and production in major Cambrian to Ordovician reservoir rocks in the Anadarko Basin

    USGS Publications Warehouse

    Henry, Mitch; Hester, Tim

    1996-01-01

The Anadarko basin is a large, deep, two-stage Paleozoic basin (Feinstein, 1981) that is petroleum rich and generally well explored. The Anadarko basin province, a geographic area used here mostly for the convenience of mapping and data management, is defined by political boundaries that include the Anadarko basin proper. The boundaries of the province are identical to those used by the U.S. Geological Survey (USGS) in the 1995 National Assessment of United States Oil and Gas Resources. The data in this report, also identical to those used in the national assessment, are from several computerized data bases including Nehring Research Group (NRG) Associates Inc., Significant Oil and Gas Fields of the United States (1992); Petroleum Information (PI), Inc., Well History Control System (1991); and Petroleum Information (PI), Inc., Petro-ROM: Production data on CD-ROM (1993). Although generated mostly in response to the national assessment, the data presented here are grouped differently and are displayed and described in greater detail. In addition, the stratigraphic sequences discussed may not necessarily correlate with the "plays" of the 1995 national assessment. This report uses computer-generated maps to show drilling intensity, producing wells, major fields, and other geologic information relevant to petroleum exploration and production in the lower Paleozoic part of the Anadarko basin province as defined for the U.S. Geological Survey's 1995 national petroleum assessment. Hydrocarbon accumulations must meet a minimum standard of 1 million barrels of oil (MMBO) or 6 billion cubic feet of gas (BCFG) estimated ultimate recovery to be included in this report as a major field or reservoir. Mapped strata in this report include the Upper Cambrian to Lower Ordovician Arbuckle and Lower Ordovician Ellenburger Groups, the Middle Ordovician Simpson Group, and the Middle to Upper Ordovician Viola Group.

  11. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC™ model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments to the METRIC application must be considered, related to the estimation of vegetation temperature and of momentum roughness length and sensible heat flux (H) for tall vegetation. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function based on leaf area index and tree canopy architecture, associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fractions of vegetation (fc), shadow (fshadow) and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature of the canopy was estimated by solving the three-source equation for Tc. To evaluate METRIC performance to estimate ET over the olive grove, several parameters derived from the
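Given the three-source composite Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit stated above, solving for the canopy temperature Tc is a one-line inversion. A minimal sketch (the function name is illustrative, not from the paper):

```python
def canopy_temperature(ts, f_c, f_shadow, f_sunlit, t_shadow, t_sunlit):
    """Invert the three-source composite radiometric temperature
    Ts = fc*Tc + fshadow*Tshadow + fsunlit*Tsunlit for the canopy
    temperature Tc.  The cover fractions must sum to 1."""
    assert abs(f_c + f_shadow + f_sunlit - 1.0) < 1e-6
    return (ts - f_shadow * t_shadow - f_sunlit * t_sunlit) / f_c

# Example (all temperatures in kelvin): a Landsat-derived composite of
# 305 K over 50% canopy, 20% shaded and 30% sunlit soil at 295/320 K.
print(canopy_temperature(305.0, 0.5, 0.2, 0.3, 295.0, 320.0))  # -> 300.0
```

Small canopy fractions fc make the inversion sensitive to errors in the soil temperatures, which is why the abstract stresses minimizing soil and shadow biases.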

  12. Fiber-bundle microendoscopy with sub-diffuse reflectance spectroscopy and intensity mapping for multimodal optical biopsy of stratified epithelium

    PubMed Central

    Greening, Gage J.; James, Haley M.; Powless, Amy J.; Hutcheson, Joshua A.; Dierks, Mary K.; Rajaram, Narasimhan; Muldoon, Timothy J.

    2015-01-01

Early detection of structural or functional changes in dysplastic epithelia may be crucial for improving long-term patient care. Recent work has explored myriad non-invasive or minimally invasive “optical biopsy” techniques for diagnosing early dysplasia, such as high-resolution microendoscopy, a method to resolve sub-cellular features of apical epithelia, as well as broadband sub-diffuse reflectance spectroscopy, a method that evaluates the bulk health of a small volume of tissue. We present a multimodal fiber-based microendoscopy technique that combines high-resolution microendoscopy, broadband (450-750 nm) sub-diffuse reflectance spectroscopy (sDRS) at two discrete source-detector separations (374 and 730 μm), and sub-diffuse reflectance intensity mapping (sDRIM) using a 635 nm laser. Spatial resolution, magnification, field-of-view, and sampling frequency were determined. Additionally, the ability of the sDRS modality to extract optical properties over a range of depths is reported. Following this, proof-of-concept experiments were performed on tissue-simulating phantoms made with poly(dimethylsiloxane) as a substrate material with cultured MDA-MB-468 cells. Then, all modalities were demonstrated on a human melanocytic nevus from a healthy volunteer and on resected colonic tissue from a murine model. Qualitative in vivo image data are correlated with reduced scattering and absorption coefficients. PMID:26713207

  13. Looking for Dark Galaxies at 21-cm

    NASA Astrophysics Data System (ADS)

Disney, Mike; Lang, Robert Hugh

    2012-10-01

    Blind HI surveys have so far failed to find the Dark and Low Surface Brightness Galaxies, and the Intergalactic Gas Clouds which were widely expected. It now appears very likely that this has been caused through incorrectly identifying many sources with clustered visible galaxies in the same groups. We aim to rectify this situation by using ATCA to find interferometric positions accurate to ~ 1 arc minute for a selection of the most unlikely identifications in the HIPASS catalogue and so either to find such objects, or conclusively rule out their existence.

  14. Mapping seismic intensity using twitter data; A Case study: The February 26th, 2014 M5.9 Kefallinia (Greece) earthquake

    NASA Astrophysics Data System (ADS)

    Arapostathis, Stathis; Parcharidis, Isaak; Kalogeras, Ioannis; Drakatos, George

    2015-04-01

In this paper we present an innovative approach for the development of seismic intensity maps in a minimal time frame. As a case study, a recent earthquake that occurred in Western Greece (Kefallinia Island, on February 26, 2014) is used. The magnitude of the earthquake was M=5.9 (Institute of Geodynamics - National Observatory of Athens). The earthquake's effects comprised damage to property and changes to the physical environment in the area. The innovative part of this research is the use of crowdsourced Twitter content as a source for assessing macroseismic intensity information. Twitter, as a social media service with micro-blogging characteristics, a semantic structure that allows the storage of spatial content, and a high volume of user-generated content, is a suitable source for obtaining and extracting knowledge related to macroseismic intensity in different geographic areas and over short time periods. Moreover, the speed at which Twitter content is generated makes accurate results available only a few hours after the occurrence of the earthquake. The method used to extract, evaluate, and map the intensity-related information is described briefly in this paper. First, we select all tweets posted within the first 48 hours that contain information related to intensity and refer to a geographic location. These tweets are then geo-referenced and associated with an intensity grade on the European Macroseismic Scale (EMS98) based on the information contained in their text. Finally, we apply various spatial statistics and GIS methods, and we interpolate the values to cover all the appropriate geographic areas. The final output contains macroseismic intensity maps for the Lixouri area (Kefallinia Island), produced from Twitter data posted in the first six, twelve, twenty-four, and forty-eight hours after the earthquake occurrence.
Results are compared with other intensity maps for same
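The pipeline the abstract describes (grading geo-referenced tweets on the EMS98 scale, then spatially interpolating) can be sketched as follows. The keyword table and the use of inverse-distance weighting are illustrative assumptions, not the paper's actual grading rules or interpolation method:

```python
import math

# Hypothetical keyword-to-EMS98 lookup; the paper's actual assignment
# rules are richer than this illustrative table.
KEYWORDS = {"felt": 3, "shaking": 4, "objects fell": 5, "cracks": 6, "collapse": 8}

def grade_tweet(text):
    """Assign an EMS98 intensity grade from the strongest matching keyword."""
    grades = [g for kw, g in KEYWORDS.items() if kw in text.lower()]
    return max(grades) if grades else None

def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted interpolation of intensity at (x, y)."""
    num = den = 0.0
    for (px, py, grade) in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return grade
        w = d ** -power
        num += w * grade
        den += w
    return num / den

# Geo-referenced tweets: (lon, lat, EMS98 grade)
obs = [(20.55, 38.20, grade_tweet("Strong shaking, objects fell off shelves")),
       (20.60, 38.25, grade_tweet("Felt a light tremor"))]
print(idw(obs, 20.57, 38.22))
```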

  15. Implementation and Evaluation of a Mobile Mapping System Based on Integrated Range and Intensity Images for Traffic Signs Localization

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.

    2012-07-01

Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for detection and 3D localization of various objects from a moving platform. On the other hand, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems inherent to all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from certain algorithms to detect, recognize and localize the traffic signs by fusing the shape, color and object information from both range and intensity images. In the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, improvements of 83% in the RMS of range error and 72% in the RMS of coordinate residuals for the PMD camera, over those achieved with basic calibration, are realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, together with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
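The GPS/INS integration step can be illustrated with a minimal one-dimensional Kalman filter: the INS acceleration drives the prediction and the GPS position fix drives the correction. The paper's EKF additionally linearizes a nonlinear navigation model; all noise parameters below are invented for illustration:

```python
import numpy as np

# Minimal 1-D GPS/INS fusion sketch (linear Kalman form; a full EKF adds
# Jacobians for the nonlinear attitude/position models). Noise values are
# illustrative, not from the paper.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition: [pos, vel]
B = np.array([[0.5 * dt**2], [dt]])          # control input: INS acceleration
H = np.array([[1.0, 0.0]])                   # GPS measures position only
Q = 1e-3 * np.eye(2)                         # process noise
R = np.array([[4.0]])                        # GPS noise variance (2 m std)

x = np.zeros((2, 1))                         # state estimate
P = np.eye(2)                                # covariance

def step(x, P, accel, gps_pos):
    # Predict with the INS acceleration, then correct with the GPS fix.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    y = gps_pos - (H @ x)                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for t in range(50):                          # constant 1 m/s^2 acceleration
    truth = 0.5 * 1.0 * ((t + 1) * dt) ** 2
    x, P = step(x, P, 1.0, truth + np.random.default_rng(t).normal(0, 2))
print(float(x[0, 0]))                        # should be near the true 12.5 m
```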

  17. VizieR Online Data Catalog: CMB intensity map from WMAP and Planck PR2 data (Bobin+, 2016)

    NASA Astrophysics Data System (ADS)

    Bobin, J.; Sureau, F.; Starck, J.-L.

    2016-05-01

This paper presents a novel estimation of the CMB map reconstructed from the Planck 2015 data (PR2) and the WMAP nine-year data (Bennett et al., 2013ApJS..208...20B), which updates the CMB map we published in Bobin et al. (2014A&A...563A.105B). This new map is based on the sparse component separation method L-GMCA (Bobin et al., 2013A&A...550A..73B). Additionally, the map benefits from the latest advances in this field (Bobin et al., 2015, IEEE Transactions on Signal Processing, 63, 1199), which allow us to accurately discriminate between correlated components. In this update to our previous work, we show that this new map presents significant improvements with respect to the available CMB map estimates. (3 data files).

  18. Sea floor maps showing topography, sun-illuminated topography, and backscatter intensity of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, P.C.; Middleton, T.J.; Fuller, S.J.

    2000-01-01

This data set contains the sea floor topographic contours, sun-illuminated topographic imagery, and backscatter intensity generated from a multibeam sonar survey of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts, an area of approximately 1100 square nautical miles. The Stellwagen Bank NMS Mapping Project is designed to provide detailed maps of the Stellwagen Bank region's environments and habitats and the first complete multibeam topographic and sea floor characterization maps of a significant region of the shallow EEZ. Data were collected on four cruises over a two year period from the fall of 1994 to the fall of 1996. The surveys were conducted aboard the Canadian Hydrographic Service vessel Frederick G. Creed, a SWATH (Small Waterplane Area Twin Hull) ship that surveys at speeds of 16 knots. The multibeam data were collected utilizing a Simrad Subsea EM 1000 Multibeam Echo Sounder (95 kHz) that is permanently installed in the hull of the Creed.

  19. Update on the mapping of prevalence and intensity of infection for soil-transmitted helminth infections in Latin America and the Caribbean: a call for action.

    PubMed

    Saboyá, Martha Idalí; Catalá, Laura; Nicholls, Rubén Santiago; Ault, Steven Kenyon

    2013-01-01

It is estimated that in Latin America and the Caribbean (LAC) at least 13.9 million preschool age and 35.4 million school age children are at risk of infections by soil-transmitted helminths (STH): Ascaris lumbricoides, Trichuris trichiura and hookworms (Necator americanus and Ancylostoma duodenale). Although infections caused by this group of parasites are associated with chronic deleterious effects on nutrition and growth, iron and vitamin A status and cognitive development in children, few countries in the LAC Region have implemented nationwide surveys on prevalence and intensity of infection. The aim of this study was to identify gaps in the mapping of prevalence and intensity of STH infections based on data published between 2000 and 2010 in LAC, and to call for including mapping as part of action plans against these infections. A total of 335 published data points for STH prevalence were found for 18 countries (11.9% of data points for preschool age children, 56.7% for school age children and 31.3% for children from 1 to 14 years of age). We found that 62.7% of data points showed prevalence levels above 20%. Data on the intensity of infection were found for seven countries. The analysis also highlights that there is still an important lack of data on prevalence and intensity of infection to determine the burden of disease based on epidemiological surveys, particularly among preschool age children. This situation is a challenge for LAC given that adequate planning of interventions such as deworming requires information on prevalence to determine the frequency of needed anthelmintic drug administration and to conduct monitoring and evaluation of progress in drug coverage. PMID:24069476

  1. Time Courses of Changes in Phospho- and Total- MAP Kinases in the Cochlea after Intense Noise Exposure

    PubMed Central

    Maeda, Yukihide; Fukushima, Kunihiro; Omichi, Ryotaro; Kariya, Shin; Nishizaki, Kazunori

    2013-01-01

Mitogen-activated protein kinases (MAP kinases) are intracellular signaling kinases activated by phosphorylation in response to a variety of extracellular stimuli. Mammalian MAP kinase pathways are composed of three major pathways: MEK1 (mitogen-activated protein kinase kinase 1)/ERK 1/2 (extracellular signal-regulated kinases 1/2)/p90 RSK (p90 ribosomal S6 kinase), JNK (c-Jun amino (N)-terminal kinase)/c-Jun, and p38 MAPK pathways. These pathways coordinately mediate physiological processes such as cell survival, protein synthesis, cell proliferation, growth, migration, and apoptosis. The involvement of MAP kinase in noise-induced hearing loss (NIHL) has been implicated in the cochlea; however, it is unknown how expression levels of MAP kinase change after the onset of NIHL and whether they are regulated by transient phosphorylation or protein synthesis. CBA/J mice were exposed to 120-dB octave band noise for 2 h. Auditory brainstem response confirmed a component of temporary threshold shift within 0–24 h and significant permanent threshold shift at 14 days after noise exposure. Levels and localizations of phospho- and total- MEK1/ERK1/2/p90 RSK, JNK/c-Jun, and p38 MAPK were comprehensively analyzed by the Bio-Plex® Suspension Array System and immunohistochemistry at 0, 3, 6, 12, 24 and 48 h after noise exposure. The phospho-MEK1/ERK1/2/p90 RSK signaling pathway was activated in the spiral ligament and the sensory and supporting cells of the organ of Corti, with peaks at 3–6 h and independently of regulation of total-MEK1/ERK1/2/p90 RSK. The expression of phospho-JNK and p38 MAPK showed late upregulation in spiral neurons at 48 h, in addition to early upregulation with peaks at 3 h after noise trauma. Phospho-p38 MAPK activation was dependent on upregulation of total-p38 MAPK. At present, comprehensive data on MAP kinase expression provide significant insight into understanding the molecular mechanism of NIHL, and into developing therapeutic models for acute

  2. Intense acoustic burst ultrasound modulated optical tomography for elasticity mapping of soft biological tissue mimicking phantom: a laser speckle contrast analysis study

    NASA Astrophysics Data System (ADS)

    Singh, M. Suheshkumar; Rajan, K.; Vasu, R. M.

    2014-03-01

This report addresses the non-invasive assessment of variation in the elastic property of soft biological tissues using laser speckle contrast measurement. Both experimental and numerical (Monte Carlo simulation) studies are carried out. An intense acoustic burst of ultrasound (an acoustic pulse with high power within standard safety limits), instead of a continuous wave, is employed to induce large modulation of the tissue material in the ultrasound-insonified region of interest (ROI), enhancing the strength of the ultrasound-modulated optical signal in the ultrasound modulated optical tomography (UMOT) system. The intensity fluctuation of the speckle patterns, formed by interference of light scattered while traversing the tissue medium, is characterized by the motion of the scattering sites. The displacement of the scattering particles is inversely related to the elastic property of the tissue. We study the feasibility of the laser speckle contrast analysis (LSCA) technique for reconstructing a map of the elastic property of a soft tissue-mimicking phantom. We employ a source-synchronized parallel speckle detection scheme to measure, experimentally, the speckle contrast of light traversing an ultrasound (US) insonified tissue-mimicking phantom. The measured relative image contrast (the ratio of the difference between the maximum and minimum values to the maximum value) is 86.44% for the intense acoustic burst, compared with 67.28% for continuous-wave excitation of ultrasound. We also present 1-D and 2-D images of speckle contrast, which represent the distribution of the elastic property.
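Laser speckle contrast is conventionally computed as K = sigma/mean over small sliding windows of the raw speckle image; larger scatterer motion during the exposure blurs the speckle and lowers K. A minimal sketch, with an illustrative window size and a synthetic test image rather than the paper's data:

```python
import numpy as np

# Speckle contrast K = sigma / mean over sliding windows; lower K indicates
# larger scatterer displacement (softer tissue under the US burst).
def speckle_contrast_map(img, win=7):
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win]
            out[i, j] = patch.std() / patch.mean()
    return out

rng = np.random.default_rng(0)
# Fully developed speckle has ~exponential intensity statistics, so K ~ 1;
# motion blur during the exposure would pull K below 1.
static = rng.exponential(1.0, (64, 64))
K = speckle_contrast_map(static)
print(K.mean())
```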

  3. Spatial-temporal three-dimensional ultrasound plane-by-plane active cavitation mapping for high-intensity focused ultrasound in free field and pulsatile flow.

    PubMed

    Ding, Ting; Hu, Hong; Bai, Chen; Guo, Shifang; Yang, Miao; Wang, Supin; Wan, Mingxi

    2016-07-01

Cavitation plays important roles in almost all high-intensity focused ultrasound (HIFU) applications. However, current two-dimensional (2D) cavitation mapping can only provide cavitation activity in one plane. This study proposed a three-dimensional (3D) ultrasound plane-by-plane active cavitation mapping (3D-UPACM) for HIFU in free field and pulsatile flow. The acquisition of channel-domain raw radio-frequency (RF) data in 3D space was performed by sequential plane-by-plane 2D ultrafast active cavitation mapping. Between two adjacent unit locations, a waiting time allowed the cavitation nuclei distribution of the liquid to return to its original state. The 3D cavitation map, equivalent to one detected at a single time over the entire volume, could be reconstructed by the Marching Cubes algorithm. Minimum variance (MV) adaptive beamforming was combined with coherence factor (CF) weighting (MVCF) or a compressive sensing (CS) method (MVCS) to process the raw RF data for improved beamforming or more rapid data processing. The feasibility of 3D-UPACM was demonstrated in tap water and in a phantom vessel with pulsatile flow. The time interval between temporal evolutions of the cavitation bubble cloud could be several microseconds. The MVCF beamformer had a signal-to-noise ratio (SNR) 14.17 dB higher, and lateral and axial resolutions 2.88 times and 1.88 times finer, respectively, than those of B-mode active cavitation mapping. The MVCS beamformer incurred only 14.94% of the processing time of the MVCF beamformer. This 3D-UPACM technique employs the linear array of a current ultrasound diagnosis system rather than a 2D array transducer to decrease the cost of the instrument. Moreover, although the application is limited by the requirement for a gassy fluid medium or a constant supply of new cavitation nuclei that allows replenishment of nuclei between HIFU exposures, this technique may prove a useful tool in 3D cavitation mapping for HIFU with high speed, precision and resolution
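The coherence factor weighting in the MVCF beamformer has a standard form: the ratio of coherently to incoherently summed channel energy scales the beamsum, suppressing off-axis and incoherent contributions. A sketch of the CF part alone (the MV weight computation and all data are omitted or synthetic):

```python
import numpy as np

# Coherence factor: CF = |sum_n s_n|^2 / (N * sum_n |s_n|^2), applied as a
# per-sample weight on the conventional beamsum of delay-aligned channels.
def cf_weighted_sum(channels):
    """channels: (N, samples) array of delay-aligned RF data."""
    n = channels.shape[0]
    beamsum = channels.sum(axis=0)
    coherent = beamsum ** 2
    incoherent = (channels ** 2).sum(axis=0)
    cf = coherent / (n * incoherent + 1e-12)   # 1 for coherent, ~1/N for noise
    return cf * beamsum / n

aligned = np.ones((8, 4))            # perfectly coherent echo: CF -> 1
rng = np.random.default_rng(1)
noise = rng.normal(size=(8, 4))      # incoherent noise: CF << 1
print(cf_weighted_sum(aligned)[0], np.abs(cf_weighted_sum(noise)).max())
```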

  4. Evaluation of Ground-Motion Modeling Techniques for Use in Global ShakeMap - A Critique of Instrumental Ground-Motion Prediction Equations, Peak Ground Motion to Macroseismic Intensity Conversions, and Macroseismic Intensity Predictions in Different Tectonic Settings

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.

    2009-01-01

    Regional differences in ground-motion attenuation have long been thought to add uncertainty in the prediction of ground motion. However, a growing body of evidence suggests that regional differences in ground-motion attenuation may not be as significant as previously thought and that the key differences between regions may be a consequence of limitations in ground-motion datasets over incomplete magnitude and distance ranges. Undoubtedly, regional differences in attenuation can exist owing to differences in crustal structure and tectonic setting, and these can contribute to differences in ground-motion attenuation at larger source-receiver distances. Herein, we examine the use of a variety of techniques for the prediction of several ground-motion metrics (peak ground acceleration and velocity, response spectral ordinates, and macroseismic intensity) and compare them against a global dataset of instrumental ground-motion recordings and intensity assignments. The primary goal of this study is to determine whether existing ground-motion prediction techniques are applicable for use in the U.S. Geological Survey's Global ShakeMap and Prompt Assessment of Global Earthquakes for Response (PAGER). We seek the most appropriate ground-motion predictive technique, or techniques, for each of the tectonic regimes considered: shallow active crust, subduction zone, and stable continental region.

  5. Advances In Cryogenic Monolithic Millimeter-wave Integrated Circuit (MMIC) Low Noise Amplifiers For CO Intensity Mapping and ALMA Band 2

    NASA Astrophysics Data System (ADS)

Samoska, Lorene; Cleary, Kieran; Church, Sarah E.; Cuadrado-Calle, David; Fung, Andy; Gaier, Todd; Gawande, Rohit; Kangaslahti, Pekka; Lai, Richard; Lawrence, Charles R.; Readhead, Anthony C. S.; Sarkozy, Stephen; Seiffert, Michael D.; Sieth, Matthew

    2016-01-01

We will present results of the latest InP HEMT MMIC low noise amplifiers in the 30-300 GHz range, with emphasis on LNAs and mixers developed for CO intensity mapping in the 40-80 GHz range, as well as MMIC LNAs suitable for ALMA Band 2 (67-90 GHz). The LNAs have been developed together with NGC in a 35 nm InP HEMT MMIC process. Recent results and a summary of the best InP low noise amplifier data will be presented. This work describes technologies related to the detection and study of highly redshifted spectral lines from the CO molecule, a key tracer for molecular hydrogen. One of the most promising techniques for observing the Cosmic Dawn is intensity mapping of spectral-spatial fluctuations of line emission from neutral hydrogen (H I), CO, and [C II]. The essential idea is that instead of trying to detect line emission from individual galaxies, one measures the total line emission from a number of galaxies within the volume defined by a spectral-spatial pixel. Fluctuations from pixel to pixel trace large scale structure, and the evolution with redshift is revealed as a function of receiver frequency. A special feature of CO is the existence of multiple lines with a well-defined frequency relationship from the rotational ladder, which allows the possibility of cleanly separating the signal from other lines or foreground structure at other redshifts. Making use of this feature (not available to either HI or [C II] measurements) requires observing multiple frequencies, including the range 40-80 GHz, much of which is inaccessible from the ground or balloons. Specifically, the J=1->0 transition frequency is 115 GHz; J=2->1 is 230 GHz; J=3->2 is 345 GHz, etc. At redshift 7, these lines would appear at 14.4, 28.8, and 43.2 GHz, accessible from the ground. Over a wider range of redshifts, from 3 to 7, these lines would appear at frequencies from 14 to 86 GHz.
A ground-based CO Intensity mapping experiment, COMAP, will utilize InP-based HEMT MMIC amplifier front ends in the
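The redshifted frequencies quoted in the abstract follow directly from f_obs = f_rest / (1 + z). A short check of the z = 7 numbers (rest frequencies are the standard CO values, slightly more precise than the abstract's rounded 115/230/345 GHz):

```python
# Observed frequency of redshifted CO rotational lines: f_obs = f_rest / (1 + z).
REST_GHZ = {1: 115.271, 2: 230.538, 3: 345.796}   # J -> J-1 rest frequencies

def observed_ghz(j_upper, z):
    return REST_GHZ[j_upper] / (1.0 + z)

# At z = 7 the first three lines land near 14.4, 28.8, and 43.2 GHz, as in
# the abstract; across z = 3..7 the ladder spans roughly 14-86 GHz.
for j in (1, 2, 3):
    print(f"CO J={j}->{j-1} at z=7: {observed_ghz(j, 7):.1f} GHz")
```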

  6. dispel4py : An Open Source Python Framework for Encoding, Mapping and Reusing Seismic Continuous Data Streams: Intensive Analysis and Data Mining.

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Krause, A.; Atkinson, M.; Spinuso, A.; Klampanos, I.; Magnoni, F.; Casarotti, E.; Vilotte, J. P.

    2015-12-01

Scientific workflows are needed by many scientific communities, such as seismology, as they enable easy composition and execution of applications, enabling scientists to focus on their research without being distracted by arranging computation and data management. However, there are challenges to be addressed. In many systems users have to adapt their codes and data movement as they change from one HPC architecture to another. They still need to be aware of the computing architectures available for achieving the best application performance. We present dispel4py, an open-source framework presented as a Python library for encoding and automating data-intensive scientific methods as a graph of operations coupled together by data streams. It enables scientists to develop and experiment with their own data-intensive applications using their familiar work environment. These are then automatically mapped to a variety of HPC architectures, i.e., MPI, multiprocessing, Storm and Spark frameworks, increasing the chances of reusing applications on different computing resources. dispel4py comes with data provenance, as shown in the screenshot, and with an information registry that can be accessed transparently from within workflows. dispel4py has been enhanced with a new run-time adaptive compression strategy to reduce the data stream volume and a diagnostic tool which monitors workflow performance and computes the most efficient parallelisation to use. dispel4py has been used by seismologists in the project VERCE for seismic ambient noise cross-correlation applications and for orchestrated HPC wave simulation and data misfit analysis workflows; two data-intensive problems that are common in today's research practice. Both have been tested in several local computing resources and later submitted to a variety of European PRACE HPC architectures (e.g. SuperMUC & CINECA) for longer runs without change. Results show that dispel4py is an easy tool for developing, sharing and
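The graph-of-operations-coupled-by-data-streams idea can be illustrated with plain Python generators: each processing element consumes an upstream stream and emits a new one. This mimics the composition style the abstract describes but deliberately does not use the dispel4py API itself; the toy reader and zero-lag correlation are invented for illustration:

```python
# Each function below is a stand-in "processing element" on a data stream.
def read_traces(n):
    for i in range(n):                      # stand-in for a waveform reader
        yield [float(i + k) for k in range(4)]

def detrend(stream):
    for trace in stream:                    # remove the per-trace mean
        mean = sum(trace) / len(trace)
        yield [s - mean for s in trace]

def cross_correlate(stream):
    prev = None
    for trace in stream:                    # zero-lag correlation of adjacent pairs
        if prev is not None:
            yield sum(a * b for a, b in zip(prev, trace))
        prev = trace

# Compose the workflow as a pipeline of coupled streams.
pipeline = cross_correlate(detrend(read_traces(3)))
print(list(pipeline))
```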

  7. The USGS ``Did You Feel It?'' Internet-based Macroseismic Intensity Maps: Lessons Learned from a Decade of Online Data Collection (Invited)

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Quitoriano, V. R.; Hopper, M.; Mathias, S.; Dewey, J. W.

    2010-12-01

Over the past decade, the U.S. Geological Survey’s “Did You Feel It?” (DYFI) system has automatically collected shaking and damage reports from Internet users immediately following earthquakes. This 10-yr stint of citizen-based science preceded the recently in vogue notion of "crowdsourcing" by nearly a decade. DYFI is a rapid and vast source of macroseismic data, providing quantitative and qualitative information about shaking intensities for earthquakes in the US and around the globe. Statistics attest to the abundance and rapid availability of these Internet-based macroseismic data: Over 1.8 million entries have been logged over the decade, and there are 30 events each with over 10,000 responses (230 events have over 1,000 entries). The greatest number of responses to date for an earthquake is over 78,000 for the April 2010, M7.2 Baja California, Mexico, event. Questionnaire response rates have reached 62,000 per hour (1,000 per min!) obviously requiring substantial web resource allocation and capacity. Outside the US, DYFI has gathered over 189,000 entries in 9,500 cities covering 140 countries since its global inception in late 2004. The rapid intensity data are automatically used in the Global ShakeMap (GSM) system, providing intensity constraints near population centers and in places without instrumental coverage (most of the world), and allowing for bias correction to the empirical prediction equations employed. ShakeMap has also been recently refined to automatically use macroseismic input data in their native form, and treat their uncertainties rigorously in concert with ground-motion data. Recent DYFI system improvements include a graphical user interface that allows seismic analysts to perform common functions, including map triggering and resizing, as well as sorting, searching, geocoding, and flagging entries. New web-based geolocation and geocoding services are being incorporated into DYFI for improving the accuracy of the users’ locations

  8. A nearly complete longitude-velocity map of neutral hydrogen

    NASA Technical Reports Server (NTRS)

    Waldes, F.

    1978-01-01

    A longitude-velocity map based on two recent 21-cm neutral hydrogen surveys and covering all but 42 deg of galactic longitude is presented. Latitude information between -2 and +2 deg is included as an integrated quantity by averaging the observed brightness temperatures over latitude at constant longitude and velocity to produce intensity information corresponding to a surface density distribution of neutral hydrogen in the galactic plane. The northern and southern rotation curves of the Galaxy within the solar galactic orbit are derived from the maximum radial velocities by the usual tangent-point method. Five interesting features of the map are discussed: (1) the scale of density variations in the neutral hydrogen; (2) a region of very high brightness centered at 81 deg and 0 km/s which is probably due to the spiral arm with which the sun is associated; (3) a region of very low brightness centered at 242 deg and 39 km/s; (4) negative-velocity features visible in the anticenter direction; and (5) a strong absorption feature at 289 deg having a kinematic distance of about 4 kpc.
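The tangent-point method mentioned in the abstract has a compact form: along a line of sight at galactic longitude l (|l| < 90 deg), the maximum radial velocity comes from the tangent point, where R = R0 sin(l) and the circular speed is V(R) = v_max + V0 sin(l). A sketch with assumed IAU-style constants (R0, V0) and an invented terminal velocity, not the values used in the 1978 paper:

```python
import math

# Tangent-point method: convert a terminal (maximum) radial velocity at
# longitude l into a point (R, V) on the inner rotation curve.
R0, V0 = 8.5, 220.0                      # kpc, km/s (assumed, IAU-style)

def rotation_point(longitude_deg, v_max):
    sin_l = math.sin(math.radians(longitude_deg))
    return R0 * sin_l, v_max + V0 * sin_l

R, V = rotation_point(30.0, 120.0)       # illustrative terminal velocity
print(R, V)                              # tangent-point radius and speed
```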

  9. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and the dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue where the dose difference exceeds +3% or falls below -3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
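The core of the method, classifying each voxel of the 3D dose-difference index against a threshold and projecting the overdose and underdose volumes separately, can be sketched as below. The distance-transform depth encoding described in the abstract is replaced here by a plain MIP for brevity, and the dose grids are synthetic:

```python
import numpy as np

# Sketch: classify voxels of the 3-D dose-difference index against a 3%
# threshold, then take a maximal intensity projection of the overdose
# ("red") and underdose ("blue") volumes along one axis.
def dd_mip(planned, delivered, thresh=0.03, axis=0):
    diff = (delivered - planned) / planned.max()     # relative dose difference
    over = np.where(diff > thresh, diff, 0.0)        # overdose volume
    under = np.where(diff < -thresh, -diff, 0.0)     # underdose volume
    return over.max(axis=axis), under.max(axis=axis)

planned = np.ones((4, 8, 8))
delivered = planned.copy()
delivered[2, 3, 3] = 1.10                            # 10% hot spot
delivered[1, 5, 5] = 0.90                            # 10% cold spot
over_mip, under_mip = dd_mip(planned, delivered)
print(over_mip[3, 3], under_mip[5, 5])
```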

  10. Multiredshift Limits on the 21 cm Power Spectrum from PAPER

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Pober, Jonathan C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki S.; Bowman, Judd; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Klima, Pat; Liu, Adrian; MacMahon, David H. E.; Manley, Jason R.; Moore, David F.; Stefan, Irina I.; Walbrugh, William P.

    2015-03-01

The epoch of reionization (EoR) power spectrum is expected to evolve strongly with redshift, and it is this variation with cosmic history that will allow us to begin to place constraints on the physics of reionization. The primary obstacle to the measurement of the EoR power spectrum is bright foreground emission. We present an analysis of observations from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) telescope, which place new limits on the H I power spectrum over the redshift range 7.5 < z < 10.5, extending previously published single-redshift results to cover the full range accessible to the instrument. To suppress foregrounds, we use filtering techniques that take advantage of the large instrumental bandwidth to isolate and suppress foreground leakage into the interesting regions of k-space. Our 500 hr integration is the longest such yet recorded and demonstrates this method to a dynamic range of 10⁴. Power spectra at different points across the redshift range reveal the variable efficacy of the foreground isolation. Noise-limited measurements of Δ² at k = 0.2 h Mpc⁻¹ and z = 7.55 reach as low as (48 mK)² (1σ). We demonstrate that the size of the error bars in our power spectrum measurement as generated by a bootstrap method is consistent with the fluctuations due to thermal noise. Relative to this thermal noise, most spectra exhibit an excess of power at a few sigma. The likely sources of this excess include residual foreground leakage, particularly at the highest redshift, unflagged radio frequency interference, and calibration errors. We conclude by discussing data reduction improvements that promise to remove much of this excess.
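The Δ² quoted in such limits is the standard dimensionless power spectrum, Δ²(k) = k³ P(k) / (2π²), expressed in mK² when P(k) carries mK² (Mpc/h)³. A quick round-trip check of the convention (the numbers are illustrative, not PAPER's data products):

```python
import math

# Dimensionless power spectrum convention used in 21 cm work:
# Delta^2(k) = k^3 P(k) / (2 pi^2).
def delta_sq(k, pk):
    """k in h/Mpc, pk in mK^2 (Mpc/h)^3 -> Delta^2 in mK^2."""
    return k**3 * pk / (2.0 * math.pi**2)

# A (48 mK)^2 limit at k = 0.2 h/Mpc corresponds to this P(k):
target = 48.0**2
pk = target * 2.0 * math.pi**2 / 0.2**3
print(delta_sq(0.2, pk))
```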

  11. Felt reports and intensity maps for two M4.8 Texas earthquakes: 17 May 2012 near Timpson and 20 October 2011 near Fashing

    NASA Astrophysics Data System (ADS)

    Brunt, M. R.; Brown, W. A.; Frohlich, C. A.

    2012-12-01

We conducted felt report surveys for two M4.8 earthquakes that occurred in central and east Texas within the last year. These are larger than any previous historically reported earthquakes in central and east Texas. To collect felt information for both events, we had felt report questionnaires published in local newspapers, and followed up with telephone calls to respondents to confirm details and locations of their experience. To delineate the higher-intensity regions, we visited the region and spent several days interviewing local residents and taking photographs. We augmented these data with "did-you-feel-it" (DYFI) data provided by the U.S. Geological Survey. The DYFI data proved especially useful for delineating the boundaries of the MMI IV and MMI III regions. The 17 May 2012 earthquake occurred in the early morning near Timpson, TX, about 50 km NE of Nacogdoches, and was felt with MMI III or greater over an area of about 20,000 km². Numerous residents of Nacogdoches were awakened by the quake (MMI V). The highest intensities of MMI VII occurred south of Timpson in a 10 km² region where chimneys, fireplaces, and brick veneer siding suffered significant damage. The 20 October 2011 earthquake occurred near Fashing, TX, and the Fashing gas field, located about 80 km south of San Antonio. The quake was felt with MMI III or greater over an area of about 12,000 km². Within a 65 km² region people experienced intensities as great as MMI VI: the shaking displaced and broke numerous small objects, and masonry cracked. The maximum-intensity region was about 10 km east of the epicenter reported by the National Earthquake Information Center. For future felt-report studies, we recommend our strategy of combining data collected from field interviews in the highest-intensity region, and augmenting these with newspaper felt report questionnaires and DYFI data obtained by the U.S. Geological Survey. Field interviews can be essential for obtaining data in thinly populated areas

  12. Maps Showing Sea Floor Topography, Sun-Illuminated Sea Floor Topography, and Backscatter Intensity of Quadrangles 1 and 2 in the Great South Channel Region, Western Georges Bank

    USGS Publications Warehouse

    Valentine, Page C.; Middleton, Tammie J.; Malczyk, Jeremy T.; Fuller, Sarah J.

    2002-01-01

The Great South Channel separates the western part of Georges Bank from Nantucket Shoals and is a major conduit for the exchange of water between the Gulf of Maine to the north and the Atlantic Ocean to the south. Water depths range mostly between 65 and 80 m in the region. A minimum depth of 45 m occurs in the east-central part of the mapped area, and a maximum depth of 100 m occurs in the northwest corner. The channel region is characterized by strong tidal and storm currents that flow dominantly north and south. Major topographic features of the seabed were formed by glacial and postglacial processes. Ice containing rock debris moved from north to south, sculpting the region into a broad shallow depression and depositing sediment to form the irregular depressions and low gravelly mounds and ridges that are visible in parts of the mapped area. Many other smaller glacial features probably have been eroded by waves and currents at work since the time when the region, formerly exposed by lowered sea level or occupied by ice, was invaded by the sea. The low, irregular and somewhat lumpy fabric formed by the glacial deposits is obscured in places by drifting sand and by the linear, sharp fabric formed by modern sand features. Today, sand transported by the strong north-south-flowing tidal and storm currents has formed large, east-west-trending dunes. These bedforms (ranging between 5 and 20 m in height) contrast strongly with, and partly mask, the subdued topography of the older glacial features.

  13. Mapping sound intensities by seating position in a university concert band: A risk of hearing loss, temporary threshold shifts, and comparisons with standards of OSHA and NIOSH

    NASA Astrophysics Data System (ADS)

    Holland, Nicholas Vedder, III

Exposure to loud sounds is one of the leading causes of hearing loss in the United States. The purpose of the current research was to measure the sound pressure levels generated within a university concert band and determine if those levels exceeded permissible sound limits for exposure according to criteria set by the Occupational Safety and Health Administration (OSHA) and the National Institute of Occupational Safety and Health (NIOSH). Time-weighted averages (TWA) were obtained via a dosimeter during six rehearsals for nine members of the ensemble (plus the conductor), who were seated in frontal proximity to "instruments of power" (trumpets, trombones, and percussion; Backus, 1977). Subjects received audiometer tests prior to and after each rehearsal to determine any temporary threshold shifts (TTS). Single sample t tests were calculated to compare TWA means and the maximum sound intensity exposures set by OSHA and NIOSH. Correlations were calculated between TWAs and TTSs, as well as TTSs and the number of semesters subjects reported being seated in proximity to instruments of power. The TWA-OSHA mean of 90.2 dBA was not significantly greater than the specified OSHA maximum standard of 90.0 dBA (p > .05). The TWA-NIOSH mean of 93.1 dBA was, however, significantly greater than the NIOSH specified maximum standard of 85.0 dBA (p < .05). The correlation between TWAs and TTSs was considered weak (r = .21 for OSHA, r = .20 for NIOSH); the correlation between TTSs and semesters of proximity to instruments of power was also considered weak (r = .13). TWAs cumulatively exceeded both associations' sound exposure limits at 11 specified locations (nine subjects and both ears of the conductor) throughout the concert band's rehearsals. In addition, hearing acuity, as determined by TTSs, was substantially affected negatively by the intensities produced in the concert band. The researcher concluded that conductors, as well as their performers, must be aware of possible

  14. Data-Intensive Benchmarking Suite

    2008-11-26

The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  15. The response of the inductively coupled argon plasma to solvent plasma load: spatially resolved maps of electron density obtained from the intensity of one argon line

    NASA Astrophysics Data System (ADS)

    Weir, D. G. J.; Blades, M. W.

    1994-12-01

A survey of spatially resolved electron number density (ne) in the tail cone of the inductively coupled argon plasma (ICAP) is presented: all of the results of the survey have been radially inverted by numerical, asymmetric Abel inversion. The survey extends over the entire volume of the plasma beyond the exit of the ICAP torch; it extends over distances of z = 5-25 mm downstream from the induction coil, and over radial distances of ± 8 mm from the discharge axis. The survey also explores a range of inner argon flow rates (QIN), solvent plasma load (Qspl) and r.f. power; moreover, it explores loading by water, methanol and chloroform. Throughout the survey, ne was determined from the intensity of one optically thin argon line, by a method which assumes that the atomic state distribution function (ASDF) for argon lies close to local thermal equilibrium (LTE). The validity of this assumption is reviewed. Also examined are the discrepancies between ne from this method and ne from Stark broadening measurements. With the error taken into account, the results of the survey reveal how time-averaged values of ne in the ICAP respond over an extensive, previously unexplored range of experimental parameters. Moreover, the spatial information lends insight into how the thermal conditions and the transport of energy respond. Overall, the response may be described in terms of energy consumption along the axial channel and thermal pinch within the induction region. The predominating effect depends on the solvent plasma load, the solvent composition, the robustness of the discharge, and the distribution of solvent material over the argon stream.

  16. Relaxed Intensity

    ERIC Educational Resources Information Center

    Ramey, Kyle

    2004-01-01

    Relaxed intensity refers to a professional philosophy, demeanor, and way of life. It is the key to being an effective educational leader. To be successful one must be relaxed, which means managing stress efficiently, having fun, and enjoying work. Intensity allows one to get the job done and accomplish certain tasks or goals. Educational leaders…

  17. Handmade Multitextured Maps.

    ERIC Educational Resources Information Center

    Trevelyan, Simon

    1984-01-01

    Tactile maps for visually impaired persons can be made by drawing lines with an aqueous adhesive solution, dusting with thermoengraving powder, and exposing the card to a source of intense heat (such as a heat gun or microwave oven). A raised line map results. (CL)

  18. Model for the intense molecular line emission from OMC-1

    SciTech Connect

    Draine, B.T.; Roberge, W.G.

    1982-08-15

We present a model which attributes the observed H2 and CO line emission from OMC-1 to a magnetohydrodynamic shock propagating into magnetized molecular gas. By requiring the shock to reproduce the observed line intensities, we determine the shock speed to be v_s ≈ 38 km s^-1 and the preshock density and (transverse) magnetic field to be n_H ≈ 7 x 10^5 cm^-3 and B_0 ≈ 1.5 milligauss. The model is compared to observations of H2, CO, OH, O I, and C I in emission and of CO in absorption. The shocked gas may be detectable in H I 21 cm emission.

  19. Seabed maps showing topography, ruggedness, backscatter intensity, sediment mobility, and the distribution of geologic substrates in Quadrangle 6 of the Stellwagen Bank National Marine Sanctuary Region offshore of Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, Page C.; Gallea, Leslie B.

    2015-01-01

    The U.S. Geological Survey (USGS), in cooperation with the National Oceanic and Atmospheric Administration's National Marine Sanctuary Program, has conducted seabed mapping and related research in the Stellwagen Bank National Marine Sanctuary (SBNMS) region since 1993. The area is approximately 3,700 square kilometers (km2) and is subdivided into 18 quadrangles. Seven maps, at a scale of 1:25,000, of quadrangle 6 (211 km2) depict seabed topography, backscatter, ruggedness, geology, substrate mobility, mud content, and areas dominated by fine-grained or coarse-grained sand. Interpretations of bathymetric and seabed backscatter imagery, photographs, video, and grain-size analyses were used to create the geology-based maps. In all, data from 420 stations were analyzed, including sediment samples from 325 locations. The seabed geology map shows the distribution of 10 substrate types ranging from boulder ridges to immobile, muddy sand to mobile, rippled sand. Mapped substrate types are defined on the basis of sediment grain-size composition, surface morphology, sediment layering, the mobility or immobility of substrate surfaces, and water depth range. This map series is intended to portray the major geological elements (substrates, topographic features, processes) of environments within quadrangle 6. Additionally, these maps will be the basis for the study of the ecological requirements of invertebrate and vertebrate species that utilize these substrates and guide seabed management in the region.

  20. Active Mapping.

    ERIC Educational Resources Information Center

    Day, Dennis

    1994-01-01

    Explains a social studies lesson for third graders that uses KidPix, a computer software graphics program to help students make maps and map keys. Advantages to using the computer versus hand drawing maps are discussed, and an example of map requirements for the lesson is included. (LRW)

  1. Concept Mapping.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2001-01-01

    Explains concept mapping as a heuristic device that is helpful in visualizing the relationships between and among ideas. Highlights include how to begin a map; brainstorming; map applications, including document or information summaries and writing composition; and mind mapping to strengthen note-taking. (LRW)

  2. Contour Mapping

    NASA Technical Reports Server (NTRS)

    1995-01-01

    In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, the users can map an area from the sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.

  3. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to generate a channel-wise gamma map. By mapping each channel through the estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
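The core idea of a per-pixel gamma map can be sketched as follows. This is a minimal illustration assuming NumPy; the function name `gamma_map_enhance` and the gamma range are hypothetical choices for demonstration, not the authors' actual GMP pipeline:

```python
import numpy as np

def gamma_map_enhance(image, gamma_lo=0.6, gamma_hi=1.4):
    """Illustrative per-pixel gamma mapping (hypothetical parameters).

    Dark regions get gamma < 1 (brightened) and bright regions gamma > 1
    (compressed), which is one simple way to build a gamma map directly
    from the intensity image.
    """
    img = image.astype(np.float64) / 255.0            # normalise to [0, 1]
    intensity = img.mean(axis=-1) if img.ndim == 3 else img
    # Base gamma map: interpolate between gamma_lo and gamma_hi by intensity.
    gamma = gamma_lo + (gamma_hi - gamma_lo) * intensity
    if img.ndim == 3:
        gamma = gamma[..., np.newaxis]                # broadcast over channels
    out = np.power(img, gamma)                        # per-pixel gamma correction
    return (out * 255.0).clip(0, 255).astype(np.uint8)
```

Applied to a grayscale gradient, dark pixels are lifted and bright pixels pulled down, flattening the tonal extremes while leaving midtones roughly in place.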

  4. GIS-mapping of environmental assessment of the territories in the region of intense activity for the oil and gas complex for achievement the goals of the Sustainable Development (on the example of Russia)

    NASA Astrophysics Data System (ADS)

    Yermolaev, Oleg

    2014-05-01

A uniform system of complex scientific-reference ecological-geographical mapping should act as the base for implementing the Sustainable Development (SD) concept in the territories of the Russian Federation subjects or certain regions. The ecological situation in the regions can then be assessed by conjoining two interrelated systems: the mapping system and the geoinformation system. The report discusses the methodological aspects of Atlas mapping for the purposes of SD in the regions of Russia. The Republic of Tatarstan is viewed as a model territory, where the large-scale oil and gas complex "Tatneft" PLC has operated for more than 60 years. Oil fields occupy an area of more than 38,000 km2; about 40,000 oil wells and more than 55,000 km of pipelines are placed in its territory; more than 3 billion tons of oil have been extracted. Methods for the structure and requirements for the Atlas's content are outlined, and approaches to mapping "an ecological dominant" of SD are conceptually substantiated following the pattern of a large region of Russia. Several thematic mapping directions are distinguished in the Atlas's structure: • The background history of oil-field mine workings; • Nature-preservation technologies in oil extraction; • The assessment of natural conditions of human vital activity; • Unfavorable and dangerous natural processes and phenomena; • Anthropogenic effects and environmental change; • Social-economical processes and phenomena; • Medical-ecological and geochemical processes and phenomena. Within these groups numerous other subgroups can be distinguished. The maps of unfavorable and dangerous processes and phenomena are subdivided in accordance with the types of processes, of endogenous and exogenous origin. Among the maps of anthropogenic effects on the natural surroundings one can differentiate maps of the influence on different spheres of nature

  5. RICH MAPS

    EPA Science Inventory

    Michael Goodchild recently gave eight reasons why traditional maps are limited as communication devices, and how interactive internet mapping can overcome these limitations. In the past, many authorities in cartography, from Jenks to Bertin, have emphasized the importance of sim...

  6. Kentucky map

    NASA Astrophysics Data System (ADS)

A wall-sized geological map of Kentucky, the product of 18 years of work, has just been released. Produced by the U.S. Geological Survey (USGS) in cooperation with the Kentucky Geological Survey (KGS) at the University of Kentucky, the map is unique, according to state geologist Donald Haney, because it is the first and only state map ever produced in detailed form from geologic quadrangle maps already available from the KGS.At a scale of 1:250,000, the map shows the surface distribution of various types of rock throughout the state, as well as geologic structure, faults, and surface coal beds. Numerous geologic sections, stratigraphic diagrams, correlation charts, and structure sections accompany the map. Compiled by R. C. McDowell and S. L. Moore of the USGS and by G. J. Grabowski of the KGS, the map was made by photoreducing and generalizing the detailed geologic quadrangle maps.

  7. Map adventures

    USGS Publications Warehouse

    1994-01-01

Map Adventures, with seven accompanying lessons, is appropriate for grades K-3. Students will learn basic concepts for visualizing objects from different perspectives and how to understand and use maps.

  8. Jupiter Atmospheric Map

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Huge cyclonic storms, the Great Red Spot and the Little Red Spot, and wispy cloud patterns are seen in fascinating detail in this map of Jupiter's atmosphere obtained January 14-15, 2007, by the New Horizons Long Range Reconnaissance Imager (LORRI).

    The map combines information from 11 different LORRI images that were taken every hour over a 10-hour period -- a full Jovian day -- from 17:42 UTC on January 14 to 03:42 UTC on January 15. The New Horizons spacecraft was approximately 72 million kilometers (45 million miles) from Jupiter at the time.

    The LORRI pixels on the 'globe' of Jupiter were projected onto a rectilinear grid, similar to the way flat maps of Earth are created. The LORRI pixel intensities were corrected so that every point on the map appears as if the sun were directly overhead; some image sharpening was also applied to enhance detail. The polar regions of Jupiter are not shown on the map because the LORRI images do not sample those latitudes very well and artifacts are produced during the map-projection process.

  9. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

Vehicle-encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  10. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
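The simple covariance map underlying all of these variants is just the shot-to-shot covariance between every pair of spectral bins. A minimal sketch, assuming NumPy; the function name and test data are illustrative, and partial covariance would additionally subtract each bin's correlation with a fluctuating parameter such as pulse energy:

```python
import numpy as np

def covariance_map(shots):
    """Simple covariance map from an (n_shots, n_bins) array of spectra.

    C[x, y] = <S(x) S(y)> - <S(x)><S(y)>: species whose yields fluctuate
    together from shot to shot (e.g. ion pairs from the same Coulomb
    explosion) appear as positive islands on the map.
    """
    shots = np.asarray(shots, dtype=np.float64)
    mean = shots.mean(axis=0)
    return shots.T @ shots / shots.shape[0] - np.outer(mean, mean)
```

With two bins driven by a common fluctuation and one independent bin, the map shows a strong off-diagonal feature between the correlated pair and essentially zero elsewhere.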

  11. Mapping Van

    NASA Technical Reports Server (NTRS)

    1994-01-01

A NASA Center for the Commercial Development of Space (CCDS) - developed system for satellite mapping has been commercialized for the first time. Global Visions, Inc. maps an area while driving along a road in a sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. Data is fed into a computerized geographic information system (GIS). The resulting maps can be used for tax assessment, emergency vehicle dispatch, and fleet delivery, as well as other applications.

  12. An intensity scale for riverine flooding

    USGS Publications Warehouse

    Fulford, J.M.

    2004-01-01

Recent advances in the availability and accuracy of multi-dimensional flow models, the advent of precise elevation data for floodplains (LIDAR), and geographic information systems (GIS) allow the creation of hazard maps that more correctly reflect the varying levels of flood-damage risk across a floodplain when inundated by floodwaters. By analogy with intensity scales for wind damage, an equivalent water-damage flow intensity scale has been developed that ranges from 1 (minimal effects) to 10 (major damage to most structures). This flow intensity scale, FIS, is portrayed on a map as color-coded areas of increasing flow intensity. It should prove to be a valuable tool for assessing relative risk to people and property in known flood-hazard areas.

  13. Adding Context to James Webb Space Telescope Surveys with Current and Future 21 cm Radio Observations

    NASA Astrophysics Data System (ADS)

    Beardsley, A. P.; Morales, M. F.; Lidz, A.; Malloy, M.; Sutter, P. M.

    2015-02-01

    Infrared and radio observations of the Epoch of Reionization promise to revolutionize our understanding of the cosmic dawn, and major efforts with the JWST, MWA, and HERA are underway. While measurements of the ionizing sources with infrared telescopes and the effect of these sources on the intergalactic medium with radio telescopes should be complementary, to date the wildly disparate angular resolutions and survey speeds have made connecting proposed observations difficult. In this paper we develop a method to bridge the gap between radio and infrared studies. While the radio images may not have the sensitivity and resolution to identify individual bubbles with high fidelity, by leveraging knowledge of the measured power spectrum we are able to separate regions that are likely ionized from largely neutral, providing context for the JWST observations of galaxy counts and properties in each. By providing the ionization context for infrared galaxy observations, this method can significantly enhance the science returns of JWST and other infrared observations.

  14. Multi-redshift limits on the Epoch of Reionization 21cm power spectrum from PAPER

    NASA Astrophysics Data System (ADS)

    Jacobs, Danny; Pober, Jonathan; Parsons, Aaron; Paper Team

    2015-01-01

The epoch of reionization hydrogen power spectrum is expected to vary strongly with redshift over cosmic history as star formation progressively ionizes the pervasive intergalactic hydrogen. We present an analysis of observations from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) telescope which place new limits on the HI power spectrum over the redshift range of 7.5

  15. Calibration and Imaging for next generation 21cm EoR arrays

    NASA Astrophysics Data System (ADS)

    Sullivan, Ian S.; Morales, Miguel F.; Hazelton, Bryna; Beardsley, Adam; MWA Collaboration

    2015-01-01

Next generation radio interferometer arrays such as the SKA precursor MWA and PAPER are collecting thousands of hours and petabytes of data probing the Epoch of Reionization. The exceptionally wide fields of view and deep integrations demand new precision calibration and imaging techniques to incorporate full direction- and antenna-dependent effects while remaining computationally efficient. We demonstrate results from the MWA, showing the flexible but powerful abilities of Fast Holographic Deconvolution (FHD) and describe an imaging pipeline for HERA.

  16. Low noise parametric amplifiers for radio astronomy observations at 18-21 cm wavelength

    NASA Technical Reports Server (NTRS)

    Kanevskiy, B. Z.; Veselov, V. M.; Strukov, I. A.; Etkin, V. S.

    1974-01-01

The principal characteristics and use of SHF parametric amplifiers for radiometer input devices are explored. Balanced parametric amplifiers (BPA) are considered as the SHF signal amplifiers allowing production of the amplifier circuit without a special filter to achieve decoupling. Formulas to calculate the basic parameters of a BPA are given. A modulator based on coaxial lines is discussed as the input element of the SHF. Results of laboratory tests of the receiver section and long-term stability studies of the SHF sector are presented.

  17. Will nonlinear peculiar velocity and inhomogeneous reionization spoil 21 cm cosmology from the epoch of reionization?

    PubMed

    Shapiro, Paul R; Mao, Yi; Iliev, Ilian T; Mellema, Garrelt; Datta, Kanan K; Ahn, Kyungjin; Koda, Jun

    2013-04-12

    The 21 cm background from the epoch of reionization is a promising cosmological probe: line-of-sight velocity fluctuations distort redshift, so brightness fluctuations in Fourier space depend upon angle, which linear theory shows can separate cosmological from astrophysical information. Nonlinear fluctuations in ionization, density, and velocity change this, however. The validity and accuracy of the separation scheme are tested here for the first time, by detailed reionization simulations. The scheme works reasonably well early in reionization (≲40% ionized), but not late (≳80% ionized). PMID:25167246
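The linear-theory separation invoked above rests on the angular structure of the redshift-space 21 cm power spectrum. Schematically, in the standard decomposition (with mu the cosine of the angle between the wavevector and the line of sight):

```latex
% mu = k_parallel / k, the cosine of the angle to the line of sight
P_{\Delta T}(k,\mu) = P_{\mu^0}(k) + \mu^2\,P_{\mu^2}(k) + \mu^4\,P_{\mu^4}(k),
\qquad P_{\mu^4}(k) \propto P_{\delta\delta}(k)
```

Because the mu^4 coefficient traces the matter power spectrum alone, fitting the angular dependence can in principle isolate cosmology from the astrophysical (ionization) fluctuations entering the mu^0 and mu^2 terms; the simulations described above test where nonlinearity breaks this separation.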

  18. How Ewen and Purcell discovered the 21-cm interstellar hydrogen line.

    NASA Astrophysics Data System (ADS)

    Stephan, K. D.

    1999-02-01

    The story of how Harold Irving Ewen and Edward Mills Purcell detected the first spectral line ever observed in radio astronomy, in 1951, has been told for general audiences by Robert Buderi (1996). The present article has a different purpose. The technical roots of Ewen and Purcell's achievement reveal much about the way science often depends upon "borrowed" technologies, which were not developed with the needs of science in mind. The design and construction of the equipment is described in detail. As Ewen's photographs, records, and recollections show, he and Purcell had access to an unusual combination of scientific knowledge, engineering know-how, critical hardware, and technical assistance at Harvard, in 1950 and 1951. This combination gave them a competitive edge over similar research groups in Holland and Australia, who were also striving to detect the hydrogen line, and who succeeded only weeks after the Harvard researchers did. The story also shows that Ewen and Purcell did their groundbreaking scientific work in the "small-science" style that prevailed before World War II, while receiving substantial indirect help from one of the first big-science projects at Harvard.

  19. Undersea Mapping.

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    1991-01-01

    Presented is a cooperative learning activity in which students assume different roles in an effort to produce a relief map of the ocean floor. Materials, procedures, definitions, student roles, and questions are discussed. A reproducible map for the activity is provided. (CW)

  20. Question Mapping

    ERIC Educational Resources Information Center

    Martin, Josh

    2012-01-01

    After accepting the principal position at Farmersville (TX) Junior High, the author decided to increase instructional rigor through question mapping because of the success he saw using this instructional practice at his prior campus. Teachers are the number one influence on student achievement (Marzano, 2003), so question mapping provides a…

  1. Map Adventures.

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet about maps, with seven accompanying lessons, is appropriate for students in grades K-3. Students learn basic concepts for visualizing objects from different perspectives and how to understand and use maps. Lessons in the packet center on a story about a little girl, Nikki, who rides in a hot-air balloon that gives her, and…

  2. A symbiotic approach to SETI observations: use of maps from the Westerbork Synthesis Radio Telescope

    NASA Technical Reports Server (NTRS)

    Tarter, J. C.; Israel, F. P.

    1982-01-01

High spatial resolution continuum radio maps produced by the Westerbork Synthesis Radio Telescope (WSRT) of The Netherlands at frequencies near the 21 cm HI line have been examined for anomalous sources of emission coincident with the locations of nearby bright stars. From a total of 542 stellar positions investigated, no candidates for radio stars or ETI signals were discovered to formal limits on the minimum detectable signal ranging from 7.7 x 10^-22 W/m2 to 6.4 x 10^-24 W/m2. This preliminary study has verified that data collected by radio astronomers at large synthesis arrays can profitably be analysed for SETI signals (in a non-interfering manner) provided only that the data are available in a more or less standard two-dimensional map format.

  3. Semantic Mapping.

    ERIC Educational Resources Information Center

    Johnson, Dale D.; And Others

    1986-01-01

    Describes semantic mapping, an effective strategy for vocabulary instruction that involves the categorical structuring of information in graphic form and requires students to relate new words to their own experience and prior knowledge. (HOD)

  4. Mapping Biodiversity.

    ERIC Educational Resources Information Center

    World Wildlife Fund, Washington, DC.

    This document features a lesson plan that examines how maps help scientists protect biodiversity and how plants and animals are adapted to specific ecoregions by comparing biome, ecoregion, and habitat. Samples of instruction and assessment are included. (KHR)

  5. Map Separates

    USGS Publications Warehouse

    U.S. Geological Survey

    2001-01-01

    U.S. Geological Survey (USGS) topographic maps are printed using up to six colors (black, blue, green, red, brown, and purple). To prepare your own maps or artwork based on maps, you can order separate black-and-white film positives or negatives for any color printed on a USGS topographic map, or for one or more of the groups of related features printed in the same color on the map (such as drainage and drainage names from the blue plate.) In this document, examples are shown with appropriate ink color to illustrate the various separates. When purchased, separates are black-and-white film negatives or positives. After you receive a film separate or composite from the USGS, you can crop, enlarge or reduce, and edit to add or remove details to suit your special needs. For example, you can adapt the separates for making regional and local planning maps or for doing many kinds of studies or promotions by using the features you select and then printing them in colors of your choice.

  6. Venus mapping

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Morgan, H. F.; Sucharski, Robert

    1991-01-01

Semicontrolled image mosaics of Venus, based on Magellan data, are being compiled at 1:50,000,000, 1:10,000,000, 1:5,000,000, and 1:1,000,000 scales to support the Magellan Radar Investigator (RADIG) team. The mosaics are semicontrolled in the sense that data gaps were not filled and significant cosmetic inconsistencies exist. Contours are based on preliminary radar altimetry data that are subject to revision and improvement. Final maps to support geologic mapping and other scientific investigations, to be compiled as the dataset becomes complete, will be sponsored by the Planetary Geology and Geophysics Program and/or the Venus Data Analysis Program. All maps, both semicontrolled and final, will be published as I-maps by the United States Geological Survey. All of the mapping is based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control on planets where framing cameras were used, is not feasible with the radar images of Venus, although an eventual shift of coordinate system to a revised spin-axis location is anticipated. This shift is expected to be small enough that it will affect only large-scale maps.

  7. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban areas and their surroundings, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005, and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. The specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps. PMID:26243476
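    The bias mechanism the abstract describes can be sketched numerically. The definition below (urban-minus-rural mean LST) and all temperature values are illustrative assumptions, not the paper's data or its exact UHI definition:

    ```python
    # Illustrative sketch: how an outdated urban-extent mask can bias UHI
    # intensity. UHI intensity is taken here as mean urban LST minus mean
    # LST of the non-urban surroundings (one common convention; an
    # assumption, not the paper's exact definition).

    def uhi_intensity(lst, urban_mask):
        """lst: dict pixel -> land surface temperature (deg C);
        urban_mask: set of pixels classified as urban."""
        urban = [t for p, t in lst.items() if p in urban_mask]
        rural = [t for p, t in lst.items() if p not in urban_mask]
        return sum(urban) / len(urban) - sum(rural) / len(rural)

    # Toy scene: pixels 0-3 are the long-urbanized core, 4-5 were
    # urbanized after 2000 (still hot), 6-9 remain rural (cool).
    lst = {0: 34.0, 1: 34.5, 2: 34.2, 3: 34.3, 4: 33.0, 5: 33.2,
           6: 28.0, 7: 28.4, 8: 28.2, 9: 28.1}
    mask_2010 = {0, 1, 2, 3, 4, 5}  # concurrent urban extent
    mask_2000 = {0, 1, 2, 3}        # outdated extent: misses new growth

    concurrent = uhi_intensity(lst, mask_2010)
    outdated = uhi_intensity(lst, mask_2000)
    # The outdated mask counts hot, newly urbanized pixels as "rural",
    # shrinking the apparent urban-rural temperature contrast.
    print(round(concurrent, 2), round(outdated, 2))
    ```

    Here the outdated mask deflates the estimate; under other definitions of the surrounding area the bias can change sign, which is the abstract's point about the definition mattering.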

  8. Intensive Intervention in Mathematics

    ERIC Educational Resources Information Center

    Powell, Sarah R.; Fuchs, Lynn S.

    2015-01-01

    Students who demonstrate persistent mathematics difficulties and whose performance is severely below grade level require "intensive intervention". Intensive intervention is an individualized approach to instruction that is more demanding and concentrated than Tier 2 intervention efforts. We present the elements of intensive intervention…

  9. Parametric mapping

    NASA Astrophysics Data System (ADS)

    Branch, Allan C.

    1998-01-01

    Parametric mapping (PM) lies midway between older and proven artificial landmark based guidance systems and yet-to-be-realized vision-based guidance systems. It is a simple yet effective natural landmark recognition system offering freedom from the need for enhancements to the environment. Development of PM systems can be inexpensive and rapid, and they are starting to appear in commercial and industrial applications. Together with a description of the structural framework developed to generically describe robot mobility, this paper clearly illustrates the parts of any mobile robot navigation and guidance system and their interrelationships. Among other things, it introduces the importance of the richness of the reference map (and not necessarily the sensor map), shows the benefits of dynamic path planners in removing the need for separate object avoidance, and demonstrates the independence of the PM system from the type of sensor input.

  10. Determination of Jet Noise Radiation Patterns and Source Locations using 2-Dimensional Intensity Measurements

    NASA Technical Reports Server (NTRS)

    Jaeger, S. M.; Allen, C. S.

    1999-01-01

    Contents include the following: (1) Outline of jet noise extrapolation to the far field. (2) Two-dimensional sound intensity. (3) Anechoic chamber cold jet test. (4) Results: intensity levels, vector maps, source location centroids, and directivity. (5) Conclusions.

  11. Memphis Maps.

    ERIC Educational Resources Information Center

    Hyland, Stanley; Cox, David; Martin, Cindy

    1998-01-01

    The Memphis Maps program, a collaborative effort of Memphis (Tennessee) educational institutions, public agencies, a bank, and community programs, trains local students in Geographic Information Systems technology and provides the community with valuable demographic and assessment information. The program is described, and factors contributing to…

  12. Intensive Care, Intense Conflict: A Balanced Approach.

    PubMed

    Paquette, Erin Talati; Kolaitis, Irini N

    2015-01-01

    Caring for a child in a pediatric intensive care unit is emotionally and physically challenging and often leads to conflict. Skilled mediators may not always be available to aid in conflict resolution. Careproviders at all levels of training are responsible for managing difficult conversations with families and can often prevent escalation of conflict. Bioethics mediators have acknowledged the important contribution of mediation training in improving clinicians' skills in conflict management. Familiarizing careproviders with basic mediation techniques is an important step towards preventing escalation of conflict. While training in effective communication is crucial, a sense of fairness and justice that may only come with the introduction of a skilled, neutral third party is equally important. For intense conflict, we advocate for early recognition, comfort, and preparedness through training of clinicians in de-escalation and optimal communication, along with the use of more formally trained third-party mediators, as required. PMID:26752393

  13. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial
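    The size heuristic the abstract mentions (road lines are large elongated objects, characters are small connected objects) can be sketched with a connected-component pass. The 4-connectivity and the size threshold are illustrative assumptions for this toy example, not the thesis's actual algorithm:

    ```python
    from collections import deque

    # Illustrative sketch of the size heuristic: characters form small
    # connected components while road lines form large elongated ones, so
    # labeling components and thresholding their pixel count separates a
    # text layer from a road layer.

    def connected_components(pixels):
        """pixels: set of (row, col) foreground coordinates."""
        seen, comps = set(), []
        for start in pixels:
            if start in seen:
                continue
            seen.add(start)
            comp, queue = [], deque([start])
            while queue:
                r, c = queue.popleft()
                comp.append((r, c))
                for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if nb in pixels and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
            comps.append(comp)
        return comps

    def split_layers(pixels, max_text_size=10):
        """Assumed threshold: components of <= max_text_size pixels are text."""
        text, roads = set(), set()
        for comp in connected_components(pixels):
            (text if len(comp) <= max_text_size else roads).update(comp)
        return text, roads

    # A long horizontal "road" and a small 2x2 "character" blob.
    road = {(0, c) for c in range(30)}
    char = {(5, 5), (5, 6), (6, 5), (6, 6)}
    text_layer, road_layer = split_layers(road | char)
    ```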

  14. Genetic mapping in grapevine using a SNP microarray: intensity values

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genotyping microarrays are widely used for genome wide association studies, but in high-diversity organisms, the quality of SNP calls can be diminished by genetic variation near the assayed nucleotide. To address this limitation in grapevine, we developed a simple heuristic that uses hybridization i...

  15. Mapping Malaria Transmission Intensity in Malawi, 2000–2010

    PubMed Central

    Bennett, Adam; Kazembe, Lawrence; Mathanga, Don P.; Kinyoki, Damaris; Ali, Doreen; Snow, Robert W.; Noor, Abdisalan M.

    2013-01-01

    Substantial development assistance has been directed towards reducing the high malaria burden in Malawi over the past decade. We assessed changes in transmission over this period of malaria control scale-up by compiling community Plasmodium falciparum parasite rate (PfPR) data during 2000–2011 and used model-based geostatistical methods to predict mean PfPR2–10 in 2000, 2005, and 2010. In addition, we calculated population-adjusted prevalences and populations at risk by district to inform malaria control program priority setting. The national population-adjusted PfPR2–10 was 37% in 2010, and we found no evidence of change over this period of scale-up. The entire population of Malawi is under meso-endemic transmission risk, with those in districts along the shore of Lake Malawi and the Shire River Valley under highest risk. The lack of change in prevalence confirms modeling predictions that, when compared with lower transmission, prevalence reductions in high transmission settings require greater investment and longer time scales. PMID:24062477

  16. Light intensity compressor

    DOEpatents

    Rushford, Michael C.

    1990-01-01

    In a system for recording images having vastly differing light intensities over the face of the image, a light intensity compressor is provided that utilizes the properties of twisted nematic liquid crystals to compress the image intensity. A photoconductor or photodiode material that is responsive to the wavelength of radiation being recorded is placed adjacent a layer of twisted nematic liquid crystal material. An electric potential is applied to a pair of electrodes that are disposed outside of the liquid crystal/photoconductor arrangement to provide an electric field in the vicinity of the liquid crystal material. The electrodes are substantially transparent to the form of radiation being recorded. A pair of crossed polarizers are provided on opposite sides of the liquid crystal. The front polarizer linearly polarizes the light, while the back polarizer cooperates with the front polarizer and the liquid crystal material to compress the intensity of a viewed scene. Light incident upon the intensity compressor activates the photoconductor in proportion to the intensity of the light, thereby varying the field applied to the liquid crystal. The increased field causes the liquid crystal to have less of a twisting effect on the incident linearly polarized light, which will cause an increased percentage of the light to be absorbed by the back polarizer. The intensity of an image may be compressed by forming an image on the light intensity compressor.

  17. Light intensity compressor

    DOEpatents

    Rushford, Michael C.

    1990-02-06

    In a system for recording images having vastly differing light intensities over the face of the image, a light intensity compressor is provided that utilizes the properties of twisted nematic liquid crystals to compress the image intensity. A photoconductor or photodiode material that is responsive to the wavelength of radiation being recorded is placed adjacent a layer of twisted nematic liquid crystal material. An electric potential is applied to a pair of electrodes that are disposed outside of the liquid crystal/photoconductor arrangement to provide an electric field in the vicinity of the liquid crystal material. The electrodes are substantially transparent to the form of radiation being recorded. A pair of crossed polarizers are provided on opposite sides of the liquid crystal. The front polarizer linearly polarizes the light, while the back polarizer cooperates with the front polarizer and the liquid crystal material to compress the intensity of a viewed scene. Light incident upon the intensity compressor activates the photoconductor in proportion to the intensity of the light, thereby varying the field applied to the liquid crystal. The increased field causes the liquid crystal to have less of a twisting effect on the incident linearly polarized light, which will cause an increased percentage of the light to be absorbed by the back polarizer. The intensity of an image may be compressed by forming an image on the light intensity compressor.

  18. Intensity Biased PSP Measurement

    NASA Technical Reports Server (NTRS)

    Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.

    2000-01-01

    The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub 0)/I) and pressure ratio (P/P(sub 0)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which are otherwise assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction for the output intensity due to light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
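    The linear Stern-Volmer calibration the abstract refers to, together with the NPSP-referenced correction, can be sketched as below. The coefficients A, B, the reference pressure, and all intensities are made-up illustration values, not calibration data from the study:

    ```python
    # Illustrative sketch of the linear Stern-Volmer calibration,
    # I(sub 0)/I = A + B * (P/P(sub 0)), inverted to recover pressure,
    # plus the in situ NPSP-referenced intensity ratio.

    def pressure_from_intensity(i0_over_i, a, b, p0):
        """Invert the linear Stern-Volmer relation for pressure."""
        return p0 * (i0_over_i - a) / b

    def corrected_ratio(i_psp, i_npsp, i_psp_ref, i_npsp_ref):
        """Reference each PSP intensity to the non-pressure-sensitive
        paint (NPSP) intensity, cancelling illumination and coating
        variations, then form the wind-off/wind-on ratio."""
        return (i_psp_ref / i_npsp_ref) / (i_psp / i_npsp)

    A, B, P0 = 0.2, 0.8, 14.7  # hypothetical coefficients; P0 in psia

    # At reference conditions I0/I = A + B = 1, recovering P0 itself.
    print(pressure_from_intensity(1.0, A, B, P0))

    # A halved PSP intensity at constant NPSP intensity gives I0/I = 2.
    ratio = corrected_ratio(50.0, 100.0, 100.0, 100.0)
    print(pressure_from_intensity(ratio, A, B, P0))
    ```

    The nonlinearity at low pressures that the abstract discusses is precisely a failure of this linear form, so a real calibration would replace the fixed A and B with a fitted, pressure-dependent curve.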

  19. Intensity Biased PSP Measurement

    NASA Technical Reports Server (NTRS)

    Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.

    2000-01-01

    The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub o)/I) and pressure ratio (P/P(sub o)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which are otherwise assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction for the output intensity due to light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.

  20. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  1. Seismicity map of the state of Georgia

    USGS Publications Warehouse

    Reagor, B. Glen; Stover, C.W.; Algermissen, S.T.; Long, L.T.

    1991-01-01

    This map is one of a series of seismicity maps produced by the U.S. Geological Survey that show earthquake data of individual states or groups of states at the scale of 1:1,000,000. This map shows only those earthquakes with epicenters located within the boundaries of Georgia, even though earthquakes in nearby states or countries may have been felt or may have caused damage in Georgia. The data in table 1 were used to compile the seismicity map; these data are a corrected, expanded, and updated (through 1987) version of the data used by Algermissen (1969) for a study of seismic risk in the United States. The locations and intensities of some earthquakes were revised, and intensities were assigned where none had been before. Many earthquakes were added to the original list from new data sources as well as from some old data sources that had not been previously used. The data in table 1 represent best estimates of the location of the epicenter, magnitude, and intensity of each earthquake on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed, but not all, especially for earthquakes that occurred before seismic instruments were universally used. The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the Arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.
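    The grouping procedure described for the map symbols (round coordinates to the nearest tenth of a degree, count earthquakes per grouped location, and report the maximum intensity with the latest year it was recorded) can be sketched directly. Field names and the sample records are hypothetical, not USGS data:

    ```python
    # Illustrative sketch of the epicenter grouping described above.

    def group_epicenters(quakes):
        groups = {}
        for q in quakes:
            # Round to the nearest tenth of a degree, as on the map.
            key = (round(q["lat"], 1), round(q["lon"], 1))
            groups.setdefault(key, []).append(q)
        summary = {}
        for key, qs in groups.items():
            intensities = [q["intensity"] for q in qs
                           if q["intensity"] is not None]
            max_int = max(intensities) if intensities else None
            if max_int is not None:
                # Latest year at which the maximum intensity was recorded.
                years = [q["year"] for q in qs if q["intensity"] == max_int]
            else:
                years = [q["year"] for q in qs]
            summary[key] = {"count": len(qs), "max_intensity": max_int,
                            "year": max(years)}
        return summary

    quakes = [
        {"lat": 33.44, "lon": -82.31, "year": 1875, "intensity": 6},
        {"lat": 33.41, "lon": -82.29, "year": 1964, "intensity": 6},
        {"lat": 33.41, "lon": -82.29, "year": 1972, "intensity": 3},
        {"lat": 34.80, "lon": -83.10, "year": 1913, "intensity": None},
    ]
    summary = group_epicenters(quakes)
    # Three events collapse onto (33.4, -82.3); their maximum intensity
    # (VI) was last recorded there in 1964.
    ```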

  2. Exploring intense attosecond pulses

    NASA Astrophysics Data System (ADS)

    Charalambidis, D.; Tzallas, P.; Benis, E. P.; Skantzakis, E.; Maravelias, G.; Nikolopoulos, L. A. A.; Peralta Conde, A.; Tsakiris, G. D.

    2008-02-01

    After introducing the importance of non-linear processes in the extreme-ultra-violet (XUV) spectral regime to the attosecond (asec) pulse metrology and time domain applications, we present two successfully implemented techniques with excellent prospects in generating intense asec pulse trains and isolated asec pulses, respectively. For the generation of pulse trains two-color harmonic generation is exploited. The interferometric polarization gating technique appropriate for the generation of intense isolated asec pulses is discussed and compared to other relevant approaches.

  3. Improving Visual Saliency Computing With Emotion Intensity.

    PubMed

    Liu, Huiying; Xu, Min; Wang, Jinqiao; Rao, Tianrong; Burnett, Ian

    2016-06-01

    Saliency maps that integrate individual feature maps into a global measure of visual attention are widely used to estimate human gaze density. Most of the existing methods consider low-level visual features and locations of objects, and/or emphasize the spatial position with center prior. Recent psychology research suggests that emotions strongly influence human visual attention. In this paper, we explore the influence of emotional content on visual attention. On top of the traditional bottom-up saliency map generation, our saliency map is generated in cooperation with three emotion factors, i.e., general emotional content, facial expression intensity, and emotional object locations. Experiments, carried out on the National University of Singapore Eye Fixation data set (a public eye-tracking data set), demonstrate that incorporating emotion does improve the quality of visual saliency maps computed by bottom-up approaches for gaze density estimation. Our method increases the area under the receiver operating characteristic curve by about 0.1 on average, compared with four baseline bottom-up approaches (Itti's, attention based on information maximization, saliency using natural statistics, and graph-based visual saliency). PMID:27214350
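    The evaluation metric quoted above (area under the ROC curve for saliency scores against fixation labels) can be computed with the rank-based formulation. The scores and labels below are toy values, not the paper's data:

    ```python
    # Illustrative sketch of ROC AUC via the rank-based (Mann-Whitney)
    # formulation: the fraction of (fixated, non-fixated) pixel pairs in
    # which the fixated pixel receives the higher saliency score.

    def roc_auc(scores, labels):
        """scores: per-pixel saliency; labels: 1 = human fixation."""
        pos = [s for s, l in zip(scores, labels) if l == 1]
        neg = [s for s, l in zip(scores, labels) if l == 0]
        # Count correctly ranked pairs, with ties worth half.
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    auc = roc_auc([0.9, 0.8, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0])
    print(round(auc, 3))
    ```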

  4. Defect mapping system

    DOEpatents

    Sopori, B.L.

    1995-04-11

    Apparatus for detecting and mapping defects in the surfaces of polycrystalline materials in a manner that distinguishes dislocation pits from grain boundaries includes a laser for illuminating a wide spot on the surface of the material, a light integrating sphere with apertures for capturing light scattered by etched dislocation pits in an intermediate range away from specular reflection while allowing light scattered by etched grain boundaries in a near range from specular reflection to pass through, and optical detection devices for detecting and measuring intensities of the respective intermediate scattered light and near specular scattered light. A center blocking aperture or filter can be used to screen out specular reflected light, which would be reflected by nondefect portions of the polycrystalline material surface. An X-Y translation stage for mounting the polycrystalline material and signal processing and computer equipment accommodate raster mapping, recording, and displaying of respective dislocation and grain boundary defect densities. A special etch procedure is included, which prepares the polycrystalline material surface to produce distinguishable intermediate and near specular light scattering in patterns that have statistical relevance to the dislocation and grain boundary defect densities. 20 figures.

  5. Defect mapping system

    DOEpatents

    Sopori, Bhushan L.

    1995-01-01

    Apparatus for detecting and mapping defects in the surfaces of polycrystalline materials in a manner that distinguishes dislocation pits from grain boundaries includes a laser for illuminating a wide spot on the surface of the material, a light integrating sphere with apertures for capturing light scattered by etched dislocation pits in an intermediate range away from specular reflection while allowing light scattered by etched grain boundaries in a near range from specular reflection to pass through, and optical detection devices for detecting and measuring intensities of the respective intermediate scattered light and near specular scattered light. A center blocking aperture or filter can be used to screen out specular reflected light, which would be reflected by nondefect portions of the polycrystalline material surface. An X-Y translation stage for mounting the polycrystalline material and signal processing and computer equipment accommodate raster mapping, recording, and displaying of respective dislocation and grain boundary defect densities. A special etch procedure is included, which prepares the polycrystalline material surface to produce distinguishable intermediate and near specular light scattering in patterns that have statistical relevance to the dislocation and grain boundary defect densities.

  6. Diffuse gamma radiation. [intensity, energy spectrum and spatial distribution from SAS 2 observations

    NASA Technical Reports Server (NTRS)

    Fichtel, C. E.; Simpson, G. A.; Thompson, D. J.

    1978-01-01

    Results are reported for an investigation of the intensity, energy spectrum, and spatial distribution of the diffuse gamma radiation detected by SAS 2 away from the galactic plane in the energy range above 35 MeV. The gamma-ray data are compared with relevant data obtained at other wavelengths, including 21-cm emission, radio continuum radiation, and the limited UV and radio information on local molecular hydrogen. It is found that there are two quite distinct components to the diffuse radiation, one of which shows a good correlation with the galactic matter distribution and continuum radiation, while the other has a much steeper energy spectrum and appears to be isotropic at least on a coarse scale. The galactic component is interpreted in terms of its implications for both local and more distant regions of the Galaxy. The apparently isotropic radiation is discussed partly with regard to the constraints placed on possible models by the steep energy spectrum, the observed intensity, and an upper limit on the anisotropy.

  7. GHIGLS: H I Mapping at Intermediate Galactic Latitude Using the Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Martin, P. G.; Blagrave, K. P. M.; Lockman, Felix J.; Pinheiro Gonçalves, D.; Boothroyd, A. I.; Joncas, G.; Miville-Deschênes, M.-A.; Stephan, G.

    2015-08-01

    This paper introduces and describes the data cubes from GHIGLS, deep Green Bank Telescope (GBT) surveys of the 21 cm line emission of H I in 37 targeted fields at intermediate Galactic latitude. The GHIGLS fields together cover over 1000 deg² at 9.55 arcmin spatial resolution. The H I spectra have an effective velocity resolution of about 1.0 km s⁻¹ and cover at least −450 < v_LSR < +250 km s⁻¹, extending to v_LSR < +450 km s⁻¹ for most fields. As illustrated with various visualizations of the H I data cubes, GHIGLS highlights that even at intermediate Galactic latitude the interstellar medium is very complex. Spatial structure of the H I is quantified through power spectra of maps of the integrated line emission or column density, N_HI. For our featured representative field, centered on the north ecliptic pole, the scaling exponents in power-law representations of the power spectra of N_HI maps for low-, intermediate-, and high-velocity gas components (LVC, IVC, and HVC) are −2.86 ± 0.04, −2.69 ± 0.04, and −2.59 ± 0.07, respectively. After Gaussian decomposition of the line profiles, N_HI maps were also made corresponding to the narrow-line and broad-line components in the LVC range; for the narrow-line map the exponent is −1.9 ± 0.1, reflecting more small-scale structure in the cold neutral medium (CNM). There is evidence that filamentary structure in the H I CNM is oriented parallel to the Galactic magnetic field. The power spectrum analysis also offers insight into the various contributions to uncertainty in the data, yielding values close to those obtained using diagnostics developed in our earlier independent analysis. The effect of 21 cm line opacity on the GHIGLS N_HI maps is estimated. Comparisons of the GBT data in a few of the GHIGLS fields with data from the EBHIS and GASS surveys explore potential issues in data reduction and calibration and reveal good agreement. The high
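    The quoted scaling exponents come from power-law fits to power spectra, P(k) ∝ k^γ, which reduce to a least-squares slope in log-log space. The spectrum below is synthetic and noise-free, an illustration rather than GHIGLS data:

    ```python
    import math

    # Illustrative sketch: recover a power-law exponent gamma from a
    # power spectrum P(k) ~ k**gamma by least squares in log-log space.

    def fit_power_law_exponent(k, p):
        logk = [math.log(x) for x in k]
        logp = [math.log(y) for y in p]
        n = len(logk)
        mk, mp = sum(logk) / n, sum(logp) / n
        num = sum((a - mk) * (b - mp) for a, b in zip(logk, logp))
        den = sum((a - mk) ** 2 for a in logk)
        return num / den  # slope of log P vs log k

    ks = [0.1 * i for i in range(1, 20)]
    ps = [k ** -2.86 for k in ks]  # gamma = -2.86, like the LVC exponent
    print(round(fit_power_law_exponent(ks, ps), 2))
    ```

    On real maps the fit would weight binned spectral estimates and subtract the noise floor before fitting, but the slope extraction itself is this simple.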

  8. High intensity hadron accelerators

    SciTech Connect

    Teng, L.C.

    1989-05-01

    This rapporteur report consists mainly of two parts. Part I is an abridged review of the status of all High Intensity Hadron Accelerator projects in the world in semi-tabulated form for quick reference and comparison. Part II is a brief discussion of the salient features of the different technologies involved. The discussion is based mainly on my personal experiences and opinions, tempered, I hope, by the discussions I participated in in the various parallel sessions of the workshop. In addition, appended at the end is my evaluation and expression of the merits of high intensity hadron accelerators as research facilities for nuclear and particle physics.

  9. Human Mind Maps

    ERIC Educational Resources Information Center

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  10. Interpreting the Unresolved Intensity of Cosmologically Redshifted Line Radiation

    NASA Astrophysics Data System (ADS)

    Switzer, E. R.; Chang, T.-C.; Masui, K. W.; Pen, U.-L.; Voytek, T. C.

    2015-12-01

    Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically ~10²-10³ times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.
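    The core idea, that foregrounds occupy a few bright spectral modes which can be estimated from the data themselves, can be illustrated with a toy mode-removal sketch. This is a stand-in for the paper's quadratic estimators, not its actual method:

    ```python
    # Toy illustration: estimate the dominant frequency-frequency mode
    # from the data by power iteration and project it out of each line
    # of sight, removing a foreground ~100x brighter than the signal.

    def dominant_mode(cov, iters=200):
        """Leading eigenvector of a small covariance matrix."""
        n = len(cov)
        v = [1.0] * n
        for _ in range(iters):
            w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        return v

    def remove_mode(spectra, mode):
        """Subtract each spectrum's projection onto the mode."""
        cleaned = []
        for s in spectra:
            amp = sum(a * b for a, b in zip(s, mode))
            cleaned.append([a - amp * b for a, b in zip(s, mode)])
        return cleaned

    # Two lines of sight, three frequency channels: a bright, spectrally
    # smooth foreground plus a faint fluctuating signal.
    foreground = [100.0, 100.0, 100.0]
    signals = [[0.5, -0.3, 0.1], [-0.2, 0.4, -0.1]]
    spectra = [[f + s for f, s in zip(foreground, sig)] for sig in signals]

    n = len(foreground)
    cov = [[sum(s[i] * s[j] for s in spectra) / len(spectra)
            for j in range(n)] for i in range(n)]
    mode = dominant_mode(cov)
    cleaned = remove_mode(spectra, mode)
    # The bright foreground mode is gone; sub-unit residuals remain.
    ```

    The projection also removes the part of the signal that correlates with the foreground mode, which is the signal-attenuation bias the abstract says must be estimated and corrected.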

  11. Interpreting The Unresolved Intensity Of Cosmologically Redshifted Line Radiation

    NASA Technical Reports Server (NTRS)

    Switzer, E. R.; Chang, T.-C.; Masui, K. W.; Pen, U.-L.; Voytek, T. C.

    2016-01-01

    Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically approximately 10(sup 2)-10(sup 3) times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.

  12. Concept Mapping

    PubMed Central

    Brennan, Laura K.; Brownson, Ross C.; Kelly, Cheryl; Ivey, Melissa K.; Leviton, Laura C.

    2016-01-01

    Background From 2003 to 2008, 25 cross-sector, multidisciplinary community partnerships funded through the Active Living by Design (ALbD) national program designed, planned, and implemented policy and environmental changes, with complementary programs and promotions. This paper describes the use of concept-mapping methods to gain insights into promising active living intervention strategies based on the collective experience of community representatives implementing ALbD initiatives. Methods Using Concept Systems software, community representatives (n=43) anonymously generated actions and changes in their communities to support active living (183 original statements, 79 condensed statements). Next, respondents (n=26, from 23 partnerships) sorted the 79 statements into self-created categories, or active living intervention approaches. Respondents then rated statements based on their perceptions of the most important strategies for creating community changes (n=25, from 22 partnerships) and increasing community rates of physical activity (n=23, from 20 partnerships). Cluster analysis and multidimensional scaling were used to describe data patterns. Results ALbD community partnerships identified three active living intervention approaches with the greatest perceived importance to create community change and increase population levels of physical activity: changes to the built and natural environment, partnership and collaboration efforts, and land-use and transportation policies. The relative importance of intervention approaches varied according to subgroups of partnerships working with different populations. Conclusions Decision makers, practitioners, and community residents can incorporate what has been learned from the 25 community partnerships to prioritize active living policy, physical project, promotional, and programmatic strategies for work in different populations and settings. PMID:23079266

  13. Learning and Intensive Instruction.

    ERIC Educational Resources Information Center

    Murphy, Dennis R.

    1979-01-01

    Reports on the results of an intensive two-week economics institute conducted at Emory University in 1978 to help high school classroom teachers comply with a mandate that all students must take a course in principles of economics, business, and free enterprise. (DB)

  14. High-frequency multi-wavelength acoustic power maps

    NASA Astrophysics Data System (ADS)

    Hill, Frank; Ladenkov, Oleg; Ehgamberdiev, Shuhrat; Chou, Dean-Yi

    2001-01-01

Acoustic power maps have been constructed using SOHO/MDI velocity and intensity data in Ni I 6768; NSO High-L Helioseismometer (HLH) Ca K intensity; and Taiwan Oscillation Network (TON) intensity in Ca K. The HLH data provide maps up to a frequency of 11.9 mHz, substantially higher than the usual 8.33 mHz. The Ca K observations show a surprisingly strong enhancement of power within a sunspot at all temporal frequencies, while the Ni I data show the well-known suppression of power. Tests suggest that this apparent acoustic enhancement is the result of strong intensity gradients observed through terrestrial seeing.

  15. Maps & minds : mapping through the ages

    USGS Publications Warehouse

    U.S. Geological Survey

    1984-01-01

    Throughout time, maps have expressed our understanding of our world. Human affairs have been influenced strongly by the quality of maps available to us at the major turning points in our history. "Maps & Minds" traces the ebb and flow of a few central ideas in the mainstream of mapping. Our expanding knowledge of our cosmic neighborhood stems largely from a small number of simple but grand ideas, vigorously pursued.

  16. Variable Sampling Mapping

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey, S.; Aronstein, David L.; Dean, Bruce H.; Lyon, Richard G.

    2012-01-01

The performance of an optical system (for example, a telescope) is limited by the misalignments and manufacturing imperfections of the optical elements in the system. The impact of these misalignments and imperfections can be quantified by the phase variations imparted on light traveling through the system. Phase retrieval is a methodology for determining these variations. Phase retrieval uses images taken with the optical system using a light source of known shape and characteristics. Unlike interferometric methods, which require an optical reference for comparison, and unlike Shack-Hartmann wavefront sensors that require special optical hardware at the optical system's exit pupil, phase retrieval is an in situ, image-based method for determining the phase variations of light at the system's exit pupil. Phase retrieval can be used both as an optical metrology tool (during fabrication of optical surfaces and assembly of optical systems) and as a sensor used in active, closed-loop control of an optical system, to optimize performance. One class of phase-retrieval algorithms is the iterative transform algorithm (ITA). ITAs estimate the phase variations by iteratively enforcing known constraints in the exit pupil and at the detector, determined from modeled or measured data. The Variable Sampling Mapping (VSM) technique is a new method for enforcing these constraints in ITAs. VSM is an open framework for addressing a wide range of issues that have previously been considered detrimental to high-accuracy phase retrieval, including undersampled images, broadband illumination, images taken at or near best focus, chromatic aberrations, jitter or vibration of the optical system or detector, and dead or noisy detector pixels. The VSM is a model-to-data mapping procedure.
In VSM, fully sampled electric fields at multiple wavelengths are modeled inside the phase-retrieval algorithm, and then these fields are mapped to intensities on the light detector, using the properties
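The iterative transform algorithms that VSM plugs into can be illustrated with the classic Gerchberg-Saxton scheme, which alternates between enforcing the known pupil amplitude and the measured detector intensity. This is a generic sketch, not the VSM model-to-data mapping itself; the pupil, aberration, and iteration count are all illustrative:

```python
import numpy as np

def gerchberg_saxton(pupil_amp, target_intensity, n_iter=200, seed=0):
    """Generic iterative transform phase retrieval (Gerchberg-Saxton).

    Alternates between the pupil plane (enforce the known amplitude) and
    the detector plane (enforce the measured intensity), keeping the
    current phase estimate at each projection.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    field = pupil_amp * np.exp(1j * phase)
    target_amp = np.sqrt(target_intensity)
    for _ in range(n_iter):
        det = np.fft.fft2(field)                       # propagate to detector
        det = target_amp * np.exp(1j * np.angle(det))  # enforce measured intensity
        back = np.fft.ifft2(det)                       # propagate back to pupil
        field = pupil_amp * np.exp(1j * np.angle(back))  # enforce pupil amplitude
    return np.angle(field)

# Round-trip check on a synthetic aberration (hypothetical astigmatism):
n = 32
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (x**2 + y**2 <= 1).astype(float)
true_phase = 0.5 * (x**2 - y**2) * pupil
intensity = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))**2
est = gerchberg_saxton(pupil, intensity)
```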

  17. Mapping: A Course.

    ERIC Educational Resources Information Center

    Whitmore, Paul M.

    1988-01-01

    Reviews the history of cartography. Describes the contributions of Strabo and Ptolemy in early maps. Identifies the work of Gerhard Mercator as the most important advancement in mapping. Discusses present mapping standards from history. (CW)

  18. Saliency Mapping Enhanced by Structure Tensor

    PubMed Central

    He, Zhiyong; Chen, Xin; Sun, Lining

    2015-01-01

We propose a novel efficient algorithm for computing visual saliency, which is based on the computation architecture of the Itti model. As one of the well-known bottom-up visual saliency models, the Itti model evaluates three low-level features, color, intensity, and orientation, and then generates multiscale activation maps. Finally, a saliency map is aggregated with multiscale fusion. In our method, the orientation feature is replaced by edge and corner features extracted by a linear structure tensor. These features are then used to generate a contour activation map, and all activation maps are directly combined into a saliency map. Compared to the Itti method, our method is more computationally efficient because the structure tensor is cheaper to compute than the Gabor filter used for the orientation feature, and our aggregation is a direct method instead of the multiscale operator. Experiments on Bruce's dataset show that our method is a strong contender for the state of the art. PMID:26788050
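The structure-tensor features the abstract substitutes for Gabor orientation can be sketched with plain NumPy: locally average the outer products of the image gradients, then read edge strength from the tensor's trace and corner strength from its determinant. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def box3(a):
    """3x3 box filter via zero padding (local smoothing of tensor terms)."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def structure_tensor_maps(img):
    """Edge and corner strength from a locally averaged structure tensor."""
    iy, ix = np.gradient(img.astype(float))
    jxx, jyy, jxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    trace = jxx + jyy             # large on edges and corners
    det = jxx * jyy - jxy * jxy   # large only where gradient direction turns (corners)
    return trace, det

# Toy image: a bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edge, corner = structure_tensor_maps(img)
# The corner response peaks at the square's corners, while the edge
# response is also strong along the straight sides.
```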

  19. Development of Ontario ShakeMaps

    NASA Astrophysics Data System (ADS)

    Atkinson, G. M.; Kaka, S. I.; Soh, S. L.

    2004-05-01

    We have developed automated procedures to produce "ShakeMaps" in near-real-time for earthquakes in southern and central Ontario. ShakeMaps are maps that show the intensity of ground shaking at locations throughout the region, for purposes of providing rapid public, planning and emergency response information in the immediate aftermath of local and regional earthquakes. The Ontario ShakeMap program continually accesses real-time data from seismographic stations of the POLARIS (Portable Observatories for Lithospheric Analysis and Research Investigating Seismicity) and CNSN (Canadian National Seismographic Network) arrays. When an earthquake is detected, ShakeMap uses the data to find the centroid location and magnitude of the event. The centroid is a geographic location near the largest recorded ground motion, from which the ground motion appears to radiate (based on the pattern of observed amplitudes in the region). The centroid magnitude is the earthquake magnitude that best explains the observed ground motions, given the centroid location and regional ground motion relations. A modified version of the regional ground motion relation of Atkinson and Boore (1995), giving peak ground velocity (PGV) as a function of magnitude and distance, is used in the determination of the centroid's location and magnitude. ShakeMap uses a combination of computed ground motions that are based on the centroid and the regional PGV ground-motion relation, along with the actual measured ground motions at all stations, to create a contour map of PGV. The PGV map is also translated into a map of felt intensity/damage, using a relationship between PGV and Modified Mercalli Intensity. The maps are still under development, as improvements are required in the following aspects: (i) determination of site response factors throughout the region; (ii) development of improved predictive relations for PGV from earthquake magnitude and distance; and (iii) implementation of maps for other ground
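The final step described above, translating PGV into felt intensity, is commonly done with a relation of the form MMI = a + b*log10(PGV). The coefficients below are illustrative placeholders, not the fitted values used by the Ontario system:

```python
import math

def pgv_to_mmi(pgv_cm_s, a=3.8, b=2.1):
    """Map peak ground velocity (cm/s) to Modified Mercalli Intensity.

    Linear-in-log form MMI = a + b*log10(PGV), clipped to the I-X range.
    The coefficients a and b are illustrative placeholders, not the
    regression values of any published PGV-MMI relation.
    """
    if pgv_cm_s <= 0:
        return 1.0
    mmi = a + b * math.log10(pgv_cm_s)
    return min(max(mmi, 1.0), 10.0)

for pgv in (0.1, 1.0, 10.0):
    print(pgv, round(pgv_to_mmi(pgv), 1))
```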

  20. Intense fusion neutron sources

    NASA Astrophysics Data System (ADS)

    Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.

    2010-04-01

The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10^15-10^21 neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10^20 neutrons/s may radically change the structure of power industry and considerably influence the fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.

  1. NEUTRON FLUX INTENSITY DETECTION

    DOEpatents

    Russell, J.T.

    1964-04-21

A method of measuring the instantaneous intensity of neutron flux in the core of a nuclear reactor is described. A target gas capable of being transmuted by neutron bombardment to a product having a resonance absorption line at a particular microwave frequency is passed through the core of the reactor. Frequency-modulated microwave energy is passed through the target gas and the attenuation of the energy due to the formation of the transmuted product is measured. (AEC)

  2. Water intensity of transportation.

    PubMed

    King, Carey W; Webber, Michael E

    2008-11-01

As the need for alternative transportation fuels increases, it is important to understand the many effects of introducing fuels based upon feedstocks other than petroleum. Water intensity in "gallons of water per mile traveled" is one method to measure these effects on the consumer level. In this paper we investigate the water intensity for light duty vehicle (LDV) travel using selected fuels based upon petroleum, natural gas, unconventional fossil fuels, hydrogen, electricity, and two biofuels (ethanol from corn and biodiesel from soy). Fuels more directly derived from fossil fuels are less water intensive than those derived either indirectly from fossil fuels (e.g., through electricity generation) or directly from biomass. The lowest water consumptive (<0.15 gal H2O/mile) and withdrawal (<1 gal H2O/mile) rates are for LDVs using conventional petroleum-based gasoline and diesel, nonirrigated biofuels, hydrogen derived from methane or electrolysis via nonthermal renewable electricity, and electricity derived from nonthermal renewable sources. LDVs running on electricity and hydrogen derived from the aggregate U.S. grid (heavily based upon fossil fuel and nuclear steam-electric power generation) withdraw 5-20 times and consume nearly 2-5 times more water than by using petroleum gasoline. The water intensities (gal H2O/mile) of LDVs operating on biofuels derived from crops irrigated in the United States at average rates are 28 and 36 for corn ethanol (E85) for consumption and withdrawal, respectively. For soy-derived biodiesel the average consumption and withdrawal rates are 8 and 10 gal H2O/mile. PMID:19031873
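The "gallons of water per mile" metric above is a simple chain of unit conversions: water embedded per gallon of fuel divided by the vehicle's fuel economy. A sketch with hypothetical numbers (not the paper's data):

```python
def water_intensity_per_mile(gal_water_per_gal_fuel, miles_per_gallon):
    """Water intensity of driving: gallons of water per mile traveled.

    (gal H2O / gal fuel) / (miles / gal fuel) = gal H2O / mile.
    """
    return gal_water_per_gal_fuel / miles_per_gallon

# Hypothetical numbers: a fuel requiring 25 gal H2O per gal of fuel,
# burned in a vehicle achieving 25 miles per gallon -> 1 gal H2O/mile.
print(water_intensity_per_mile(25.0, 25.0))
```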

  3. Null Stellar Intensity Interferometry

    NASA Astrophysics Data System (ADS)

    Tan, P. K.; Chia, C. M.; Han, W. D.; Chan, A. H.; Kurtsiefer, C.

    2014-04-01

Since the discovery of the first exoplanet in 1989, over 850 candidates have been verified (Schneider 2012), yet few are similar to our Earth in terms of mass and size. We therefore propose the revival and improvement of optical intensity interferometry to achieve sub-milliarcsecond resolution; the technique also promises to be less sensitive to weather conditions, light pollution and optomechanical alignments, while requiring baselines of less than 100 m.

  4. Intense ion beam generator

    DOEpatents

    Humphries, Jr., Stanley; Sudan, Ravindra N.

    1977-08-30

    Methods and apparatus for producing intense megavolt ion beams are disclosed. In one embodiment, a reflex triode-type pulsed ion accelerator is described which produces ion pulses of more than 5 kiloamperes current with a peak energy of 3 MeV. In other embodiments, the device is constructed so as to focus the beam of ions for high concentration and ease of extraction, and magnetic insulation is provided to increase the efficiency of operation.

  5. Tone-Mapped Mean-Shift Based Environment Map Sampling.

    PubMed

    Feng, Wei; Yang, Ying; Wan, Liang; Yu, Changguo

    2016-09-01

In this paper, we present a novel approach for environment map sampling, which is an effective and pragmatic technique to reduce the computational cost of realistic rendering and get plausible rendering images. The proposed approach exploits the advantage of adaptive mean-shift image clustering with the aid of tone-mapping, yielding oversegmented strata that have uniform intensities and capture shapes of light regions. The resulting strata, however, have unbalanced importance metric values for rendering, and the strata number is not user-controlled. To handle these issues, we develop an adaptive split-and-merge scheme that refines the strata and obtains a better balanced strata distribution. Compared to the state-of-the-art methods, our approach achieves comparable and even better rendering quality in terms of SSIM, RMSE and HDRVDP2 image quality metrics. Experimental results further show that our approach is more robust to the variation of viewpoint, environment rotation, and sample number. PMID:26584494

  6. Measurement of Itch Intensity.

    PubMed

    Reich, Adam; Szepietowski, Jacek C

    2016-01-01

Measurement of itch intensity is essential to properly evaluate pruritic disease severity, to understand the patients' needs and burden, and especially to assess treatment efficacy, particularly in clinical trials. However, measurement of itch remains a challenge, as, by definition, it is a subjective sensation, and assessment of this symptom presents significant difficulty. Intensity of itch must be considered in relation to its duration, localization, course of symptoms, presence and type of scratch lesions, response to antipruritic treatment, and quality of life impairment. Importantly, perception of itch may also be confounded by different cofactors, including but not limited to the patient's general condition and other coexisting ailments. In the current chapter we characterize the major methods of itch assessment that are used in daily clinical life and as research tools. Different methods of itch assessment have been developed; however, so far none is without limitations, and any data on itch intensity should always be interpreted with caution. Despite these limitations, it is strongly recommended to implement itch measurement tools in routine daily practice, as it would help in proper assessment of patient clinical status. In order to improve evaluation of itch in research studies, it is recommended to use at least two independent methods, as such an approach should increase the validity of achieved results. PMID:27578068

  7. Mapping of bird distributions from point count surveys

    USGS Publications Warehouse

    Sauer, J.R.; Pendleton, G.W.; Orsillo, S.

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
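The bias from incomplete counts can be demonstrated with a small simulation in the spirit of the abstract's simulated surface: a uniform true abundance, counted with detection probabilities that differ by habitat, maps to very different apparent abundances. All numbers here are illustrative:

```python
import random

def simulate_counts(surface, detect_p, seed=1):
    """Simulate point counts over a known abundance surface.

    Each cell's count is a binomial draw: every individual present is
    detected independently with probability detect_p[cell], mimicking
    incomplete counts that vary with observer or habitat.
    """
    rng = random.Random(seed)
    return {cell: sum(rng.random() < detect_p[cell] for _ in range(n))
            for cell, n in surface.items()}

# Uniform true abundance (100 birds/cell), but detection differs by habitat:
surface = {("open", i): 100 for i in range(50)}
surface.update({("forest", i): 100 for i in range(50)})
detect = {c: (0.9 if c[0] == "open" else 0.5) for c in surface}
counts = simulate_counts(surface, detect)
open_mean = sum(v for c, v in counts.items() if c[0] == "open") / 50
forest_mean = sum(v for c, v in counts.items() if c[0] == "forest") / 50
# Identical true abundance yields very different apparent abundance,
# so a map drawn from raw counts would show a spurious habitat gradient.
```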

  8. Airborne infrared mineral mapping survey of Marysvale, Utah

    NASA Technical Reports Server (NTRS)

    Collins, W.; Chang, S. H.

    1982-01-01

    Infrared spectroradiometer survey results from flights over the Marysvale, Utah district show that hydrothermal alteration mineralogy can be mapped using very rapid and effective airborne techniques. The system detects alteration mineral absorption band intensities in the infrared spectral region with high sensitivity. The higher resolution spectral features and high spectral differences characteristic of the various clay and carbonate minerals are also readily identified by the instrument allowing the mineralogy to be mapped as well as the mineralization intensity.

  9. Human Prostate Cancer Hallmarks Map.

    PubMed

    Datta, Dipamoy; Aftabuddin, Md; Gupta, Dinesh Kumar; Raha, Sanghamitra; Sen, Prosenjit

    2016-01-01

    Human prostate cancer is a complex heterogeneous disease that mainly affects elder male population of the western world with a high rate of mortality. Acquisitions of diverse sets of hallmark capabilities along with an aberrant functioning of androgen receptor signaling are the central driving forces behind prostatic tumorigenesis and its transition into metastatic castration resistant disease. These hallmark capabilities arise due to an intense orchestration of several crucial factors, including deregulation of vital cell physiological processes, inactivation of tumor suppressive activity and disruption of prostate gland specific cellular homeostasis. The molecular complexity and redundancy of oncoproteins signaling in prostate cancer demands for concurrent inhibition of multiple hallmark associated pathways. By an extensive manual curation of the published biomedical literature, we have developed Human Prostate Cancer Hallmarks Map (HPCHM), an onco-functional atlas of human prostate cancer associated signaling and events. It explores molecular architecture of prostate cancer signaling at various levels, namely key protein components, molecular connectivity map, oncogenic signaling pathway map, pathway based functional connectivity map etc. Here, we briefly represent the systems level understanding of the molecular mechanisms associated with prostate tumorigenesis by considering each and individual molecular and cell biological events of this disease process. PMID:27476486

  10. HEND Maps of Epithermal Neutrons

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Observations by NASA's 2001 Mars Odyssey spacecraft show a global view of Mars in intermediate-energy, or epithermal, neutrons. These maps are based on data acquired by the high-energy neutron detector, one of the instruments in the gamma ray spectrometer suite. Soil enriched by hydrogen is indicated by the purple and deep blue colors on the maps, which show a low intensity of epithermal neutrons. Progressively smaller amounts of hydrogen are shown in the colors light blue, green, yellow and red. Hydrogen in the far north is hidden at this time beneath a layer of carbon dioxide frost (dry ice). These observations were acquired during the first two months of mapping operations. Contours of topography are superimposed on these maps for geographic reference.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson, and NASA's Johnson Space Center, Houston, operate the science instruments. The gamma-ray spectrometer was provided by the University of Arizona in collaboration with the Russian Aviation and Space Agency, which provided the high-energy neutron detector, and the Los Alamos National Laboratories, New Mexico, which provided the neutron spectrometer. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  11. Human Prostate Cancer Hallmarks Map

    PubMed Central

    Datta, Dipamoy; Aftabuddin, Md.; Gupta, Dinesh Kumar; Raha, Sanghamitra; Sen, Prosenjit

    2016-01-01

    Human prostate cancer is a complex heterogeneous disease that mainly affects elder male population of the western world with a high rate of mortality. Acquisitions of diverse sets of hallmark capabilities along with an aberrant functioning of androgen receptor signaling are the central driving forces behind prostatic tumorigenesis and its transition into metastatic castration resistant disease. These hallmark capabilities arise due to an intense orchestration of several crucial factors, including deregulation of vital cell physiological processes, inactivation of tumor suppressive activity and disruption of prostate gland specific cellular homeostasis. The molecular complexity and redundancy of oncoproteins signaling in prostate cancer demands for concurrent inhibition of multiple hallmark associated pathways. By an extensive manual curation of the published biomedical literature, we have developed Human Prostate Cancer Hallmarks Map (HPCHM), an onco-functional atlas of human prostate cancer associated signaling and events. It explores molecular architecture of prostate cancer signaling at various levels, namely key protein components, molecular connectivity map, oncogenic signaling pathway map, pathway based functional connectivity map etc. Here, we briefly represent the systems level understanding of the molecular mechanisms associated with prostate tumorigenesis by considering each and individual molecular and cell biological events of this disease process. PMID:27476486

  12. Mapping the Heart

    ERIC Educational Resources Information Center

    Hulse, Grace

    2012-01-01

    In this article, the author describes how her fourth graders made ceramic heart maps. The impetus for this project came from reading "My Map Book" by Sara Fanelli. This book is a collection of quirky, hand-drawn and collaged maps that diagram a child's world. There are maps of her stomach, her day, her family, and her heart, among others. The…

  13. Fundamentals of Physical Mapping

    Technology Transfer Automated Retrieval System (TEKTRAN)

This book chapter provides an overview of physical mapping in plants and its use for map-based gene cloning. A brief overview is given of cytogenetics-based physical mapping strategies, the physical mapping approaches currently in use, and the lessons learned from success stories. The statisti...

  14. Ground-Based Sensing System for Weed Mapping in Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A ground-based weed mapping system was developed to measure weed intensity and distribution in a cotton field. The weed mapping system includes WeedSeeker® PhD600 sensor modules to indicate the presence of weeds between rows, a GPS receiver to provide spatial information, and a data acquisition and ...

  15. Mapping, Monitoring, and Assessment of Soil Salinity at Field Scales

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In the past, spatial and temporal variability has made it difficult to measure, map, and monitor soil salinity at field scales. Large numbers of soil samples were needed both across the landscape and within the soil profile to map field-scale salinity, making the task too labor and cost intensive t...

  16. Angola Seismicity MAP

    NASA Astrophysics Data System (ADS)

    Neto, F. A. P.; Franca, G.

    2014-12-01

The purpose of this work was to study and document the natural seismicity of Angola and to establish the first database of seismic data, to facilitate consultation and search for information on seismic activity in the country. The study was based on reports produced by the National Institute of Meteorology and Geophysics (INAMET) from 1968 to 2014, with emphasis on the work presented by Moreira (1968), who defined six seismogenic zones from macroseismic data. The most important is the Zone of Sá da Bandeira (Lubango)-Chibemba-Oncócua-Iona, covering the epicentral Quihita and Iona regions, geologically characterized by a transcontinental structure of Mesozoic tectono-magmatic activation with the emplacement of a wide variety of intrusive rocks of ultrabasic-alkaline, basic and alkaline composition, kimberlites and carbonatites, strongly marked by intense tectonism and presenting several faults and fractures (locally called the corredor de Lucapa). The earthquake of May 9, 1948 reached intensity VI on the Mercalli-Sieberg scale (MCS) in the locality of Quihita, and during the Iona seismic activity of January 15, 1964, the main shock reached intensity VI-VII. The other five zones, which cannot be neglected despite their lower seismicity rates, are: Cassongue-Ganda-Massano de Amorim; Lola-Quilengues-Caluquembe; Gago Coutinho; Cuima-Cachingues-Cambândua; and the Upper Zambezi zone. We also analyzed technical reports on the seismicity of the middle Kwanza region produced by Hidroproekt (GAMEK), as well as international seismic bulletins of the International Seismological Centre (ISC) and the United States Geological Survey (USGS); these data served for instrumental location of the epicenters. All compiled information made possible the creation of the first database of seismic data for Angola and the preparation of the seismicity map, with reconfirmation of the main seismic zones defined by Moreira (1968) and the identification of a new seismic

  17. National Atlas maps

    USGS Publications Warehouse

    U.S. Geological Survey

    1991-01-01

    The National Atlas of the United States of America was published by the U.S. Geological Survey in 1970. Its 765 maps and charts are on 335 14- by 19-inch pages. Many of the maps span facing pages. It's worth a quick trip to the library just to leaf through all 335 pages of this book. Rapid scanning of its thematic maps yields rich insights to the geography of issues of continuing national interest. On most maps, the geographic patterns are still valid, though the data are not current. The atlas is out of print, but many of its maps can be purchased separately. Maps that span facing pages in the atlas are printed on one sheet. The maps dated after 1970 are either revisions of original atlas maps, or new maps published in atlas format. The titles of the separate maps are listed here.

  18. Intensity modulated proton therapy.

    PubMed

    Kooy, H M; Grassberger, C

    2015-07-01

    Intensity modulated proton therapy (IMPT) implies the electromagnetic spatial control of well-circumscribed "pencil beams" of protons of variable energy and intensity. Proton pencil beams take advantage of the charged-particle Bragg peak-the characteristic peak of dose at the end of range-combined with the modulation of pencil beam variables to create target-local modulations in dose that achieves the dose objectives. IMPT improves on X-ray intensity modulated beams (intensity modulated radiotherapy or volumetric modulated arc therapy) with dose modulation along the beam axis as well as lateral, in-field, dose modulation. The clinical practice of IMPT further improves the healthy tissue vs target dose differential in comparison with X-rays and thus allows increased target dose with dose reduction elsewhere. In addition, heavy-charged-particle beams allow for the modulation of biological effects, which is of active interest in combination with dose "painting" within a target. The clinical utilization of IMPT is actively pursued but technical, physical and clinical questions remain. Technical questions pertain to control processes for manipulating pencil beams from the creation of the proton beam to delivery within the patient within the accuracy requirement. Physical questions pertain to the interplay between the proton penetration and variations between planned and actual patient anatomical representation and the intrinsic uncertainty in tissue stopping powers (the measure of energy loss per unit distance). Clinical questions remain concerning the impact and management of the technical and physical questions within the context of the daily treatment delivery, the clinical benefit of IMPT and the biological response differential compared with X-rays against which clinical benefit will be judged. It is expected that IMPT will replace other modes of proton field delivery. 
Proton radiotherapy, since its first practice 50 years ago, always required the highest level of

  19. Intensive Care Unit Psychosis

    PubMed Central

    Monks, Richard C.

    1984-01-01

    Patients who become psychotic in intensive care units are usually suffering from delirium. Underlying causes of delirium such as anxiety, sleep deprivation, sensory deprivation and overload, immobilization, an unfamiliar environment and pain, are often preventable or correctable. Early detection, investigation and treatment may prevent significant mortality and morbidity. The patient/physician relationship is one of the keystones of therapy. More severe cases may require psychopharmacological measures. The psychotic episode is quite distressing to the patient and family; an educative and supportive approach by the family physician may be quite helpful in patient rehabilitation. PMID:21279016

  20. Stress intensity factors

    SciTech Connect

    Erdogan, F.

    1983-12-01

    In this work the concept of the stress intensity factor, the underlying mechanics problem leading to its emergence, and its physical relevance, particularly its relation to fracture mechanics are discussed. The reasons as to why it has become nearly an indispensable tool for studying such important phenomena as brittle fracture and fatigue or corrosion fatigue crack propagation in structural solids are considered. A brief discussion of some of the important methods of solution of elastic crack problems is given. Also, a number of related special mechanics problems are described. 24 references.
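The stress intensity factor discussed above is computed, in the simplest configuration, from the classic relation K_I = Y·σ·√(πa), with Y = 1 for a through crack in an infinite plate. A minimal sketch (the example stress and crack length are illustrative):

```python
import math

def stress_intensity_factor(sigma_mpa, a_m, Y=1.0):
    """Mode-I stress intensity factor K_I = Y * sigma * sqrt(pi * a).

    sigma in MPa, crack half-length a in metres; K_I in MPa*sqrt(m).
    Y is the dimensionless geometry factor (1.0 for a through crack
    in an infinite plate under remote tension).
    """
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

# 100 MPa remote stress, 10 mm crack half-length:
k1 = stress_intensity_factor(100.0, 0.010)
print(round(k1, 2))  # ~17.72 MPa*sqrt(m)
```

Comparing K_I against the material's fracture toughness K_Ic is what turns this number into a brittle-fracture criterion.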

  1. Intensity modulated proton therapy

    PubMed Central

    Grassberger, C

    2015-01-01

    Intensity modulated proton therapy (IMPT) implies the electromagnetic spatial control of well-circumscribed “pencil beams” of protons of variable energy and intensity. Proton pencil beams take advantage of the charged-particle Bragg peak—the characteristic peak of dose at the end of range—combined with the modulation of pencil beam variables to create target-local modulations in dose that achieves the dose objectives. IMPT improves on X-ray intensity modulated beams (intensity modulated radiotherapy or volumetric modulated arc therapy) with dose modulation along the beam axis as well as lateral, in-field, dose modulation. The clinical practice of IMPT further improves the healthy tissue vs target dose differential in comparison with X-rays and thus allows increased target dose with dose reduction elsewhere. In addition, heavy-charged-particle beams allow for the modulation of biological effects, which is of active interest in combination with dose “painting” within a target. The clinical utilization of IMPT is actively pursued but technical, physical and clinical questions remain. Technical questions pertain to control processes for manipulating pencil beams from the creation of the proton beam to delivery within the patient within the accuracy requirement. Physical questions pertain to the interplay between the proton penetration and variations between planned and actual patient anatomical representation and the intrinsic uncertainty in tissue stopping powers (the measure of energy loss per unit distance). Clinical questions remain concerning the impact and management of the technical and physical questions within the context of the daily treatment delivery, the clinical benefit of IMPT and the biological response differential compared with X-rays against which clinical benefit will be judged. It is expected that IMPT will replace other modes of proton field delivery. Proton radiotherapy, since its first practice 50 years ago, always required the

  2. Mapping Regional Laryngopharyngeal Mechanoreceptor Response

    PubMed Central

    2014-01-01

    Objectives To map mechanoreceptor response in various regions of the laryngopharynx. Methods Five patients with suspected laryngopharyngeal reflux and six healthy control subjects underwent stimulation of mechanoreceptors in the hypopharynx, interarytenoid area, arytenoids, aryepiglottic folds, and pyriform sinuses. The threshold stimuli evoking sensation and eliciting laryngeal adductor reflex were recorded. Results In controls, an air pulse with 2 mmHg pressure evoked mechanoreceptor response in all regions, except bilateral aryepiglottic folds of one control. In patients, stimulus intensity to elicit mechanoreceptor response ranged between 2 mmHg and 10 mmHg and varied among the regions. Air pulse intensity differed between right and left sides of laryngopharyngeal regions in the majority of patients. Conclusion Laryngopharyngeal mechanoreceptor response was uniform among regions and subjects in the healthy group. Patients with suspected laryngopharyngeal reflux showed inter- and intra-regional variations in mechanoreceptor response. Laryngopharyngeal sensory deficit in patients with suspected laryngopharyngeal reflux is not limited to aryepiglottic folds. PMID:25436053

  3. Google Maps: You Are Here

    ERIC Educational Resources Information Center

    Jacobsen, Mikael

    2008-01-01

    Librarians use online mapping services such as Google Maps, MapQuest, Yahoo Maps, and others to check traffic conditions, find local businesses, and provide directions. However, few libraries are using one of Google Maps most outstanding applications, My Maps, for the creation of enhanced and interactive multimedia maps. My Maps is a simple and…

  4. Map reading tools for map libraries.

    USGS Publications Warehouse

    Greenberg, G.L.

    1982-01-01

    Engineers, navigators and military strategists employ a broad array of mechanical devices to facilitate map use. A larger number of map users such as educators, students, tourists, journalists, historians, politicians, economists and librarians are unaware of the available variety of tools which can be used with maps to increase the speed and efficiency of their application and interpretation. This paper identifies map reading tools such as coordinate readers, protractors, dividers, planimeters, and symbol-templets according to a functional classification. Particularly, arrays of tools are suggested for use in determining position, direction, distance, area and form (perimeter-shape-pattern-relief). -from Author

  5. High intensity proton synchrotrons

    NASA Astrophysics Data System (ADS)

    Craddock, M. K.

    1986-10-01

    Strong initiatives are being pursued in a number of countries for the construction of "kaon factory" synchrotrons capable of producing 100 times more intense proton beams than those available now from machines such as the Brookhaven AGS and CERN PS. Such machines would yield equivalent increases in the fluxes of secondary particles (kaons, pions, muons, antiprotons, hyperons and neutrinos of all varieties), or cleaner beams for a smaller increase in flux, opening new avenues to various fundamental questions in both particle and nuclear physics. Major areas of investigation would be rare decay modes, CP violation, meson and hadron spectroscopy, antinucleon interactions, neutrino scattering and oscillations, and hypernuclear properties. Experience with the pion factories has already shown how high beam intensities make it possible to explore the "precision frontier" with results complementary to those achievable at the "energy frontier". This paper will describe proposals for upgrading the AGS and for building kaon factories in Canada, Europe, Japan and the United States, emphasizing the novel aspects of accelerator design required to achieve the desired performance (typically 100 μA at 30 GeV).

  6. French intensive truck garden

    SciTech Connect

    Edwards, T D

    1983-01-01

    The French Intensive approach to truck gardening has the potential to provide substantially higher yields and lower per-acre costs than conventional farming techniques. The intent of this grant was to show that the gains the French Intensive method has to offer can be realized. Locally grown food can obviously reduce transportation energy costs, and the method's higher efficiencies also reduce energy costs through lower fertilizer and pesticide usage. As with any farming technique, there is a substantial time interval before the soil fully recovers after substantial modifications have been made; even so, there were major crop improvements despite the short time since the soil had been greatly disturbed. The grant also had two other major objectives: first, the garden was managed under organic techniques, meaning no chemical fertilizers or synthetic pesticides were used; second, the garden was constructed so that a handicapped person in a wheelchair could manage it and gain a higher degree of self-sufficiency from it. As an overall result, I would say that the garden has taken the first step toward success and should become better each year.

  7. [Safety of intensive sweeteners].

    PubMed

    Lugasi, Andrea

    2016-04-01

    Nowadays low-calorie or intensive sweeteners are becoming more and more popular. These sweeteners can be placed on the market and used as food additives according to current EU legislation. Meanwhile, reports keep emerging that many of these artificial intensive sweeteners can cause cancer, with the highest risk attributed to aspartame. Low-calorie sweeteners, just like all other additives, can be authorized only after a strict risk-assessment procedure under current food law. Only after an additive has passed this procedure can it be placed on the list of food additives, which specifies not only the range of foods in which the additive may be used but also the recommended maximum daily intake. The European Food Safety Authority, considering the latest scientific results, regularly re-evaluates the safety of sweeteners authorized earlier. To date, no evidence has been found to question the safety of the authorized intensive sweeteners. Orv. Hetil., 2016, 157(Suppl. 1), 14-28. PMID:27088715

  8. THE 21 cm 'OUTER ARM' AND THE OUTER-GALAXY HIGH-VELOCITY CLOUDS: CONNECTED BY KINEMATICS, METALLICITY, AND DISTANCE

    SciTech Connect

    Tripp, Todd M.; Song Limin

    2012-02-20

    Using high-resolution ultraviolet spectra obtained with the Hubble Space Telescope Space Telescope Imaging Spectrograph and the Far Ultraviolet Spectroscopic Explorer, we study the metallicity, kinematics, and distance of the gaseous 'outer arm' (OA) and the high-velocity clouds (HVCs) in the outer Galaxy. We detect the OA in a variety of absorption lines toward two QSOs, H1821+643 and HS0624+6907. We search for OA absorption toward eight Galactic stars and detect it in one case, which constrains the OA Galactocentric radius to 9 kpc

  9. Development of Ontario ShakeMaps

    NASA Astrophysics Data System (ADS)

    Kaka, Sanlinn Isma'il

    A methodology to generate simple, reliable ShakeMaps showing earthquake ground shaking in Southern Ontario is developed using the near-real-time data from Ontario POLARIS (Portable Observatories for Lithospheric Analysis and Research Investigating Seismicity) stations. ShakeMaps have been implemented in California and the western United States (Wald et al, 1999b), but this is the first ShakeMap development in eastern North America. The eastern ground motion characteristics and sparse network pose new challenges for ShakeMap development in this region. The ground motion parameters selected to display in the near-real-time ShakeMaps include peak ground acceleration (PGA), peak velocity (PGV), Pseudo-acceleration (PSA) amplitude at periods of 0.1s, 0.3s and 1.0s, and an instrumentally derived felt-intensity. The ground motion values are plotted on a map and contour lines are added to show areas of equally-strong shaking. In the ShakeMaps, PGA, PGV, and PSA values are assigned to map grid points by using a combination of the recorded ground motions and values estimated using the empirical relations developed in Chapter 6. Intensity values are estimated from the peak ground velocity using relations developed in Chapter 5, where the intensity is a qualitative measure of the strength of shaking and damage based on the Modified Mercalli scale. A grid of site amplification factors to account for the appropriate level of soil amplification is incorporated, by using interpolations of currently-available site conditions. The site classification is based primarily on the average shear-wave velocity of the top 30 meters (Vs30) wherever possible. Since shear-wave velocity measurements are not available for most grid points, I assume Vs30 =500 m/s for sites with unknown properties. An important component of ShakeMap is its potential use as a rapid earthquake warning system. ShakeMap sends email notifications to subscribers immediately (within 3 minutes) following an earthquake

  10. Variational Phase Imaging Using the Transport-of-Intensity Equation.

    PubMed

    Bostan, Emrah; Froustey, Emmanuel; Nilchian, Masih; Sage, Daniel; Unser, Michael

    2016-02-01

    We introduce a variational phase retrieval algorithm for the imaging of transparent objects. Our formalism is based on the transport-of-intensity equation (TIE), which relates the phase of an optical field to the variation of its intensity along the direction of propagation. TIE practically requires one to record a set of defocus images to measure the variation of intensity. We first investigate the effect of the defocus distance on the retrieved phase map. Based on our analysis, we propose a weighted phase reconstruction algorithm yielding a phase map that minimizes a convex functional. The method is nonlinear and combines different ranges of spatial frequencies - depending on the defocus value of the measurements - in a regularized fashion. The minimization task is solved iteratively via the alternating-direction method of multipliers. Our simulations outperform commonly used linear and nonlinear TIE solvers. We also illustrate and validate our method on real microscopy data of HeLa cells. PMID:26685242
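
    A toy illustration (not the authors' algorithm) of the measurement step the TIE relies on: the axial intensity derivative, estimated from two defocused images by a central finite difference, dI/dz ~ (I(+dz) - I(-dz)) / (2*dz). The function name `axial_derivative` is hypothetical.

```python
def axial_derivative(i_plus, i_minus, dz):
    """Central-difference estimate of dI/dz from two images recorded
    at defocus distances +dz and -dz (images as nested lists)."""
    return [[(p - m) / (2.0 * dz) for p, m in zip(row_p, row_m)]
            for row_p, row_m in zip(i_plus, i_minus)]

# Two toy 2x2 intensity images taken at +/- 1 unit of defocus.
i_plus = [[1.2, 0.9], [1.1, 1.0]]
i_minus = [[1.0, 1.1], [0.9, 1.0]]
didz = axial_derivative(i_plus, i_minus, dz=1.0)
```

    In the full method this derivative feeds the TIE, whose solution (a Poisson-type problem) yields the phase map; how measurements at different defocus distances are weighted is where the paper's contribution lies.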

  11. Portable intensity interferometry

    NASA Astrophysics Data System (ADS)

    Horch, Elliott P.; Camarata, Matthew A.

    2012-07-01

    A limitation of the current generation of long baseline optical interferometers is the need to make the light interfere prior to detection. This is unlike the radio regime where signals can be recorded fast enough to use electronics to accomplish the same result. This paper describes a modern optical intensity interferometer based on electronics with picosecond timing resolution. The instrument will allow for portable optical interferometry with much larger baselines than currently possible by using existing large telescopes. With modern electronics, the limiting magnitude of the technique at a 4-m aperture size becomes competitive with some amplitude-based interferometers. The instrumentation will permit a wireless mode of operation with GPS clocking technology, extending the work to extremely large baselines. We discuss the basic observing strategy, a planned observational program at the Lowell Observatory 1.8-m and 1.0-m telescopes, and the science that can realistically be done with this instrumentation.

  12. Intensity dependent spread theory

    NASA Technical Reports Server (NTRS)

    Holben, Richard

    1990-01-01

    The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.

  13. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted because the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is a high demand for the road information that forms a framework in this database. Therefore, road inventory mapping work has to be accurate and eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed require an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea in this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note also that this method processes every traffic marking. In this paper, we discuss a highly accurate and operator-independent method that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher intensity points; (2) Generating a Triangulated Irregular Network (TIN) from higher intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points from the buffers using the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines using the extracted points; (7) Extracting lines from intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic marking mapping data.
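
    Step (7) above, extracting lines from intensity-changing points, can be sketched with a standard Hough transform: each point votes for every (theta, rho) line parameterization passing through it, and well-supported accumulator cells are kept. This is an illustrative sketch, not the authors' implementation; `hough_lines` and its parameters are hypothetical.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, threshold=3):
    """Vote in (theta, rho) space using x*cos(theta) + y*sin(theta) = rho
    and return the line parameters supported by >= threshold points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho_bin = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho_bin)] = acc.get((t, rho_bin), 0) + 1
    return [(math.pi * t / n_theta, rho_bin * rho_res)
            for (t, rho_bin), votes in acc.items() if votes >= threshold]

# Four collinear points on the vertical line x = 2; the line shows up
# as the accumulator cell (theta = 0, rho = 2).
lines = hough_lines([(2, 0), (2, 1), (2, 2), (2, 3)], threshold=4)
```

    In practice, neighboring accumulator cells also pass the threshold for near-identical parameters because of rho rounding, so a non-maximum-suppression step usually follows.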

  14. Mapping the Future of Map Librarianship.

    ERIC Educational Resources Information Center

    Lang, Laura

    1992-01-01

    Discussion of electronic versions of maps focuses on TIGER files (i.e., electronic maps distributed by the U.S. Bureau of the Census) and their manipulation using geographic information system (GIS) technology. Topics addressed include applications of GIS software, projects to improve access to TIGER files, and the role of GIS in libraries. (MES)

  15. An Atlas of ShakeMaps for Selected Global Earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. 
The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  16. Density Equalizing Map Projections

    SciTech Connect

    Close, E. R.; Merrill, D. W.; Holmes, H. H.

    1995-07-01

    A geographic map is mathematically transformed so that the subareas of the map are proportional to a given quantity such as population. In other words, population density is equalized over the entire map. The transformed map can be used as a display tool, or it can be statistically analyzed. For example, cases of disease plotted on the transformed map should be uniformly distributed at random, if disease rates are everywhere equal. Geographic clusters of disease can be readily identified, and their statistical significance determined, on a density equalized map.
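
    In one dimension the idea reduces to re-spacing boundaries so that each interval's width is proportional to its population. A toy sketch under that simplification (the actual projection transforms two-dimensional subareas; `equalize` is a hypothetical name):

```python
def equalize(boundaries, populations):
    """Re-space interval boundaries so each interval's new width is
    proportional to its population; the total extent is preserved."""
    total_len = boundaries[-1] - boundaries[0]
    total_pop = float(sum(populations))
    out = [boundaries[0]]
    for p in populations:
        out.append(out[-1] + total_len * p / total_pop)
    return out

# Two equal-width intervals with populations 1 and 3: after
# equalization the denser interval occupies 3/4 of the axis.
new_bounds = equalize([0.0, 1.0, 2.0], [1.0, 3.0])  # [0.0, 0.5, 2.0]
```

    Under this transform, uniform disease incidence per capita becomes uniform incidence per unit area, which is what makes geographic clusters stand out.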

  18. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
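
    The first of the listed measures, the Pearson distance correlation, can be sketched directly: correlate pairwise distances in stimulus (selectivity) space with pairwise distances in map space, so that a value near 1 indicates topographic order. Illustrative only; `distance_correlation` is a hypothetical name, not the authors' code.

```python
import math
from itertools import combinations

def distance_correlation(stim_pos, map_pos):
    """Pearson correlation between pairwise stimulus-space distances
    and pairwise map-space distances across all unit pairs."""
    pairs = list(combinations(range(len(stim_pos)), 2))
    ds = [math.dist(stim_pos[i], stim_pos[j]) for i, j in pairs]
    dm = [math.dist(map_pos[i], map_pos[j]) for i, j in pairs]
    n = len(pairs)
    mean_s, mean_m = sum(ds) / n, sum(dm) / n
    cov = sum((a - mean_s) * (b - mean_m) for a, b in zip(ds, dm))
    norm = (sum((a - mean_s) ** 2 for a in ds)
            * sum((b - mean_m) ** 2 for b in dm)) ** 0.5
    return cov / norm

# A perfectly topographic 1-D map: map position mirrors selectivity,
# so the correlation is 1 up to floating-point rounding.
r = distance_correlation([(0,), (1,), (2,), (3,)],
                         [(0, 0), (1, 0), (2, 0), (3, 0)])
```

    A permutation test (shuffling the map positions and recomputing r) turns a measure like this into the statistical topography-detection test the paper calls for.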

  19. Riparian Wetlands: Mapping

    EPA Science Inventory

    Riparian wetlands are critical systems that perform functions and provide services disproportionate to their extent in the landscape. Mapping wetlands allows for better planning, management, and modeling, but riparian wetlands present several challenges to effective mapping due t...

  20. Active Fire Mapping Program

    MedlinePlus


  1. Information-Mapped Chemistry.

    ERIC Educational Resources Information Center

    Olympia, P. L., Jr.

    1979-01-01

    This paper describes the use of information mapping in chemistry and in other related sciences. Information mapping is a way of presenting information without paragraphs and unnecessary transitional phrases. (BB)

  2. Linkage map integration

    SciTech Connect

    Collins, A.; Teague, J.; Morton, N.E.; Keats, B.J.

    1996-08-15

    The algorithms that drive the map+ program for locus-oriented linkage mapping are presented. They depend on the enhanced location database program ldb+ to specify an initial comprehensive map that includes all loci in the summary lod file. Subsequently the map may be edited or order constrained and is automatically improved by estimating the location of each locus conditional on the remainder, beginning with the most discrepant loci. Operating characteristics permit rapid and accurate construction of linkage maps with several hundred loci. The map+ program also performs nondisjunction mapping with tests of nonstandard recombination. We have released map+ on Internet as a source program in the C language together with the location database that now includes the LODSOURCE database. 28 refs., 5 tabs.

  3. Creative Concept Mapping.

    ERIC Educational Resources Information Center

    Brown, David S.

    2002-01-01

    Recommends the use of concept mapping in science teaching and proposes that it be presented as a creative activity. Includes a sample lesson plan of a potato stamp concept mapping activity for astronomy. (DDR)

  4. Using maps in genealogy

    USGS Publications Warehouse

    U.S. Geological Survey

    1994-01-01

    In genealogy, maps are most often used as clues to where public or other records about an ancestor are likely to be found. Searching for maps seldom begins until a newcomer to genealogy has mastered basic genealogical routines

  5. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
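
    For scalar estimates at a single grid point, an uncertainty-weighted combination of this kind reduces to inverse-variance weighting. A minimal sketch (not the ShakeMap code; `combine` is a hypothetical helper):

```python
def combine(values, sigmas):
    """Inverse-variance weighted combination: each datum (observation,
    converted value, or model estimate) contributes with weight
    1/sigma^2, and the result carries a reduced uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, (1.0 / total) ** 0.5

# A direct observation (sigma 0.05) dominates a model estimate
# (sigma 0.20), and the combined sigma is below either input's.
mean, sigma = combine([0.30, 0.10], [0.05, 0.20])
```

    The per-point sigma that falls out of this combination is the kind of total-uncertainty estimate that the interpolation process yields as a by-product.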

  6. Intensive training in young athletes.

    PubMed Central

    Maffulli, N; Pintore, E

    1990-01-01

    An increasing number of children take part in organized sporting activities, undergoing intensive training and high level competition from an early age. Although intensive training in children may foster health benefits, many are injured as a result of training, often quite seriously. This paper reviews some of the areas of research dealing with intensively trained young athletes, and focuses on physical, cardiovascular and muscular effects, sports injuries and psychological effects of intensive training. It is concluded that measures should be taken to modify present training and competition schemes to avoid the deleterious effects of intensive physical activity on these children. PMID:2097019

  7. Building Better Volcanic Hazard Maps Through Scientific and Stakeholder Collaboration

    NASA Astrophysics Data System (ADS)

    Thompson, M. A.; Lindsay, J. M.; Calder, E.

    2015-12-01

    All across the world information about natural hazards such as volcanic eruptions, earthquakes and tsunami is shared and communicated using maps that show which locations are potentially exposed to hazards of varying intensities. Unlike earthquakes and tsunami, which typically produce one dominant hazardous phenomenon (ground shaking and inundation, respectively) volcanic eruptions can produce a wide variety of phenomena that range from near-vent (e.g. pyroclastic flows, ground shaking) to distal (e.g. volcanic ash, inundation via tsunami), and that vary in intensity depending on the type and location of the volcano. This complexity poses challenges in depicting volcanic hazard on a map, and to date there has been no consistent approach, with a wide range of hazard maps produced and little evaluation of their relative efficacy. Moreover, in traditional hazard mapping practice, scientists analyse data about a hazard, and then display the results on a map that is then presented to stakeholders. This one-way, top-down approach to hazard communication does not necessarily translate into effective hazard education, or, as tragically demonstrated by Nevado del Ruiz, Colombia, in 1985, its use in risk mitigation by civil authorities. Furthermore, messages taken away from a hazard map can be strongly influenced by its visual design. Thus, hazard maps are more likely to be useful, usable and used if relevant stakeholders are engaged during the hazard map process to ensure a) the map is designed in a relevant way and b) the map takes into account how users interpret and read different map features and designs. The IAVCEI Commission on Volcanic Hazards and Risk has recently launched a Hazard Mapping Working Group to collate some of these experiences in graphically depicting volcanic hazard from around the world, including Latin America and the Caribbean, with the aim of preparing some Considerations for Producing Volcanic Hazard Maps that may help map makers in the future.

  8. Partial covariance mapping techniques at FELs

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek

    2014-05-01

    The development of free-electron lasers (FELs) is driven by the desire to access the structure and chemical dynamics of biomolecules with atomic resolution. Short, intense FEL pulses have the potential to record x-ray diffraction images before the molecular structure is destroyed by radiation damage. However, even during the shortest, few-femtosecond pulses currently available, there are some significant changes induced by massive ionisation and the onset of Coulomb explosion. To interpret the diffraction images it is vital to gain insight into the electronic and nuclear dynamics during multiple core and valence ionisations that compete with Auger cascades. This paper focuses on a technique that is capable of probing these processes. The covariance mapping technique is well suited to the high intensity and low repetition rate of FEL pulses. While the multitude of charges ejected at each pulse overwhelm conventional coincidence methods, an improved technique of partial covariance mapping can cope with hundreds of photoelectrons or photoions detected at each FEL shot. The technique, however, often reveals spurious, uninteresting correlations that spoil the maps. This work will discuss the strengths and limitations of various forms of covariance mapping techniques. Quantitative information extracted from the maps will be linked to theoretical modelling of ionisation and fragmentation paths. Special attention will be given to critical experimental parameters, such as counting rate, FEL intensity fluctuations, vacuum impurities or detector efficiency and nonlinearities. Methods of assessing and optimising signal-to-noise ratio will be described. Emphasis will be put on possible future developments such as multidimensional covariance mapping, compensation for various experimental instabilities and improvements in the detector response. This work has been supported by the EPSRC, UK (grants EP/F021232/1 and EP/I032517/1).
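
    The core of partial covariance mapping is the standard partial-covariance formula, in which a measured common-mode parameter I (for example, the shot-to-shot FEL pulse intensity) is projected out of the plain covariance: pcov(X, Y; I) = cov(X, Y) - cov(X, I) cov(I, Y) / cov(I, I). A minimal sketch with hypothetical helper names:

```python
def cov(xs, ys):
    """Sample covariance of two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def partial_cov(xs, ys, i):
    """Covariance of X and Y with the common-mode parameter I removed."""
    return cov(xs, ys) - cov(xs, i) * cov(i, ys) / cov(i, i)

# X and Y are both driven purely by the fluctuating parameter I, so
# their apparent correlation vanishes once I is projected out.
shots = [1.0, 2.0, 3.0, 4.0]
residual = partial_cov([2 * s for s in shots], [3 * s for s in shots], shots)
```

    The spurious correlations mentioned in the abstract typically come from additional, unmeasured common-mode fluctuations, which a first-order correction like this cannot remove.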

  9. Wetland inundation mapping and change monitoring using landsat and airborne LiDAR data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a new approach for mapping wetland inundation change using Landsat and LiDAR intensity data. In this approach, LiDAR data were used to derive highly accurate reference subpixel inundation percentage (SIP) maps at the 30-m resolution. The reference SIP maps were then used to est...

  10. Intensity Frontier Instrumentation

    SciTech Connect

    Kettell S.; Rameika, R.; Tshirhart, B.

    2013-09-24

    The fundamental origin of flavor in the Standard Model (SM) remains a mystery. Despite the roughly eighty years since Rabi asked “Who ordered that?” upon learning of the discovery of the muon, we have not understood the reason that there are three generations or, more recently, why the quark and neutrino mixing matrices and masses are so different. The solution to the flavor problem would give profound insights into physics beyond the Standard Model (BSM) and tell us about the couplings and the mass scale at which the next level of insight can be found. The SM fails to explain all observed phenomena: new interactions and yet unseen particles must exist. They may manifest themselves by causing SM reactions to differ from often very precise predictions. The Intensity Frontier (1) explores these fundamental questions by searching for new physics in extremely rare processes or those forbidden in the SM. This often requires massive and/or extremely finely tuned detectors.

  11. Emotionally Intense Science Activities

    NASA Astrophysics Data System (ADS)

    King, Donna; Ritchie, Stephen; Sandhu, Maryam; Henderson, Senka

    2015-08-01

    Science activities that evoke positive emotional responses make a difference to students' emotional experience of science. In this study, we explored 8th Grade students' discrete emotions expressed during science activities in a unit on Energy. Multiple data sources including classroom videos, interviews and emotion diaries completed at the end of each lesson were analysed to identify individual students' emotions. Results from two representative students are presented as case studies. Using a theoretical perspective drawn from theories of emotions founded in sociology, two assertions emerged. First, during the demonstration activity, students experienced the emotions of wonder and surprise; second, during a laboratory activity, students experienced the intense positive emotions of happiness/joy. Characteristics of these activities that contributed to students' positive experiences are highlighted. The study found that activities evoking strong positive emotional experiences focused students' attention on the phenomenon being studied and were recalled positively. Furthermore, such positive experiences may contribute to students' interest and engagement in science and to longer-term memorability. Finally, implications for science teachers and pre-service teacher education are suggested.

  12. Oil Exploration Mapping

    NASA Technical Reports Server (NTRS)

    1994-01-01

    After concluding an oil exploration agreement with the Republic of Yemen, Chevron International needed detailed geologic and topographic maps of the area. Chevron's remote sensing team used imagery from Landsat and SPOT, combining images into composite views. The project was successfully concluded and resulted in greatly improved base maps and unique topographic maps.

  13. Reading Angles in Maps

    ERIC Educational Resources Information Center

    Izard, Véronique; O'Donnell, Evan; Spelke, Elizabeth S.

    2014-01-01

    Preschool children can navigate by simple geometric maps of the environment, but the nature of the geometric relations they use in map reading remains unclear. Here, children were tested specifically on their sensitivity to angle. Forty-eight children (age 47:15-53:30 months) were presented with fragments of geometric maps, in which angle sections…

  14. Applications of Concept Mapping

    ERIC Educational Resources Information Center

    De Simone, Christina

    2007-01-01

    This article reviews three major uses of the concept-mapping strategies for postsecondary learning: the external representation of concept maps as an external scratch pad to represent major ideas and their organization, the mental construction of concept maps when students are seeking a time-efficient tool, and the electronic construction and…

  15. Mapping Sociological Concepts.

    ERIC Educational Resources Information Center

    Trepagnier, Barbara

    2002-01-01

    Focuses on the use of cognitive mapping within sociology. Describes an assignment where students created a cognitive map that focused on names of theorists and concepts related to them. Discusses sociological imagination in relation to cognitive mapping and the assessment of the assignment. (CMK)

  16. Statistical Mapping by Computer.

    ERIC Educational Resources Information Center

    Utano, Jack J.

    The function of a statistical map is to provide readers with a visual impression of the data so that they may be able to identify any geographic characteristics of the displayed phenomena. The increasingly important role played by the computer in the production of statistical maps is manifested by the varied examples of computer maps in recent…

  17. Using maps in genealogy

    USGS Publications Warehouse

    U.S. Geological Survey

    1999-01-01

    Maps are one of many sources you may need to complete a family tree. In genealogical research, maps can provide clues to where our ancestors may have lived and where to look for written records about them. Beginners should master basic genealogical research techniques before starting to use topographic maps.

  18. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  19. Mapping Hydrogen in the Galaxy, Galactic Halo, and Local Group with ALFA: The GALFA-H I Survey Starting with TOGS

    NASA Astrophysics Data System (ADS)

    Gibson, S. J.; Douglas, K. A.; Heiles, C.; Korpela, E. J.; Peek, J. E. G.; Putman, M. E.; Stanimirović, S.

    2008-08-01

    Radio observations of gas in the Milky Way and Local Group are vital for understanding how galaxies function as systems. The unique sensitivity of Arecibo's 305 m dish, coupled with the 7-beam Arecibo L-Band Feed Array (ALFA), provides an unparalleled tool for investigating the full range of interstellar phenomena traced by the H I 21 cm line. The GALFA (Galactic ALFA) H I Survey is mapping the entire Arecibo sky over a velocity range of -700 to +700 km s⁻¹ with 0.2 km s⁻¹ velocity channels and an angular resolution of 3.4'. We present highlights from the TOGS (Turn On GALFA Survey) portion of GALFA-H I, which is covering thousands of square degrees in commensal drift scan observations with the ALFALFA and AGES extragalactic ALFA surveys. This work is supported in part by the National Astronomy and Ionosphere Center, operated by Cornell University under cooperative agreement with the National Science Foundation.
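As a quick consistency check, the number of velocity channels implied by the quoted survey parameters follows from one line of arithmetic (the variable names are ours, not the survey software's):

```python
# Channel count implied by the GALFA-HI parameters quoted above
# (-700 to +700 km/s velocity range, 0.2 km/s channel width).
v_min_kms = -700.0
v_max_kms = 700.0
dv_kms = 0.2

n_channels = int(round((v_max_kms - v_min_kms) / dv_kms))
print(n_channels)  # 7000
```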

  20. Mapping Wildfires In Nearly Real Time

    NASA Technical Reports Server (NTRS)

    Nichols, Joseph D.; Parks, Gary S.; Denning, Richard F.; Ibbott, Anthony C.; Scott, Kenneth C.; Sleigh, William J.; Voss, Jeffrey M.

    1993-01-01

    Airborne infrared-sensing system flies over wildfire as infrared detector in system and navigation subsystem generate data transmitted to firefighters' camp. There, data plotted in form of map of fire, including approximate variations of temperature. System, called Firefly, reveals position of fires and approximate thermal intensities of regions within fires. Firefighters use information to manage and suppress fires. Used for other purposes with minor modifications, such as to spot losses of heat in urban areas and to map disease and pest infestation in vegetation.

  1. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is controlled automatically by tilting the beam position with a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map, with intensities normalized and represented in color space using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled changing the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
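The closed-loop idea in this abstract can be sketched as a simple feedback iteration. Everything below is a toy stand-in, assuming a monotone (and here linear) response of detected reference intensity to mirror tilt; `detect_intensity`, the gain, and the target value are our inventions, not the authors' algorithm.

```python
def detect_intensity(tilt_deg):
    """Toy stand-in for the measured reference-arm intensity vs. mirror tilt
    (intensity falls off linearly as the mirror is tilted away from peak)."""
    return max(0.0, 1.0 - 0.5 * abs(tilt_deg))

def optimize_reference(target, tilt=1.5, gain=0.8, tol=1e-3, max_iter=100):
    """Proportional feedback: adjust tilt until intensity nears the target."""
    for _ in range(max_iter):
        err = target - detect_intensity(tilt)
        if abs(err) < tol:
            break
        tilt -= gain * err  # more tilt -> less intensity, so untilt on deficit
        tilt = max(tilt, 0.0)
    return tilt

tilt = optimize_reference(target=0.6)
```

With a linear response the error shrinks geometrically, so a few dozen iterations suffice; a real system would also guard against over-saturation at low tilt.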

  2. Reproducibility of intensity-based estimates of lung ventilation

    PubMed Central

    Du, Kaifang; Bayouth, John E.; Ding, Kai; Christensen, Gary E.; Cao, Kunlin; Reinhardt, Joseph M.

    2013-01-01

    Results: Higher reproducibility was found for anesthetized mechanically ventilated animals than for the humans for both the intensity-based (IJAC) and transformation-based (TJAC) ventilation estimates. The human IJAC maps had scan-to-scan correlation coefficients of 0.45 ± 0.14 and a gamma pass rate of 70 ± 8 without normalization and 75 ± 5 with normalization. The human TJAC maps had correlation coefficients of 0.81 ± 0.10 and a gamma pass rate of 86 ± 11 without normalization and 93 ± 4 with normalization. The gamma pass rate and correlation coefficient of the IJAC maps gradually increased with increased smoothing, but remained much lower than those of the TJAC maps. Conclusions: The transformation-based ventilation maps show better reproducibility than the intensity-based maps, especially in human subjects. Reproducibility was also found to depend on variations in respiratory effort; all techniques performed better when applied to images from mechanically ventilated sheep than to those from spontaneously breathing human subjects. Nevertheless, intensity-based techniques applied to mechanically ventilated sheep were less reproducible than transformation-based techniques applied to spontaneously breathing humans, suggesting that the method used to determine ventilation maps is important. Prefiltering of the CT images may help to improve the reproducibility of the intensity-based ventilation estimates, but even with filtering their reproducibility is not as good as that of the transformation-based estimates. PMID:23718615
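The gamma pass rates quoted above come from gamma-index analysis. A minimal 1-D sketch of that metric follows; the 2 mm / 6% criteria are assumptions for illustration, not the paper's settings.

```python
import math

def gamma_pass_rate(ref, eval_, spacing_mm, dist_crit_mm=2.0, dose_crit=0.06):
    """Fraction of evaluated points whose 1-D gamma index is <= 1.

    For each evaluated point, take the minimum over reference points of the
    combined (distance, value-difference) metric, each scaled by its criterion.
    """
    passed = 0
    for i, d_eval in enumerate(eval_):
        best = float("inf")
        for j, d_ref in enumerate(ref):
            dx = (i - j) * spacing_mm / dist_crit_mm
            dd = (d_eval - d_ref) / dose_crit
            best = min(best, math.hypot(dx, dd))
        if best <= 1.0:
            passed += 1
    return passed / len(eval_)

rate = gamma_pass_rate([0.1, 0.2, 0.3], [0.1, 0.2, 0.3], spacing_mm=1.0)
```

Identical maps pass everywhere (rate 1.0); real analyses run this in 3-D over resampled ventilation maps.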

  3. Linkage Analysis and QTL Mapping Using SNP Dosage Data in a Tetraploid Potato Mapping Population

    PubMed Central

    Hackett, Christine A.; McLean, Karen; Bryan, Glenn J.

    2013-01-01

    New sequencing and genotyping technologies have enabled researchers to generate high density SNP genotype data for mapping populations. In polyploid species, SNP data usually contain a new type of information, the allele dosage, which is not used by current methodologies for linkage analysis and QTL mapping. Here we extend existing methodology to use dosage data on SNPs in an autotetraploid mapping population. The SNP dosages are inferred from allele intensity ratios using normal mixture models. The steps of the linkage analysis (testing for distorted segregation, clustering SNPs, calculation of recombination fractions and LOD scores, ordering of SNPs and inference of parental phase) are extended to use the dosage information. For QTL analysis, the probability of each possible offspring genotype is inferred at a grid of locations along the chromosome from the ordered parental genotypes and phases and the offspring dosages. A normal mixture model is then used to relate trait values to the offspring genotypes and to identify the most likely locations for QTLs. These methods are applied to analyse a tetraploid potato mapping population of parents and 190 offspring, genotyped using an Infinium 8300 Potato SNP Array. Linkage maps for each of the 12 chromosomes are constructed. The allele intensity ratios are mapped as quantitative traits to check that their position and phase agrees with that of the corresponding SNP. This analysis confirms most SNP positions, and eliminates some problem SNPs to give high-density maps for each chromosome, with between 74 and 152 SNPs mapped and between 100 and 300 further SNPs allocated to approximate bins. Low numbers of double reduction products were detected. Overall 3839 of the 5378 polymorphic SNPs can be assigned putative genetic locations. This methodology can be applied to construct high-density linkage maps in any autotetraploid species, and could also be extended to higher autopolyploids. PMID:23704960
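The dosage-calling step can be illustrated with a deliberately simplified sketch: the paper fits normal mixture models to the allele intensity ratios, whereas this toy version fixes the five component centres at the theoretical tetraploid ratios k/4 and assigns each marker to the nearest centre.

```python
# Simplified dosage calling for an autotetraploid: five dosage classes
# (AAAA .. BBBB) with expected B-allele intensity ratios 0, 1/4, 1/2, 3/4, 1.
# A full analysis would estimate component means/variances by EM instead.
THEORETICAL_RATIOS = [k / 4 for k in range(5)]

def call_dosage(ratio):
    """Return the allele dosage (0-4) whose expected ratio is closest."""
    return min(range(5), key=lambda k: abs(ratio - THEORETICAL_RATIOS[k]))

calls = [call_dosage(r) for r in (0.02, 0.27, 0.49, 0.74, 0.98)]
print(calls)  # [0, 1, 2, 3, 4]
```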

  4. Map projections for larger-scale mapping

    NASA Technical Reports Server (NTRS)

    Snyder, J. P.

    1982-01-01

    For the U.S. Geological Survey maps at 1:1,000,000-scale and larger, the most common projections are conformal, such as the Transverse Mercator and Lambert Conformal Conic. Projections for these scales should treat the Earth as an ellipsoid. In addition, the USGS has conceived and designed some new projections, including the Space Oblique Mercator, the first map projection designed to permit low-distortion mapping of the Earth from satellite imagery, continuously following the groundtrack. The USGS has programmed nearly all pertinent projection equations for inverse and forward calculations. These are used to plot maps or to transform coordinates from one projection to another. The projections in current use are described.
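For intuition, the forward equations of the Transverse Mercator on a sphere can be written in a few lines; note that the USGS projections discussed here use the full ellipsoidal formulas, so this spherical sketch is only an approximation.

```python
import math

def transverse_mercator(lat_deg, lon_deg, lon0_deg=0.0, R=6371000.0):
    """Forward spherical Transverse Mercator (central meridian lon0_deg).

    Returns (easting, northing) in metres on a sphere of radius R.
    """
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - lon0_deg)
    B = math.cos(lat) * math.sin(dlon)
    x = 0.5 * R * math.log((1 + B) / (1 - B))          # easting
    y = R * math.atan2(math.tan(lat), math.cos(dlon))  # northing
    return x, y

x, y = transverse_mercator(0.0, 0.0)  # origin maps to (0, 0)
```

Near the central meridian the easting approaches R times the longitude difference in radians, which is why distortion stays low in a narrow zone.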

  5. Global Map of Epithermal Neutrons

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Observations by NASA's 2001 Mars Odyssey spacecraft show a global view of Mars in intermediate-energy, or epithermal, neutrons. Soil enriched by hydrogen is indicated by the deep blue colors on the map, which show a low intensity of epithermal neutrons. Progressively smaller amounts of hydrogen are shown in the colors light blue, green, yellow and red. The deep blue areas in the polar regions are believed to contain up to 50 percent water ice in the upper one meter (three feet) of the soil. Hydrogen in the far north is hidden at this time beneath a layer of carbon dioxide frost (dry ice). Light blue regions near the equator contain slightly enhanced near-surface hydrogen, which is most likely chemically or physically bound because water ice is not stable near the equator. The view shown here is a map of measurements made during the first three months of mapping using the neutron spectrometer instrument, part of the gamma ray spectrometer instrument suite. The central meridian in this projection is zero degrees longitude. Topographic features are superimposed on the map for geographic reference.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson, and NASA's Johnson Space Center, Houston, operate the science instruments. The gamma-ray spectrometer was provided by the University of Arizona in collaboration with the Russian Aviation and Space Agency, which provided the high-energy neutron detector, and the Los Alamos National Laboratories, New Mexico, which provided the neutron spectrometer. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  6. Sodium Velocity Maps on Mercury

    NASA Technical Reports Server (NTRS)

    Potter, A. E.; Killen, R. M.

    2011-01-01

    The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line, corrected for the radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. Then, the spectrograph slit was stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensity, and Earthward velocity over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak earthward velocity occurs in the equatorial region and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a similar velocity map.
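The velocity measurement described above reduces to the classical Doppler relation v = c·Δλ/λ. A sketch follows; the 7 mÅ figure is the record's sampling interval, and using it as an example shift is our choice.

```python
# Line-of-sight velocity from the Doppler shift of the Na D2 line centroid.
C_KM_S = 299792.458   # speed of light, km/s
NA_D2_NM = 588.995    # sodium D2 rest wavelength, nm

def doppler_velocity_kms(shift_nm, rest_nm=NA_D2_NM):
    """Radial velocity (km/s) implied by a wavelength shift; + is receding."""
    return C_KM_S * shift_nm / rest_nm

# One 7-milliangstrom sample step = 0.0007 nm, i.e. ~0.36 km/s per step.
v = doppler_velocity_kms(0.007 / 10)
```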

  7. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  8. On genetic map functions

    SciTech Connect

    Zhao, Hongyu; Speed, T.P.

    1996-04-01

    Various genetic map functions have been proposed to infer the unobservable genetic distance between two loci from the observable recombination fraction between them. Some map functions were found to fit data better than others. When there are more than three markers, multilocus recombination probabilities cannot be uniquely determined by the defining property of map functions, and different methods have been proposed to permit the use of map functions to analyze multilocus data. If for a given map function, there is a probability model for recombination that can give rise to it, then joint recombination probabilities can be deduced from this model. This provides another way to use map functions in multilocus analysis. In this paper we show that stationary renewal processes give rise to most of the map functions in the literature. Furthermore, we show that the interevent distributions of these renewal processes can all be approximated quite well by gamma distributions. 43 refs., 4 figs.
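Two of the map functions commonly discussed in this literature, Haldane's (no interference) and Kosambi's, convert a recombination fraction r into a map distance in morgans; they are not named in the abstract, so they appear here only as representative examples.

```python
import math

def haldane(r):
    """Haldane map function: distance in morgans, assuming no interference."""
    return -0.5 * math.log(1.0 - 2.0 * r)

def kosambi(r):
    """Kosambi map function: allows for partial crossover interference."""
    return 0.25 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

d_h = haldane(0.1)   # ~0.1116 morgans
d_k = kosambi(0.1)   # ~0.1014 morgans
```

Both tend to r itself for small r and diverge as r approaches 0.5, consistent with the defining property of map functions.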

  9. Effect of noise intensity and illumination intensity on visual performance.

    PubMed

    Lin, Chin-Chiuan

    2014-10-01

    The results of Experiment 1 indicated that noise and illumination intensity have a significant effect on character identification performance, which was better at 30 dBA than at 60 and 90 dBA, and better at 500 and 800 lux than at 200 lux. However, the interaction of noise and illumination intensity did not significantly affect visual performance. The results of Experiment 2 indicated that noise and illumination intensity also had a significant effect on reading comprehension performance, which was better at 30 dBA than at 60 and 90 dBA, and better at 500 lux than at 200 and 800 lux. Furthermore, reading comprehension performance was better at 500 lux lighting and 30 dBA noise than with 800 lux and 90 dBA. High noise intensity impaired visual performance, and visual performance at normal illumination intensity was better than at other illumination intensities. The interaction of noise and illumination had a significant effect on reading comprehension. These results indicate that noise intensity lower than 30 dBA and illumination intensity approximately 500 lux might be the optimal conditions for visual work. PMID:25153619

  10. An arc-sequencing algorithm for intensity modulated arc therapy

    SciTech Connect

    Shepard, D. M.; Cao, D.; Afghan, M. K. N.; Earl, M. A.

    2007-02-15

    Intensity modulated arc therapy (IMAT) is an intensity modulated radiation therapy delivery technique originally proposed as an alternative to tomotherapy. IMAT uses a series of overlapping arcs to deliver optimized intensity patterns from each beam direction. The full potential of IMAT has gone largely unrealized due in part to a lack of robust and commercially available inverse planning tools. To address this, we have implemented an IMAT arc-sequencing algorithm that translates optimized intensity maps into deliverable IMAT plans. The sequencing algorithm uses simulated annealing to simultaneously optimize the aperture shapes and weights throughout each arc. The sequencer enforces the delivery constraints while minimizing the discrepancies between the optimized and sequenced intensity maps. The performance of the algorithm has been tested for ten patient cases (3 prostate, 3 brain, 2 head-and-neck, 1 lung, and 1 pancreas). Seven coplanar IMAT plans were created using an average of 4.6 arcs and 685 monitor units. Additionally, three noncoplanar plans were created using an average of 16 arcs and 498 monitor units. The results demonstrate that the arc sequencer can provide efficient and highly conformal IMAT plans. An average sequencing time of approximately 20 min was observed.
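The simulated-annealing idea can be caricatured in one dimension: anneal a set of aperture weights so their summed delivery matches a target intensity profile. The apertures, cost function, cooling schedule and all constants below are illustrative, not the authors' implementation, which also optimizes aperture shapes under delivery constraints.

```python
import math
import random

random.seed(0)  # reproducible toy run

target = [1.0, 2.0, 3.0, 2.0, 1.0]   # desired 1-D intensity profile
apertures = [[1, 1, 1, 1, 1],        # open field
             [0, 1, 1, 1, 0],        # narrower aperture
             [0, 0, 1, 0, 0]]        # central segment

def delivered(weights):
    """Intensity profile produced by weighting the fixed aperture shapes."""
    return [sum(w * a[i] for w, a in zip(weights, apertures))
            for i in range(len(target))]

def cost(weights):
    """Squared discrepancy between delivered and target profiles."""
    return sum((d - t) ** 2 for d, t in zip(delivered(weights), target))

weights = [0.5, 0.5, 0.5]
best = list(weights)
temp = 1.0
for _ in range(2000):
    trial = [max(0.0, w + random.gauss(0.0, 0.1)) for w in weights]
    dc = cost(trial) - cost(weights)
    if dc < 0 or random.random() < math.exp(-dc / temp):
        weights = trial              # Metropolis acceptance
        if cost(weights) < cost(best):
            best = list(weights)
    temp *= 0.998                    # geometric cooling
```

With weights (1, 1, 1) the delivery matches the target exactly, so a successful anneal drives the cost toward zero.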

  11. MPEG-4 AVC saliency map computation

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.

    2014-02-01

    A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements with minimal decoding operations. In this respect, an a priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities attracting visual attention. Secondly, the MPEG-4 AVC reference software is extended with software tools allowing the parsing of these elements and their subsequent use in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple symbols Quantization Index Modulation) insertion method, average PSNR gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained from processing 2 hours of heterogeneous video content.
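The fusion step ("a combination of static saliency and motion maps") can be sketched generically; the equal 0.5/0.5 weighting and min-max normalization are assumptions, and nothing here parses actual MPEG-4 AVC syntax elements.

```python
def normalize(m):
    """Min-max normalize a flat saliency map to [0, 1]."""
    lo, hi = min(m), max(m)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in m]

def fuse(static, motion, alpha=0.5):
    """Blend normalized static and motion saliency maps."""
    s, m = normalize(static), normalize(motion)
    return [alpha * a + (1 - alpha) * b for a, b in zip(s, m)]

# Toy 3-pixel example: static map favours pixel 1, motion map favours pixel 2.
saliency = fuse([10, 40, 20], [0.0, 0.2, 0.9])
```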

  12. Intensity attenuation in the Pannonian Basin

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Szanyi, Gyöngyvér

    2015-04-01

    Ground motion prediction equations play a key role in seismic hazard assessment. Earthquake hazard has to be expressed in macroseismic intensities for seismic risk estimations, where a direct relation to the damage associated with ground shaking is needed. It can also be necessary for shake map generation, where the map is used for prompt notification of the public, disaster management officers and insurance companies. Although only few instrumental strong motion data have been recorded in the Pannonian Basin, there are numerous historical reports of past earthquakes since the 1763 Komárom earthquake. Knowing the intensity attenuation and comparing it with relations for other areas - where instrumental strong motion data also exist - can help us choose from the existing instrumental ground motion prediction equations. The aim of this work is to determine an intensity attenuation formula for the inner part of the Pannonian Basin, which can then be used to find an adaptable ground motion prediction equation for the area. The crust below the Pannonian Basin is thin and warm, and it is overlain by thick sediments. Thus the attenuation of seismic waves here differs from the attenuation in the Alp-Carpathian mountain belt. We have therefore collected intensity data only from the inner part of the Pannonian Basin and defined the boundaries of the studied area by a crustal thickness of 30 km (Windhoffer et al., 2005). Ninety earthquakes from 1763 until 2014 have a sufficient number of macroseismic data. The magnitudes of the events vary from 3.0 to 6.6. We have used individual intensity points to eliminate the subjectivity of drawing isoseismals; the number of available intensity data points is more than 3000. Careful quality control has been performed on the dataset. The different magnitude types in the earthquake catalogue have been converted to local and moment magnitudes using relations determined for the Pannonian Basin.
We applied the attenuation formula by Sorensen

  13. SMOS sea surface salinity maps of the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Gabarro, Carolina; Olmedo, Estrella; Turiel, Antonio; Ballabrera-Poy, Joaquim; Martinez, Justino; Portabella, Marcos

    2016-04-01

    Salinity and temperature gradients drive the thermohaline circulation of the oceans and play a key role in ocean-atmosphere coupling. The strong and direct interactions between the ocean and the cryosphere (primarily through sea ice and ice shelves) are also a key ingredient of the thermohaline circulation. ESA's Soil Moisture and Ocean Salinity (SMOS) mission, launched in 2009, has the objective of measuring soil moisture over the continents and sea surface salinity over the oceans. Although the mission was originally conceived for hydrological and oceanographic studies [1], SMOS is also making inroads into cryospheric monitoring. SMOS carries an innovative L-band (1.4 GHz, or 21-cm wavelength) passive interferometric radiometer (the so-called MIRAS) that measures the electromagnetic radiation emitted by the Earth's surface at about 50 km spatial resolution across a wide (1200 km) swath, with a 3-day revisit time at the equator and a more frequent one at the poles. Although the SMOS operating frequency offers almost the maximum sensitivity of the brightness temperature (TB) to sea surface salinity (SSS) variations, this sensitivity is still rather low: 90% of ocean SSS values span a range of brightness temperatures of only 5 K at L-band. The sensitivity is particularly low in cold waters, which implies that SSS retrieval requires high radiometric performance. Since the SMOS launch, SSS Level 3 maps have been distributed by several expert laboratories including the Barcelona Expert Centre (BEC). However, since the TB sensitivity to SSS decreases with decreasing sea surface temperature (SST), large retrieval errors had been reported when retrieving salinity values at latitudes above 50°N. Two new processing algorithms, recently developed at BEC, have led to a considerable improvement of the SMOS data, allowing for the first time to derive SSS maps in cold waters. The first one is to empirically characterize and correct the systematic biases with six

  14. Relationships between peak ground acceleration, peak ground velocity, and modified mercalli intensity in California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.

    1999-01-01

    We have developed regression relationships between Modified Mercalli Intensity (Imm) and peak ground acceleration (PGA) and velocity (PGV) by comparing horizontal peak ground motions to observed intensities for eight significant California earthquakes. For the limited range of Modified Mercalli intensities (Imm), we find that for peak acceleration with V ≤ Imm ≤ VIII, Imm = 3.66 log(PGA) - 1.66, and for peak velocity with V ≤ Imm ≤ IX, Imm = 3.47 log(PGV) + 2.35. From comparison with observed intensity maps, we find that a combined regression based on peak velocity for intensity > VII and on peak acceleration for intensity < VII is most suitable for reproducing observed Imm patterns, consistent with high intensities being related to damage (proportional to ground velocity) and with lower intensities determined by felt accounts (most sensitive to higher-frequency ground acceleration). These new Imm relationships are significantly different from the Trifunac and Brady (1975) correlations, which have been used extensively in loss estimation.
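The two regressions, and the combined rule described in the abstract, translate directly into code (units per Wald et al.: PGA in cm/s², PGV in cm/s; the exact switch-over logic below, thresholding on the PGA-based estimate, is our simplification):

```python
import math

def mmi_from_pga(pga_cm_s2):
    """Modified Mercalli Intensity from peak ground acceleration (cm/s^2)."""
    return 3.66 * math.log10(pga_cm_s2) - 1.66

def mmi_from_pgv(pgv_cm_s):
    """Modified Mercalli Intensity from peak ground velocity (cm/s)."""
    return 3.47 * math.log10(pgv_cm_s) + 2.35

def combined_mmi(pga_cm_s2, pgv_cm_s):
    """Use the PGV regression when shaking is strong (estimated Imm >= VII)."""
    imm = mmi_from_pga(pga_cm_s2)
    return mmi_from_pgv(pgv_cm_s) if imm >= 7.0 else imm

print(round(mmi_from_pga(100.0), 2))  # 5.66
```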

  15. Macroseismic Intensities from the 2015 Gorkha, Nepal, Earthquake

    NASA Astrophysics Data System (ADS)

    Martin, S. S.; Hough, S. E.; Gahalaut, V. K.; Hung, C.

    2015-12-01

    The Mw 7.8 Gorkha, Nepal, earthquake, the largest central Himalayan earthquake in eighty-one years, yielded few instrumental recordings of strong motion. To supplement these, we collected 3800 detailed media and first-person accounts of macroseismic effects that