Science.gov

Sample records for 21-cm intensity mapping

  1. Intensity Mapping During Reionization: 21 cm and Cross-correlations

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; HERA Collaboration

    2016-01-01

    The first generation of 21 cm epoch of reionization (EoR) experiments are now reaching the sensitivities necessary for a detection of the power spectrum of plausible reionization models, and with the advent of next-generation capabilities (e.g. the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array Phase I Low) will move beyond the power spectrum to imaging of the EoR intergalactic medium. Such datasets provide context to galaxy evolution studies for the earliest galaxies on scales of tens of Mpc, but at present wide, deep galaxy surveys are lacking, and attaining the depth to survey the bulk of galaxies responsible for reionization will be challenging even for JWST. Thus we seek useful cross-correlations with other, more direct tracers of the galaxy population. I review near-term prospects for cross-correlation studies of 21 cm with CO and C II emission, as well as future far-infrared missions such as CALISTO.

  2. Baryonic acoustic oscillations from 21 cm intensity mapping: the Square Kilometre Array case

    NASA Astrophysics Data System (ADS)

    Villaescusa-Navarro, Francisco; Alonso, David; Viel, Matteo

    2017-04-01

    We quantitatively investigate the possibility of detecting baryonic acoustic oscillations (BAO) using single-dish 21 cm intensity mapping observations in the post-reionization era. We show that the telescope beam smears out the isotropic BAO signature and, in the case of the Square Kilometre Array (SKA) instrument, makes it undetectable at redshifts z ≳ 1. We however demonstrate that the BAO peak can still be detected in the radial 21 cm power spectrum and describe a method for making this type of measurement. By means of numerical simulations, containing the 21 cm cosmological signal as well as the most relevant Galactic and extragalactic foregrounds and basic instrumental effects, we quantify the precision with which the radial BAO scale can be measured in the 21 cm power spectrum. We systematically investigate the signal to noise and the precision of the recovered BAO signal as a function of cosmic variance, instrumental noise, angular resolution and foreground contamination. We find that the expected noise levels of SKA would degrade the final BAO errors by ∼5 per cent with respect to the cosmic-variance limited case at low redshifts, but that the effect grows up to ∼65 per cent at z ∼ 2-3. Furthermore, we find that the radial BAO signature is robust against foreground systematics, and that the main effect is an increase of ∼20 per cent in the final uncertainty on the standard ruler, caused by the contribution of foreground residuals as well as the reduction in sky area needed to avoid high-foreground regions. We also find that it should be possible to detect the radial BAO signature with high significance over the full redshift range. We conclude that a 21 cm experiment carried out by the SKA should be able to make direct measurements of the expansion rate H(z) with competitive per cent level precision at redshifts z ≲ 2.5.
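    As an aside, the radial (line-of-sight) power spectrum discussed above can be illustrated with a minimal sketch: FFT each line of sight of a brightness-temperature cube and average the mode power over the sky plane. The grid size and box length below are illustrative assumptions, not the survey parameters of the paper.

```python
import numpy as np

def radial_power_spectrum(cube, box_los):
    """Radial (1D, line-of-sight) power spectrum of a temperature cube.

    cube: (nx, ny, nz) array, with the last axis along the line of sight.
    box_los: comoving length of the line-of-sight axis (e.g. in h^-1 Mpc).
    """
    nz = cube.shape[-1]
    # Discrete FT with a dx factor so the normalization mimics a continuum FT.
    dk = np.fft.rfft(cube, axis=-1) * (box_los / nz)
    # Average |delta_k|^2 over all lines of sight; divide by the box length.
    power = np.mean(np.abs(dk) ** 2, axis=(0, 1)) / box_los
    k_par = 2 * np.pi * np.fft.rfftfreq(nz, d=box_los / nz)
    return k_par, power

# Toy example: white noise in a 100 h^-1 Mpc deep box (illustrative only).
rng = np.random.default_rng(0)
k_par, p1d = radial_power_spectrum(rng.standard_normal((16, 16, 64)), box_los=100.0)
```

A real analysis would additionally correct for the instrument beam and foreground-cleaning transfer function before fitting for the BAO feature.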

  3. Intensity Mapping with Carbon Monoxide Emission Lines and the Redshifted 21 cm Line

    NASA Astrophysics Data System (ADS)

    Lidz, Adam; Furlanetto, Steven R.; Oh, S. Peng; Aguirre, James; Chang, Tzu-Ching; Doré, Olivier; Pritchard, Jonathan R.

    2011-11-01

    We quantify the prospects for using emission lines from rotational transitions of the CO molecule to perform an "intensity mapping" observation at high redshift during the Epoch of Reionization (EoR). The aim of CO intensity mapping is to observe the combined CO emission from many unresolved galaxies, to measure the spatial fluctuations in this emission, and to use this as a tracer of large-scale structure at very early times in the history of our universe. This measurement would help determine the properties of molecular clouds—the sites of star formation—in the very galaxies that reionize the universe. We further consider the possibility of cross-correlating CO intensity maps with future observations of the redshifted 21 cm line. The cross spectrum is less sensitive to foreground contamination than the auto power spectra, and can therefore help confirm the high-redshift origin of each signal. Furthermore, the cross spectrum measurement would help extract key information about the EoR, especially regarding the size distribution of ionized regions. We discuss uncertainties in predicting the CO signal at high redshift, and discuss strategies for improving these predictions. Under favorable assumptions and feasible specifications for a CO survey mapping the CO(2-1) and CO(1-0) lines, the power spectrum of CO emission fluctuations and its cross spectrum with future 21 cm measurements from the Murchison Widefield Array are detectable at high significance.

  4. Prospects of probing quintessence with H I 21-cm intensity mapping survey

    NASA Astrophysics Data System (ADS)

    Hussain, Azam; Thakur, Shruti; Guha Sarkar, Tapomoy; Sen, Anjan A.

    2016-12-01

    We investigate the prospect of constraining scalar field dark energy models using H I 21-cm intensity mapping surveys. We consider a wide class of coupled scalar field dark energy models whose predictions for the background cosmological evolution differ from the Λ cold dark matter (ΛCDM) predictions by a few per cent. We find that these models can be statistically distinguished from ΛCDM through their imprint on the 21-cm angular power spectrum. At the fiducial redshift z = 1.5, corresponding to radio interferometric observations of the post-reionization H I 21-cm signal at a frequency of 568 MHz, these models can in fact be distinguished from the ΛCDM model at the >3σ signal-to-noise level using a 10 000 h radio observation distributed over 40 pointings of an SKA1-mid-like radio telescope. We also show that tracker models are more likely to be ruled out against ΛCDM than thawing models. Future radio observations can be instrumental in obtaining tighter constraints on the parameter space of dark energy models and in supplementing the bounds obtained from background studies.
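    The quoted 568 MHz follows directly from the redshifting of the 21-cm line, ν_obs = ν_21/(1 + z). A small sketch of this bookkeeping (constants and function names are our own, purely illustrative):

```python
# Rest-frame frequency of the H I hyperfine (21 cm) transition, in MHz.
NU_21 = 1420.405751

def observed_frequency(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return NU_21 / (1.0 + z)

def redshift_of(nu_obs):
    """Redshift whose 21 cm emission is observed at nu_obs (MHz)."""
    return NU_21 / nu_obs - 1.0

print(round(observed_frequency(1.5), 1))  # ≈ 568.2 MHz, the fiducial z = 1.5 channel
```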

  5. Measuring cosmic velocities with 21 cm intensity mapping and galaxy redshift survey cross-correlation dipoles

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Bonvin, Camille

    2017-02-01

    We investigate the feasibility of measuring the effects of peculiar velocities in large-scale structure using the dipole of the redshift-space cross-correlation function. We combine number counts of galaxies with brightness-temperature fluctuations from 21 cm intensity mapping, demonstrating that the dipole may be measured at modest significance (≲2 σ ) by combining the upcoming radio survey Canadian Hydrogen Intensity Mapping Experiment with the future redshift surveys of Dark Energy Spectroscopic Instrument (DESI) and Euclid. More significant measurements (≲10 σ ) will be possible by combining intensity maps from the Square Kilometre Array (SKA) with those of DESI or Euclid, and an even higher significance measurement (≲100 σ ) may be made by combining observables completely internally to the SKA. We account for effects such as contamination by wide-angle terms, interferometer noise and beams in the intensity maps, nonlinear enhancements to the power spectrum, stacking multiple populations, sensitivity to the magnification slope, and the possibility that number counts and intensity maps probe the same tracers. We also derive a new expression for the covariance matrix of multitracer redshift-space correlation function estimators with arbitrary orientation weights, which may be useful for upcoming surveys aiming at measuring redshift-space clustering with multiple tracers.

  6. An intensity map of hydrogen 21-cm emission at redshift z ≈ 0.8.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Bandura, Kevin; Peterson, Jeffrey B

    2010-07-22

    Observations of 21-cm radio emission by neutral hydrogen at redshifts z ≈ 0.5 to ≈ 2.5 are expected to provide a sensitive probe of cosmic dark energy. This is particularly true around the onset of acceleration at z ≈ 1, where traditional optical cosmology becomes very difficult because of the infrared opacity of the atmosphere. Hitherto, 21-cm emission has been detected only to z = 0.24. More distant galaxies generally are too faint for individual detections, but it is possible to measure the aggregate emission from many unresolved galaxies in the 'cosmic web'. Here we report a three-dimensional 21-cm intensity field at z = 0.53 to 1.12. We then co-add the neutral-hydrogen (H I) emission from the volumes surrounding about 10,000 galaxies (from the DEEP2 optical galaxy redshift survey). We detect the aggregate 21-cm glow at a significance of approximately 4σ.
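    The co-adding ("stacking") step described above can be sketched in a few lines: average the intensity-map value at the three-dimensional positions of optically known galaxies, so the aggregate H I glow adds coherently while map noise averages down. The mock cube and catalogue below are assumptions for illustration, not the GBT/DEEP2 data.

```python
import numpy as np

def stack_at_galaxies(cube, positions):
    """Mean map value at galaxy positions.

    cube: 3D intensity map; positions: (N, 3) integer voxel indices.
    """
    i, j, k = positions.T
    return cube[i, j, k].mean()

# Mock data: noise cube plus a faint injected H I glow at galaxy voxels.
rng = np.random.default_rng(2)
cube = rng.standard_normal((32, 32, 32)) * 0.1
pos = rng.integers(0, 32, size=(10_000, 3))
cube[pos[:, 0], pos[:, 1], pos[:, 2]] += 1.0
stacked = stack_at_galaxies(cube, pos)  # recovers the injected glow amplitude
```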

  7. 21 cm intensity mapping with the Five hundred metre Aperture Spherical Telescope

    NASA Astrophysics Data System (ADS)

    Smoot, George F.; Debono, Ivan

    2017-01-01

    This paper describes a programme to map large-scale cosmic structures on the largest possible scales by using the Five hundred metre Aperture Spherical Telescope (FAST) to make a redshifted 21 cm intensity map of the sky over the range 0.5 < z < 2.5. The goal is to map a large swath of the sky, at the angular and spectral resolution of FAST, by simple drift scans with a transverse set of beams. This approach would be complementary to galaxy surveys and could be completed before the Square Kilometre Array (SKA) could begin a more detailed and precise effort. The science goal is to measure large-scale structure at the scale of the baryon acoustic oscillations and above, and the results would significantly complement contemporary observations. The survey would be uniquely sensitive to potential very large-scale features from inflation at the Grand Unified Theory (GUT) scale and complementary to observations of the cosmic microwave background.

  8. Warm dark matter signatures on the 21cm power spectrum: intensity mapping forecasts for SKA

    SciTech Connect

    Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo; Lapi, Andrea

    2015-07-01

    We investigate the impact of warm dark matter (WDM) on 21 cm intensity mapping in the post-reionization Universe at z=3-5. We perform hydrodynamic simulations for 5 different models: cold dark matter and WDM with 1, 2, 3 and 4 keV (thermal relic) masses, and assign the neutral hydrogen a posteriori using two different methods that both reproduce observations of the column density distribution function of neutral hydrogen systems. Contrary to naive expectations, the suppression of power present in the linear and non-linear matter power spectra results in an increase of power in the neutral hydrogen and 21 cm power spectra. This is due to the fact that there is a lack of small-mass halos in WDM models with respect to cold dark matter: in order to distribute the same total amount of neutral hydrogen within the two cosmological models, a larger quantity has to be placed in the most massive halos, which are more biased than in the cold dark matter cosmology. We quantify this effect and assess its significance for the telescope SKA1-LOW, including realistic noise modelling. The results indicate that we will be able to rule out a 4 keV WDM model with 5000 hours of observations at z>3, with a statistical significance of >3 σ, while a smaller mass of 3 keV, comparable to present-day constraints, can be ruled out at more than 2 σ confidence level with 1000 hours of observations at z>5.

  9. Warm dark matter signatures on the 21cm power spectrum: intensity mapping forecasts for SKA

    NASA Astrophysics Data System (ADS)

    Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo; Lapi, Andrea

    2015-07-01

    We investigate the impact of warm dark matter (WDM) on 21 cm intensity mapping in the post-reionization Universe at z=3-5. We perform hydrodynamic simulations for 5 different models: cold dark matter and WDM with 1, 2, 3 and 4 keV (thermal relic) masses, and assign the neutral hydrogen a posteriori using two different methods that both reproduce observations of the column density distribution function of neutral hydrogen systems. Contrary to naive expectations, the suppression of power present in the linear and non-linear matter power spectra results in an increase of power in the neutral hydrogen and 21 cm power spectra. This is due to the fact that there is a lack of small-mass halos in WDM models with respect to cold dark matter: in order to distribute the same total amount of neutral hydrogen within the two cosmological models, a larger quantity has to be placed in the most massive halos, which are more biased than in the cold dark matter cosmology. We quantify this effect and assess its significance for the telescope SKA1-LOW, including realistic noise modelling. The results indicate that we will be able to rule out a 4 keV WDM model with 5000 hours of observations at z>3, with a statistical significance of >3 σ, while a smaller mass of 3 keV, comparable to present-day constraints, can be ruled out at more than 2 σ confidence level with 1000 hours of observations at z>5.

  10. Cross-correlating 21cm intensity maps with Lyman Break Galaxies in the post-reionization era

    SciTech Connect

    Villaescusa-Navarro, Francisco; Viel, Matteo; Alonso, David; Datta, Kanan K.; Santos, Mário G.

    2015-03-01

    We investigate the cross-correlation between the spatial distribution of Lyman Break Galaxies (LBGs) and the 21cm intensity mapping signal at z∼3-5. At these redshifts, galactic feedback is expected to only marginally affect the matter power spectrum, and the neutral hydrogen distribution is independently constrained by quasar spectra. Using a high-resolution N-body simulation, populated with neutral hydrogen a posteriori, we forecast the expected LBG-21cm cross-spectrum and its error for a 21cm field observed by the Square Kilometre Array (SKA1-LOW and SKA1-MID), combined with a spectroscopic LBG survey of the same volume. The cross power can be detected with a signal-to-noise ratio (SNR) up to ∼10 times higher (and down to ∼4 times smaller scales) than the 21cm auto-spectrum for this set-up, with the SNR depending only very weakly on redshift and the LBG population. We also show that while both the 21cm auto- and LBG-21cm cross-spectra can be reliably recovered after the cleaning of smooth-spectrum foreground contamination, only the cross-power is robust to problematic non-smooth foregrounds like polarized synchrotron emission.

  11. Cross-correlating 21cm intensity maps with Lyman Break Galaxies in the post-reionization era

    NASA Astrophysics Data System (ADS)

    Villaescusa-Navarro, Francisco; Viel, Matteo; Alonso, David; Datta, Kanan K.; Bull, Philip; Santos, Mário G.

    2015-03-01

    We investigate the cross-correlation between the spatial distribution of Lyman Break Galaxies (LBGs) and the 21cm intensity mapping signal at z~3-5. At these redshifts, galactic feedback is expected to only marginally affect the matter power spectrum, and the neutral hydrogen distribution is independently constrained by quasar spectra. Using a high-resolution N-body simulation, populated with neutral hydrogen a posteriori, we forecast the expected LBG-21cm cross-spectrum and its error for a 21cm field observed by the Square Kilometre Array (SKA1-LOW and SKA1-MID), combined with a spectroscopic LBG survey of the same volume. The cross power can be detected with a signal-to-noise ratio (SNR) up to ~10 times higher (and down to ~4 times smaller scales) than the 21cm auto-spectrum for this set-up, with the SNR depending only very weakly on redshift and the LBG population. We also show that while both the 21cm auto- and LBG-21cm cross-spectra can be reliably recovered after the cleaning of smooth-spectrum foreground contamination, only the cross-power is robust to problematic non-smooth foregrounds like polarized synchrotron emission.

  12. Cosmology on ultralarge scales with intensity mapping of the neutral hydrogen 21 cm emission: limits on primordial non-Gaussianity.

    PubMed

    Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís

    2013-10-25

    The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (H I) at redshifts z ≃ 1-5 are the best hope of accessing the ultralarge-scale information directly related to the early Universe. A purpose-built H I intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ≃ 1.

  13. The cross-correlation between 21 cm intensity mapping maps and the Lyα forest in the post-reionization era

    NASA Astrophysics Data System (ADS)

    Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo

    2017-04-01

    We investigate the cross-correlation signal between 21cm intensity mapping maps and the Lyα forest in the fully non-linear regime using state-of-the-art hydrodynamic simulations. The cross-correlation signal between the Lyα forest and 21cm maps can provide a coherent and comprehensive picture of the neutral hydrogen (HI) content of our Universe in the post-reionization era, probing both its mass content and volume distribution. We compute the auto-power spectra of both fields together with their cross-power spectrum at z = 2.4 and find that on large scales the fields are completely anti-correlated. This anti-correlation arises because regions with high (low) 21cm emission, such as those with a large (low) concentration of damped Lyα systems, show up as regions with low (high) transmitted flux. We find that on scales smaller than k ≃ 0.2 h Mpc⁻¹ the cross-correlation coefficient departs from -1, at the scale where non-linearities show up. We use the anisotropy of the power spectra in redshift space to determine the values of the bias and of the redshift-space distortion parameters of both fields. We find that the errors on the values of the cosmological and astrophysical parameters could decrease by 30% when adding data from the cross-power spectrum, in a conservative analysis. Our results show that linear theory is capable of reproducing the shape and amplitude of the cross-power up to rather non-linear scales. Finally, we find that the 21cm-Lyα cross-power spectrum can be detected by combining data from a BOSS-like survey with 21cm intensity mapping observations by SKA1-MID, with a S/N ratio higher than 3 over k ∈ [0.06, 1] h Mpc⁻¹. We emphasize that while the shape and amplitude of the 21cm auto-power spectrum can be severely affected by residual foreground contamination, cross-power spectra will be less sensitive to it and can therefore be used to identify systematics in the 21cm maps.
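    The cross-correlation coefficient r(k) = P_xy/√(P_xx P_yy) used above can be sketched minimally as follows, here collapsed to a single number over all Fourier modes rather than binned in |k| shells as a real analysis would do. The mock fields are illustrative assumptions; a perfectly anti-correlated pair gives r = -1, the large-scale limit found in the paper.

```python
import numpy as np

def cross_coefficient(field_a, field_b):
    """Global cross-correlation coefficient of two 3D fields (all modes)."""
    fa = np.fft.fftn(field_a)
    fb = np.fft.fftn(field_b)
    p_ab = (fa * np.conj(fb)).real.sum()   # cross-power summed over modes
    p_aa = (np.abs(fa) ** 2).sum()         # auto-power of field a
    p_bb = (np.abs(fb) ** 2).sum()         # auto-power of field b
    return p_ab / np.sqrt(p_aa * p_bb)

# Toy example: a 21 cm field and a perfectly anti-correlated "flux" field.
rng = np.random.default_rng(0)
delta_21 = rng.standard_normal((16, 16, 16))
flux = -delta_21
r = cross_coefficient(delta_21, flux)  # → -1 for full anti-correlation
```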

  14. Mapping Cosmic Structure Using 21-cm Hydrogen Signal at Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; GBT 21-cm Intensity Mapping Group

    2011-05-01

    We are using the Green Bank Telescope to make 21-cm intensity maps of cosmic structure in a 0.15 Gpc^3 box at a redshift of z ≈ 1. The intensity mapping technique combines the flux from many galaxies in each pixel, allowing much greater mapping speed than a traditional redshift survey. The measurement is being made at z ≈ 1 to take advantage of a window in frequency around 700 MHz where terrestrial radio frequency interference (RFI) is currently at a minimum. This minimum is due to a reallocation of this frequency band from analog television to wide-area wireless internet and public service usage. We will report progress of our attempt to detect the autocorrelation of the 21-cm signal. The ultimate goal of this mapping is to use Baryon Acoustic Oscillations to provide more precise constraints on dark energy models.

  15. 21 cm maps of Jupiter's radiation belts from all rotational aspects

    NASA Technical Reports Server (NTRS)

    De Pater, I.

    1980-01-01

    Two-dimensional maps of the radio emission from Jupiter were made in December 1977 at a frequency of 1,412 MHz using the Westerbork telescope. Pictures in all four Stokes parameters have been obtained every 15 deg in longitude, each smeared over 15 deg of the planet's rotation. The maps have an E-W resolution of about 1/3 of the diameter of the disk and an N-S resolution about 3 times coarser. The total intensity and linear polarization maps are accurate to 0.5%, and the circularly polarized maps to 0.1%, of the maximum intensities in I. The whole set of maps clearly shows the existence of higher-order terms in the magnetic field of Jupiter.

  16. MAPPING THE DYNAMICS OF COLD GAS AROUND SGR A* THROUGH 21 cm ABSORPTION

    SciTech Connect

    Christian, Pierre; Loeb, Abraham

    2015-11-20

    The presence of a circumnuclear stellar disk around Sgr A* and megamaser systems near other black holes indicates that dense neutral disks can be found in galactic nuclei. We show that depending on their inclination angle, optical depth, and spin temperature, these disks could be observed spectroscopically through 21 cm absorption. Related spectroscopic observations of Sgr A* can determine its HI disk parameters and the possible presence of gaps in the disk. Clumps of dense gas similar to the G2 cloud could also be detected in 21 cm absorption against Sgr A* radio emission.

  17. From Enormous 3D Maps of the Universe to Astrophysical and Cosmological Constraints: Statistical Tools for Realizing the Promise of 21 cm Cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Tegmark, Max

    2015-01-01

    21 cm cosmology promises to provide an exquisite probe of astrophysics and cosmology during the cosmic dark ages and the epoch of reionization. An enormous volume of the universe, previously inaccessible, can be directly mapped by looking for the faint signal from the hyperfine transition of neutral hydrogen. One day, 21 cm tomography could even eclipse the CMB as the most precise test of our cosmological models. Realizing that promise, however, has proven extremely challenging. We're looking for a small signal buried under foregrounds orders of magnitude stronger. We know that we're going to need very sensitive, and thus very large, low-frequency interferometers. Those large interferometers produce vast quantities of data, which must be carefully analyzed. In this talk, I will present my Ph.D. work at MIT on the development and application of rigorous, fast, and robust statistical tools for extracting that cosmological signal while maintaining a thorough understanding of the error properties of those measurements. These tools reduce vast quantities of interferometric data into statistics like the power spectrum that can be directly compared with theory and simulation, all while minimizing the amount of cosmological information lost. I will also present results from applying those techniques to data from the Murchison Widefield Array and will discuss the exciting science they will enable with the upcoming Hydrogen Epoch of Reionization Array.

  18. Mapmaking for precision 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Tegmark, Max; Liu, Adrian; Ewall-Wice, Aaron; Hewitt, Jacqueline N.; Morales, Miguel F.; Neben, Abraham R.; Parsons, Aaron R.; Zheng, Haoxuan

    2015-01-01

    In order to study the "Cosmic Dawn" and the Epoch of Reionization with 21 cm tomography, we need to statistically separate the cosmological signal from foregrounds known to be orders of magnitude brighter. Over the last few years, we have learned much about the role our telescopes play in creating a putatively foreground-free region called the "EoR window." In this work, we examine how an interferometer's effects can be taken into account in a way that allows for the rigorous estimation of 21 cm power spectra from interferometric maps while mitigating foreground contamination and thus increasing sensitivity. This requires a precise understanding of the statistical relationship between the maps we make and the underlying true sky. While some of these calculations would be computationally infeasible if performed exactly, we explore several well-controlled approximations that make mapmaking and the calculation of map statistics much faster, especially for compact and highly redundant interferometers designed specifically for 21 cm cosmology. We demonstrate the utility of these methods and the parametrized trade-offs between accuracy and speed using one such telescope, the upcoming Hydrogen Epoch of Reionization Array, as a case study.

  19. The 21-cm Signal from the cosmological epoch of recombination

    SciTech Connect

    Fialkov, A.; Loeb, A.

    2013-11-01

    The redshifted 21-cm emission by neutral hydrogen offers a unique tool for mapping structure formation in the early universe in three dimensions. Here we provide the first detailed calculation of the 21-cm emission signal during and after the epoch of hydrogen recombination in the redshift range of z ∼ 500–1,100, corresponding to observed wavelengths of 100–230 meters. The 21-cm line deviates from thermal equilibrium with the cosmic microwave background (CMB) due to the excess Lyα radiation from hydrogen and helium recombinations. The resulting 21-cm signal reaches a brightness temperature of a milli-Kelvin, orders of magnitude larger than previously estimated. Its detection by a future lunar or space-based observatory could improve dramatically the statistical constraints on the cosmological initial conditions compared to existing two-dimensional maps of the CMB anisotropies.

  20. BAYESIAN SEMI-BLIND COMPONENT SEPARATION FOR FOREGROUND REMOVAL IN INTERFEROMETRIC 21 cm OBSERVATIONS

    SciTech Connect

    Zhang, Le; Timbie, Peter T.; Bunn, Emory F.; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, P. M.; Wandelt, Benjamin D.

    2016-01-15

    In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H I Expectation-Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
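    The Principal Component Analysis baseline that HIEMICA is compared against exploits the spectral smoothness of foregrounds: the few largest frequency-frequency eigenmodes of the data carry most of the foreground power and are projected out. A minimal sketch, with an illustrative mode count and mock power-law foreground of our own choosing:

```python
import numpy as np

def pca_clean(data, n_modes=3):
    """Remove the n_modes dominant frequency eigenmodes from the data.

    data: (n_freq, n_pix) array of maps stacked by frequency channel.
    """
    cov = data @ data.T / data.shape[1]      # frequency-frequency covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    modes = eigvecs[:, -n_modes:]            # dominant (foreground-like) modes
    return data - modes @ (modes.T @ data)   # project them out of the data

# Mock data: a bright, spectrally smooth power-law foreground plus unit-variance
# "signal" noise across 32 channels and 500 pixels (all values illustrative).
rng = np.random.default_rng(1)
freqs = np.linspace(0.9, 1.1, 32)[:, None]
foreground = 1e3 * freqs ** -2.7 * np.ones((1, 500))
signal = rng.standard_normal((32, 500))
cleaned = pca_clean(foreground + signal, n_modes=2)
```

After cleaning, the ~10³-amplitude foreground is suppressed to roughly the signal level, at the cost of also removing any signal power lying in the discarded modes; blind methods like HIEMICA aim to reduce that signal loss.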

  1. Probing lepton asymmetry with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo

    2014-09-01

    We investigate the issue of how accurately we can constrain the lepton number asymmetry ξ_ν = μ_ν/T_ν in the Universe by using future observations of 21 cm line fluctuations and the cosmic microwave background (CMB). We find that combinations of the 21 cm line and the CMB observations can constrain the lepton asymmetry better than big-bang nucleosynthesis (BBN). Additionally, we also discuss constraints on ξ_ν in the presence of some extra radiation, and show that the 21 cm line observations can substantially improve the constraints obtained by the CMB alone, allowing us to distinguish the effects of the lepton asymmetry from those of extra radiation.

  2. Redshifted HI 21-cm Signal from the Post-Reionization Epoch: Cross-Correlations with Other Cosmological Probes

    NASA Astrophysics Data System (ADS)

    Sarkar, T. Guha; Datta, K. K.; Pal, A. K.; Choudhury, T. Roy; Bharadwaj, S.

    2016-12-01

    Tomographic intensity mapping of H I using redshifted 21-cm observations opens up a new window towards our understanding of cosmological background evolution and structure formation. This is a key science goal of several upcoming radio telescopes including the Square Kilometre Array (SKA). In this article, we focus on the post-reionization signal and investigate the cross-correlation of the 21-cm signal with other tracers of the large-scale structure. We consider the cross-correlation of the post-reionization 21-cm signal with the Lyman-α forest, Lyman-break galaxies and late-time anisotropies in the CMBR maps such as weak lensing and the integrated Sachs-Wolfe effect. We study the feasibility of detecting the signal and explore the possibility of obtaining constraints on cosmological models using it.

  3. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation

  4. Data Simulation for 21 cm Cosmology Experiments

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan

    2017-01-01

    21 cm cosmologists seek a measurement of the hyperfine line of neutral hydrogen from very high redshifts. While this signal has the potential to provide an unprecedented view into the early universe, it is also buried under exceedingly bright foreground emission. Over the last several years, 21 cm cosmology research has led to an improved understanding of how low frequency radio interferometers will affect the separation of cosmological signal from foregrounds. This talk will describe new efforts to incorporate this understanding into simulations of the most realistic data sets for the Precision Array for Probing the Epoch of Reionization (PAPER), the Murchison Widefield Array (MWA), and the Hydrogen Epoch of Reionization Array (HERA). These high fidelity simulations are essential for robust algorithm design and validation of early results from these experiments.

  5. Detailed modelling of the 21-cm forest

    NASA Astrophysics Data System (ADS)

    Semelin, B.

    2016-01-01

    The 21-cm forest is a promising probe of the Epoch of Reionization. The local state of the intergalactic medium (IGM) is encoded in the spectrum of a background source (a radio-loud quasar or gamma-ray burst afterglow) by absorption at the local 21-cm wavelength, resulting in a continuous and fluctuating absorption level. Small-scale structures (filaments and minihaloes) in the IGM are responsible for the strongest absorption features. The absorption can also be modulated on large scales by inhomogeneous heating and Wouthuysen-Field coupling. We present the results from a simulation that attempts to preserve the cosmological environment while resolving some of the small-scale structures (a few kpc resolution in a 50 h-1 Mpc box). The simulation couples the dynamics and the ionizing radiative transfer and includes X-ray and Lyman-line radiative transfer for a detailed physical modelling. As a result we find that soft X-ray self-shielding, Lyα self-shielding and shock heating all have an impact on the predicted values of the 21-cm optical depth of moderately overdense structures like filaments. A correct treatment of the peculiar velocities is also critical. Modelling these processes seems necessary for accurate predictions and can be done only at high enough resolution. As a result, based on our fiducial model, we estimate that LOFAR should be able to detect a few (strong) absorption features in a frequency range of a few tens of MHz for a 20 mJy source located at z = 10, while the SKA would extract a large fraction of the absorption information for the same source.

  6. Baryon acoustic oscillation intensity mapping of dark energy.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B; McDonald, Patrick

    2008-03-07

    The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called "dark energy." To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.

  7. Baryon Acoustic Oscillation Intensity Mapping of Dark Energy

    NASA Astrophysics Data System (ADS)

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B.; McDonald, Patrick

    2008-03-01

    The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called “dark energy.” To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.
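Intensity mapping as described above slices the radio sky into frequency channels, each of which corresponds to a redshift shell of 21 cm emission. A minimal sketch of that frequency-redshift mapping (only the 1420.40575 MHz rest frequency is physical; the helper names and sample channels are illustrative):

```python
# Map observed frequency to the redshift of 21 cm emission and back.
# The hyperfine rest frequency (1420.40575 MHz) is a physical constant;
# the helper names and sample channels below are illustrative only.

REST_FREQ_MHZ = 1420.40575

def redshift_of(freq_mhz):
    """Redshift at which the 21 cm line is observed at freq_mhz."""
    return REST_FREQ_MHZ / freq_mhz - 1.0

def observed_freq(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

# Each frequency channel of an intensity-mapping survey is a redshift slice:
for nu in (700.0, 500.0, 300.0):
    print(f"{nu:6.1f} MHz -> z = {redshift_of(nu):.2f}")
```

Because each channel is a distinct redshift slice, a wide-band survey maps large-scale structure in three dimensions without detecting individual galaxies.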

  8. Lensing of 21-cm fluctuations by primordial gravitational waves.

    PubMed

    Book, Laura; Kamionkowski, Marc; Schmidt, Fabian

    2012-05-25

    Weak-gravitational-lensing distortions to the intensity pattern of 21-cm radiation from the dark ages can be decomposed geometrically into curl and curl-free components. Lensing by primordial gravitational waves induces a curl component, while the contribution from lensing by density fluctuations is strongly suppressed. Angular fluctuations in the 21-cm background extend to very small angular scales, and measurements at different frequencies probe different shells in redshift space. There is thus a huge trove of information with which to reconstruct the curl component of the lensing field, allowing tensor-to-scalar ratios conceivably as small as r ~ 10^-9, far smaller than those currently accessible, to be probed.

  9. Redundant Array Configurations for 21 cm Cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Parsons, Aaron R.

    2016-08-01

    Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.

  10. How accurately can 21cm tomography constrain cosmology?

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 ≲ z ≲ 20, in the best-case scenario where 21 cm tomography measures the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  11. Identifying Ionized Regions in Noisy Redshifted 21 cm Data Sets

    NASA Astrophysics Data System (ADS)

    Malloy, Matthew; Lidz, Adam

    2013-04-01

    One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here, we show that it may nonetheless be possible to directly identify ionized regions in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signal during reionization with thermal noise at the level expected for a 500 antenna tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise. In a plausible fiducial model in which ~20% of the volume of the universe is neutral at z ~ 7, we find that a 500-tile MWA may directly identify as many as ~150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.
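The matched-filter idea described above can be sketched in a toy 1D setting: convolve the noisy brightness field with a template the size of the expected ionized bubble and look for prominent minima. Everything here (top-hat bubbles, noise level, threshold) is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np

# Toy matched-filter search for ionized "bubbles" (brightness deficits)
# in a noisy 1D 21 cm field. Bubble shape, noise level and threshold
# are illustrative assumptions only.

rng = np.random.default_rng(0)
n = 1024
signal = np.zeros(n)
centers = [200, 600, 850]
radius = 20
for c in centers:                          # ionized regions: top-hat deficits
    signal[c - radius:c + radius] = -1.0
noisy = signal + rng.normal(0.0, 0.5, n)   # add thermal-like noise

template = np.ones(2 * radius)             # top-hat template matched to bubble size
template /= np.sqrt(template.size)         # unit-norm, so noise keeps its variance
filtered = np.convolve(noisy, template, mode="same")

# Prominent local minima in the filtered field mark candidate ionized regions.
candidates = [i for i in range(1, n - 1)
              if filtered[i] < -3.0                 # ~6x the filtered noise rms
              and filtered[i] <= filtered[i - 1]
              and filtered[i] <= filtered[i + 1]]
```

With a unit-norm template the filtered noise rms stays at 0.5 while each bubble produces a dip of depth ~6.3, so a -3.0 threshold cleanly separates real deficits from noise in this toy setup.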

  12. The cross correlation between the 21-cm radiation and the CMB lensing field: a new cosmological signal

    SciTech Connect

    Vallinotto, Alberto

    2011-01-01

    The measurement of Baryon Acoustic Oscillations through the 21-cm intensity mapping technique at redshift z ≤ 4 has the potential to tightly constrain the evolution of dark energy. Crucial to this experimental effort is the determination of the biasing relation connecting fluctuations in the density of neutral hydrogen (HI) with the ones of the underlying dark matter field. In this work I show how the HI bias relevant to these 21-cm intensity mapping experiments can successfully be measured by cross-correlating their signal with the lensing signal obtained from CMB observations. In particular I show that combining CMB lensing maps from Planck with 21-cm field measurements carried out with an instrument similar to the Cylindrical Radio Telescope, this cross-correlation signal can be detected with a signal-to-noise (S/N) ratio of more than 5. Breaking down the signal arising from different redshift bins of thickness Δz = 0.1, this signal leads to constraints on the large scale neutral hydrogen bias and its evolution at the 4σ level.

  13. Discovery and First Observations of the 21-cm Hydrogen Line

    NASA Astrophysics Data System (ADS)

    Sullivan, W. T.

    2005-08-01

    Unlike most of the great discoveries in the first decade of radio astronomy after World War II, the 21 cm hydrogen line was first predicted theoretically and then purposely sought. The story is familiar of graduate student Henk van de Hulst's prediction in occupied Holland in 1944 and the nearly simultaneous detection of the line by teams at Harvard, Leiden, and Sydney in 1951. But in this paper I will describe various aspects that are little known: (1) In van de Hulst's original paper he not only worked out possible intensities for the 21 cm line, but also for radio hydrogen recombination lines (not detected until the early 1960s), (2) in that same paper he also used Jansky's and Reber's observations of a radio background to make cosmological conclusions, (3) there was no "race" between the Dutch, Americans, and Australians to detect the line, (4) a fire that destroyed the Dutch team's equipment in March 1950 ironically did not hinder their progress, but actually speeded it up (because it led to a change of their chief engineer, bringing in the talented Lex Muller). The scientific and technical styles of the three groups will also be discussed as results of the vastly differing environments in which they operated.

  14. Smith's Cloud (HVC) in 21 cm HI emission

    NASA Astrophysics Data System (ADS)

    Heroux, A. J.

    2006-12-01

    In studying the continuing formation of the Milky Way, we have used the Green Bank Telescope (GBT) of the NRAO to measure the 21 cm HI emission from a specific high velocity cloud known as “Smith’s Cloud”. This cloud is likely within the bounds of the galaxy and appears to be actively plunging into the disk. Our map covers an area about 10 x 14 degrees, with data taken every 3’ over this range. Most of the emission is concentrated into a single large structure with an unusual cometary morphology, which displays signs of interaction between the cloud and the Galactic halo. We will present an analysis of the cloud, along with information on possible FIR emission gained from IRAS data, kinematics, and likely orbits and paths for the origin and future of the cloud. This research was funded through an NSF REU Grant.

  15. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)

    NASA Astrophysics Data System (ADS)

    Vanderlinde, Keith; Chime Collaboration

    2014-04-01

    Hydrogen Intensity (HI) mapping uses redshifted 21cm emission from neutral hydrogen as a 3D tracer of Large Scale Structure (LSS) in the Universe. Imprinted in the LSS is a remnant of the acoustic waves which propagated through the primordial plasma. This feature, the Baryon Acoustic Oscillation (BAO), has a characteristic scale of ~150 co-moving Mpc, which appears in the spatial correlation of LSS. By charting the evolution of this scale over cosmic time, we trace the expansion history of the Universe, constraining the Dark Energy equation of state as it becomes a significant component, particularly at redshifts poorly probed by current BAO surveys. In this talk I will introduce CHIME, a transit radio interferometer designed specifically for this purpose. CHIME is an ambitious new telescope, being built in British Columbia, Canada, and composed of five 20m x 100m parabolic reflectors which focus radiation in one direction (east-west) while interferometry is used to resolve beams in the other (north-south). Earth rotation sweeps them across the sky, resulting in complete daily coverage of the northern celestial hemisphere. Commissioning is underway on the 40 x 37m "Pathfinder" telescope, and the full sized 100m x 100m instrument is funded and under development.
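As a rough back-of-the-envelope estimate (not CHIME's actual beam model), the diffraction limit λ/D sets the angular scale such an instrument can resolve at the redshifted 21 cm line:

```python
import math

# Back-of-the-envelope diffraction-limited beam width ~ lambda / D for an
# aperture extent of ~100 m at the redshifted 21 cm line. Illustrative
# only; this is not CHIME's actual beam model.

C_M_PER_S = 299_792_458.0
REST_FREQ_HZ = 1.42040575e9

def beam_fwhm_deg(z, aperture_m=100.0):
    """Approximate beam width (degrees) at the 21 cm line redshifted to z."""
    wavelength_m = C_M_PER_S * (1.0 + z) / REST_FREQ_HZ
    return math.degrees(wavelength_m / aperture_m)

# At z ~ 1 the 21 cm line arrives at ~42 cm, so a 100 m extent resolves ~0.24 deg.
```

A beam of a fraction of a degree comfortably resolves the ~150 comoving Mpc BAO scale at these redshifts, which subtends on the order of a few degrees.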

  16. Prospects for clustering and lensing measurements with forthcoming intensity mapping and optical surveys

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.; Metcalf, R. B.

    2016-06-01

    We explore the potential of using intensity mapping surveys (MeerKAT, SKA) and optical galaxy surveys (DES, LSST) to detect H I clustering and weak gravitational lensing of 21 cm emission in auto- and cross-correlation. Our forecasts show that high-precision measurements of the clustering and lensing signals can be made in the near future using the intensity mapping technique. Such studies can be used to test the intensity mapping method, and constrain parameters such as the H I density Ω_HI, the H I bias b_HI and the galaxy-H I correlation coefficient r_HI-g.

  17. RESEARCH PAPER: Foreground removal of 21 cm fluctuation with multifrequency fitting

    NASA Astrophysics Data System (ADS)

    He, Li-Ping

    2009-06-01

    The 21 centimeter (21 cm) line emission from neutral hydrogen in the intergalactic medium (IGM) at high redshifts is strongly contaminated by foreground sources such as the diffuse Galactic synchrotron emission and free-free emission from the Galaxy, as well as emission from extragalactic radio sources, thus making its observation very complicated. However, the 21 cm signal can be recovered through its structure in frequency space, as the power spectrum of the foreground contamination is expected to be smooth over a wide band in frequency space while the 21 cm fluctuations vary significantly. We use a simple polynomial fitting to reconstruct the 21 cm signal around four frequencies 50, 100, 150 and 200 MHz with an especially small channel width of 20 kHz. Our calculations show that this multifrequency fitting approach can effectively recover the 21 cm signal in the frequency range 100-200 MHz. However, this method doesn't work well around 50 MHz because of the low intensity of the 21 cm signal at this frequency. We also show that the fluctuation of detector noise can be suppressed to a very low level by taking long integration times, which means that we can reach a sensitivity of ≈ 10 mK at 150 MHz with 40 antennas in 120 hours of observations.
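The polynomial-fitting approach can be illustrated with a toy spectrum: a smooth power-law foreground plus a rapidly fluctuating 21 cm term, fit with a low-order polynomial in log-log space; the residual recovers the fluctuating part. The amplitudes and spectral shape below are assumptions for illustration, not the paper's simulation:

```python
import numpy as np

# Toy multifrequency polynomial fitting: the foreground is smooth in
# frequency (here a synchrotron-like power law), while the 21 cm signal
# fluctuates rapidly from channel to channel. Subtracting a low-order
# polynomial fit in log-log space leaves an estimate of the signal.
# All amplitudes and shapes are illustrative assumptions.

rng = np.random.default_rng(1)
freq = np.linspace(100.0, 200.0, 500)           # MHz
foreground = 1e3 * (freq / 150.0) ** -2.6       # smooth power-law foreground
signal = 0.02 * rng.normal(size=freq.size)      # rapidly fluctuating 21 cm term
observed = foreground + signal

coeffs = np.polyfit(np.log(freq), np.log(observed), deg=3)
smooth_model = np.exp(np.polyval(coeffs, np.log(freq)))
recovered = observed - smooth_model             # residual tracks the 21 cm term
```

The fit absorbs the foreground (five orders of magnitude brighter here) almost perfectly because a power law is smooth in log-log space, while the channel-to-channel fluctuations project only weakly onto a cubic.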

  18. 21 cm signals from ionized and heated regions around first stars

    NASA Astrophysics Data System (ADS)

    Fang, Li-Zhi

    2008-01-01

    The 21 cm signals from the UV photon sources of the reionization epoch are investigated by solving the radiative transfer equation with the WENO algorithm. The results show that a spherical shell of 21 cm emission and absorption will develop around a point source once the speed of the ionization front (I-front) is significantly lower than the speed of light. The 21 cm shell extends from the I-front to the front of light; its inner part is the emission region and its outer part is the absorption region. The 21 cm emission region depends strongly on the intensity, frequency spectrum and lifetime of the UV ionizing source. At redshift 1+z = 20, for a UV ionizing source with an intensity Ė ≈ 10^45 erg s^-1 and a power-law spectrum ν^-α with α = 2, the emission region has a comoving size of 1-3 Mpc when the age of the source is ≈ 2 Myr. However, the emission regions are very small, and would even be erased by thermal broadening, if the source satisfies one of the following conditions: (1) the intensity is less than Ė ≈ 10^43 erg s^-1; (2) the frequency spectrum is thermal at temperature T ≈ 10^5 K; or (3) the frequency spectrum is a power law with α ≥ 3. On the other hand, 21 cm absorption regions develop in all these cases. For a source of short lifetime, no 21 cm emission region can be formed if the source dies out before the I-front speed is significantly lower than the speed of light. Yet a 21 cm absorption region can form and develop even after the emission of the source ceases.

  19. Foregrounds in Wide-field Redshifted 21 cm Power Spectra

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Jacobs, Daniel C.; Bowman, Judd D.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Dillon, Joshua S.; Emrich, D.; Ewall-Wice, A.; Feng, L.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, Han-Seek; Kittiwisit, P.; Kratzenberg, E.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, Sourabh; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tegmark, M.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2015-05-01

    Detection of 21 cm emission of H I from the epoch of reionization, at redshifts z > 6, is limited primarily by foreground emission. We investigate the signatures of wide-field measurements and an all-sky foreground model using the delay spectrum technique that maps the measurements to foreground object locations through signal delays between antenna pairs. We demonstrate interferometric measurements are inherently sensitive to all scales, including the largest angular scales, owing to the nature of wide-field measurements. These wide-field effects are generic to all observations but antenna shapes impact their amplitudes substantially. A dish-shaped antenna yields the most desirable features from a foreground contamination viewpoint, relative to a dipole or a phased array. Comparing data from recent Murchison Widefield Array observations, we demonstrate that the foreground signatures that have the largest impact on the H I signal arise from power received far away from the primary field of view. We identify diffuse emission near the horizon as a significant contributing factor, even on wide antenna spacings that usually represent structures on small scales. For signals entering through the primary field of view, compact emission dominates the foreground contamination. These two mechanisms imprint a characteristic pitchfork signature on the “foreground wedge” in Fourier delay space. Based on these results, we propose that selective down-weighting of data based on antenna spacing and time can mitigate foreground contamination substantially by a factor of ∼100 with negligible loss of sensitivity.
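The delay-spectrum mapping that underlies these results can be sketched for a single baseline: Fourier-transforming the visibility along frequency sends smooth-spectrum foregrounds to low delays, while emission arriving with a geometric delay (e.g. from near the horizon) peaks at that delay. The numbers below are illustrative assumptions:

```python
import numpy as np

# Sketch of the delay-spectrum technique for one baseline: the FFT of the
# visibility along frequency places a flat-spectrum foreground at zero
# delay and an off-axis source at its geometric delay. All amplitudes,
# delays and channelization are illustrative assumptions.

nchan = 256
freqs = np.linspace(100e6, 200e6, nchan)                # Hz
tau_src = 1.0e-6                                        # geometric delay of an off-axis source (s)
vis = 1.0 + 0.2 * np.exp(2j * np.pi * freqs * tau_src)  # flat foreground + delayed source

window = np.blackman(nchan)                             # spectral taper to suppress sidelobes
delay_spec = np.fft.fftshift(np.fft.fft(vis * window))
delays = np.fft.fftshift(np.fft.fftfreq(nchan, d=freqs[1] - freqs[0]))

# Two peaks result: one at tau = 0 (smooth foreground), one near tau = 1 microsecond.
```

The "wedge" arises because a source's maximum geometric delay grows with baseline length, so foregrounds fill a baseline-dependent region of delay space while the spectrally unsmooth 21 cm signal spills beyond it.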

  20. Differentiating CDM and baryon isocurvature models with 21 cm fluctuations

    SciTech Connect

    Kawasaki, Masahiro; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: sekiguti@icrr.u-tokyo.ac.jp

    2011-10-01

    We discuss how one can discriminate models with cold dark matter (CDM) and baryon isocurvature fluctuations. Although current observations such as the cosmic microwave background (CMB) can severely constrain the fraction of such isocurvature modes in the total density fluctuations, the CMB cannot differentiate CDM and baryon ones by the shapes of their power spectra. However, the evolution of CDM and baryon density fluctuations differs between the two models, so it would be possible to discriminate these isocurvature modes by extracting information on the fluctuations of the CDM/baryon component itself. We argue that observations of 21 cm fluctuations can in principle differentiate these modes and demonstrate to what extent we can distinguish them with future 21 cm surveys. We show that, when the isocurvature mode has a large blue-tilted initial spectrum, 21 cm surveys can clearly probe the difference.

  1. 21 cm radiation: A new probe of fundamental physics

    NASA Astrophysics Data System (ADS)

    Khatri, Rishi; Wandelt, Benjamin D.

    2010-11-01

    New low frequency radio telescopes currently being built open up the possibility of observing the 21 cm radiation from redshifts 200 > z > 30, also known as the dark ages; see Furlanetto, Oh, & Briggs (2006) for a review. At these high redshifts, Cosmic Microwave Background (CMB) radiation is absorbed by neutral hydrogen at its 21 cm hyperfine transition. This redshifted 21 cm signal thus carries information about the state of the early Universe and can be used to test fundamental physics. The 21 cm radiation probes a volume of the early Universe on kpc scales, in contrast with the CMB, which probes a surface (of some finite thickness) on Mpc scales. Thus there are many orders of magnitude more information available, in principle, from 21 cm observations of the dark ages. We have studied the constraints these observations can put on the variation of fundamental constants (Khatri & Wandelt 2007). Since the 21 cm signal depends on atomic physics, it is very sensitive to variations in the fine structure constant and can place constraints comparable to or better than other astrophysical experiments (Δα/α ≲ 10^-5), as shown in Figure 1. Making such observations will require radio telescopes of collecting area 10-10^6 km^2, compared to ~1 km^2 for current telescopes such as LOFAR. We should also expect similar sensitivity to the electron-to-proton mass ratio. One of the challenges in observing this 21 cm cosmological signal is the presence of synchrotron foregrounds, which are many orders of magnitude larger than the cosmological signal, but the two can be separated because of their different statistical nature (Zaldarriaga, Furlanetto, & Hernquist 2004). Terrestrial EM interference from radio/TV etc. and Earth's ionosphere pose problems for telescopes on the ground, which may be solved by going to the Moon; there are proposals for doing so, one of which is the Dark Ages Lunar Interferometer (DALI). In conclusion 21 cm cosmology promises a large wealth of data and provides

  2. The rise of the first stars: Supersonic streaming, radiative feedback, and 21-cm cosmology

    NASA Astrophysics Data System (ADS)

    Barkana, Rennan

    2016-07-01

    between the dark matter and gas. This effect enhanced large-scale clustering and, if early 21-cm fluctuations were dominated by small galactic halos, it produced a prominent pattern on 100 Mpc scales. Work in this field, focused on understanding the whole era of reionization and cosmic dawn with analytical models and numerical simulations, is likely to grow in intensity and importance, as the theoretical predictions are finally expected to confront 21-cm observations in the coming years.

  3. Prospects of Detecting HI using Redshifted 21-cm Radiation at z ~ 3

    NASA Astrophysics Data System (ADS)

    Gehlot, Bharat Kumar; Bagla, J. S.

    2017-03-01

    Distribution of cold gas in the post-reionization era provides an important link between distribution of galaxies and the process of star formation. Redshifted 21-cm radiation from the hyperfine transition of neutral hydrogen allows us to probe the neutral component of cold gas, most of which is to be found in the interstellar medium of galaxies. Existing and upcoming radio telescopes can probe the large scale distribution of neutral hydrogen via HI intensity mapping. In this paper, we use an estimate of the HI power spectrum derived using an ansatz to compute the expected signal from the large scale HI distribution at z ~ 3. We find that the scale dependence of bias at small scales makes a significant difference to the expected signal even at large angular scales. We compare the predicted signal strength with the sensitivity of radio telescopes that can observe such radiation and calculate the observation time required for detecting neutral hydrogen at these redshifts. We find that OWFA (Ooty Wide Field Array) offers the best possibility to detect neutral hydrogen at z ~ 3 before the SKA (Square Kilometer Array) becomes operational. We find that the OWFA should be able to make a 3σ or a more significant detection in 2000 hours of observations at several angular scales. Calculations done using the Fisher matrix approach indicate that a 5σ detection of the binned HI power spectrum via measurement of the amplitude of the HI power spectrum is possible in 1000 h (Sarkar et al. 2017).
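For a single amplitude parameter, the Fisher-matrix significance estimate mentioned above reduces to adding the per-bin signal-to-noise ratios in quadrature. A minimal sketch with made-up bin values (not the paper's numbers):

```python
import numpy as np

# Minimal Fisher-matrix sketch for the detection significance of a
# power-spectrum amplitude: with model P(k) = A * P_fid(k) and Gaussian
# per-bin errors sigma_i, the Fisher information on A is
# F_AA = sum_i (P_fid_i / sigma_i)^2, and the significance of detecting
# A = 1 is sqrt(F_AA). Bin values below are made-up illustrative numbers.

p_fid = np.array([12.0, 9.0, 6.5, 4.0, 2.5])   # binned fiducial power (arbitrary units)
sigma = np.array([6.0, 4.0, 3.0, 2.5, 2.0])    # 1-sigma error per bin

fisher_aa = np.sum((p_fid / sigma) ** 2)       # dP/dA = P_fid evaluated at A = 1
significance = np.sqrt(fisher_aa)              # combined S/N on the amplitude
```

No single bin here exceeds 2.3σ, yet the combined amplitude measurement is a >4σ detection, which is why binned-power-spectrum forecasts quote the quadrature sum rather than the best bin.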

  4. INTERPRETING THE GLOBAL 21 cm SIGNAL FROM HIGH REDSHIFTS. I. MODEL-INDEPENDENT CONSTRAINTS

    SciTech Connect

    Mirocha, Jordan; Harker, Geraint J. A.; Burns, Jack O.

    2013-11-10

    The sky-averaged (global) 21 cm signal is a powerful probe of the intergalactic medium (IGM) prior to the completion of reionization. However, so far it has been unclear whether it will provide more than crude estimates of when the universe's first stars and black holes formed, even in the best case scenario in which the signal is accurately extracted from the foregrounds. In contrast to previous work, which has focused on predicting the 21 cm signatures of the first luminous objects, we investigate an arbitrary realization of the signal and attempt to translate its features to the physical properties of the IGM. Within a simplified global framework, the 21 cm signal yields quantitative constraints on the Lyα background intensity, net heat deposition, ionized fraction, and their time derivatives without invoking models for the astrophysical sources themselves. The 21 cm absorption signal is most easily interpreted, setting strong limits on the heating rate density of the universe with a measurement of its redshift alone, independent of the ionization history or details of the Lyα background evolution. In a companion paper, we extend these results, focusing on the confidence with which one can infer source emissivities from IGM properties.

  5. Precision measurement of cosmic magnification from 21 cm emitting galaxies

    SciTech Connect

    Zhang, Pengjie; Pen, Ue-Li

    2005-04-01

    We show how precision lensing measurements can be obtained through the lensing magnification effect in high redshift 21cm emission from galaxies. Normally, cosmic magnification measurements have been seriously complicated by galaxy clustering. With precise redshifts obtained from the 21cm emission line wavelength, one can correlate galaxies at different source planes, or exclude close pairs, to eliminate such contaminations. We provide forecasts for future surveys, specifically the SKA and CLAR. SKA can achieve percent precision on the dark matter power spectrum and the galaxy-dark matter cross-correlation power spectrum, while CLAR can measure an accurate cross-correlation power spectrum. The neutral hydrogen fraction was most likely significantly higher at high redshifts, which significantly increases the number of observed galaxies, so that CLAR can also measure the dark matter lensing power spectrum. SKA can also allow precise measurement of the lensing bispectrum.

  6. The future of primordial features with 21 cm tomography

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Meerburg, P. Daniel; Münchmeyer, Moritz

    2016-09-01

    Detecting a deviation from a featureless primordial power spectrum of fluctuations would give profound insight into the physics of the primordial Universe. Depending on their nature, primordial features can either provide direct evidence for the inflation scenario or pin down details of the inflation model. Thus far, using the cosmic microwave background (CMB) we have only been able to put stringent constraints on the amplitude of features, but no significant evidence has been found for such signals. Here we explore the limit of the experimental reach in constraining such features using 21 cm tomography at high redshift. A measurement of the 21 cm power spectrum from the Dark Ages is generally considered as the ideal experiment for early Universe physics, with potential access to a large number of modes. We consider three different categories of theoretically motivated models: the sharp feature models, resonance models, and standard clock models. We study the improvements on bounds on features as a function of the total number of observed modes and identify parameter degeneracies. The detectability depends critically on the amplitude, frequency and scale-location of the features, as well as the angular and redshift resolution of the experiment. We quantify these effects by considering different fiducial models. Our forecast shows that a cosmic variance limited 21 cm experiment measuring fluctuations in the redshift range 30 ≤ z ≤ 100 with a 0.01-MHz bandwidth and sub-arcminute angular resolution could potentially improve bounds by several orders of magnitude for most features compared to current Planck bounds. At the same time, 21 cm tomography also opens up a unique window into features that are located on very small scales.

  7. Exploring 21cm-Lyman Alpha Emitter Synergies for SKA

    NASA Astrophysics Data System (ADS)

    Hutter, Anne; Dayal, Pratika; Müller, Volker; Trott, Cathryn M.

    2017-02-01

    We study the signatures of reionization and the ionizing properties of early galaxies in the cross-correlations between the 21 cm emission from the spin-flip transition of neutral hydrogen (H i) and the underlying galaxy population. In particular, we focus on a sub-population of galaxies visible as Lyα Emitters (LAEs). With both observables simultaneously derived from a z ≃ 6.6 hydrodynamical simulation (GADGET-2) snapshot post-processed with a radiative transfer code (pCRASH) and a dust model, we perform a parameter study and aim to constrain both the average intergalactic medium (IGM) ionization state (1 − ⟨χ_HI⟩) and the reionization topology (outside-in versus inside-out). We find that, in our model, LAEs occupy the densest and most-ionized regions, resulting in a very strong anti-correlation between the LAEs and the 21 cm emission. A 1000 hr Square Kilometre Array (SKA)-LOW1 plus Subaru Hyper Suprime-Cam experiment can provide constraints on ⟨χ_HI⟩, allowing us to distinguish between IGM ionization levels of 50%, 25%, 10%, and fully ionized at scales r ≲ 10 comoving Mpc (assuming foreground avoidance for SKA). Our results support the inside-out reionization scenario, where the densest knots (under-dense voids) are ionized first (last), for ⟨χ_HI⟩ ≳ 0.1. Further, 1000 hr SKA-LOW1 observations should be able to confirm the inside-out scenario by detecting a lower 21 cm brightness temperature (by about 2-10 mK) in the densest regions (≳2 arcmin scales) hosting LAEs, compared to lower-density regions devoid of them.

  8. Measuring the Cosmological 21 cm Monopole with an Interferometer

    NASA Astrophysics Data System (ADS)

    Presley, Morgan E.; Liu, Adrian; Parsons, Aaron R.

    2015-08-01

    A measurement of the cosmological 21 cm signal remains a promising but as-of-yet unattained ambition of radio astronomy. A positive detection would provide direct observations of key unexplored epochs of our cosmic history, including the cosmic dark ages and reionization. In this paper, we concentrate on measurements of the spatial monopole of the 21 cm brightness temperature as a function of redshift (the “global signal”). Most global experiments to date have been single-element experiments. Here, we show how an interferometer can be designed to be sensitive to the monopole mode of the sky, thus providing an alternate approach to accessing the global signature. We provide simple rules of thumb for designing a global signal interferometer and use numerical simulations to show that a modest array of tightly packed antenna elements with moderately sized primary beams (FWHM of ∼40°) can compete with typical single-element experiments in their ability to constrain phenomenological parameters pertaining to reionization and the pre-reionization era. We also provide a general data analysis framework for extracting the global signal from interferometric measurements (with analysis of single-element experiments arising as a special case) and discuss trade-offs with various data analysis choices. Given that interferometric measurements are able to avoid a number of systematics inherent in single-element experiments, our results suggest that interferometry ought to be explored as a complementary way to probe the global signal.

  9. Probing patchy reionization through τ-21 cm correlation statistics

    SciTech Connect

    Meerburg, P. Daniel; Spergel, David N.; Dvorkin, Cora

    2013-12-20

    We consider the cross-correlation between free electrons and neutral hydrogen during the epoch of reionization (EoR). The free electrons are traced by the optical depth to reionization τ, while the neutral hydrogen can be observed through 21 cm photon emission. As expected, this correlation is sensitive to the detailed physics of reionization. Foremost, if reionization occurs through the merger of relatively large halos hosting an ionizing source, the free electrons and neutral hydrogen are anticorrelated for most of the reionization history. A positive contribution to the correlation can occur when the halos that can form an ionizing source are small. A measurement of this sign change in the cross-correlation could help disentangle the bias and the ionization history. We estimate the signal-to-noise ratio of the cross-correlation using the estimator for inhomogeneous reionization τ̂_ℓm proposed by Dvorkin and Smith. We find that with upcoming radio interferometers and cosmic microwave background (CMB) experiments, the cross-correlation is measurable going up to multipoles ℓ ∼ 1000. We also derive parameter constraints and conclude that, despite the foregrounds, the cross-correlation provides a complementary measurement of the EoR parameters to the 21 cm and CMB polarization autocorrelations expected to be observed in the coming decade.

  10. Foregrounds for observations of the cosmological 21 cm line. II. Westerbork observations of the fields around 3C 196 and the North Celestial Pole

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; de Bruyn, A. G.; Harker, G.; Brentjens, M. A.; Ciardi, B.; Jelić, V.; Koopmans, L. V. E.; Labropoulos, P.; Offringa, A.; Pandey, V. N.; Schaye, J.; Thomas, R. M.; Yatawatta, S.; Zaroubi, S.

    2010-11-01

    Context. In the coming years a new insight into galaxy formation and the thermal history of the Universe is expected to come from the detection of the highly redshifted cosmological 21 cm line. Aims: The cosmological 21 cm line signal is buried under Galactic and extragalactic foregrounds which are likely to be a few orders of magnitude brighter. Strategies and techniques for effective subtraction of these foreground sources require a detailed knowledge of their structure in both intensity and polarization on the relevant angular scales of 1-30 arcmin. Methods: We present results from observations conducted with the Westerbork telescope in the 140-160 MHz range with 2 arcmin resolution in two fields located at intermediate Galactic latitude, centred around the bright quasar 3C 196 and the North Celestial Pole. They were observed with the purpose of characterizing the foreground properties in sky areas where actual observations of the cosmological 21 cm line could be carried out. The polarization data were analysed through the rotation measure synthesis technique. We have computed total intensity and polarization angular power spectra. Results: Total intensity maps were carefully calibrated, reaching a high dynamic range, 150 000:1 in the case of the 3C 196 field. No evidence of diffuse Galactic emission was found in the angular power spectrum analysis on scales smaller than ~10 arcmin in either of the two fields. On these angular scales the signal is consistent with the classical confusion noise of ~3 mJy beam⁻¹. On scales greater than 30 arcmin we found an excess of power attributed to the Galactic foreground, with an rms of 3.4 K and 5.5 K for the 3C 196 and NCP fields, respectively. The intermediate angular scales suffered from systematic errors which prevented any detection. Patchy polarized emission was found only in the 3C 196 field, whereas the polarization in the NCP area was essentially due to radio frequency interference. The polarized signal in the 3C

  11. PAPER-128 Status Update: Towards a 21cm Power Spectrum Detection

    NASA Astrophysics Data System (ADS)

    Cheng, Carina; Jacobs, Danny; Aryeh Kohn, Saul; Parsons, Aaron; PAPER Collaboration

    2016-01-01

    The Epoch of Reionization (EoR) represents an unexplored phase in cosmic history when UV photons from the first galaxies ionized the majority of the hydrogen in the intergalactic medium. The Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) is a dedicated experiment that aims to measure EoR fluctuations by mapping the redshifted 21cm hyperfine transition of neutral hydrogen. While PAPER-64 put the most constraining upper limits on the 21cm power spectrum to date, PAPER-128 is forecast to offer a factor of 4 increase in sensitivity, putting it in the range of plausible predicted signal levels. We present a status update of our ongoing PAPER-128 data analysis efforts, including new insights into data quality assessment, calibration, and foreground removal. As we continue our pursuit of the cosmological signal, the lessons we have learned with PAPER are an integral component for next generation 21cm experiments like the Hydrogen Epoch of Reionization Array (HERA).

  12. The Murchison Widefield Array 21 cm Power Spectrum Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Hazelton, B. J.; Trott, C. M.; Dillon, Joshua S.; Pindor, B.; Sullivan, I. S.; Pober, J. C.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Ewall-Wice, A.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Neben, A. R.; Thyagarajan, N.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, S.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Tingay, S. J.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-07-01

    We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of the imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various trade-offs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the ways these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.

  13. Cosmic (Super)String Constraints from 21 cm Radiation

    SciTech Connect

    Khatri, Rishi; Wandelt, Benjamin D.

    2008-03-07

    We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of ∼1 km² will not provide any useful constraints, future experiments with a collecting area of 10⁴-10⁶ km² covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10⁻¹⁰-10⁻¹² (superstring/phase transition mass scale >10¹³ GeV).

  14. HI Intensity Mapping with FAST

    NASA Astrophysics Data System (ADS)

    Bigot-Sazy, M.-A.; Ma, Y.-Z.; Battye, R. A.; Browne, I. W. A.; Chen, T.; Dickinson, C.; Harper, S.; Maffei, B.; Olivari, L. C.; Wilkinson, P. N.

    2016-02-01

    We discuss the detectability of large-scale HI intensity fluctuations using the FAST telescope. We present forecasts for the accuracy of measuring the baryonic acoustic oscillations and constraining the properties of dark energy. The FAST 19-beam L-band receivers (1.05-1.45 GHz) can provide constraints on the matter power spectrum and dark energy equation of state parameters (w0, wa) that are comparable to the BINGO and CHIME experiments. For one year of integration time, we find that the optimal survey area is 6000 deg². However, observing with larger frequency coverage at higher redshift (0.95-1.35 GHz) improves the projected error bars on the HI power spectrum at more than the 2σ confidence level. The combined constraints from FAST, CHIME, BINGO and Planck CMB observations can provide reliable, stringent constraints on the dark energy equation of state.
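
    The survey bands above translate to HI redshift ranges through the standard redshifted 21 cm relation; a minimal sketch (rest frequency 1420.406 MHz is standard; the band edges are taken from the abstract):

```python
# Map an observed frequency to the redshift of the HI 21 cm line:
# nu_obs = nu_rest / (1 + z)  =>  z = nu_rest / nu_obs - 1.
NU_REST_MHZ = 1420.406  # rest frequency of the HI hyperfine transition

def z_21cm(nu_obs_mhz):
    """Redshift at which the 21 cm line is observed at nu_obs_mhz."""
    return NU_REST_MHZ / nu_obs_mhz - 1.0

# FAST's proposed higher-redshift band, 0.95-1.35 GHz (from the abstract):
print(f"0.95-1.35 GHz covers z = {z_21cm(1350.0):.3f} to {z_21cm(950.0):.3f}")
```

This places the 0.95-1.35 GHz band at roughly z ≈ 0.05-0.5, consistent with the abstract's "higher redshift" description relative to the 1.05-1.45 GHz L-band.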

  15. Imaging the redshifted 21 cm pattern around the first sources during the cosmic dawn using the SKA

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.; Choudhuri, Samir

    2017-01-01

    Understanding properties of the first sources in the Universe using the redshifted H I 21 cm signal is one of the major aims of present and upcoming low-frequency experiments. We investigate the possibility of imaging the redshifted 21 cm pattern around the first sources during the cosmic dawn using the SKA1-low. We model the H I 21 cm image maps, appropriate for the SKA1-low, around the first sources consisting of stars and X-ray sources within galaxies. In addition to the system noise, we also account for the astrophysical foregrounds by adding them to the signal maps. We find that after subtracting the foregrounds using a polynomial fit and suppressing the noise by smoothing the maps over 10-30 arcmin angular scale, the isolated sources at z ∼ 15 are detectable with ∼4σ-9σ confidence in 2000 h of observation with the SKA1-low. Although the 21 cm profiles around the sources get altered because of the Gaussian smoothing, the images can still be used to extract some of the source properties. We account for overlaps in the patterns of the individual sources by generating realistic H I 21 cm maps of the cosmic dawn that are based on N-body simulations and a one-dimensional radiative transfer code. We find that these sources should be detectable in the SKA1-low images at z = 15 with a signal-to-noise ratio (SNR) of ∼14 (4) in 2000 (200) h of observations. One possible observational strategy thus could be to observe multiple fields for shorter observation times, identify fields with SNR ≳ 3 and observe these fields for much longer duration. Such observations are expected to be useful in constraining the parameters related to the first sources.

  16. Intensity Mapping of Lyα Emission during the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Silva, Marta B.; Santos, Mario G.; Gong, Yan; Cooray, Asantha; Bock, James

    2013-02-01

    We calculate the absolute intensity and anisotropies of the Lyα radiation field present during the epoch of reionization. We consider emission from both galaxies and the intergalactic medium (IGM) and take into account the main contributions to the production of Lyα photons: recombinations, collisions, continuum emission from galaxies, and scattering of Lyn photons in the IGM. We find that the emission from individual galaxies dominates over the IGM with a total Lyα intensity (times frequency) of about (1.43-3.57) × 10⁻⁸ erg s⁻¹ cm⁻² sr⁻¹ at a redshift of 7. This intensity level is low, so it is unlikely that the Lyα background during reionization can be established by an experiment aiming at an absolute background light measurement. Instead, we consider Lyα intensity mapping with the aim of measuring the anisotropy power spectrum, which has rms fluctuations at the level of 1 × 10⁻¹⁶ [erg s⁻¹ cm⁻² sr⁻¹]² at a few Mpc scales. These anisotropies could be measured with a spectrometer at near-IR wavelengths from 0.9 to 1.4 μm with fields of the order of 0.5 to 1 deg². We recommend that existing ground-based programs using narrowband filters also pursue intensity fluctuations to study statistics on the spatial distribution of faint Lyα emitters. We also discuss the cross-correlation signal with 21 cm experiments that probe H I in the IGM during reionization. A dedicated sub-orbital or space-based Lyα intensity mapping experiment could provide a viable complementary approach to probe reionization, when compared to 21 cm experiments, and is likely within experimental reach.

  17. Tracing the Milky Way Nuclear Wind with 21cm Atomic Hydrogen Emission

    NASA Astrophysics Data System (ADS)

    Lockman, Felix J.; McClure-Griffiths, N. M.

    2016-08-01

    There is evidence in 21 cm H i emission for voids several kiloparsecs in size centered approximately on the Galactic center, both above and below the Galactic plane. These appear to map the boundaries of the Galactic nuclear wind. An analysis of H i at the tangent points, where the distance to the gas can be estimated with reasonable accuracy, shows a sharp transition at Galactic radii R ≲ 2.4 kpc from the extended neutral gas layer characteristic of much of the Galactic disk, to a thin Gaussian layer with FWHM ∼ 125 pc. An anti-correlation between H i and γ-ray emission at latitudes 10° ≤ |b| ≤ 20° suggests that the boundary of the extended H i layer marks the walls of the Fermi Bubbles. With H i, we are able to trace the edges of the voids from |z| > 2 kpc down to z ≈ 0, where they have a radius ∼2 kpc. The extended H i layer likely results from star formation in the disk, which is limited largely to R ≳ 3 kpc, so the wind may be expanding into an area of relatively little H i. Because the H i kinematics can discriminate between gas in the Galactic center and foreground material, 21 cm H i emission may be the best probe of the extent of the nuclear wind near the Galactic plane.

  18. Measuring the Epoch of Reionization using [CII] Intensity Mapping with TIME-Pilot

    NASA Astrophysics Data System (ADS)

    Crites, Abigail; Bock, James; Bradford, Matt; Bumble, Bruce; Chang, Tzu-Ching; Cheng, Yun-Ting; Cooray, Asantha R.; Hailey-Dunsheath, Steve; Hunacek, Jonathon; Li, Chao-Te; O'Brient, Roger; Shirokoff, Erik; Staniszewski, Zachary; Shiu, Corwin; Uzgil, Bade; Zemcov, Michael B.; Sun, Guochao

    2017-01-01

    TIME-Pilot (the Tomographic Ionized carbon Intensity Mapping Experiment) is a new instrument designed to probe the epoch of reionization (EoR) by measuring the 158 μm ionized carbon emission line [CII] from redshift 5 to 9. TIME-Pilot will also probe the molecular gas content of the universe during the epoch spanning the peak of star formation (z ∼ 1-3) by making an intensity mapping measurement of the CO transitions in the TIME-Pilot band (CO(3-2), CO(4-3), CO(5-4), and CO(6-5)). I will describe the instrument we are building, which is an R ∼ 100 spectrometer sensitive to radiation at 200-300 GHz. The camera is designed to measure the line emission from galaxies using an intensity mapping technique. This instrument will allow us to detect the [CII] clustering fluctuations from faint galaxies during the EoR and compare these measurements to predicted [CII] amplitudes from current models. The CO measurements will allow us to constrain models for galaxies at lower redshift. The [CII] intensity mapping measurements that will be made with TIME-Pilot, and detailed measurements made with future, more sensitive mm-wavelength spectrometers, are complementary to 21-cm measurements of the EoR and to direct detections of high-redshift galaxies with HST, ALMA, and, in the future, JWST.
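
    The band and resolving power quoted above fix the [CII] redshift window and the thickness of each redshift slice; a minimal sketch (rest frequency ≈ 1900.54 GHz for the 158 μm line, and Δz ≈ (1 + z)/R per spectral channel, both standard relations rather than details from the abstract):

```python
# [CII] 158 um line: z = nu_rest / nu_obs - 1; a spectrometer with resolving
# power R = nu / dnu slices the line into redshift bins of width dz ~ (1 + z) / R.
NU_CII_GHZ = 1900.54  # rest frequency of the [CII] fine-structure line
R = 100.0             # TIME-Pilot resolving power (from the abstract)

def z_cii(nu_obs_ghz):
    """Redshift at which [CII] is observed at nu_obs_ghz."""
    return NU_CII_GHZ / nu_obs_ghz - 1.0

for nu in (300.0, 200.0):  # TIME-Pilot band edges in GHz
    z = z_cii(nu)
    print(f"{nu:.0f} GHz -> z = {z:.2f}, dz per channel ~ {(1 + z) / R:.3f}")
```

The 200-300 GHz band thus covers z ≈ 5.3-8.5, matching the "redshift 5 to 9" window in the abstract.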

  19. De-coding the Neutral Hydrogen (21cm) Line Profiles of Disk galaxies

    NASA Astrophysics Data System (ADS)

    Moak, Sandy; Madore, Barry; Khatami, David

    2017-01-01

    Neutral hydrogen is the most abundant constituent of the interstellar medium, and it has long lent astronomers insight into galaxy structure, galactic interactions, and even the prevalence of dark matter. It is therefore useful to implement a detailed coding scheme that characterizes the 21-cm HI line profiles which exist in abundance throughout the literature. We have utilized a new computer simulation program that exposes the internal architecture of a galaxy by mapping the one-dimensional line profile onto the three-dimensional parameters of a given galaxy. We have created a naming system to classify HI line profiles, which represents a kinematic description of the galaxy simply by considering its classification within the coding scheme.

  20. Intensity Mapping of Hα, Hβ, [OII], and [OIII] Lines at z < 5

    NASA Astrophysics Data System (ADS)

    Gong, Yan; Cooray, Asantha; Silva, Marta B.; Zemcov, Michael; Feng, Chang; Santos, Mario G.; Dore, Olivier; Chen, Xuelei

    2017-02-01

    Intensity mapping is becoming a useful tool to study the large-scale structure of the universe through spatial variations in the integrated emission from galaxies and the intergalactic medium. We study intensity mapping of the Hα 6563 Å, [O III] 5007 Å, [O II] 3727 Å, and Hβ 4861 Å lines at 0.8 ≤ z ≤ 5.2. The mean intensities of these four emission lines are estimated using the observed luminosity functions (LFs), cosmological simulations, and the star formation rate density (SFRD) derived from observations at z ≲ 5. We calculate the intensity power spectra and consider the foreground contamination of other lines at lower redshifts. We use the proposed NASA small explorer SPHEREx (the Spectro-Photometer for the History of the universe, Epoch of Reionization, and Ices Explorer) as a case study for the detectability of the intensity power spectra of the four emission lines. We also investigate the cross-correlation with the 21 cm line probed by the Canadian Hydrogen Intensity Mapping Experiment (CHIME), the Tianlai experiment, and the Square Kilometer Array (SKA) at 0.8 ≤ z ≤ 2.4. We find both the auto and cross power spectra can be well measured for the Hα, [O III] and [O II] lines at z ≲ 3, while it is more challenging for the Hβ line. Finally, we estimate the constraint on the SFRD from intensity mapping, and find we can reach an accuracy higher than 7% at z ≲ 4, which is better than with the usual method of measurements using the LFs of galaxies.
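
    The mean intensities above are built from integrals over luminosity functions. As a hedged illustration (a Schechter form with made-up parameters, not the paper's calibrated LFs), the luminosity density obeys ρ_L = ∫ L φ(L) dL = φ* L* Γ(α + 2):

```python
import math

import numpy as np

# Toy Schechter LF: phi(L) dL = phi_star (L/L*)^alpha exp(-L/L*) dL/L*.
# Parameter values are illustrative only (units: Mpc^-3 and erg/s).
phi_star, L_star, alpha = 1e-3, 1e42, -1.3

# Analytic luminosity density: rho_L = phi_star * L_star * Gamma(alpha + 2).
rho_analytic = phi_star * L_star * math.gamma(alpha + 2.0)

# Numerical cross-check: trapezoid rule on x^(alpha+1) e^(-x), x = L / L_star.
x = np.logspace(-6, 2, 4000)
y = x ** (alpha + 1) * np.exp(-x)
rho_numeric = phi_star * L_star * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
print(rho_numeric / rho_analytic)  # close to 1
```

Dividing such a luminosity density by 4π and an appropriate comoving volume-to-area factor is what turns LF measurements into the mean line intensities quoted in abstracts like this one.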

  1. Modeling the neutral hydrogen distribution in the post-reionization Universe: intensity mapping

    SciTech Connect

    Villaescusa-Navarro, Francisco; Viel, Matteo; Datta, Kanan K.; Choudhury, T. Roy

    2014-09-01

    We model the distribution of neutral hydrogen (HI) in the post-reionization era and investigate its detectability in 21 cm intensity mapping with future radio telescopes like the Square Kilometre Array (SKA). We rely on high-resolution hydrodynamical N-body simulations that have a state-of-the-art treatment of the low-density photoionized gas in the intergalactic medium (IGM). The HI is assigned a posteriori to the gas particles following two different approaches: a halo-based method, in which HI is assigned only to gas particles residing within dark matter halos; and a particle-based method, which assigns HI to all gas particles using a prescription based on the physical properties of the particles. The HI statistical properties are then compared to the observational properties of Damped Lyman-α Absorbers (DLAs) and of lower column density systems, and reasonably good agreement is found in all cases. Within the halo-based method, we further consider two different schemes that aim at reproducing the observed properties of DLAs by distributing HI inside halos: one of these results in a much higher bias for DLAs, in agreement with recent observations, which boosts the 21 cm power spectrum by a factor ∼4 with respect to the other recipe. Furthermore, we quantify the contribution of HI in the diffuse IGM to both Ω_HI and the HI power spectrum, finding it to be subdominant in both cases. We compute the 21 cm power spectrum from the simulated HI distribution and calculate the expected signal for both SKA1-mid and SKA1-low configurations at 2.4 ≤ z ≤ 4. We find that SKA will be able to detect the 21 cm power spectrum, in the non-linear regime, up to k ∼ 1 h/Mpc for SKA1-mid and k ∼ 5 h/Mpc for SKA1-low with 100 hours of observations. We also investigate the prospect of imaging the HI distribution. Our findings indicate that SKA1-low could detect the most massive HI peaks with a signal-to-noise ratio (SNR) higher than 5 for an observation time of about 1000
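
    A halo-based assignment of the kind described can be caricatured as a deterministic M_HI(M) relation; a toy sketch only (the power-law slope, cutoff mass, and normalization below are invented for illustration, not the paper's calibrated prescriptions):

```python
import numpy as np

def m_hi(m_halo, m_min=1e10, m_0=1e8, slope=0.8):
    """Toy halo-based HI mass: M_HI = m_0 (M / m_min)^slope above a cutoff m_min.

    All parameter values are illustrative, not the paper's prescriptions.
    """
    m_halo = np.atleast_1d(np.asarray(m_halo, dtype=float))
    hi = m_0 * (m_halo / m_min) ** slope
    hi[m_halo < m_min] = 0.0  # halos below the cutoff host no HI
    return hi

halos = np.array([1e9, 1e10, 1e12])  # halo masses in solar masses
print(m_hi(halos))  # the first halo sits below the cutoff and gets zero HI
```

Summing such an assignment over a halo catalog, and painting the result onto a grid, is what produces the HI density field whose 21 cm power spectrum is then forecast.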

  2. 21cm bispectrum as method to measure cosmic dawn and EoR

    NASA Astrophysics Data System (ADS)

    Shimabukuro, H.

    2016-12-01

    The cosmological 21cm signal is a promising tool to investigate the state of the intergalactic medium (IGM) during the cosmic dawn (CD) and Epoch of Reionization (EoR). Ongoing telescopes such as the MWA, LOFAR, and PAPER, and future telescopes like the SKA, are expected to detect the cosmological 21cm signal. Statistical analysis of the 21cm signal is very important for extracting information about the IGM, which is related to the nature of galaxies and first-generation stars. We expect the cosmological 21cm signal to follow a non-Gaussian distribution because various astrophysical processes drive the distribution away from Gaussianity. In order to evaluate the non-Gaussian features, we introduce the bispectrum of the 21cm signal and discuss its properties, such as redshift dependence and configuration dependence. We find that we can see correlations between large and small scales via the 21cm bispectrum, and also that the 21cm bispectrum can give information on the matter fluctuations, neutral fraction fluctuations, and spin temperature fluctuations by means of its configuration dependence.
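
    A hedged sketch of the kind of FFT shell estimator commonly used for such bispectrum measurements (grid units and unnormalized amplitude; the shell widths and test field are placeholders, and this is not the paper's pipeline):

```python
import numpy as np

def bispectrum(delta, k1, k2, k3, dk=1.0):
    """Average delta(k1) delta(k2) delta(k3) over closed triangles whose legs
    fall in shells of width dk around k1, k2, k3 (integer grid wavenumbers).
    """
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta)
    freq = np.fft.fftfreq(n) * n  # integer wavenumbers per axis
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    prod_delta = np.ones_like(delta, dtype=float)
    prod_count = np.ones_like(delta, dtype=float)
    for kc in (k1, k2, k3):
        shell = np.abs(kmag - kc) < dk / 2  # binary shell mask in k-space
        prod_delta *= np.fft.ifftn(np.where(shell, delta_k, 0)).real
        prod_count *= np.fft.ifftn(shell.astype(float)).real
    # The indicator product sums to the number of closed triangles (up to a
    # constant that cancels in the ratio).
    return float(prod_delta.sum() / prod_count.sum())

rng = np.random.default_rng(1)
g = rng.normal(size=(16, 16, 16))
b_gauss = bispectrum(g - g.mean(), 3, 3, 3)           # Gaussian field
b_ng = bispectrum(g**2 - (g**2).mean(), 3, 3, 3)      # non-Gaussian field
```

For a Gaussian field the equilateral bispectrum scatters around zero, while a quadratic (non-Gaussian) field produces a systematically non-zero value; a real measurement would also need shot-noise and survey-window corrections.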

  3. Optical mapping at increased illumination intensities

    NASA Astrophysics Data System (ADS)

    Kanaporis, Giedrius; Martišienė, Irma; Jurevičius, Jonas; Vosyliūtė, Rūta; Navalinskas, Antanas; Treinys, Rimantas; Matiukas, Arvydas; Pertsov, Arkady M.

    2012-09-01

    Voltage-sensitive fluorescent dyes have become a major tool in cardiac and neuro-electrophysiology. Achieving high signal-to-noise ratios requires increased illumination intensities, which may cause photobleaching and phototoxicity. The optimal range of illumination intensities varies for different dyes and must be evaluated individually. We evaluate two dyes: di-4-ANBDQBS (excitation 660 nm) and di-4-ANEPPS (excitation 532 nm) in the guinea pig heart. The light intensity varies from 0.1 to 5 mW/mm2, with the upper limit at 5 to 10 times above values reported in the literature. The duration of illumination was 60 s, which in guinea pigs corresponds to 300 beats at a normal heart rate. Within the identified duration and intensity range, neither dye shows significant photobleaching or detectable phototoxic effects. However, light absorption at higher intensities causes noticeable tissue heating, which affects the electrophysiological parameters. The most pronounced effect is a shortening of the action potential duration, which, in the case of 532-nm excitation, can reach ∼30%. At 660-nm excitation, the effect is ∼10%. These findings may have important implications for the design of optical mapping protocols in biomedical applications.

  4. High redshift signatures in the 21 cm forest due to cosmic string wakes

    SciTech Connect

    Tashiro, Hiroyuki; Sekiguchi, Toyokazu; Silk, Joseph

    2014-01-01

    Cosmic strings induce minihalo formation in the early universe. The resultant minihalos cluster in string wakes and create a "21 cm forest" against the cosmic microwave background (CMB) spectrum. Such a 21 cm forest can contribute to angular fluctuations of redshifted 21 cm signals integrated along the line of sight. We calculate the root-mean-square amplitude of the 21 cm fluctuations due to strings and show that these fluctuations can dominate signals from minihalos due to primordial density fluctuations at high redshift (z ≳ 10), even if the string tension is below the current upper bound, Gμ < 1.5 × 10⁻⁷. Our results also predict that the Square Kilometre Array (SKA) can potentially detect the 21 cm fluctuations due to strings with Gμ ≈ 7.5 × 10⁻⁸ for the single frequency band case and 4.0 × 10⁻⁸ for the multi-frequency band case.

  5. Studying topological structure of 21-cm line fluctuations with 3D Minkowski functionals before reionization

    NASA Astrophysics Data System (ADS)

    Yoshiura, Shintaro; Shimabukuro, Hayato; Takahashi, Keitaro; Matsubara, Takahiko

    2017-02-01

    The brightness temperature of the redshifted 21-cm line brings rich information about the intergalactic medium (IGM) from the cosmic dawn and epoch of reionization (EoR). While the power spectrum is a useful tool to investigate the 21-cm signal statistically, the 21-cm brightness temperature field is highly non-Gaussian and the power spectrum is inadequate to characterize the non-Gaussianity. Minkowski functionals (MFs) are promising tools to extract non-Gaussian features of the 21-cm signal and give topological information, such as morphology of ionized bubbles. In this work, we study the 21-cm line signal in detail with MFs. To promote understanding of basic features of the 21-cm signal, we calculate the MFs of not only the hydrogen neutral fraction but also the matter density and spin temperature, which contribute to brightness-temperature fluctuations. We find that the structure of the brightness temperature depends mainly on the ionized fraction and the spin temperature at late and early stages of the EoR, respectively. Further, we investigate the redshift evolution of MFs at 7 < z < 20. We find that, after the onset of reionization, MFs mainly reflect the ionized bubble property. In addition, MFs are sensitive to model parameters related to the topology of ionized bubbles and we consider the possibility of constraining the parameters using future 21-cm signal observations.
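
    The lowest-order Minkowski functional, V0, is simply the volume fraction above a threshold; a minimal sketch for a 3D field (thresholds ν in units of the field's standard deviation; the Gaussian expectation V0(ν) = ½ erfc(ν/√2) is the baseline that non-Gaussian 21-cm maps deviate from):

```python
import numpy as np

def v0(field, nu):
    """First Minkowski functional: fraction of voxels above nu standard deviations."""
    f = (field - field.mean()) / field.std()
    return float((f > nu).mean())

# Sanity check on a Gaussian random field: V0(0) should be close to 0.5,
# and V0 must decrease monotonically with the threshold.
rng = np.random.default_rng(42)
g = rng.normal(size=(32, 32, 32))
print(v0(g, 0.0), v0(g, 1.0))
```

The higher functionals (surface area, mean curvature, Euler characteristic) require counting boundary faces and edges of the excursion set rather than voxels, but the thresholding step is the same.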

  6. Probing Reionization with the Cross-power Spectrum of 21 cm and Near-infrared Radiation Backgrounds

    NASA Astrophysics Data System (ADS)

    Mao, Xiao-Chun

    2014-08-01

    The cross-correlation between the 21 cm emission from the high-redshift intergalactic medium and the near-infrared (NIR) background light from high-redshift galaxies promises to be a powerful probe of cosmic reionization. In this paper, we investigate the cross-power spectrum during the epoch of reionization. We employ an improved halo approach to derive the distribution of the density field and consider two stellar populations in the star formation model: metal-free stars and metal-poor stars. The reionization history is further generated to be consistent with the electron-scattering optical depth from cosmic microwave background measurements. Then, the intensity of the NIR background is estimated by collecting emission from stars in first-light galaxies. On large scales, we find that the 21 cm and NIR radiation backgrounds are positively correlated during the very early stages of reionization. However, these two radiation backgrounds quickly become anti-correlated as reionization proceeds. The maximum absolute value of the cross-power spectrum is |Δ²_{21,NIR}| ∼ 10⁻⁴ mK nW m⁻² sr⁻¹, reached at ℓ ∼ 1000 when the mean fraction of ionized hydrogen is x̄_i ∼ 0.9. We find that the Square Kilometre Array can measure the 21 cm-NIR cross-power spectrum in conjunction with mild extensions to the existing CIBER survey, provided that the integration times independently add up to 1000 and 1 hr for the 21 cm and NIR observations, and that the sky coverage fraction of the CIBER survey is extended from 4 × 10⁻⁴ to 0.1. Measuring the cross-correlation signal as a function of redshift provides valuable information on reionization and helps confirm the origin of the "missing" NIR background.
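
    The sign information exploited here comes from a spherically averaged cross-power spectrum of two 3D fields; a hedged sketch in grid units (the binning and normalization conventions are placeholders, not the paper's estimator):

```python
import numpy as np

def cross_power(a, b, nbins=8):
    """Spherically averaged cross-power spectrum of two 3D fields (grid units)."""
    n = a.shape[0]
    ak, bk = np.fft.fftn(a), np.fft.fftn(b)
    cross = (ak * np.conj(bk)).real / n**3  # real part of the cross-spectrum
    freq = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(0.5, kmag.max() + 1e-9, nbins + 1)  # k=0 mode excluded
    which = np.digitize(kmag, edges)
    pk = np.array([cross.ravel()[which == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk

rng = np.random.default_rng(7)
g = rng.normal(size=(16, 16, 16))
k, p_auto = cross_power(g, g)    # auto-power: positive in every bin
_, p_anti = cross_power(g, -g)   # perfectly anti-correlated tracer
```

In this convention a positive bin indicates correlated tracers and a negative bin anti-correlated ones, mirroring the sign change during reionization described above.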

  7. Probing reionization with the cross-power spectrum of 21 cm and near-infrared radiation backgrounds

    SciTech Connect

    Mao, Xiao-Chun

    2014-08-01

    The cross-correlation between the 21 cm emission from the high-redshift intergalactic medium and the near-infrared (NIR) background light from high-redshift galaxies promises to be a powerful probe of cosmic reionization. In this paper, we investigate the cross-power spectrum during the epoch of reionization. We employ an improved halo approach to derive the distribution of the density field and consider two stellar populations in the star formation model: metal-free stars and metal-poor stars. The reionization history is further generated to be consistent with the electron-scattering optical depth from cosmic microwave background measurements. Then, the intensity of the NIR background is estimated by collecting emission from stars in first-light galaxies. On large scales, we find that the 21 cm and NIR radiation backgrounds are positively correlated during the very early stages of reionization. However, these two radiation backgrounds quickly become anti-correlated as reionization proceeds. The maximum absolute value of the cross-power spectrum is |Δ^2_{21,NIR}| ∼ 10^-4 mK nW m^-2 sr^-1, reached at ℓ ∼ 1000 when the mean fraction of ionized hydrogen is x̄_i ∼ 0.9. We find that the Square Kilometer Array can measure the 21 cm-NIR cross-power spectrum in conjunction with mild extensions to the existing CIBER survey, provided that the integration times independently add up to 1000 hr and 1 hr for the 21 cm and NIR observations, respectively, and that the sky coverage fraction of the CIBER survey is extended from 4 × 10^-4 to 0.1. Measuring the cross-correlation signal as a function of redshift provides valuable information on reionization and helps confirm the origin of the "missing" NIR background.

  8. MITEoR: a scalable interferometer for precision 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Buza, V.; Dillon, J. S.; Gharibyan, H.; Hickish, J.; Kunz, E.; Liu, A.; Losh, J.; Lutomirski, A.; Morrison, S.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Tribiano, S. M.; Valdez, M.; Yang, H.; Adami, K. Zarb; Zelko, I.; Zheng, K.; Armstrong, R. P.; Bradley, R. F.; Dexter, M. R.; Ewall-Wice, A.; Magro, A.; Matejek, M.; Morgan, E.; Neben, A. R.; Pan, Q.; Penna, R. F.; Peterson, C. M.; Su, M.; Villasenor, J.; Williams, C. L.; Zhu, Y.

    2014-12-01

    We report on the MIT Epoch of Reionization (MITEoR) experiment, a pathfinder low-frequency radio interferometer whose goal is to test technologies that improve the calibration precision and reduce the cost of the high-sensitivity 3D mapping required for 21 cm cosmology. MITEoR accomplishes this by using massive baseline redundancy, which enables both automated precision calibration and correlator cost reduction. We demonstrate and quantify the power and robustness of redundancy for scalability and precision. We find that the calibration parameters precisely describe the effect of the instrument upon our measurements, allowing us to form a model that is consistent with χ2 per degree of freedom <1.2 for as much as 80 per cent of the observations. We use these results to develop an optimal estimator of calibration parameters using Wiener filtering, and explore the question of how often and how finely in frequency visibilities must be reliably measured to solve for calibration coefficients. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious Hydrogen Epoch of Reionization Array project and other next-generation instruments, which would incorporate many identical or similar technologies.

  9. Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data

    NASA Astrophysics Data System (ADS)

    Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.

    2017-02-01

    We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg^2 at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which FASTICA, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine GBT-WiggleZ cross-correlation measurements similar to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
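The ICA step described above can be sketched with scikit-learn's FastICA: model the dominant non-Gaussian spectral components and subtract their reconstruction. The toy foreground, signal amplitudes and component count below are illustrative assumptions, not the GBT pipeline's settings:

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_foregrounds_ica(maps, n_components=2, seed=0):
    """maps: (n_freq, n_pix) brightness maps.  Model the dominant
    non-Gaussian spectral components with FastICA and subtract their
    reconstruction, leaving a residual that contains the faint signal."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    sources = ica.fit_transform(maps.T)              # (n_pix, n_components)
    foreground_model = ica.inverse_transform(sources).T
    return maps - foreground_model

# Toy data: a bright, spatially varying power-law foreground plus a
# weak random "signal" (all amplitudes arbitrary).
rng = np.random.default_rng(1)
freqs = np.linspace(0.6, 1.0, 16)[:, None]
foreground = 100.0 * freqs ** -2.7 * rng.lognormal(size=(1, 500))
signal = 0.01 * rng.standard_normal((16, 500))
residual = remove_foregrounds_ica(foreground + signal)
assert np.std(residual) < 0.1 * np.std(foreground)
```

Here each pixel's frequency spectrum is treated as a mixture of a few independent components; the residual after subtracting their reconstruction is the cleaned map.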

  10. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Technical Reports Server (NTRS)

    Switzer, Eric R.; Liu, Adrian

    2014-01-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ~ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal, and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line-of-sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ~10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ~10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission. Subject headings: dark ages, reionization, first stars - methods: data analysis - methods: statistical

  11. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Astrophysics Data System (ADS)

    Switzer, Eric R.; Liu, Adrian

    2014-10-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ~ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ~10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ~10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.

  12. Erasing the variable: empirical foreground discovery for global 21 cm spectrum experiments

    SciTech Connect

    Switzer, Eric R.; Liu, Adrian

    2014-10-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ∼ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ∼10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ∼10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.
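The mode-discovery idea above can be sketched with a plain SVD over spatial fluctuations; this is a minimal stand-in for the papers' full estimator (which adds Wiener filtering and angular weighting), with all mock amplitudes arbitrary:

```python
import numpy as np

def project_out_modes(cube, n_modes):
    """cube: (n_freq, n_pix).  Discover the spectral modes with the
    largest spatial variance via an SVD of the mean-subtracted data,
    then project them out of every line of sight."""
    centred = cube - cube.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    modes = U[:, :n_modes]                  # contaminated spectral shapes
    cleaned = cube - modes @ (modes.T @ cube)
    return cleaned, modes

# Mock data: a rank-1 foreground that varies across the sky, plus a
# weak random "signal" (all levels arbitrary).
rng = np.random.default_rng(2)
n_freq, n_pix = 32, 400
spectral_shape = np.linspace(1.0, 2.0, n_freq)[:, None] ** -2.5
cube = 50.0 * spectral_shape * rng.lognormal(size=(1, n_pix))
cube = cube + 0.01 * rng.standard_normal((n_freq, n_pix))
cleaned, modes = project_out_modes(cube, n_modes=1)
assert np.std(cleaned) < 0.1 * np.std(cube)
```

Because the foreground varies across the sky while the monopole signal does not, its spectral shape dominates the spatial covariance and is what the SVD discovers.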

  13. A correlation between the H I 21-cm absorption strength and impact parameter in external galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Reeves, S. N.; Allison, J. R.; Sadler, E. M.

    2016-07-01

    By combining the data from surveys for H I 21-cm absorption at various impact parameters in nearby galaxies, we report an anti-correlation between the 21-cm absorption strength (velocity-integrated optical depth) and the impact parameter. Also, by combining the 21-cm absorption strength with that of the emission, giving the neutral hydrogen column density, N_{H I}, we find no evidence that the spin temperature of the gas (degenerate with the covering factor) varies significantly across the disc. This is consistent with the uniformity of spin temperature measured across the Galactic disc. Furthermore, comparison with the Galactic N_{H I} distribution suggests that intervening 21-cm absorption preferentially arises in discs of high inclinations (near face-on). We also investigate the hypothesis that 21-cm absorption is favourably detected towards compact radio sources. Although there are insufficient data to determine whether there is a higher detection rate towards quasar, rather than radio galaxy, sight-lines, the 21-cm detections intervene objects with a mean turnover frequency of ⟨ν_TO⟩ ≈ 5 × 10^8 Hz, compared to ⟨ν_TO⟩ ≈ 1 × 10^8 Hz for the non-detections. Since the turnover frequency is anti-correlated with radio source size, this does indicate a preferential bias for detection towards compact background radio sources.

  14. Unveiling the nature of dark matter with high redshift 21 cm line experiments

    SciTech Connect

    Evoli, C.; Mesinger, A.; Ferrara, A. E-mail: andrei.mesinger@sns.it

    2014-11-01

    Observations of the redshifted 21 cm line from neutral hydrogen will open a new window on the early Universe. By influencing the thermal and ionization history of the intergalactic medium (IGM), annihilating dark matter (DM) can leave a detectable imprint in the 21 cm signal. Building on the publicly available 21cmFAST code, we compute the 21 cm signal for a 10 GeV WIMP DM candidate. The most pronounced role of DM annihilations is in heating the IGM earlier and more uniformly than astrophysical sources of X-rays. This leaves several unambiguous, qualitative signatures in the redshift evolution of the large-scale (k ≃ 0.1 Mpc^-1) 21 cm power amplitude: (i) the local maximum (peak) associated with IGM heating can be lower than the other maxima; (ii) the heating peak can occur while the IGM is in emission against the cosmic microwave background (CMB); (iii) there can be a dramatic drop in power (a global minimum) corresponding to the epoch when the IGM temperature is comparable to the CMB temperature. These signatures are robust to astrophysical uncertainties, and will be easily detectable with second generation interferometers. We also briefly show that decaying warm dark matter has a negligible role in heating the IGM.

  15. The multifrequency angular power spectrum of the epoch of reionization 21-cm signal

    NASA Astrophysics Data System (ADS)

    Datta, Kanan K.; Choudhury, T. Roy; Bharadwaj, Somnath

    2007-06-01

    Observations of redshifted 21-cm radiation from neutral hydrogen (HI) at high redshifts is an important future probe of reionization. We consider the multifrequency angular power spectrum (MAPS) to quantify the statistics of the HI signal as a joint function of the angular multipole l and frequency separation Δν. The signal at two different frequencies is expected to decorrelate as Δν is increased, and quantifying this is particularly important in deciding the frequency resolution for future HI observations. This is also expected to play a very crucial role in extracting the signal from foregrounds as the signal is expected to decorrelate much faster than the foregrounds (which are largely continuum sources) with increasing Δν. In this paper, we develop formulae relating MAPS to different components of the 3D HI power spectrum taking into account HI peculiar velocities. We show that the flat-sky approximation provides a very good representation over the angular scales of interest, and a final expression which is very simple to calculate and interpret. We present results for z = 10 assuming a neutral hydrogen fraction of 0.6, considering two models for the HI distribution, namely, (i) DM: where HI traces the dark matter and (ii) PR: where the effects of patchy reionization are incorporated through two parameters which are the bubble size and the clustering of the bubble centres relative to the dark matter (bias), respectively. We find that while the DM signal is largely featureless, the PR signal peaks at the angular scales of the individual bubbles where it is Poisson fluctuation dominated, and the signal is considerably enhanced for large bubble size. For most cases of interest at l ~ 100 the signal is uncorrelated beyond Δν ~ 1 MHz or even less, whereas this occurs around ~0.1 MHz at l ~ 10^3. The Δν dependence also carries an imprint of the bubble size and the bias, and is expected to be an important probe of the reionization scenario. Finally, we find
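A simple MAPS-style estimator can be formed by cross-correlating the angular Fourier transforms of frequency slices separated by Δν; the sketch below (with an arbitrary frequency-correlated mock cube, not the paper's HI models) shows the expected decorrelation with increasing channel separation:

```python
import numpy as np

def maps_estimator(cube, dnu):
    """Average cross-power between the 2D angular Fourier transforms of
    frequency slices separated by dnu channels, over all angular modes."""
    ft = np.fft.fft2(cube, axes=(1, 2))
    n = cube.shape[0] - dnu
    return (ft[:n] * np.conj(ft[dnu:])).real.mean() / cube[0].size

# Mock cube with correlations along the frequency axis: white noise
# smoothed by a Gaussian kernel of ~3 channels.
rng = np.random.default_rng(8)
raw = rng.standard_normal((64, 16, 16))
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
cube = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), 0, raw)
c0 = maps_estimator(cube, 0)
c12 = maps_estimator(cube, 12)
assert c0 > 0 and c0 > 3 * abs(c12)   # signal decorrelates with separation
```

A real analysis would bin the cross-power by multipole l rather than averaging all angular modes together; this sketch only demonstrates the Δν decorrelation that MAPS quantifies.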

  16. Contributions of dark matter annihilation within ultracompact minihalos to the 21 cm background signal

    NASA Astrophysics Data System (ADS)

    Yang, Yupeng

    2016-12-01

    In the dark ages of the Universe, any exotic source, e.g. dark matter annihilation, that injects energy into the intergalactic medium (IGM) will leave an imprint on the 21 cm background signal. Recently, a new kind of dark matter structure, ultracompact dark matter minihalos (UCMHs), was proposed. Near the inner part of UCMHs, the distribution of dark matter particles is steeper than in general dark matter halos, ρ_{UCMHs}(r) ˜ r^{-2.25}, and the formation time of UCMHs is earlier, zc ˜ 1000. Therefore, dark matter annihilation within UCMHs is expected to affect the 21 cm background signal. In this paper, we investigate the contributions of dark matter annihilation within UCMHs to the 21 cm background signal.

  17. Constraining the redshifted 21-cm signal with the unresolved soft X-ray background

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Cohen, Aviad; Barkana, Rennan; Silk, Joseph

    2017-01-01

    We use the observed unresolved cosmic X-ray background (CXRB) in the 0.5-2 keV band and existing upper limits on the 21-cm power spectrum to constrain the high-redshift population of X-ray sources, focusing on their effect on the thermal history of the Universe and the cosmic 21-cm signal. Because the properties of these sources are poorly constrained, we consider hot gas, X-ray binaries and mini-quasars (i.e. sources with soft or hard X-ray spectra) as possible candidates. We find that (1) the soft-band CXRB sets an upper limit on the X-ray efficiency of sources that existed before the end of reionization, which is one-to-two orders of magnitude higher than typically assumed efficiencies, (2) hard sources are more effective in generating the CXRB than the soft ones, (3) the commonly assumed limit of saturated heating is not valid during the first half of reionization in the case of hard sources, with any allowed value of X-ray efficiency, (4) the maximal allowed X-ray efficiency sets a lower limit on the depth of the absorption trough in the global 21-cm signal and an upper limit on the height of the emission peak, while in the 21-cm power spectrum it sets a minimum amplitude and frequency for the high-redshift peaks, and (5) the existing upper limit on the 21-cm power spectrum sets a lower limit on the X-ray efficiency for each model. When combined with the 21-cm global signal, the CXRB will be useful for breaking degeneracies and helping constrain the nature of high-redshift heating sources.

  18. A Polarimetric Approach for Constraining the Dynamic Foreground Spectrum for Cosmological Global 21 cm Measurements

    NASA Astrophysics Data System (ADS)

    Nhan, Bang D.; Bradley, Richard F.; Burns, Jack O.

    2017-02-01

    The cosmological global (sky-averaged) 21 cm signal is a powerful tool for probing the evolution of the intergalactic medium in the high-redshift universe (z ≥ 6). One of the biggest observational challenges is to remove the foreground spectrum, which is at least four orders of magnitude brighter than the cosmological 21 cm emission. Conventional global 21 cm experiments rely on the spectral smoothness of the foreground synchrotron emission to separate it from the unique 21 cm spectral structures in a single total-power spectrum. However, frequency-dependent instrumental and observational effects are known to corrupt such smoothness and complicate the foreground subtraction. We introduce a polarimetric approach to measure the projection-induced polarization of the anisotropic foreground onto a stationary dual-polarized antenna. Due to Earth rotation, when the antenna points at a celestial pole, the revolving foreground modulates this polarization with a unique frequency-dependent sinusoidal signature as a function of time. In our simulations, by harmonically decomposing this dynamic polarization, our technique produces two separate spectra in parallel from the same observation: (i) a total sky power consisting of both the foreground and the 21 cm background and (ii) a model-independent measurement of the foreground spectrum at a harmonic consistent with twice the sky rotation rate. In the absence of any instrumental effects, by scaling and subtracting the latter from the former, we recover the injected global 21 cm model within the assumed uncertainty. We further discuss several limiting factors and potential remedies for future implementation.
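The harmonic decomposition can be sketched as a least-squares projection of the measured time series onto cos/sin at twice the rotation rate; the modulation amplitude, phase and noise level below are illustrative, not instrument values:

```python
import numpy as np

def harmonic_amplitude(t, y, omega, n):
    """Least-squares amplitude of the n-th harmonic of omega in y(t)."""
    c = 2.0 * np.mean(y * np.cos(n * omega * t))
    s = 2.0 * np.mean(y * np.sin(n * omega * t))
    return np.hypot(c, s)

# Toy dynamic polarization: a constant background plus a foreground
# term modulated at twice the (sidereal) sky rotation rate, plus noise.
omega = 2.0 * np.pi / 86164.0                   # sidereal rate, rad/s
t = np.linspace(0.0, 86164.0, 4096, endpoint=False)
rng = np.random.default_rng(3)
y = 5.0 + 1.5 * np.cos(2.0 * omega * t + 0.4) + 0.1 * rng.standard_normal(t.size)
amp = harmonic_amplitude(t, y, omega, n=2)
assert abs(amp - 1.5) < 0.05
```

The unmodulated term (here 5.0) averages away in the projection, so the n = 2 amplitude isolates the foreground-induced modulation regardless of its phase.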

  19. From Darkness to Light: Signatures of the Universe's First Galaxies in the Cosmic 21-cm Background

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    Within the first billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this Epoch of Reionization -- the emergence of the first stars, black holes, and full-fledged galaxies -- are expected to manifest as spectral "turning points" in the sky-averaged ("global") 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) required to model the signal. In this thesis, I make the first attempt to build the final piece of a global 21-cm data analysis pipeline: an inference tool capable of extracting the properties of the IGM and the Universe's first galaxies from the recovered signal. Such a framework is valuable even prior to a detection of the global 21-cm signal as it enables end-to-end simulations of 21-cm observations that can be used to optimize the design of upcoming instruments, their observing strategies, and their signal extraction algorithms. En route to a complete pipeline, I found that (1) robust limits on the physical properties of the IGM, such as its temperature and ionization state, can be derived analytically from the 21-cm turning points within two-zone models for the IGM, (2) improved constraints on the IGM properties can be obtained through simultaneous fitting of the global 21-cm signal and foregrounds, though biases can emerge depending on the parameterized form of the signal one adopts, (3) a simple four-parameter galaxy formation model can be constrained in only 100 hours of integration provided a stable instrumental response over a broad frequency range (~80 MHz), and (4) frequency-dependent RT solutions in physical models for the global 21-cm signal will be required to properly interpret the 21-cm absorption minimum, as the IGM thermal history is highly sensitive to the

  20. Elucidating dark energy with future 21 cm observations at the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo

    2017-02-01

    We investigate how precisely we can determine the nature of dark energy, such as its equation of state (EoS) and the EoS's time dependence, by using future observations of 21 cm fluctuations at the epoch of reionization (6.8 ≲ z ≲ 10), such as the Square Kilometre Array (SKA) and Omniscope, in combination with those from the cosmic microwave background, baryon acoustic oscillations, type Ia supernovae and direct measurement of the Hubble constant. We consider several parametrizations of the EoS and find that future 21 cm observations will be powerful in constraining models of dark energy, especially when the EoS varies at high redshifts.

  1. Canadian Hydrogen Intensity Mapping Experiment (CHIME) pathfinder

    NASA Astrophysics Data System (ADS)

    Bandura, Kevin; Addison, Graeme E.; Amiri, Mandana; Bond, J. Richard; Campbell-Wilson, Duncan; Connor, Liam; Cliche, Jean-François; Davis, Greg; Deng, Meiling; Denman, Nolan; Dobbs, Matt; Fandino, Mateus; Gibbs, Kenneth; Gilbert, Adam; Halpern, Mark; Hanna, David; Hincks, Adam D.; Hinshaw, Gary; Höfer, Carolin; Klages, Peter; Landecker, Tom L.; Masui, Kiyoshi; Mena Parra, Juan; Newburgh, Laura B.; Pen, Ue-li; Peterson, Jeffrey B.; Recnik, Andre; Shaw, J. Richard; Sigurdson, Kris; Sitwell, Mike; Smecher, Graeme; Smegal, Rick; Vanderlinde, Keith; Wiebe, Don

    2014-07-01

    A pathfinder version of CHIME (the Canadian Hydrogen Intensity Mapping Experiment) is currently being commissioned at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, BC. The instrument is a hybrid cylindrical interferometer designed to measure the large scale neutral hydrogen power spectrum across the redshift range 0.8 to 2.5. The power spectrum will be used to measure the baryon acoustic oscillation (BAO) scale across this poorly probed redshift range where dark energy becomes a significant contributor to the evolution of the Universe. The instrument revives the cylinder design in radio astronomy with a wide field survey as a primary goal. Modern low-noise amplifiers and digital processing remove the necessity for the analog beam forming that characterized previous designs. The Pathfinder consists of two cylinders 37m long by 20m wide oriented north-south for a total collecting area of 1,500 square meters. The cylinders are stationary with no moving parts, and form a transit instrument with an instantaneous field of view of ~100 degrees by 1-2 degrees. Each CHIME Pathfinder cylinder has a feedline with 64 dual polarization feeds placed every ~30 cm which Nyquist sample the north-south sky over much of the frequency band. The signals from each dual-polarization feed are independently amplified, filtered to 400-800 MHz, and directly sampled at 800 MSps using 8 bits. The correlator is an FX design, where the Fourier transform channelization is performed in FPGAs, which are interfaced to a set of GPUs that compute the correlation matrix. The CHIME Pathfinder is a 1/10th scale prototype version of CHIME and is designed to detect the BAO feature and constrain the distance-redshift relation. The lessons learned from its implementation will be used to inform and improve the final CHIME design.
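The FX architecture described above (Fourier-transform channelization followed by cross-multiplication) can be sketched in a few lines; the block below is a toy two-antenna software correlator, not the CHIME FPGA/GPU implementation:

```python
import numpy as np

def fx_correlate(voltages, n_chan):
    """Toy FX correlator.  voltages: (n_ant, n_samples).  'F' stage:
    block the streams and FFT each block into n_chan channels.  'X'
    stage: cross-multiply every antenna pair per channel and average."""
    n_ant, n_samp = voltages.shape
    n_blocks = n_samp // n_chan
    blocks = voltages[:, : n_blocks * n_chan].reshape(n_ant, n_blocks, n_chan)
    spectra = np.fft.fft(blocks, axis=2)
    # visibility matrix: (n_chan, n_ant, n_ant), averaged over time blocks
    return np.einsum("aik,bik->kab", spectra, np.conj(spectra)) / n_blocks

rng = np.random.default_rng(4)
common = rng.standard_normal(8192)          # sky signal shared by both feeds
v = np.stack([common + 0.1 * rng.standard_normal(8192) for _ in range(2)])
vis = fx_correlate(v, n_chan=64)
assert vis.shape == (64, 2, 2)
# a shared signal gives cross-power comparable to the auto-power
assert np.abs(vis[:, 0, 1]).sum() > 0.5 * np.abs(vis[:, 0, 0]).sum()
```

The F/X split is what lets CHIME put the channelization in FPGAs and the O(n_ant^2) cross-multiplication in GPUs.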

  2. THE IMPORTANCE OF WIDE-FIELD FOREGROUND REMOVAL FOR 21 cm COSMOLOGY: A DEMONSTRATION WITH EARLY MWA EPOCH OF REIONIZATION OBSERVATIONS

    SciTech Connect

    Pober, J. C.; Hazelton, B. J.; Beardsley, A. P.; Barry, N. A.; Martinot, Z. E.; Sullivan, I. S.; Morales, M. F.; Carroll, P.; Bell, M. E.; Bernardi, G.; Bhat, N. D. R.; Emrich, D.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Corey, B. E.; De Oliveira-Costa, A.; Deshpande, A. A.; Dillon, Joshua S.; Ewall-Wice, A. M.; and others

    2016-03-01

    In this paper we present observations, simulations, and analysis demonstrating the direct connection between the location of foreground emission on the sky and its location in cosmological power spectra from interferometric redshifted 21 cm experiments. We begin with a heuristic formalism for understanding the mapping of sky coordinates into the cylindrically averaged power spectra measurements used by 21 cm experiments, with a focus on the effects of the instrument beam response and the associated sidelobes. We then demonstrate this mapping by analyzing power spectra with both simulated and observed data from the Murchison Widefield Array. We find that removing a foreground model that includes sources in both the main field of view and the first sidelobes reduces the contamination in high k∥ modes by several per cent relative to a model that only includes sources in the main field of view, with the completeness of the foreground model setting the principal limitation on the amount of power removed. While small, a percent-level amount of foreground power is in itself more than enough to prevent recovery of any Epoch of Reionization signal from these modes. This result demonstrates that foreground subtraction for redshifted 21 cm experiments is truly a wide-field problem, and algorithms and simulations must extend beyond the instrument’s main field of view to potentially recover the full 21 cm power spectrum.

  3. The Importance of Wide-field Foreground Removal for 21 cm Cosmology: A Demonstration with Early MWA Epoch of Reionization Observations

    NASA Astrophysics Data System (ADS)

    Pober, J. C.; Hazelton, B. J.; Beardsley, A. P.; Barry, N. A.; Martinot, Z. E.; Sullivan, I. S.; Morales, M. F.; Bell, M. E.; Bernardi, G.; Bhat, N. D. R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Deshpande, A. A.; Dillon, Joshua. S.; Emrich, D.; Ewall-Wice, A. M.; Feng, L.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, Han-Seek; Kittiwisit, P.; Kratzenberg, E.; Kudryavtseva, N.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, Sourabh; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Sethi, Shiv K.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wyithe, J. S. B.

    2016-03-01

    In this paper we present observations, simulations, and analysis demonstrating the direct connection between the location of foreground emission on the sky and its location in cosmological power spectra from interferometric redshifted 21 cm experiments. We begin with a heuristic formalism for understanding the mapping of sky coordinates into the cylindrically averaged power spectra measurements used by 21 cm experiments, with a focus on the effects of the instrument beam response and the associated sidelobes. We then demonstrate this mapping by analyzing power spectra with both simulated and observed data from the Murchison Widefield Array. We find that removing a foreground model that includes sources in both the main field of view and the first sidelobes reduces the contamination in high k∥ modes by several per cent relative to a model that only includes sources in the main field of view, with the completeness of the foreground model setting the principal limitation on the amount of power removed. While small, a percent-level amount of foreground power is in itself more than enough to prevent recovery of any Epoch of Reionization signal from these modes. This result demonstrates that foreground subtraction for redshifted 21 cm experiments is truly a wide-field problem, and algorithms and simulations must extend beyond the instrument’s main field of view to potentially recover the full 21 cm power spectrum.
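The cylindrically averaged power spectrum these experiments work with bins 3D Fourier power by (k_perp, k_par), with the line of sight singled out; here is a minimal sketch with arbitrary normalization and binning (not the MWA analysis code):

```python
import numpy as np

def cylindrical_power(cube, n_bins=8):
    """Bin |FFT|^2 of a cubic box by (k_perp, k_par), treating the last
    axis as the line of sight.  Wavenumbers and normalization are left
    dimensionless/arbitrary for illustration."""
    n = cube.shape[0]
    pk = np.abs(np.fft.fftn(cube)) ** 2 / cube.size
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kperp, kpar = np.hypot(kx, ky).ravel(), np.abs(kz).ravel()
    edges = np.linspace(0.0, 0.5, n_bins + 1)
    sums, _, _ = np.histogram2d(kperp, kpar, bins=[edges, edges],
                                weights=pk.ravel())
    counts, _, _ = np.histogram2d(kperp, kpar, bins=[edges, edges])
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

rng = np.random.default_rng(5)
ps = cylindrical_power(rng.standard_normal((16, 16, 16)))
assert ps.shape == (8, 8) and np.all(ps >= 0)
```

In this (k_perp, k_par) plane, smooth-spectrum foregrounds occupy low k_par, and sidelobe sources scatter power up into the high-k_par modes discussed in the abstract.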

  4. Extraction of global 21-cm signal from simulated data for the Dark Ages Radio Explorer (DARE) using an MCMC pipeline

    NASA Astrophysics Data System (ADS)

    Tauscher, Keith A.; Burns, Jack O.; Rapetti, David; Mirocha, Jordan; Monsalve, Raul A.

    2017-01-01

    The Dark Ages Radio Explorer (DARE) is a mission concept proposed to NASA in which a crossed dipole antenna collects low frequency (40-120 MHz) radio measurements above the farside of the Moon to detect and characterize the global 21-cm signal from the early (z~35-11) Universe's neutral hydrogen. Simulated data for DARE includes: 1) the global signal modeled using the ares code, 2) spectrally smooth Galactic foregrounds with spatial structure taken from multiple radio foreground maps averaged over a large, well characterized beam, 3) systematics introduced in the data by antenna/receiver reflections, and 4) the Moon. This simulated data is fed into a signal extraction pipeline. As the signal is 4-5 orders of magnitude below the Galactic synchrotron contribution, it is best extracted from the data using Bayesian techniques which take full advantage of prior knowledge of the instrument and foregrounds. For the DARE pipeline, we use the affine-invariant MCMC algorithm implemented in the Python package, emcee. The pipeline also employs singular value decomposition to use known spectral features of the antenna and receiver to form a natural basis with which to fit instrumental systematics. Taking advantage of high-fidelity measurements of the antenna beam (to ~20 ppm) and precise calibration of the instrument, the pipeline extracts the global 21-cm signal with an average RMS error of 10-15 mK for multiple signal models.
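The fitting step can be sketched with a toy model: a smooth foreground plus a Gaussian 21-cm trough, sampled here with a basic Metropolis-Hastings loop as a dependency-free stand-in for emcee's affine-invariant sampler. The band, noise level and fixed foreground shape are simplifying assumptions; the real pipeline fits foreground and instrument systematics jointly:

```python
import numpy as np

rng = np.random.default_rng(7)
nu = np.linspace(40.0, 120.0, 80)                 # DARE band, MHz

def gaussian_signal(theta):
    amp, nu0, sig = theta                         # depth (mK), centre, width (MHz)
    return amp * np.exp(-0.5 * ((nu - nu0) / sig) ** 2)

foreground = 5.0e3 * (nu / 80.0) ** -2.5          # smooth synchrotron-like spectrum
truth = np.array([-100.0, 70.0, 10.0])
noise = 5.0
data = foreground + gaussian_signal(truth) + noise * rng.standard_normal(nu.size)

def log_post(theta):
    # flat priors; foreground held fixed at its known shape for brevity
    if not (-300 < theta[0] < 0 and 50 < theta[1] < 90 and 3 < theta[2] < 30):
        return -np.inf
    resid = data - foreground - gaussian_signal(theta)
    return -0.5 * np.sum(resid ** 2) / noise ** 2

theta, lp = truth.copy(), log_post(truth)
step = np.array([0.6, 0.1, 0.1])                  # proposal widths
samples = []
for _ in range(4000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    samples.append(theta[0])
amp_est = np.median(samples[2000:])
assert abs(amp_est - truth[0]) < 10.0
```

With emcee, `log_post` would be passed unchanged to an `EnsembleSampler`; the affine-invariant moves handle the parameter correlations that make hand-tuned proposal widths awkward here.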

  5. Bayesian constraints on the global 21-cm signal from the Cosmic Dawn

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; Zwart, J. T. L.; Price, D.; Greenhill, L. J.; Mesinger, A.; Dowell, J.; Eftekhari, T.; Ellingson, S. W.; Kocz, J.; Schinzel, F.

    2016-09-01

    The birth of the first luminous sources and the ensuing epoch of reionization are best studied via the redshifted 21-cm emission line, the signature of the first two imprinting the last. In this work, we present a fully Bayesian method, HIBAYES, for extracting the faint, global (sky-averaged) 21-cm signal from the much brighter foreground emission. We show that a simplified (but plausible) Gaussian model of the 21-cm emission from the Cosmic Dawn epoch (15 ≲ z ≲ 30), parametrized by an amplitude A_{H I}, a frequency peak ν _{H I} and a width σ _{H I}, can be extracted even in the presence of a structured foreground frequency spectrum (parametrized as a seventh-order polynomial), provided sufficient signal-to-noise (400 h of observation with a single dipole). We apply our method to an early, 19-min-long observation from the Large aperture Experiment to detect the Dark Ages, constraining the 21-cm signal amplitude and width to be -890 < A_{H I} < 0 mK and σ _{H I} > 6.5 MHz (corresponding to Δz > 1.9 at redshift z ≃ 20) respectively at the 95-per cent confidence level in the range 13.2 < z < 27.4 (100 > ν > 50 MHz).
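    The parametrization above (polynomial foreground plus a Gaussian feature with amplitude A_HI, peak ν_HI, and width σ_HI) can be sketched directly. The sketch below is not HIBAYES: it holds ν_HI and σ_HI fixed so the model becomes linear in its coefficients and an ordinary least-squares solve can stand in for the full Bayesian fit; all numerical values are illustrative.

```python
import numpy as np

# Sketch of the HIBAYES-style parametrization: smooth polynomial
# foreground plus a Gaussian 21-cm feature. With nu_HI and sigma_HI
# held fixed, the model is linear, so least squares stands in for the
# paper's Bayesian machinery. All numbers are illustrative.
rng = np.random.default_rng(2)
nu = np.linspace(50.0, 100.0, 256)                    # MHz (13.2 < z < 27.4)

gauss = np.exp(-0.5 * ((nu - 70.0) / 8.0) ** 2)       # fixed nu_HI, sigma_HI
fg_coeffs = [3000.0, -40.0, 0.3]                      # low-order foreground, mK
fg = np.polyval(fg_coeffs[::-1], nu - 75.0)
data = fg + (-120.0) * gauss + rng.normal(0, 3.0, nu.size)

# Design matrix: polynomial terms plus the Gaussian signal template.
x = nu - 75.0
A = np.column_stack([x**0, x**1, x**2, gauss])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
a_hi = coef[-1]                                       # recovered A_HI, mK
```

    The paper instead samples all parameters (including ν_HI and σ_HI, and a seventh-order foreground) with full posteriors, which is what yields the quoted credible intervals rather than point estimates.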

  6. Reionization on large scales. IV. Predictions for the 21 cm signal incorporating the light cone effect

    SciTech Connect

    La Plante, P.; Battaglia, N.; Natarajan, A.; Peterson, J. B.; Trac, H.; Cen, R.; Loeb, A.

    2014-07-01

    We present predictions for the 21 cm brightness temperature power spectrum during the Epoch of Reionization (EoR). We discuss the implications of the 'light cone' effect, which incorporates evolution of the neutral hydrogen fraction and 21 cm brightness temperature along the line of sight. Using a novel method calibrated against radiation-hydrodynamic simulations, we model the neutral hydrogen density field and 21 cm signal in large volumes (L = 2 Gpc h^-1). The inclusion of the light cone effect leads to a relative decrease of about 50% in the 21 cm power spectrum on all scales. We also find that the effect is more prominent at the midpoint of reionization and later. The light cone effect can also introduce an anisotropy along the line of sight. By decomposing the 3D power spectrum into components perpendicular to and along the line of sight, we find that in our fiducial reionization model, there is no significant anisotropy. However, parallel modes can contribute up to 40% more power for shorter reionization scenarios. The scales on which the light cone effect is relevant are comparable to scales where one measures the baryon acoustic oscillation. We argue that due to its large comoving scale and introduction of anisotropy, the light cone effect is important when considering redshift space distortions and future application to the Alcock-Paczyński test for the determination of cosmological parameters.

  7. The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes

    NASA Astrophysics Data System (ADS)

    Lopez-Honorez, Laura; Mena, Olga; Moliné, Ángeles; Palomares-Ruiz, Sergio; Vincent, Aaron C.

    2016-08-01

    Future dedicated radio interferometers, including HERA and SKA, are very promising tools that aim to study the epoch of reionization and beyond via measurements of the 21 cm signal from neutral hydrogen. Dark matter (DM) annihilations into charged particles change the thermal history of the Universe and, as a consequence, affect the 21 cm signal. Accurately predicting the effect of DM strongly relies on the modeling of annihilations inside halos. In this work, we use up-to-date computations of the energy deposition rates by the products from DM annihilations, a proper treatment of the contribution from DM annihilations in halos, as well as values of the annihilation cross section allowed by the most recent cosmological measurements from the Planck satellite. Given current uncertainties on the description of the astrophysical processes driving the epochs of reionization, X-ray heating and Lyman-α pumping, we find that disentangling DM signatures from purely astrophysical effects, related to early-time star formation processes or late-time galaxy X-ray emissions, will be a challenging task. We conclude that only annihilations of DM particles with masses of ~100 MeV could leave an unambiguous imprint on the 21 cm signal and, in particular, on the 21 cm power spectrum. This is in contrast to previous, more optimistic results in the literature, which have claimed that strong signatures might also be present even for much higher DM masses. Additional measurements of the 21 cm signal at different cosmic epochs will be crucial in order to break the strong parameter degeneracies between DM annihilations and astrophysical effects and undoubtedly single out a DM imprint for masses different from ~100 MeV.

  8. Simulating the 21 cm signal from reionization including non-linear ionizations and inhomogeneous recombinations

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2016-04-01

    We explore the impact of incorporating physically motivated ionization and recombination rates on the history and topology of cosmic reionization and the resulting 21 cm power spectrum, by incorporating inputs from small-volume hydrodynamic simulations into our semi-numerical code, SIMFAST21, that evolves reionization on large scales. We employ radiative hydrodynamic simulations to parametrize the ionization rate Rion and recombination rate Rrec as functions of halo mass, overdensity and redshift. We find that Rion scales superlinearly with halo mass (Rion ∝ Mh^1.41), in contrast to previous assumptions. Implementing these scalings into SIMFAST21, we tune our one free parameter, the escape fraction fesc, to simultaneously reproduce recent observations of the Thomson optical depth, ionizing emissivity and volume-averaged neutral fraction by the end of reionization. This yields fesc = 4 (+7/−2) per cent averaged over our 0.375 h^-1 Mpc cells, independent of halo mass or redshift, increasing to 6 per cent if we also constrain to match the observed z = 7 star formation rate function. Introducing superlinear Rion increases the duration of reionization and boosts small-scale 21 cm power by two to three times at intermediate phases of reionization, while inhomogeneous recombinations reduce ionized bubble sizes and suppress large-scale 21 cm power by two to three times. Gas clumping on sub-cell scales has a minimal effect on the 21 cm power. Superlinear Rion also significantly increases the median halo mass scale for ionizing photon output to ~10^10 M⊙, making the majority of reionizing sources more accessible to next-generation facilities. These results highlight the importance of accurately treating ionizing sources and recombinations for modelling reionization and its 21 cm power spectrum.
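    The consequence of the superlinear scaling can be shown with a toy halo population: weighting halos by Rion ∝ Mh^1.41 rather than linearly in mass pushes the median ionizing output to higher halo masses. The power-law mass function below is a crude stand-in, not the calibrated SIMFAST21 input, and the mass range is illustrative.

```python
import numpy as np

# Toy illustration of the superlinear source prescription: weighting
# halos by R_ion ∝ M_h^1.41 shifts the median ionizing output to higher
# halo masses than a linear weighting. The dn/dM ∝ M^-2 mass function
# is a crude stand-in for the calibrated simulation inputs.
rng = np.random.default_rng(3)
m_lo, m_hi = 1e8, 1e12                       # Msun, illustrative range
u = rng.uniform(size=200_000)
mh = 1.0 / (1 / m_lo - u * (1 / m_lo - 1 / m_hi))   # inverse-CDF sample of M^-2

def median_output_mass(exponent):
    """Halo mass below which half the total ionizing output is produced."""
    order = np.argsort(mh)
    w = mh[order] ** exponent                # per-halo output weight
    cum = np.cumsum(w) / w.sum()
    return mh[order][np.searchsorted(cum, 0.5)]

m50_linear = median_output_mass(1.0)
m50_super = median_output_mass(1.41)
```

    With these toy numbers the superlinear weighting moves the half-output mass up by roughly an order of magnitude, the same qualitative shift the abstract describes.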

  9. The TIME-Pilot intensity mapping experiment

    NASA Astrophysics Data System (ADS)

    Crites, A. T.; Bock, J. J.; Bradford, C. M.; Chang, T. C.; Cooray, A. R.; Duband, L.; Gong, Y.; Hailey-Dunsheath, S.; Hunacek, J.; Koch, P. M.; Li, C. T.; O'Brient, R. C.; Prouve, T.; Shirokoff, E.; Silva, M. B.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2014-08-01

    TIME-Pilot is designed to make measurements from the Epoch of Reionization (EoR), when the first stars and galaxies formed and ionized the intergalactic medium. This will be done via measurements of the redshifted 157.7 μm line of singly ionized carbon ([CII]). In particular, TIME-Pilot will produce the first detection of [CII] clustering fluctuations, a signal proportional to the integrated [CII] intensity, summed over all EoR galaxies. TIME-Pilot is thus sensitive to the emission from dwarf galaxies, thought to be responsible for the balance of ionizing UV photons, that will be difficult to detect individually with JWST and ALMA. A detection of [CII] clustering fluctuations would validate current theoretical estimates of the [CII] line as a new cosmological observable, opening the door for a new generation of instruments with advanced technology spectroscopic array focal planes that will map [CII] fluctuations to probe the EoR history of star formation, bubble size, and ionization state. Additionally, TIME-Pilot will produce high signal-to-noise measurements of CO clustering fluctuations, which trace the role of molecular gas in star-forming galaxies at redshifts 0 < z < 2. With its unique atmospheric noise mitigation, TIME-Pilot also significantly improves sensitivity for measuring the kinetic Sunyaev-Zel'dovich (kSZ) effect in galaxy clusters. TIME-Pilot will employ a linear array of spectrometers, each consisting of a parallel-plate diffraction grating. The spectrometer bandwidth covers 185-323 GHz to both probe the entire redshift range of interest and to include channels at the edges of the band for atmospheric noise mitigation. We illuminate the telescope with f/3 horns, which balances the desire to both couple to the sky with the best efficiency per beam, and to pack a large number of horns into the fixed field of view. Feedhorns couple radiation to the waveguide spectrometer gratings. Each spectrometer grating has 190 facets and provides resolving power

  10. A Green Bank Telescope 21cm survey of HI clouds in the Milky Way's nuclear wind

    NASA Astrophysics Data System (ADS)

    Denbo, Sara; Endsley, Ryan; Lockman, Felix J.; Ford, Alyson

    2015-01-01

    Feedback processes such as large-scale galactic winds are thought to be responsible for distributing enriched gas throughout a galaxy and even into the IGM. Such winds have been found in many galaxies with active star formation near their center, and the Fermi bubbles provide evidence for such a nuclear wind in our own Milky Way. A recent 21 cm HI survey by the Australia Telescope Compact Array discovered a population of compact, isolated clouds surrounding the Galactic Center that may be entrained in the Fermi bubble wind. We present data from a survey of 21 cm HI over an extended region around the Galactic Center using the Green Bank Telescope. These observations provide stricter constraints on neutral clouds in the Fermi bubble wind, and a more robust description of the parameters of HI clouds (i.e., mass, column density, and lifetime) near the Galactic Center.

  11. Modelling the 21-cm Signal from the Epoch of Reionization and Cosmic Dawn

    NASA Astrophysics Data System (ADS)

    Choudhury, T. Roy; Datta, Kanan; Majumdar, Suman; Ghara, Raghunath; Paranjape, Aseem; Mondal, Rajesh; Bharadwaj, Somnath; Samui, Saumyadip

    2016-12-01

    Studying the cosmic dawn and the epoch of reionization through the redshifted 21-cm line are among the major science goals of the SKA1. Their significance lies in the fact that they are closely related to the very first stars in the Universe. Interpreting the upcoming data would require detailed modelling of the relevant physical processes. In this article, we focus on the theoretical models of reionization that have been worked out by various groups working in India with the upcoming SKA in mind. These models include purely analytical and semi-numerical calculations as well as fully numerical radiative transfer simulations. The predictions of the 21-cm signal from these models would be useful in constraining the properties of the early galaxies using the SKA data.

  12. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales differ significantly and the two can easily be distinguished in wavelet-coefficient space before performing the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
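    The scale-separation idea can be sketched in a few lines of numpy. This is a crude boxcar stand-in, not the paper's continuous wavelet transform: the smooth foreground lives at large spectral scales and the saw-tooth 21 cm structure at small ones, so subtracting a heavily smoothed copy of the spectrum isolates the small-scale component. All amplitudes and scales are illustrative.

```python
import numpy as np

# Crude numpy stand-in for wavelet scale separation (the paper works in
# wavelet-coefficient space): subtract the large-scale (smooth) component
# of the spectrum to isolate the small-scale 21 cm structure.
rng = np.random.default_rng(4)
nu = np.linspace(100.0, 150.0, 1000)                   # MHz
foreground = 250.0 * (nu / 150.0) ** -2.6              # smooth power law, K
signal = 0.02 * np.sign(np.sin(2 * np.pi * nu / 1.0))  # saw-tooth-like, K
data = foreground + signal

def smooth(y, width):
    """Boxcar smoothing with edge-padded convolution."""
    kernel = np.ones(width) / width
    padded = np.pad(y, width, mode="edge")
    return np.convolve(padded, kernel, mode="same")[width:-width]

# Subtract the smooth component; channels near the band edges are
# discarded because the smoothing window is incomplete there.
residual = data - smooth(data, 21)
core = slice(50, -50)
```

    In the interior of the band the residual is dominated by the injected saw-tooth signal even though the foreground is four orders of magnitude brighter; a real wavelet analysis does this scale selection per coefficient rather than with a single smoothing width.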

  13. The imprint of the cosmic supermassive black hole growth history on the 21 cm background radiation

    NASA Astrophysics Data System (ADS)

    Tanaka, Takamitsu L.; O'Leary, Ryan M.; Perna, Rosalba

    2016-01-01

    The redshifted 21 cm transition line of hydrogen tracks the thermal evolution of the neutral intergalactic medium (IGM) at `cosmic dawn', during the emergence of the first luminous astrophysical objects (˜100 Myr after the big bang) but before these objects ionized the IGM (˜400-800 Myr after the big bang). Because X-rays, in particular, are likely to be the chief energy courier for heating the IGM, measurements of the 21 cm signature can be used to infer knowledge about the first astrophysical X-ray sources. Using analytic arguments and a numerical population synthesis algorithm, we argue that the progenitors of supermassive black holes (SMBHs) should be the dominant source of hard astrophysical X-rays - and thus the primary driver of IGM heating and the 21 cm signature - at redshifts z ≳ 20, if (i) they grow readily from the remnants of Population III stars and (ii) produce X-rays in quantities comparable to what is observed from active galactic nuclei and high-mass X-ray binaries. We show that models satisfying these assumptions dominate over contributions to IGM heating from stellar populations, and cause the 21 cm brightness temperature to rise at z ≳ 20. An absence of such a signature in the forthcoming observational data would imply that SMBH formation occurred later (e.g. via so-called direct collapse scenarios), that it was not a common occurrence in early galaxies and protogalaxies, or that it produced far fewer X-rays than empirical trends at lower redshifts, either due to intrinsic dimness (radiative inefficiency) or Compton-thick obscuration close to the source.

  14. Detecting the integrated Sachs-Wolfe effect with high-redshift 21-cm surveys

    NASA Astrophysics Data System (ADS)

    Raccanelli, Alvise; Kovetz, Ely; Dai, Liang; Kamionkowski, Marc

    2016-04-01

    We investigate the possibility of detecting the integrated Sachs-Wolfe (ISW) effect by cross-correlating 21-cm surveys at high redshifts with galaxies in a way similar to the usual CMB-galaxy cross-correlation. The high-redshift 21-cm signal is dominated by CMB photons that travel freely without interacting with the intervening matter, and hence its late-time ISW signature should correlate extremely well with that of the CMB at its peak frequencies. Using the 21-cm temperature brightness instead of the CMB would thus be a further check of the detection of the ISW effect, measured by different instruments at different frequencies and suffering from different systematics. We also study the ISW effect on the photons that are scattered by HI clouds. We show that a detection of the unscattered photons is achievable with planned radio arrays, while one using scattered photons will require advanced radio interferometers, either an extended version of the planned Square Kilometre Array or futuristic experiments such as a lunar radio array.

  15. Cosmic reionization on computers. Mean and fluctuating redshifted 21 CM signal

    DOE PAGES

    Kaurov, Alexander A.; Gnedin, Nickolay Y.

    2016-06-20

    We explore the mean and fluctuating redshifted 21 cm signal in numerical simulations from the Cosmic Reionization On Computers project. We find that the mean signal varies between about ±25 mK. Most significantly, we find that the negative pre-reionization dip at z ~ 10–15 only extends to ⟨ΔT_B⟩ ~ −25 mK, requiring substantially higher sensitivity from global signal experiments that operate in this redshift range (EDGES-II, LEDA, SCI-HI, and DARE) than has often been assumed previously. We also explore the role of dense substructure (filaments and embedded galaxies) in the formation of the 21 cm power spectrum. We find that by neglecting the semi-neutral substructure inside ionized bubbles, the power spectrum can be misestimated by 25%–50% at scales k ~ 0.1–1h Mpc–1. Furthermore, this scale range is of particular interest, because the upcoming 21 cm experiments (Murchison Widefield Array, Precision Array for Probing the Epoch of Reionization, Hydrogen Epoch of Reionization Array) are expected to be most sensitive within it.

  16. Z > 6 Galaxy Signatures in the Infrared Background and the 21-cm background

    NASA Astrophysics Data System (ADS)

    Cooray, A.

    2006-08-01

    We will discuss the signatures of high-redshift galaxy formation in the near-infrared background. Ionizing sources at high redshift generically imprint a distinctive Lyman-cutoff feature and a unique spatial anisotropy signature on the IRB, both of which may be detectable in a short rocket flight. We will discuss the Cosmic Infrared Background ExpeRiment (CIBER), a rocket-borne instrument to probe the absolute spectrum and spatial anisotropy of the extragalactic InfraRed Background (IRB), optimized for detection of the integrated spatial anisotropies in the IR background from high-redshift galaxies. We will also discuss the signatures of first galaxies in the low radio frequency 21-cm background from the neutral hydrogen distribution at z > 6. When combined with arcminute-scale temperature anisotropy and the polarization of the cosmic microwave background, the 21-cm background will allow a determination of the inhomogeneous distribution of Lyman-alpha photons from first galaxies. We will discuss these and other possibilities to understand the first galaxy population with IR, 21-cm, and CMB backgrounds.

  17. OPENING THE 21 cm EPOCH OF REIONIZATION WINDOW: MEASUREMENTS OF FOREGROUND ISOLATION WITH PAPER

    SciTech Connect

    Pober, Jonathan C.; Parsons, Aaron R.; Ali, Zaki; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, Dave; Dexter, Matthew; MacMahon, Dave; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Patricia J.; Manley, Jason; Walbrugh, William P.; Stefan, Irina I.

    2013-05-10

    We present new observations with the Precision Array for Probing the Epoch of Reionization with the aim of measuring the properties of foreground emission for 21 cm epoch of reionization (EoR) experiments at 150 MHz. We focus on the footprint of the foregrounds in cosmological Fourier space to understand which modes of the 21 cm power spectrum will most likely be compromised by foreground emission. These observations confirm predictions that foregrounds can be isolated to a 'wedge'-like region of two-dimensional (k⊥, k∥)-space, creating a window for cosmological studies at higher k∥ values. We also find that the emission extends past the nominal edge of this wedge due to spectral structure in the foregrounds, with this feature most prominent on the shortest baselines. Finally, we filter the data to retain only this ''unsmooth'' emission and image its specific k∥ modes. The resultant images show an excess of power at the lowest modes, but no emission can be clearly localized to any one region of the sky. This image is highly suggestive that the most problematic foregrounds for 21 cm EoR studies will not be easily identifiable bright sources, but rather an aggregate of fainter emission.
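    The wedge boundary itself has a commonly quoted analytic form: flat-spectrum emission at angle θ off zenith appears below the line k∥ = k⊥ sin(θ) D_C(z) H(z) / (c (1+z)), with θ = 90° giving the horizon limit. The sketch below evaluates that slope for illustrative flat-ΛCDM parameters (Ωm = 0.31, H0 = 67.7 km/s/Mpc) at z ≈ 8.5, roughly the redshift of 150 MHz observations; it is a back-of-the-envelope aid, not the PAPER analysis.

```python
import numpy as np

# Commonly quoted foreground "wedge" boundary in (k_perp, k_par) space:
#   k_par = k_perp * sin(theta) * D_C(z) * H(z) / (c * (1 + z)).
# Cosmological parameters below are illustrative flat-LCDM values.
c_km_s = 299792.458
h0 = 67.7            # km/s/Mpc
om = 0.31

def hubble(z):
    """H(z) in km/s/Mpc for flat LCDM."""
    return h0 * np.sqrt(om * (1 + z) ** 3 + 1 - om)

def comoving_distance(z, n=10_000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    zs = np.linspace(0.0, z, n)
    integrand = c_km_s / hubble(zs)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))

def wedge_slope(z, theta_deg=90.0):
    """Slope of the k_par = slope * k_perp wedge line for sources at theta."""
    return (np.sin(np.radians(theta_deg)) * comoving_distance(z)
            * hubble(z) / (c_km_s * (1 + z)))

slope_horizon = wedge_slope(8.5)              # full-sky (horizon) limit
slope_fov = wedge_slope(8.5, theta_deg=15.0)  # main-lobe-only wedge
```

    Modes above the horizon line form the "EoR window" the abstract refers to; emission leaking past it, as observed here, comes from spectral structure rather than geometry.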

  18. Signatures of modified gravity on the 21 cm power spectrum at reionisation

    SciTech Connect

    Brax, Philippe

    2013-01-01

    Scalar modifications of gravity have an impact on the growth of structure. Baryon and Cold Dark Matter (CDM) perturbations grow anomalously for scales within the Compton wavelength of the scalar field. In the late time Universe when reionisation occurs, the spectrum of the 21 cm brightness temperature is thus affected. We study this effect for chameleon-f(R) models, dilatons and symmetrons. Although the f(R) models are more tightly constrained by solar system bounds, and effects on dilaton models are negligible, we find that symmetrons where the phase transition occurs before z* ∼ 12 could be detectable for a scalar field range as low as 5 kpc. For all these models, the detection prospects of modified gravity effects are higher when considering modes parallel to the line of sight where very small scales can be probed. The study of the 21 cm spectrum thus offers a complementary approach to testing modified gravity with large scale structure surveys. Short scales, which would be highly non-linear in the very late time Universe when structure forms and where modified gravity effects are screened, appear in the linear spectrum of 21 cm physics, hence deviating from General Relativity in a maximal way.

  20. Statistics of the epoch of reionization (EoR) 21-cm signal - II. The evolution of the power-spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2017-01-01

    The epoch of reionization (EoR) 21-cm signal is expected to become highly non-Gaussian as reionization progresses. This severely affects the error-covariance of the EoR 21-cm power spectrum, which is important for predicting the prospects of a detection with ongoing and future experiments. Most earlier works have assumed that the EoR 21-cm signal is a Gaussian random field where (1) the error-variance depends only on the power spectrum and the number of Fourier modes in the particular k bin, and (2) the errors in the different k bins are uncorrelated. Here, we use an ensemble of simulated 21-cm maps to analyse the error-covariance at various stages of reionization. We find that even at the very early stages of reionization (x̄_HI ~ 0.9), the error-variance significantly exceeds the Gaussian predictions at small length-scales (k > 0.5 Mpc-1) while they are consistent at larger scales. The errors in most k bins (both large and small scales) are however found to be correlated. Considering the later stages (x̄_HI = 0.15), the error-variance shows an excess in all k bins with k ≥ 0.1 Mpc-1, and it is around 200 times larger than the Gaussian prediction at k ~ 1 Mpc-1. The errors in the different k bins are also highly correlated, barring the two smallest k bins, which are anti-correlated with the other bins. Our results imply that predictions for different 21-cm experiments based on the Gaussian assumption underestimate the errors, and it is necessary to incorporate the non-Gaussianity for more realistic predictions.
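    The ensemble-based covariance estimate described above can be sketched for the Gaussian baseline case. The toy below (illustrative grid and bin choices, not the paper's simulations) measures binned power spectra across independent Gaussian realizations and forms their covariance; for a Gaussian field the bin-to-bin correlations vanish within sample noise, which is exactly the baseline the non-Gaussian EoR signal is compared against.

```python
import numpy as np

# Gaussian-baseline sketch of the power-spectrum error-covariance:
# bin the power of many independent Gaussian random cubes, then form
# the covariance across the ensemble. Off-diagonal correlations should
# vanish within sample noise for this Gaussian case.
rng = np.random.default_rng(5)
n, n_real, nbins = 32, 200, 6

def binned_power(field, edges):
    ft = np.fft.fftn(field)
    power = (np.abs(ft) ** 2 / field.size).ravel()
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    idx = np.digitize(kk, edges) - 1                  # spherical k bins
    return np.array([power[idx == b].mean() for b in range(nbins)])

edges = np.linspace(1e-3, 0.5, nbins + 1)
spectra = np.array([binned_power(rng.normal(size=(n, n, n)), edges)
                    for _ in range(n_real)])
cov = np.cov(spectra, rowvar=False)                   # nbins x nbins
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
```

    Replacing the Gaussian cubes with simulated 21-cm maps at a fixed reionization stage is what produces the excess variance and bin correlations the abstract reports.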

  1. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  2. Challenges and opportunities in mapping land use intensity globally☆

    PubMed Central

    Kuemmerle, Tobias; Erb, Karlheinz; Meyfroidt, Patrick; Müller, Daniel; Verburg, Peter H; Estel, Stephan; Haberl, Helmut; Hostert, Patrick; Jepsen, Martin R.; Kastner, Thomas; Levers, Christian; Lindner, Marcus; Plutzar, Christoph; Verkerk, Pieter Johannes; van der Zanden, Emma H; Reenberg, Anette

    2013-01-01

    Future increases in land-based production will need to focus more on sustainably intensifying existing production systems. Unfortunately, our understanding of the global patterns of land use intensity is weak, partly because land use intensity is a complex, multidimensional term, and partly because we lack appropriate datasets to assess land use intensity across broad geographic extents. Here, we review the state of the art regarding approaches for mapping land use intensity and provide a comprehensive overview of available global-scale datasets on land use intensity. We also outline major challenges and opportunities for mapping land use intensity for cropland, grazing, and forestry systems, and identify key issues for future research. PMID:24143157

  3. ON THE DETECTION OF GLOBAL 21-cm SIGNAL FROM REIONIZATION USING INTERFEROMETERS

    SciTech Connect

    Singh, Saurabh; Subrahmanyan, Ravi; Shankar, N. Udaya; Raghunathan, A.

    2015-12-20

    Detection of the global redshifted 21-cm signal is an excellent means of deciphering the physical processes during the Dark Ages and subsequent Epoch of Reionization (EoR). However, detection of this faint monopole is challenging due to the high precision required in instrumental calibration and modeling of substantially brighter foregrounds and instrumental systematics. In particular, modeling of receiver noise with mK accuracy and its separation remains a formidable task in experiments aiming to detect the global signal using single-element spectral radiometers. Interferometers do not respond to receiver noise; therefore, here we explore the theory of the response of interferometers to global signals. In other words, we discuss the spatial coherence in the electric field arising from the monopole component of the 21-cm signal and methods for its detection using sensor arrays. We proceed by first deriving the response to uniform sky of two-element interferometers made of unit dipole and resonant loop antennas, then extend the analysis to interferometers made of one-dimensional arrays and also consider two-dimensional aperture antennas. Finally, we describe methods by which the coherence might be enhanced so that the interferometer measurements yield improved sensitivity to the monopole component. We conclude (a) that it is indeed possible to measure the global 21-cm from EoR using interferometers, (b) that a practically useful configuration is with omnidirectional antennas as interferometer elements, and (c) that the spatial coherence may be enhanced using, for example, a space beam splitter between the interferometer elements.
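    The monopole response the authors analyze has a simple closed form for idealized isotropic elements: a uniform-brightness sky seen by two omnidirectional antennas gives a visibility V(b) ∝ ∫ exp(−2πi b·ŝ/λ) dΩ = 4π sinc(2πb/λ), so short baselines retain sensitivity to the global signal while rejecting uncorrelated receiver noise. The sketch below (illustrative, not the paper's derivation for dipoles or loops) checks this numerically.

```python
import numpy as np

# Visibility of a uniform sky for isotropic two-element interferometers:
#   V(b) / V(0) = sin(2*pi*b/lambda) / (2*pi*b/lambda).
# Numerical quadrature over the sphere vs the analytic sinc result.
def uniform_sky_visibility(b_over_lambda, ntheta=4000):
    """Integrate the fringe over the sphere, normalized to 1 at b = 0."""
    theta = np.linspace(0.0, np.pi, ntheta)
    fringe = np.exp(-2j * np.pi * b_over_lambda * np.cos(theta))
    integrand = fringe * np.sin(theta)       # azimuthal integral gives 2*pi
    dtheta = theta[1] - theta[0]
    v = 2 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dtheta
    return (v / (4 * np.pi)).real            # normalize by full solid angle

def analytic(b_over_lambda):
    x = 2 * np.pi * b_over_lambda
    return np.sinc(x / np.pi)                # np.sinc(t) = sin(pi t)/(pi t)

baselines = np.array([0.1, 0.5, 1.0, 2.0])   # in wavelengths
numeric = np.array([uniform_sky_visibility(b) for b in baselines])
```

    The rapid sinc fall-off is why the paper explores beam splitters and other ways of enhancing the coherence: ordinary baselines longer than a fraction of a wavelength see almost none of the monopole.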

  4. A comparative study of intervening and associated H I 21-cm absorption profiles in redshifted galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Duchesne, S. W.; Divoli, A.; Allison, J. R.

    2016-11-01

    The star-forming reservoir in the distant Universe can be detected through H I 21-cm absorption arising from either cool gas associated with a radio source or from within a galaxy intervening the sightline to the continuum source. In order to test whether the nature of the absorber can be predicted from the profile shape, we have compiled and analysed all of the known redshifted (z ≥ 0.1) H I 21-cm absorption profiles. Although between individual spectra there is too much variation to assign a typical spectral profile, we confirm that associated absorption profiles are, on average, wider than their intervening counterparts. It is widely hypothesized that this is due to high-velocity nuclear gas feeding the central engine, absent in the more quiescent intervening absorbers. Modelling the column density distribution of the mean associated and intervening spectra, we confirm that the additional low optical depth, wide dispersion component, typical of associated absorbers, arises from gas within the inner parsec. With regard to the potential of predicting the absorber type in the absence of optical spectroscopy, we have implemented machine learning techniques to the 55 associated and 43 intervening spectra, with each of the tested models giving a ≳ 80 per cent accuracy in the prediction of the absorber type. Given the impracticability of follow-up optical spectroscopy of the large number of 21-cm detections expected from the next generation of large radio telescopes, this could provide a powerful new technique with which to determine the nature of the absorbing galaxy.
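    The classification idea can be illustrated with a toy stand-in for the paper's machine-learning models (which are trained on real spectra, not on a single summary statistic). Here associated absorbers are drawn with systematically larger velocity widths, and a nearest-centroid rule on the width alone separates the two classes; the widths, sample sizes, and the train-equals-test simplification are all illustrative.

```python
import numpy as np

# Toy nearest-centroid classifier on profile width, standing in for the
# paper's ML models. Associated absorbers are wider on average, per the
# abstract; all distributions here are invented for illustration, and
# the centroids are fit on the same toy sample they classify.
rng = np.random.default_rng(6)
assoc = rng.normal(300.0, 120.0, 55).clip(min=10)   # km/s widths, wider
inter = rng.normal(100.0, 50.0, 43).clip(min=10)    # km/s widths, narrower

widths = np.concatenate([assoc, inter])
labels = np.concatenate([np.ones(55), np.zeros(43)])  # 1 = associated

# Assign each profile to the class with the nearer mean width.
c_assoc, c_inter = assoc.mean(), inter.mean()
pred = (np.abs(widths - c_assoc) < np.abs(widths - c_inter)).astype(float)
accuracy = (pred == labels).mean()
```

    Even this one-feature rule separates the toy classes well; the paper's point is that trained models reach ≳80 per cent on real spectra, where the width distributions overlap far more than in this sketch.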

  5. Extracting Physical Parameters for the First Galaxies from the Cosmic Dawn Global 21-cm Spectrum

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Mirocha, Jordan; Harker, Geraint; Tauscher, Keith; Datta, Abhirup

    2016-01-01

    The all-sky or global redshifted 21-cm HI signal is a potentially powerful probe of the first luminous objects and their environs during the transition from the Dark Ages to Cosmic Dawn (35 > z > 6). The first stars, black holes, and galaxies heat and ionize the surrounding intergalactic medium, composed mainly of neutral hydrogen, so the hyperfine 21-cm transition can be used to indirectly study these early radiation sources. The properties of these objects can be examined via the broad absorption and emission features that are expected in the spectrum. The Dark Ages Radio Explorer (DARE) is proposed to conduct these observations at low radio astronomy frequencies, 40-120 MHz, in a 125 km orbit about the Moon. The Moon occults both the Earth and the Sun as DARE makes observations above the lunar farside, thus eliminating the corrupting effects from Earth's ionosphere, radio frequency interference, and solar nanoflares. The signal is extracted from the galactic/extragalactic foreground employing Bayesian methods, including Markov Chain Monte Carlo (MCMC) techniques. Theory indicates that the 21-cm signal is well described by a model in which the evolution of various physical quantities follows a hyperbolic tangent (tanh) function of redshift. We show that this approach accurately captures degeneracies and covariances between parameters, including those related to the signal, foreground, and the instrument. Furthermore, we also demonstrate that MCMC fits will set meaningful constraints on the Ly-α, ionizing, and X-ray backgrounds along with the minimum virial temperature of the first star-forming halos.
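    The tanh parametrization referred to above is simple to write down: each physical quantity (e.g. the ionized fraction or IGM temperature) steps between two asymptotic values around a reference redshift with a characteristic duration. The parameter values below are illustrative, not DARE fit results.

```python
import numpy as np

# Hyperbolic-tangent evolution used to parametrize physical quantities
# in global 21-cm signal models. Parameter values are illustrative.
def tanh_model(z, value_early, value_late, z_ref, dz):
    """Step from value_early (high z) to value_late (low z) around z_ref."""
    return value_early + 0.5 * (value_late - value_early) * (
        1 + np.tanh((z_ref - z) / dz))

z = np.linspace(6.0, 35.0, 300)
x_ion = tanh_model(z, 0.0, 1.0, z_ref=8.0, dz=1.0)   # toy ionized fraction
```

    Fitting the handful of (value, z_ref, dz) parameters per quantity, rather than a free-form history, is what keeps the MCMC problem tractable and the parameter degeneracies interpretable.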

  6. The global 21-cm signal in the context of the high- z galaxy luminosity function

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan; Furlanetto, Steven R.; Sun, Guochao

    2017-01-01

    We build a new model for the global 21-cm signal that is calibrated to measurements of the high-z galaxy luminosity function (LF) and further tuned to match the Thomson scattering optical depth of the cosmic microwave background, τe. Assuming that the z ≲ 8 galaxy population can be smoothly extrapolated to higher redshifts, the recent decline in best-fitting values of τe and the inefficient heating induced by X-ray binaries (the presumptive sources of the high-z X-ray background) imply that the entirety of cosmic reionization and reheating occurs at z ≲ 12. In contrast to past global 21-cm models, whose z ˜ 20 (ν ˜ 70 MHz) absorption features and strong ˜25 mK emission features were driven largely by the assumption of efficient early star formation and X-ray heating, our new models peak in absorption at ν ˜ 110 MHz at depths ˜-160 mK and have negligible emission components. Current uncertainties in the faint-end of the LF, binary populations in star-forming galaxies, and UV and X-ray escape fractions introduce ˜20 MHz (˜50 mK) deviations in the trough's frequency (amplitude), while emission signals remain weak (≲10 mK) and are confined to ν ≳ 140 MHz. These predictions, which are intentionally conservative, suggest that the detection of a 21-cm absorption minimum at frequencies below ˜90 MHz and/or emission signals stronger than ˜10mK at ν ≲ 140 MHz would provide strong evidence for `new' sources at high redshifts, such as Population III stars and their remnants.

  7. On the Detection of Global 21-cm Signal from Reionization Using Interferometers

    NASA Astrophysics Data System (ADS)

    Singh, Saurabh; Subrahmanyan, Ravi; Udaya Shankar, N.; Raghunathan, A.

    2015-12-01

    Detection of the global redshifted 21-cm signal is an excellent means of deciphering the physical processes during the Dark Ages and subsequent Epoch of Reionization (EoR). However, detection of this faint monopole is challenging due to the high precision required in instrumental calibration and in modeling of substantially brighter foregrounds and instrumental systematics. In particular, modeling of receiver noise with mK accuracy and its separation remains a formidable task in experiments aiming to detect the global signal using single-element spectral radiometers. Interferometers do not respond to receiver noise; therefore, here we explore the theory of the response of interferometers to global signals. In other words, we discuss the spatial coherence in the electric field arising from the monopole component of the 21-cm signal and methods for its detection using sensor arrays. We proceed by first deriving the response to a uniform sky of two-element interferometers made of unit dipole and resonant loop antennas, then extend the analysis to interferometers made of one-dimensional arrays, and also consider two-dimensional aperture antennas. Finally, we describe methods by which the coherence might be enhanced so that the interferometer measurements yield improved sensitivity to the monopole component. We conclude (a) that it is indeed possible to measure the global 21-cm signal from the EoR using interferometers, (b) that a practically useful configuration is with omnidirectional antennas as interferometer elements, and (c) that the spatial coherence may be enhanced using, for example, a space beam splitter between the interferometer elements.

  8. 21-cm lensing and the cold spot in the cosmic microwave background.

    PubMed

    Kovetz, Ely D; Kamionkowski, Marc

    2013-04-26

    An extremely large void and a cosmic texture are two possible explanations for the cold spot seen in the cosmic microwave background. We investigate how well these two hypotheses can be tested with weak lensing of 21-cm fluctuations from the epoch of reionization measured with the Square Kilometer Array. While the void explanation for the cold spot can be tested with Square Kilometer Array, given enough observation time, the texture scenario requires significantly prolonged observations, at the highest frequencies that correspond to the epoch of reionization, over the field of view containing the cold spot.

  9. Comparison of 2.8- and 21-cm microwave radiometer observations over soils with emission model calculations

    NASA Technical Reports Server (NTRS)

    Burke, W. J.; Schmugge, T.; Paris, J. F.

    1979-01-01

    An airborne experiment was conducted under NASA auspices to test the feasibility of detecting soil moisture by microwave remote sensing techniques over agricultural fields near Phoenix, Arizona, at midday of April 5, 1974 and at dawn of the following day. Extensive ground data were obtained from 96 bare, 16-hectare fields. Observations made using a scanning (2.8 cm) and a nonscanning (21 cm) radiometer were compared with the predictions of a radiative transfer emission model. It is shown that (1) the emitted intensity at both wavelengths correlates best with the near-surface moisture, (2) surface roughness affects the degree of polarization more strongly than the emitted intensity, (3) the slope of the intensity-moisture curves decreases in going from midday to dawn, and (4) increased near-surface moisture at dawn is characterized by increased polarization of emissions. The results of the experiment indicate that microwave techniques can be used to observe the history of the near-surface moisture. The subsurface history must be inferred from soil physics models which use microwave results as boundary conditions.

  10. Limits on variations in fundamental constants from 21-cm and ultraviolet Quasar absorption lines.

    PubMed

    Tzanavaris, P; Webb, J K; Murphy, M T; Flambaum, V V; Curran, S J

    2005-07-22

    Quasar absorption spectra at 21-cm and UV rest wavelengths are used to estimate the time variation of x ≡ α²g_pμ, where α is the fine-structure constant, g_p the proton g factor, and μ ≡ m_e/m_p the electron-to-proton mass ratio. Over the redshift range 0.24 ≤ z_abs ≤ 2.04, the weighted mean over the full sample is Δx/x = (1.17 ± 1.01) × 10⁻⁵. A linear fit gives ẋ/x = (-1.43 ± 1.27) × 10⁻¹⁵ yr⁻¹. Combined with two previous results on varying α, this yields the strong limits Δμ/μ = (2.31 ± 1.03) × 10⁻⁵ and Δμ/μ = (1.29 ± 1.01) × 10⁻⁵. Our sample, 8× larger than any previous one, provides the first direct estimate of the intrinsic 21-cm and UV velocity differences, ∼6 km s⁻¹.
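
    To see how such a combined limit constrains μ: since x ≡ α²g_pμ, small fractional variations add as Δx/x ≈ 2Δα/α + Δμ/μ when g_p is held fixed, so an external limit on Δα/α converts a 21-cm/UV measurement of Δx/x into a limit on Δμ/μ. A minimal sketch; the input numbers are hypothetical and only of the quoted order of magnitude.

    ```python
    import math

    # x = alpha^2 * g_p * mu, so for small variations (g_p held fixed):
    #   Delta x / x ~= 2 * Delta alpha / alpha + Delta mu / mu.
    # An external limit on Delta alpha / alpha therefore converts a
    # measurement of Delta x / x into a limit on Delta mu / mu.
    def delta_mu_over_mu(dx_x, dx_x_err, da_a, da_a_err):
        value = dx_x - 2.0 * da_a
        error = math.sqrt(dx_x_err**2 + (2.0 * da_a_err)**2)  # in quadrature
        return value, error

    # Hypothetical inputs of the quoted order of magnitude:
    val, err = delta_mu_over_mu(1.17e-5, 1.01e-5, -0.5e-5, 0.1e-5)
    ```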

  11. A Low-cost 21 cm Horn-antenna Radio Telescope for Education and Outreach

    NASA Astrophysics Data System (ADS)

    Patel, Nimesh A.; Patel, Rishi N; Kimberk, Robert S; Test, John H; Krolewski, Alex; Ryan, James; Karkare, Kirit S; Kovac, John M; Dame, Thomas M.

    2014-06-01

    Small radio telescopes (1-3m) for observations of the 21 cm hydrogen line are widely used for education and outreach. A pyramidal horn was used by Ewen & Purcell (1951) to first detect the 21cm line at Harvard. Such a horn is simple to design and build, compared to a parabolic antenna which is usually purchased ready-made. Here we present a design of a horn antenna radio telescope that can be built entirely by students, using simple components costing less than $300. The horn has an aperture of 75 cm along the H-plane, 59 cm along the E-plane, and gain of about 20 dB. The receiver system consists of low noise amplifiers, band-pass filters and a software-defined-radio USB receiver that provides digitized samples for spectral processing in a computer. Starting from construction of the horn antenna, and ending with the measurement of the Galactic rotation curve, took about 6 weeks, as part of an undergraduate course at Harvard University. The project can also grow towards building a two-element interferometer for follow-up studies.
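
    Turning such horn spectra into a rotation curve rests on the Doppler relation between observed frequency and line-of-sight velocity. A minimal sketch, using the non-relativistic approximation (adequate for Galactic HI velocities):

    ```python
    # Radial velocity from an observed 21 cm line frequency via the
    # non-relativistic Doppler formula, as needed when turning horn-antenna
    # spectra into a Galactic rotation curve.  Frequencies in MHz.
    C_KM_S = 299792.458        # speed of light, km/s
    F_HI = 1420.405751         # HI hyperfine rest frequency, MHz

    def radial_velocity(f_obs_mhz):
        """Positive velocity = receding gas (frequency shifted downward)."""
        return C_KM_S * (F_HI - f_obs_mhz) / F_HI

    # Gas observed 0.5 MHz below the rest frequency recedes at ~105 km/s.
    v = radial_velocity(F_HI - 0.5)
    ```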

  12. Constraints on the neutrino parameters by future cosmological 21 cm line and precise CMB polarization observations

    SciTech Connect

    Oyama, Yoshihiko; Kohri, Kazunori; Hazumi, Masashi E-mail: kohri@post.kek.jp

    2016-02-01

    Observations of the 21 cm line radiation coming from the epoch of reionization have a great capacity to probe the cosmological growth of the Universe. In addition, CMB polarization produced by gravitational lensing carries a large amount of information about the growth of matter fluctuations at late times. In this paper, we investigate their sensitivities to the impact of neutrino properties on the growth of density fluctuations, such as the total neutrino mass, the effective number of neutrino species (extra radiation), and the neutrino mass hierarchy. We show that by combining a precise CMB polarization observation such as Simons Array with a 21 cm line observation such as Square Kilometre Array (SKA) phase 1 and a baryon acoustic oscillation observation (Dark Energy Spectroscopic Instrument, DESI), we can measure the effects of non-zero neutrino mass on the growth of density fluctuations if the total neutrino mass is larger than 0.1 eV. Additionally, these combinations can strongly improve the bounds on the effective number of neutrino species, to σ(N_ν) ∼ 0.06-0.09 at 95% C.L. Finally, by using SKA phase 2, we can determine the neutrino mass hierarchy at 95% C.L. if the total neutrino mass is similar to or smaller than 0.1 eV.

  13. Cosmological signatures of tilted isocurvature perturbations: reionization and 21cm fluctuations

    SciTech Connect

    Sekiguchi, Toyokazu; Sugiyama, Naoshi; Tashiro, Hiroyuki; Silk, Joseph E-mail: hiroyuki.tashiro@asu.edu E-mail: naoshi@nagoya-u.jp

    2014-03-01

    We investigate cosmological signatures of uncorrelated isocurvature perturbations whose power spectrum is strongly blue-tilted, through their effects on cosmic reionization and on 21cm line fluctuations due to neutral hydrogen in minihalos. Combining measurements of the reionization optical depth and 21cm line fluctuations will provide complementary probes of a highly blue-tilted isocurvature power spectrum.

  14. 21 cm signal from cosmic dawn - II. Imprints of the light-cone effects

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Datta, Kanan K.; Choudhury, T. Roy

    2015-11-01

    Details of various unknown physical processes during the cosmic dawn and the epoch of reionization can be extracted from observations of the redshifted 21 cm signal. These observations, however, will be affected by the evolution of the signal along the line of sight, which is known as the `light-cone effect'. We model this effect by post-processing a dark matter N-body simulation with a 1D radiative transfer code. We find that the effect is much stronger and more dramatic in the presence of inhomogeneous heating and Ly α coupling than when these processes are not accounted for. One finds an increase (decrease) in the spherically averaged power spectrum of up to a factor of 3 (0.6) at large scales (k ˜ 0.05 Mpc⁻¹) when the light-cone effect is included, though these numbers depend strongly on the source model. The effect is particularly significant near the peak and dip-like features seen in the power spectrum. The peaks and dips are suppressed, and thus the power spectrum can be smoothed out to a large extent if the frequency band used in the experiment is wide. We argue that it is important to account for the light-cone effect in any prediction of the 21-cm signal during the cosmic dawn.

  15. Signatures of clumpy dark matter in the global 21 cm background signal

    SciTech Connect

    Cumberbatch, Daniel T.; Lattanzi, Massimiliano; Silk, Joseph

    2010-11-15

    We examine the extent to which the self-annihilation of supersymmetric neutralino dark matter, as well as light dark matter, influences the rate of heating, ionization, and Lyman-α pumping of interstellar hydrogen and helium, and the extent to which this is manifested in the 21 cm global background signal. We fully consider the enhancements to the annihilation rate from dark matter halos and substructures within them. We find that the influence of such structures can result in significant changes in the differential brightness temperature, δT_b. The changes at redshifts z < 25 are likely to be undetectable due to the presence of the astrophysical signal; however, in the most favorable cases, deviations in δT_b, relative to its value in the absence of self-annihilating dark matter, of up to ≈20 mK at z = 30 can occur. Thus we conclude that, in order to exclude these models, experiments measuring the global 21 cm signal, such as EDGES and CORE, will need to reduce the systematics at 50 MHz to below 20 mK.

  16. Incidence of H I 21-cm absorption in strong Fe II systems at 0.5 < z < 1.5

    NASA Astrophysics Data System (ADS)

    Dutta, R.; Srianand, R.; Gupta, N.; Joshi, R.; Petitjean, P.; Noterdaeme, P.; Ge, J.; Krogager, J.-K.

    2017-03-01

    We present the results from our search for H I 21-cm absorption in a sample of 16 strong Fe II systems [W_r(Mg II λ2796) ≥ 1.0 Å and W_r(Fe II λ2600), hereafter W_{Fe II}, ≥ 1 Å] at 0.5 < z < 1.5 using the Giant Metrewave Radio Telescope and the Green Bank Telescope. We report six new H I 21-cm absorption detections from our sample, which increase the known number of detections in strong Mg II systems in this redshift range by ∼50 per cent. Combining our measurements with those in the literature, we find that the detection rate of H I 21-cm absorption increases with W_{Fe II}, being four times higher in systems with W_{Fe II} ≥ 1 Å than in systems with W_{Fe II} < 1 Å. The N(H I) associated with the H I 21-cm absorbers would be ≥2 × 10²⁰ cm⁻², assuming a spin temperature of ∼500 K (based on H I 21-cm absorption measurements of damped Lyman α systems in this redshift range) and unit covering factor. We find that H I 21-cm absorption arises on average in systems with stronger metal absorption. We also find that quasars with H I 21-cm absorption detected towards them have systematically higher E(B - V) values than those without. Further, by comparing the velocity widths of H I 21-cm absorption lines detected in absorption- and galaxy-selected samples, we find that they show an increasing trend (significant at 3.8σ) with redshift at z < 3.5, which could imply that the absorption originates from more massive galaxy haloes at high z. Increasing the number of H I 21-cm absorption detections at these redshifts is important to confirm the various trends noted here with higher statistical significance.

  17. The 21 cm signature of shock heated and diffuse cosmic string wakes

    SciTech Connect

    Hernández, Oscar F.; Brandenberger, Robert H. E-mail: rhb@physics.mcgill.ca

    2012-07-01

    The analysis of the 21 cm signature of cosmic string wakes is extended in several ways. First, we consider the constraints on Gμ from the absorption signal of shock-heated wakes laid down much later than matter-radiation equality. Secondly, we analyze the signal of diffuse wakes, that is, those wakes in which there is a baryon overdensity but which have not shock heated. Finally, we compare the size of these signals to the expected thermal noise per pixel, which dominates over the background cosmic gas brightness temperature, and find that the cosmic string signal will exceed the thermal noise of an individual pixel in the Square Kilometre Array for string tensions Gμ > 2.5 × 10⁻⁸.

  18. Method for direct measurement of cosmic acceleration by 21-cm absorption systems.

    PubMed

    Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li

    2014-07-25

    So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively.
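
    The underlying observable is the redshift drift ż = (1+z)H₀ - H(z), which accumulates as a velocity drift Δv ≈ c ż Δt/(1+z) over an observing baseline Δt. A sketch for flat ΛCDM with illustrative fiducial parameters (not the paper's forecast machinery):

    ```python
    import math

    # Sandage-Loeb velocity drift for a comoving source in flat LCDM:
    #   zdot = (1 + z) * H0 - H(z),   dv ~= c * zdot * dt / (1 + z).
    # H0 and Omega_m are illustrative fiducial values, not the paper's.
    H0 = 70.0                    # km/s/Mpc
    OMEGA_M = 0.3
    KM_PER_MPC = 3.0857e19
    SEC_PER_YR = 3.156e7
    C_CM_S = 2.99792458e10

    def hubble(z):
        return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

    def velocity_drift(z, years):
        """Accumulated velocity drift in cm/s over an observing baseline."""
        zdot = ((1 + z) * H0 - hubble(z)) / KM_PER_MPC   # s^-1
        return C_CM_S * zdot * years * SEC_PER_YR / (1 + z)

    # Positive (accelerating) at low z, negative in the matter-dominated era:
    dv_low = velocity_drift(0.5, 10)    # a few cm/s over a decade
    dv_high = velocity_drift(3.0, 10)   # negative
    ```

    The cm/s-per-decade magnitude is what sets the stringent calibration and data-handling requirements quoted in the abstract.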

  20. Intensity Based Seismic Hazard Map of Republic of Macedonia

    NASA Astrophysics Data System (ADS)

    Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta

    2016-04-01

    The territory of the Republic of Macedonia and the border terrains are among the most seismically active parts of the Balkan Peninsula, belonging to the Mediterranean-Trans-Asian seismic belt. The seismological data on the R. Macedonia from the past 16 centuries point to the occurrence of very strong catastrophic earthquakes. The hypocenters of the occurred earthquakes are located above the Mohorovicic discontinuity, most frequently at a depth of 10-20 km. Accurate short-term prognosis of earthquake occurrence, i.e., simultaneous prognosis of the time, place and intensity of their occurrence, is still not possible. The present methods of seismic zoning have, however, advanced to such an extent that, with great probability, they enable efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary for definition of the expected level of seismic hazard for certain time periods. These should be amended, from time to time, with new data and scientific knowledge. The elaboration of this map does not completely solve all issues related to earthquakes, but it provides the basic empirical data necessary for updating the existing regulations for construction of engineering structures in seismically active areas, which are governed by legal regulations and technical norms of which the seismic hazard map is a constituent part. The map has been elaborated based on complex seismological and geophysical investigations of the considered area and synthesis of the results from these investigations. The map was elaborated in two phases. In the first phase, the map of focal zones characterized by maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides assessment of the

  1. Using 21 cm absorption surveys to measure the average H I spin temperature in distant galaxies

    NASA Astrophysics Data System (ADS)

    Allison, J. R.; Zwaan, M. A.; Duchesne, S. W.; Curran, S. J.

    2016-10-01

    We present a statistical method for measuring the average H I spin temperature in distant galaxies using the expected detection yields from future wide-field 21 cm absorption surveys. As a demonstrative case study, we consider an all-southern-sky simulated survey of 2 h per pointing with the Australian Square Kilometre Array Pathfinder for intervening H I absorbers at intermediate cosmological redshifts between z = 0.4 and 1. For example, if such a survey yielded 1000 absorbers, we would infer a harmonic-mean spin temperature of T̄_spin ˜ 100 K for the population of damped Lyman α absorbers (DLAs) at these redshifts, indicating that more than 50 per cent of the neutral gas in these systems is in a cold neutral medium (CNM). Conversely, a lower yield of only 100 detections would imply T̄_spin ˜ 1000 K and a CNM fraction less than 10 per cent. We propose that this method can be used to provide independent verification of the spin temperature evolution reported in recent 21 cm surveys of known DLAs at high redshift, and for measuring the spin temperature at intermediate redshifts below z ≈ 1.7, where the Lyman α line is inaccessible using ground-based observatories. Increasingly sensitive and larger surveys with the Square Kilometre Array should provide stronger statistical constraints on the average spin temperature. However, these will ultimately be limited by the accuracy to which we can determine the H I column density frequency distribution, the covering factor and the redshift distribution of the background radio source population.
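
    The link between the harmonic-mean spin temperature and the quoted CNM fractions follows from column-density weighting over a two-phase medium: 1/T̄_spin = f_CNM/T_CNM + (1 - f_CNM)/T_WNM. A sketch with assumed illustrative phase temperatures of 100 K (CNM) and 8000 K (WNM):

    ```python
    # Column-density-weighted harmonic-mean spin temperature of a two-phase
    # medium.  T_CNM = 100 K and T_WNM = 8000 K are assumed illustrative
    # phase temperatures; f_cnm is the fraction of HI in the cold phase.
    def harmonic_mean_tspin(f_cnm, t_cnm=100.0, t_wnm=8000.0):
        return 1.0 / (f_cnm / t_cnm + (1.0 - f_cnm) / t_wnm)

    t_half = harmonic_mean_tspin(0.5)    # ~200 K: half the gas is cold
    t_low = harmonic_mean_tspin(0.08)    # ~1100 K: CNM fraction below 10%
    ```

    Because the harmonic mean is dominated by the coldest gas, a mean near 100 K forces a majority-cold medium, as the abstract argues.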

  2. First limits on the 21 cm power spectrum during the Epoch of X-ray heating

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, A.; Dillon, Joshua S.; Hewitt, J. N.; Loeb, A.; Mesinger, A.; Neben, A. R.; Offringa, A. R.; Tegmark, M.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Hurley-Walker, N.; Johnston-Hollitt, M.; Jacobs, Daniel C.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Thyagarajan, Nithyanandan; Oberoi, D.; Ord, S. M.; Paul, S.; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-08-01

    We present first results from radio observations with the Murchison Widefield Array seeking to constrain the power spectrum of 21 cm brightness temperature fluctuations between the redshifts of 11.6 and 17.9 (113 and 75 MHz). Three hours of observations were conducted over two nights with significantly different levels of ionospheric activity. We use these data to assess the impact of systematic errors at low frequency, including the ionosphere and radio-frequency interference (RFI), on a power spectrum measurement. We find that after the 1-3 h of integration presented here, our measurements at the Murchison Radio Observatory are not limited by RFI, even within the FM band, and that the ionosphere does not appear to affect the level of power in the modes that we expect to be sensitive to cosmology. Power spectrum detections, inconsistent with noise, due to fine spectral structure imprinted on the foregrounds by reflections in the signal chain, occupy the spatial Fourier modes where we would otherwise be most sensitive to the cosmological signal. We are able to reduce this contamination using calibration solutions derived from autocorrelations, achieving a sensitivity of 10⁴ mK on comoving scales k ≲ 0.5 h Mpc⁻¹. This represents the first upper limit on the 21 cm power spectrum fluctuations at redshifts 12 ≲ z ≲ 18, but it is still limited by calibration systematics. While calibration improvements may allow us to further remove this contamination, our results emphasize that future experiments should carefully consider both the existence of spectral structure within the EoR window and their ability to calibrate it out.

  3. Probing Individual Sources during Reionization and Cosmic Dawn using Square Kilometre Array HI 21-cm Observations

    NASA Astrophysics Data System (ADS)

    Datta, Kanan K.; Ghara, Raghunath; Majumdar, Suman; Choudhury, T. Roy; Bharadwaj, Somnath; Roy, Himadri; Datta, Abhirup

    2016-12-01

    Detection of individual luminous sources during the reionization epoch and cosmic dawn through their signatures in the HI 21-cm signal is one of the direct approaches to probe the epoch. Here, we summarize our previous works on this and present preliminary results on the prospects of detecting such sources using the SKA1-low experiment. We first discuss the expected HI 21-cm signal around luminous sources at different stages of reionization and cosmic dawn. We then introduce two visibility-based estimators for detecting such signals: one based on the matched filtering technique, and the other relying on simply combining the visibility signal from different baselines and frequency channels. We find that SKA1-low should be able to detect ionized bubbles of radius R_b ≳ 10 Mpc with ˜100 h of observations at redshift z ˜ 8, provided that the mean outside neutral hydrogen fraction x_HI ≳ 0.5. We also investigate the possibility of detecting HII regions around known bright QSOs, such as the one around ULASJ1120+0641 discovered by Mortlock et al. (Nature 474, 7353 (2011)). We find that a 5σ detection is possible with 600 h of SKA1-low observations if the QSO age and the outside x_HI are at least ˜2×10⁷ yr and ˜0.2, respectively. Finally, we investigate the possibility of detecting the very first X-ray and Ly-α sources during the cosmic dawn. We consider mini-QSO-like sources which emit in the X-ray frequency band. We find that with a total of ˜1000 h of observations, SKA1-low should be able to detect those sources individually with a ˜9σ significance at redshift z = 15. We summarize how the SNR changes with various parameters related to the source properties.

  4. Simulating the large-scale structure of HI intensity maps

    SciTech Connect

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel E-mail: aseem@iucaa.in E-mail: alexandre.refregier@phys.ethz.ch E-mail: joel.akeret@phys.ethz.ch

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M_⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M_⊙/h < M_halo < 10¹³ M_⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
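
    A phenomenological halo-to-HI assignment of this general kind can be illustrated with a toy relation: a power law in halo mass with an exponential cutoff suppressing HI in low-mass halos. The functional form and every parameter below are hypothetical, not the values used in the paper:

    ```python
    import numpy as np

    # Hypothetical halo-to-HI mass relation of the phenomenological kind used
    # in intensity-mapping simulations: power law in halo mass with an
    # exponential cutoff below m_cut.  Every parameter here is illustrative.
    def m_hi(m_halo, a=0.2, alpha=0.8, m_cut=1e10):
        """HI mass assigned to a halo of mass m_halo (both in Msun/h)."""
        m_halo = np.asarray(m_halo, dtype=float)
        return a * m_halo**alpha * np.exp(-m_cut / m_halo)

    halos = np.array([1e9, 1e10, 1e12])   # Msun/h
    hi = m_hi(halos)   # HI strongly suppressed in the 1e9 Msun/h halo
    ```

    Summing the assigned HI masses on a sky-redshift grid then yields the brightness-temperature maps whose clustering is analyzed.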

  5. A 21 cm Spectral and Continuum Study of IC 443 Using the Very Large Array and the Arecibo Telescope

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Joon; Koo, Bon-Chul; Yun, Min S.; Stanimirović, Snežana; Heiles, Carl; Heyer, Mark

    2008-03-01

    We report 21 cm spectral-line and continuum observations of the Galactic supernova remnant IC 443 using the Very Large Array (VLA) and the Arecibo telescope. By combining the VLA and Arecibo data, both covering the full extent of IC 443, we have achieved an unprecedented combination of sensitivity and angular resolution, over the continuous range of angular scales from ~40'' to ~1°. Our new radio observations not only reveal previously unknown features of IC 443 but also show the details of the remnant more clearly. The radio morphology of IC 443 consists of two nearly concentric shells. Our 21 cm radio continuum data show that the two shells have distinctly different radial intensity distributions. This morphology supports the scenario whereby the western shell is a breakout portion of the remnant into a rarefied medium. We have developed a dynamical model accounting for the breakout, which provides an estimate for the remnant age of ~2 × 10⁴ yr. The southeastern boundary of the remnant shows interesting features, seen in our observations for the first time: a faint radio continuum halo and numerous "spurs." These features are mainly found in the region where IC 443 overlaps with another remnant, G189.6+3.3. They most likely originate from the interactions of IC 443 with the surrounding medium. The H I emission associated with IC 443 appears over the velocity range between -100 km s⁻¹ and 50 km s⁻¹. The strongest absorption is seen around v_LSR ~ -5 km s⁻¹, which corresponds to the systemic velocity of IC 443. We identify a broad, extended lane of H I gas near the systemic velocity as preshock gas in the southern part of the remnant. Most of the shocked H I gas is located along the southern supernova remnant (SNR) boundary and is blueshifted. We derive an accurate mass of the shocked H I gas using template HCO+ (1-0) spectra, which is 493 ± 56 M_⊙. Our high-resolution H I data enable us to resolve the shocked H I in the northeastern region into a

  6. Methods to Map Cropping Intensity Using MODIS Data (Invited)

    NASA Astrophysics Data System (ADS)

    Jain, M.; Mondal, P.; DeFries, R. S.; Small, C.; Galford, G. L.

    2013-12-01

    The food security of smallholder farmers is vulnerable to climate change and climate variability. Cropping intensity, the number of crops planted annually, can be used as a measure of food security for smallholder farmers given that it can greatly affect net production. Remote sensing tools and techniques offer a unique way to map cropping patterns over large spatial and temporal scales as well as in real time. Yet current techniques for quantifying cropping intensity using remote sensing may not accurately map smallholder farms, where the size of one agricultural plot is typically smaller than the spatial resolution of readily available satellite data like MODIS (250 m) and sometimes Landsat (30 m). This presentation describes techniques to map cropping intensity by quantifying the amount of cropped area at a 1 x 1 km scale using MODIS satellite data in study regions in India. Specifically, we present two methods to map cropped area, which are validated using higher-resolution Quickbird and Landsat data. The first method uses Landsat data to train MODIS data - while this method has fairly high accuracy (R² > 0.80), it is difficult to automate over large spatial and temporal scales. The second method uses only MODIS data to quantify cropped area - this method is easy to automate over large spatial and temporal scales but has slightly reduced accuracy. To illustrate the utility of these methods, we present maps of cropping intensity across several regions in India and show how these data can be related to changes in cropped area through time with contemporaneous climate and irrigation data.
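
    The core idea behind satellite-based cropping-intensity mapping is that each crop season produces one greenness cycle in a pixel's vegetation-index time series. A toy sketch of that idea as threshold-crossing cycle counting; the NDVI series and threshold below are hypothetical, and the actual methods in the presentation work on cropped-area fractions rather than this simplification:

    ```python
    # Toy cropping-intensity estimate: count greenness cycles in a pixel's
    # annual NDVI series, one cycle per crop season.  The series and the
    # threshold below are hypothetical.
    def count_seasons(ndvi, threshold=0.4):
        """Count rising crossings of the threshold in a 1-D NDVI series."""
        crossings = 0
        above = ndvi[0] >= threshold
        for value in ndvi[1:]:
            if not above and value >= threshold:
                crossings += 1
            above = value >= threshold
        return crossings

    # A double-cropped pixel shows two greenness cycles in one year:
    ndvi = [0.2, 0.3, 0.6, 0.7, 0.5, 0.3, 0.2, 0.4, 0.65, 0.6, 0.35, 0.2]
    seasons = count_seasons(ndvi)   # 2
    ```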

  7. Detector modules and spectrometers for the TIME-Pilot [CII] intensity mapping experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, Jonathon; Bock, James; Bradford, C. Matt; Bumble, Bruce; Chang, Tzu-Ching; Cheng, Yun-Ting; Cooray, Asantha; Crites, Abigail; Hailey-Dunsheath, Steven; Gong, Yan; Li, Chao-Te; O'Brient, Roger; Shirokoff, Erik; Shiu, Corwin; Sun, Jason; Staniszewski, Zachary; Uzgil, Bade; Zemcov, Michael

    2016-07-01

    This proceeding presents the current TIME-Pilot instrument design and status, with a focus on the close-packed modular detector arrays and spectrometers. Results of laboratory tests with prototype detectors and spectrometers are discussed. TIME-Pilot is a new mm-wavelength grating spectrometer array under development that will study the Epoch of Reionization (the period of time when the first stars and galaxies ionized the intergalactic medium) by mapping the fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ˜ 5.2 to 8.5. As a tracer of star formation, the [CII] power spectrum can provide information on the sources driving reionization and complements 21 cm data (which traces neutral hydrogen in the intergalactic medium). Intensity mapping provides a measure of the mean [CII] intensity without the need to resolve and detect faint sources individually. We plan to target a 1 degree by 0.35 arcminute field on the sky and a spectral range of 199-305 GHz, producing a spatial-spectral slab which is 140 Mpc by 0.9 Mpc on-end and 1230 Mpc in the redshift direction. With careful removal of intermediate-redshift CO sources, we anticipate a detection of the halo-halo clustering term in the [CII] power spectrum consistent with current models for star formation history in 240 hours on the JCMT. TIME-Pilot will use two stacks of 16 parallel-plate waveguide spectrometers (one stack per polarization) with a resolving power R ˜ 100 and a spectral range of 183 to 326 GHz. The range is divided into 60 spectral channels, of which 16 at the band edges on each spectrometer serve as atmospheric monitors. The diffraction gratings are curved to produce a compact instrument, each focusing the diffracted light onto an output arc sampled by the 60 bolometers. The bolometers are built in buttable dies of 8 (low frequency) or 12 (high frequency) spectral channels by 8 spatial channels and are mated to the spectrometer stacks. Each detector

  8. Tests of the Tully-Fisher relation. 1: Scatter in infrared magnitude versus 21 cm width

    NASA Technical Reports Server (NTRS)

    Bernstein, Gary M.; Guhathakurta, Puragra; Raychaudhury, Somak; Giovanelli, Riccardo; Haynes, Martha P.; Herter, Terry; Vogt, Nicole P.

    1994-01-01

    We examine the precision of the Tully-Fisher relation (TFR) using a sample of galaxies in the Coma region of the sky, and find that it is good to 5% or better in measuring relative distances. Total magnitudes and disk axis ratios are derived from H and I band surface photometry, and Arecibo 21 cm profiles define the rotation speeds of the galaxies. Using 25 galaxies for which the disk inclination and 21 cm width are well defined, we find an rms deviation of 0.10 mag from a linear TFR with dI/d(log W_c) = -5.6. Each galaxy is assumed to be at a distance proportional to its redshift, and an extinction correction of 1.4(1 - b/a) mag is applied to the total I magnitude. The measured scatter is less than 0.15 mag using milder extinction laws from the literature. The I band TFR scatter is consistent with measurement error, and the 95% CL limits on the intrinsic scatter are 0-0.10 mag. The rms scatter using H band magnitudes is 0.20 mag (N = 17). The low width galaxies have scatter in H significantly in excess of known measurement error, but the higher width half of the galaxies have scatter consistent with measurement error. The H band TFR slope may be as steep as the I band slope. As the first applications of this tight correlation, we note the following: (1) the data for the particular spirals commonly used to define the TFR distance to the Coma cluster are inconsistent with being at a common distance and are in fact consistent with free Hubble expansion, with an upper limit of 300 km/s on the rms peculiar line-of-sight velocity of these gas-rich spirals; and (2) the gravitational potential in the disks of these galaxies has typical ellipticity less than 5%. The published data for three nearby spiral galaxies with Cepheid distance determinations are inconsistent with our Coma TFR, suggesting that these local calibrators are either ill-measured or peculiar relative to the Coma Supercluster spirals, or that the TFR has a varying form in different locales.
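    The two numerical ingredients quoted in this abstract, the internal-extinction correction of 1.4(1 - b/a) mag and the I-band TFR slope dI/d(log W_c) = -5.6, can be sketched as follows. This is a hedged illustration only: the sign convention on the extinction term, the zero point, and the sample galaxy values are assumptions, not taken from the paper.

```python
import math

def extinction_corrected_mag(i_total, b_over_a, coeff=1.4):
    """Apply the internal-extinction correction of coeff*(1 - b/a) mag
    quoted in the abstract. Correcting for extinction makes the galaxy
    brighter (smaller magnitude), so the term is subtracted here
    (assumed sign convention)."""
    return i_total - coeff * (1.0 - b_over_a)

def tfr_magnitude(log_wc, zero_point, slope=-5.6):
    """Linear Tully-Fisher relation with the I-band slope
    dI/d(log W_c) = -5.6 from the abstract; zero_point is hypothetical."""
    return zero_point + slope * log_wc

# Hypothetical galaxy: total I = 13.2 mag, axis ratio b/a = 0.4,
# corrected 21 cm width W_c = 400 km/s.
i_corr = extinction_corrected_mag(13.2, 0.4)   # 13.2 - 1.4*0.6 = 12.36
residual = i_corr - tfr_magnitude(math.log10(400.0), zero_point=26.9)
```

    A residual of ~0.1 mag or less against the fitted relation is what the paper's quoted rms deviation would correspond to for a single galaxy.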

  9. Radio frequency interference at Jodrell Bank Observatory within the protected 21 cm band

    NASA Technical Reports Server (NTRS)

    Tarter, J.

    1989-01-01

    Radio frequency interference (RFI) will provide one of the most difficult challenges to systematic Searches for Extraterrestrial Intelligence (SETI) at microwave frequencies. The SETI-specific equipment is being optimized for the detection of signals generated by a technology rather than those generated by natural processes in the universe. If this equipment performs as expected, then it will inevitably detect many signals originating from terrestrial technology. If these terrestrial signals are too numerous and/or strong, the equipment will effectively be blinded to the (presumably) weaker extraterrestrial signals being sought. It is very difficult to assess how much of a problem RFI will actually represent to future observations, without employing the equipment and beginning the search. In 1983 a very high resolution spectrometer was placed at the Nuffield Radio Astronomy Laboratories at Jodrell Bank, England. This equipment permitted an investigation of the interference environment at Jodrell Bank, at that epoch, and at frequencies within the 21 cm band. This band was chosen because it has long been "protected" by international agreement; no transmitters should have been operating at those frequencies. The data collected at Jodrell Bank were expected to serve as a "best case" interference scenario and provide the minimum design requirements for SETI equipment that must function in the real and noisy environment. This paper describes the data collection and analysis along with some preliminary conclusions concerning the nature of the interference environment at Jodrell Bank.

  11. Foregrounds for redshifted 21-cm studies of reionization: Giant Meter Wave Radio Telescope 153-MHz observations

    NASA Astrophysics Data System (ADS)

    Ali, Sk. Saiyad; Bharadwaj, Somnath; Chengalur, Jayaram N.

    2008-04-01

    Foreground subtraction is the biggest challenge for future redshifted 21-cm observations to probe reionization. We use a short Giant Meter Wave Radio Telescope (GMRT) observation at 153 MHz to characterize the statistical properties of the background radiation across ~1° to subarcmin angular scales, and across a frequency band of 5 MHz with 62.5 kHz resolution. The statistic we use is the visibility correlation function, or equivalently the angular power spectrum Cℓ. We present the results obtained from using relatively unsophisticated, conventional data calibration procedures. We find that even fairly simple-minded calibration allows one to estimate the visibility correlation function at a given frequency V2(U, 0). From our observations, we find that V2(U, 0) is consistent with foreground model predictions at all angular scales except the largest ones probed by our observations, where the model predictions are somewhat in excess. On the other hand, the visibility correlation between different frequencies κ(U, Δν) seems to be much more sensitive to calibration errors. We find a rapid decline in κ(U, Δν), in contrast with the prediction of less than 1 per cent variation across 2.5 MHz. In this case, however, it seems likely that a substantial part of the discrepancy may be due to limitations of data reduction procedures.
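    The two statistics named here can be illustrated with a toy estimator. This is a sketch under assumed conventions, not the paper's pipeline: it averages over the baselines in one |U| bin and omits the binning in U and the noise-bias treatment.

```python
import numpy as np

def visibility_correlation(vis_nu1, vis_nu2):
    """Toy two-frequency visibility correlation kappa(U, dnu) for one
    |U| bin: the average of V(U, nu1) * conj(V(U, nu2)) over baselines.
    Passing the same array twice gives the equal-frequency correlation
    V2(U, 0). The real analysis also subtracts a noise bias, which is
    omitted in this sketch."""
    v1 = np.asarray(vis_nu1, dtype=complex)
    v2 = np.asarray(vis_nu2, dtype=complex)
    return float(np.mean(v1 * np.conj(v2)).real)
```

    A rapid decline of this quantity with Δν, as reported above, would indicate decorrelation between nearby frequency channels, whether physical or calibration-induced.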

  12. Scintillation noise power spectrum and its impact on high-redshift 21-cm observations

    NASA Astrophysics Data System (ADS)

    Vedantham, H. K.; Koopmans, L. V. E.

    2016-05-01

    Visibility scintillation resulting from wave propagation through the turbulent ionosphere can be an important source of noise at low radio frequencies (ν ≲ 200 MHz). Many low-frequency experiments are underway to detect the power spectrum of brightness temperature fluctuations of the neutral-hydrogen 21-cm signal from the Epoch of Reionization (EoR: 12 ≳ z ≳ 7, 100 ≲ ν ≲ 175 MHz). In this paper, we derive scintillation noise power spectra in such experiments while taking into account the effects of typical data processing operations such as self-calibration and Fourier synthesis. We find that for minimally redundant arrays such as LOFAR and MWA, scintillation noise is of the same order of magnitude as thermal noise, has a spectral coherence dictated by stretching of the snapshot uv-coverage with frequency, and thus is confined to the well-known wedge-like structure in the cylindrical (two-dimensional) power spectrum space. Compact, fully redundant (dcore ≲ rF ≈ 300 m at 150 MHz) arrays such as HERA and SKA-LOW (core) will be scintillation noise dominated at all baselines, but the spatial and frequency coherence of this noise will allow it to be removed along with spectrally smooth foregrounds.

  13. Upper Limits on the 21 cm Epoch of Reionization Power Spectrum from One Night with LOFAR

    NASA Astrophysics Data System (ADS)

    Patil, A. H.; Yatawatta, S.; Koopmans, L. V. E.; de Bruyn, A. G.; Brentjens, M. A.; Zaroubi, S.; Asad, K. M. B.; Hatef, M.; Jelić, V.; Mevius, M.; Offringa, A. R.; Pandey, V. N.; Vedantham, H.; Abdalla, F. B.; Brouw, W. N.; Chapman, E.; Ciardi, B.; Gehlot, B. K.; Ghosh, A.; Harker, G.; Iliev, I. T.; Kakiichi, K.; Majumdar, S.; Mellema, G.; Silva, M. B.; Schaye, J.; Vrbanec, D.; Wijnholds, S. J.

    2017-03-01

    We present the first limits on the Epoch of Reionization 21 cm H i power spectra, in the redshift range z = 7.9–10.6, using the Low-Frequency Array (LOFAR) High-Band Antenna (HBA). In total, 13.0 hr of data were used from observations centered on the North Celestial Pole. After subtraction of the sky model and the noise bias, we detect a non-zero Δ²_I = (56 ± 13 mK)² (1-σ) excess variance and a best 2-σ upper limit of Δ²_21 < (79.6 mK)² at k = 0.053 h cMpc⁻¹ in the range z = 9.6–10.6. The excess variance decreases when optimizing the smoothness of the direction- and frequency-dependent gain calibration, and with increasing the completeness of the sky model. It is likely caused by (i) residual side-lobe noise on calibration baselines, (ii) leverage due to nonlinear effects, (iii) noise and ionosphere-induced gain errors, or a combination thereof. Further analyses of the excess variance will be discussed in forthcoming publications.

  14. Eliminating Polarized Leakage as a Systematic for 21 cm Epoch of Reionization Experiments

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; HERA Collaboration, PAPER Collaboration

    2016-01-01

    Because of the extreme brightness of foreground emission relative to the desired signal, experiments seeking the 21 cm HI signal from the epoch of reionization must employ foreground removal or avoidance strategies with high dynamic range. Almost all of these techniques rely on the spectral smoothness of the foreground emission, which is dominated by synchrotron emission. The polarized component of such emission can suffer Faraday rotation through the interstellar medium of the Milky Way, thereby inducing frequency structure which can be mistaken for real reionization signal. Therefore, it is of great importance for such experiments to eliminate leakage of Faraday-rotated, polarized emission into the unpolarized (Stokes I) component where the reionization signal lives. We discuss a number of approaches under investigation for mitigating this leakage in the PAPER and HERA experiments, including calibration and careful instrument design. Importantly, however, we show that the ionosphere may provide a very strong suppression of the polarized signal, when averaged over the integration times required for EoR experiments, by scrambling the phase of polarized sources. Moreover, this attenuation comes with very little suppression of the desired unpolarized signal. We consider the implications of this strategy for PAPER and HERA.

  15. Confirmation of Wide-field Signatures in Redshifted 21 cm Power Spectra

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Jacobs, Daniel C.; Bowman, Judd D.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Deshpande, A. A.; de Oliveira-Costa, A.; Dillon, Joshua S.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hernquist, L.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kim, Han-Seek; Kittiwisit, P.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, Sourabh; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Udaya Shankar, N.; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tegmark, M.; Tingay, S. J.; Trott, C. M.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.; Wyithe, J. S. B.

    2015-07-01

    We confirm our recent prediction of the “pitchfork” foreground signature in power spectra of high-redshift 21 cm measurements where the interferometer is sensitive to large-scale structure on all baselines. This is due to the inherent response of a wide-field instrument and is characterized by enhanced power from foreground emission in Fourier modes adjacent to those considered to be the most sensitive to the cosmological H i signal. In our recent paper, many signatures from the simulation that predicted this feature were validated against Murchison Widefield Array (MWA) data, but this key pitchfork signature was close to the noise level. In this paper, we improve the data sensitivity through the coherent averaging of 12 independent snapshots with identical instrument settings and provide the first confirmation of the prediction with a signal-to-noise ratio > 10. This wide-field effect can be mitigated by careful antenna designs that suppress sensitivity near the horizon. Simple models for antenna apertures that have been proposed for future instruments such as the Hydrogen Epoch of Reionization Array and the Square Kilometre Array indicate they should suppress foreground leakage from the pitchfork by ∼40 dB relative to the MWA and significantly increase the likelihood of cosmological signal detection in these critical Fourier modes in the three-dimensional power spectrum.

  16. Constraining high-redshift X-ray sources with next generation 21-cm power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Hewitt, Jacqueline; Mesinger, Andrei; Dillon, Joshua S.; Liu, Adrian; Pober, Jonathan

    2016-05-01

    We use the Fisher matrix formalism and seminumerical simulations to derive quantitative predictions of the constraints that power spectrum measurements on next-generation interferometers, such as the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array (SKA), will place on the characteristics of the X-ray sources that heated the high-redshift intergalactic medium. Incorporating observations between z = 5 and 25, we find that the proposed 331 element HERA and SKA phase 1 will be capable of placing ≲ 10 per cent constraints on the spectral properties of these first X-ray sources, even if one is unable to perform measurements within the foreground contaminated `wedge' or the FM band. When accounting for the enhancement in power spectrum amplitude from spin temperature fluctuations, we find that the observable signatures of reionization extend well beyond the peak in the power spectrum usually associated with it. We also find that lower redshift degeneracies between the signatures of heating and reionization physics lead to errors on reionization parameters that are significantly greater than previously predicted. Observations over the heating epoch are able to break these degeneracies and improve our constraints considerably. For these two reasons, 21-cm observations during the heating epoch significantly enhance our understanding of reionization as well.

  17. The visibility-based tapered gridded estimator (TGE) for the redshifted 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Choudhuri, Samir; Bharadwaj, Somnath; Chatterjee, Suman; Ali, Sk. Saiyad; Roy, Nirupam; Ghosh, Abhik

    2016-12-01

    We present an improved visibility-based tapered gridded estimator (TGE) for the power spectrum of the diffuse sky signal. The visibilities are gridded to reduce the total computation time for the calculation, and tapered through a convolution to suppress the contribution from the outer regions of the telescope's field of view. The TGE also internally estimates the noise bias, and subtracts this out to give an unbiased estimate of the power spectrum. An earlier version of the 2D TGE for the angular power spectrum Cℓ is improved and then extended to obtain the 3D TGE for the power spectrum P(k) of the 21-cm brightness temperature fluctuations. Analytic formulas are also presented for predicting the variance of the binned power spectrum. The estimator and its variance predictions are validated using simulations of 150-MHz Giant Metrewave Radio Telescope (GMRT) observations. We find that the estimator accurately recovers the input model for the 1D spherical power spectrum P(k) and the 2D cylindrical power spectrum P(k⊥, k∥), and that the predicted variance is in reasonably good agreement with the simulations.
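    The internal noise-bias subtraction described above can be illustrated with a toy version of the underlying idea: averaging only cross-products of distinct visibilities excludes the |V_i|² self terms that carry the noise bias. This sketch is ours, not the TGE itself, and it ignores the gridding and tapering steps that define the actual estimator.

```python
import numpy as np

def cell_power_estimate(vis):
    """Toy power estimate for one uv cell: average V_i * conj(V_j) over
    distinct pairs i != j. Uncorrelated noise only enters the excluded
    self terms |V_i|^2, so its bias drops out of the expectation (the
    idea behind the TGE's internal noise-bias subtraction)."""
    v = np.asarray(vis, dtype=complex)
    n = v.size
    # sum over i != j, computed as |sum_i V_i|^2 minus the self terms
    pair_sum = np.abs(v.sum())**2 - np.sum(np.abs(v)**2)
    return float(pair_sum.real) / (n * (n - 1))
```

    For three identical visibilities of amplitude 2, the estimate returns the true power of 4, with no contribution from any noise variance that would inflate the self terms.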

  18. Improved foreground removal in GMRT 610 MHz observations towards redshifted 21-cm tomography

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhik; Bharadwaj, Somnath; Ali, Sk. Saiyad; Chengalur, Jayaram N.

    2011-12-01

    Foreground removal is a challenge for 21-cm tomography of the high-redshift Universe. We use archival Giant Metrewave Radio Telescope (GMRT) data (obtained for completely different astronomical goals) to estimate the foregrounds at a redshift of ˜1. The statistic we use is the cross power spectrum between two frequencies separated by Δν at the angular multipole ℓ, or equivalently the multi-frequency angular power spectrum Cℓ(Δν). An earlier measurement of Cℓ(Δν) using these data had revealed the presence of oscillatory patterns along Δν, which turned out to be a severe impediment for foreground removal. Using the same data, in this paper we show that it is possible to considerably reduce these oscillations by suppressing the sidelobe response of the primary antenna elements. The suppression works best at the angular multipoles ℓ for which there is a dense sampling of the u-v plane. For three angular multipoles ℓ= 1405, 1602 and 1876, this sidelobe suppression along with a low-order polynomial fitting results in residuals (≤ 0.02 mK2) that are consistent with the noise at the 3σ level. Since the polynomial fitting is done after estimation of the power spectrum, it can be ensured that the estimation of the H I signal is not biased. The corresponding 99 per cent upper limit on the H I signal is ?, where ? is the mean neutral fraction and b is the bias.

  19. EXPLORING THE COSMIC REIONIZATION EPOCH IN FREQUENCY SPACE: AN IMPROVED APPROACH TO REMOVE THE FOREGROUND IN 21 cm TOMOGRAPHY

    SciTech Connect

    Wang, Jingying; Xu, Haiguang; Guo, Xueying; Li, Weitian; Liu, Chengze; An, Tao; Wang, Yu; Gu, Junhua; Martineau-Huynh, Olivier; Wu, Xiang-Ping E-mail: zishi@sjtu.edu.cn

    2013-02-15

    With the intent of correctly restoring the redshifted 21 cm signals emitted by neutral hydrogen during the cosmic reionization processes, we re-examine the separation approaches based on the quadratic polynomial fitting technique in frequency space in order to investigate whether they work satisfactorily with complex foreground by quantitatively evaluating the quality of restored 21 cm signals in terms of sample statistics. We construct the foreground model to characterize both spatial and spectral substructures of the real sky, and use it to simulate the observed radio spectra. By comparing between different separation approaches through statistical analysis of restored 21 cm spectra and corresponding power spectra, as well as their constraints on the mean halo bias b and average ionization fraction x_e of the reionization processes, at z = 8 and the noise level of 60 mK we find that although the complex foreground can be well approximated with quadratic polynomial expansion, a significant part of the Mpc-scale components of the 21 cm signals (75% for ≳ 6 h⁻¹ Mpc scales and 34% for ≳ 1 h⁻¹ Mpc scales) is lost because it tends to be misidentified as part of the foreground when the single-narrow-segment separation approach is applied. The best restoration of the 21 cm signals and the tightest determination of b and x_e can be obtained with the three-narrow-segment fitting technique as proposed in this paper. Similar results can be obtained at other redshifts.

  20. Empirical covariance modeling for 21 cm power spectrum estimation: A method demonstration and new limits from early Murchison Widefield Array 128-tile data

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Neben, Abraham R.; Hewitt, Jacqueline N.; Tegmark, Max; Barry, N.; Beardsley, A. P.; Bowman, J. D.; Briggs, F.; Carroll, P.; de Oliveira-Costa, A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hernquist, L.; Hurley-Walker, N.; Jacobs, D. C.; Kim, H. S.; Kittiwisit, P.; Lenc, E.; Line, J.; Loeb, A.; McKinley, B.; Mitchell, D. A.; Morales, M. F.; Offringa, A. R.; Paul, S.; Pindor, B.; Pober, J. C.; Procopio, P.; Riding, J.; Sethi, S.; Shankar, N. Udaya; Subrahmanyan, R.; Sullivan, I.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C.; Wayth, R. B.; Webster, R. L.; Wyithe, S.; Bernardi, G.; Cappallo, R. J.; Deshpande, A. A.; Johnston-Hollitt, M.; Kaplan, D. L.; Lonsdale, C. J.; McWhirter, S. R.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Srivani, K. S.; Williams, A.; Williams, C. L.

    2015-06-01

    The separation of the faint cosmological background signal from bright astrophysical foregrounds remains one of the most daunting challenges of mapping the high-redshift intergalactic medium with the redshifted 21 cm line of neutral hydrogen. Advances in mapping and modeling of diffuse and point source foregrounds have improved subtraction accuracy, but no subtraction scheme is perfect. Precisely quantifying the errors and error correlations due to missubtracted foregrounds allows for both the rigorous analysis of the 21 cm power spectrum and for the maximal isolation of the "EoR window" from foreground contamination. We present a method to infer the covariance of foreground residuals from the data itself in contrast to previous attempts at a priori modeling. We demonstrate our method by setting limits on the power spectrum using a 3 h integration from the 128-tile Murchison Widefield Array. Observing between 167 and 198 MHz, we find at 95% confidence a best limit of Δ²(k) < 3.7 × 10⁴ mK² at comoving scale k = 0.18 h Mpc⁻¹ and at z = 6.8, consistent with existing limits.

  1. Coaxing cosmic 21 cm fluctuations from the polarized sky using m-mode analysis

    NASA Astrophysics Data System (ADS)

    Shaw, J. Richard; Sigurdson, Kris; Sitwell, Michael; Stebbins, Albert; Pen, Ue-Li

    2015-04-01

    In this paper we continue to develop the m-mode formalism, a technique for efficient and optimal analysis of wide-field transit radio telescopes, targeted at 21 cm cosmology. We extend this formalism to give an accurate treatment of the polarized sky, fully accounting for the effects of polarization leakage and cross polarization. We use the geometry of the measured set of visibilities to project down to pure temperature modes on the sky, serving as a significant compression, and an effective first filter of polarized contaminants. As in our previous work, we use the m-mode formalism with the Karhunen-Loève transform to give a highly efficient method for foreground cleaning, and demonstrate its success in cleaning realistic polarized skies observed with an instrument suffering from substantial off-axis polarization leakage. We develop an optimal quadratic estimator in the m-mode formalism which can be efficiently calculated using a Monte Carlo technique. This is used to assess the implications of foreground removal for power spectrum constraints, where we find that our method can clean foregrounds well below the foreground wedge, rendering only scales k∥ < 0.02 h Mpc⁻¹ inaccessible. As this approach assumes perfect knowledge of the telescope, we perform a conservative test of how essential this is by simulating and analyzing data sets with deviations about our assumed telescope. Assuming no other techniques to mitigate bias are applied, we find we recover unbiased power spectra provided the per-feed beamwidth is measured to 0.1% and amplifier gains are known to 1% within each minute. Finally, as an example application, we extend our forecasts to a wideband 400-800 MHz cosmological observation and consider the implications for probing dark energy, finding a pathfinder-scale medium-sized cylinder telescope improves the Dark Energy Task Force figure of merit by around 70% over Planck and Stage II experiments alone.

  2. New Evidence for Mass Loss from δ Cephei from H I 21 cm Line Observations

    NASA Astrophysics Data System (ADS)

    Matthews, L. D.; Marengo, M.; Evans, N. R.; Bono, G.

    2012-01-01

    Recently published Spitzer Space Telescope observations of the classical Cepheid archetype δ Cephei revealed an extended dusty nebula surrounding this star and its hot companion HD 213307. At far-infrared wavelengths, the emission resembles a bow shock aligned with the direction of space motion of the star, indicating that δ Cephei is undergoing mass loss through a stellar wind. Here we report H I 21 cm line observations with the Very Large Array (VLA) to search for neutral atomic hydrogen associated with this wind. Our VLA data reveal a spatially extended H I nebula (~13' or 1 pc across) surrounding the position of δ Cephei. The nebula has a head-tail morphology, consistent with circumstellar ejecta shaped by the interaction between a stellar wind and the interstellar medium (ISM). We directly measure a mass of circumstellar atomic hydrogen M_HI ≈ 0.07 M_⊙, although the total H I mass may be larger, depending on the fraction of circumstellar material that is hidden by Galactic contamination within our band or that is present on angular scales too large to be detected by the VLA. It appears that the bulk of the circumstellar gas has originated directly from the star, although it may be augmented by material swept from the surrounding ISM. The H I data are consistent with a stellar wind with an outflow velocity V_o = 35.6 ± 1.2 km s⁻¹ and a mass-loss rate of Ṁ ≈ (1.0 ± 0.8) × 10⁻⁶ M_⊙ yr⁻¹. We have computed theoretical evolutionary tracks that include mass loss across the instability strip and show that a mass-loss rate of this magnitude, sustained over the preceding Cepheid lifetime of δ Cephei, could be sufficient to resolve a significant fraction of the discrepancy between the pulsation and evolutionary masses for this star.

  3. A Practical Theorem on Using Interferometry to Measure the Global 21-cm Signal

    NASA Astrophysics Data System (ADS)

    Venumadhav, Tejaswi; Chang, Tzu-Ching; Doré, Olivier; Hirata, Christopher M.

    2016-08-01

    The sky-averaged, or global, background of redshifted 21 cm radiation is expected to be a rich source of information on cosmological reheating and reionization. However, measuring the signal is technically challenging: one must extract a small, frequency-dependent signal from under much brighter spectrally smooth foregrounds. Traditional approaches to study the global signal have used single antennas, which require one to calibrate out the frequency-dependent structure in the overall system gain (due to internal reflections, for example) as well as remove the noise bias from auto-correlating a single amplifier output. This has motivated proposals to measure the signal using cross-correlations in interferometric setups, where additional calibration techniques are available. In this paper we focus on the general principles driving the sensitivity of the interferometric setups to the global signal. We prove that this sensitivity is directly related to two characteristics of the setup: the cross-talk between readout channels (i.e., the signal picked up at one antenna when the other one is driven) and the correlated noise due to thermal fluctuations of lossy elements (e.g., absorbers or the ground) radiating into both channels. Thus in an interferometric setup, one cannot suppress cross-talk and correlated thermal noise without reducing sensitivity to the global signal by the same factor—instead, the challenge is to characterize these effects and their frequency dependence. We illustrate our general theorem by explicit calculations within toy setups consisting of two short-dipole antennas in free space and above a perfectly reflecting ground surface, as well as two well-separated identical lossless antennas arranged to achieve zero cross-talk.

  4. A Flux Scale for Southern Hemisphere 21 cm Epoch of Reionization Experiments

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki; Bowman, Judd; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Klima, Pat; MacMahon, Dave H. E.; Manley, Jason R.; Moore, David F.; Pober, Jonathan C.; Stefan, Irina I.; Walbrugh, William P.

    2013-10-01

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from -46° to -40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of -0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
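    The new Pictor A model quoted above is a single power law, which can be written out directly. A minimal sketch (the function name and evaluation frequencies are ours; the quoted uncertainties of ±5.4 Jy and ±0.01 are not propagated):

```python
def pictor_a_flux_jy(freq_mhz, s150=382.0, alpha=-0.76):
    """Power-law flux model from the abstract:
    S(nu) = S_150 * (nu / 150 MHz)**alpha, with S_150 = 382 Jy and
    spectral index alpha = -0.76, applicable between ~100 MHz and 2 GHz."""
    return s150 * (freq_mhz / 150.0) ** alpha

# Predicted flux densities at the PAPER band edges (illustrative values)
s100 = pictor_a_flux_jy(100.0)  # brighter than at 150 MHz
s200 = pictor_a_flux_jy(200.0)  # fainter, since alpha < 0
```

    With a negative spectral index the source brightens toward lower frequencies, which is part of what makes Pictor A a convenient calibrator in the 100-200 MHz band.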

  5. Spectral Line De-confusion in an Intensity Mapping Survey

    NASA Astrophysics Data System (ADS)

    Cheng, Yun-Ting; Chang, Tzu-Ching; Bock, James; Bradford, C. Matt; Cooray, Asantha

    2016-12-01

    Spectral line intensity mapping (LIM) has been proposed as a promising tool to efficiently probe the cosmic reionization and the large-scale structure. Without detecting individual sources, LIM makes use of all available photons and measures the integrated light in the source confusion limit to efficiently map the three-dimensional matter distribution on large scales as traced by a given emission line. One particular challenge is the separation of desired signals from astrophysical continuum foregrounds and line interlopers. Here we present a technique to extract large-scale structure information traced by emission lines from different redshifts, embedded in a three-dimensional intensity mapping data cube. The line redshifts are distinguished by the anisotropic shape of the power spectra when projected onto a common coordinate frame. We consider the case where high-redshift [C ii] lines are confused with multiple low-redshift CO rotational lines. We present a semi-analytic model for [C ii] and CO line estimates based on the cosmic infrared background measurements, and show that with a modest instrumental noise level and survey geometry, the large-scale [C ii] and CO power spectrum amplitudes can be successfully extracted from a confusion-limited data set, without external information. We discuss the implications and limits of this technique for possible LIM experiments.

  6. Models of the cosmological 21 cm signal from the epoch of reionization calibrated with Ly α and CMB data

    NASA Astrophysics Data System (ADS)

    Kulkarni, Girish; Choudhury, Tirthankar Roy; Puchwein, Ewald; Haehnelt, Martin G.

    2016-12-01

    We present here 21 cm predictions from high dynamic range simulations for a range of reionization histories that have been tested against available Ly α and cosmic microwave background (CMB) data. We assess the observability of the predicted spatial 21 cm fluctuations by ongoing and upcoming experiments in the late stages of reionization, in the limit in which the hydrogen spin temperature is significantly larger than the CMB temperature. Models consistent with the available Ly α data and the CMB measurement of the Thomson optical depth predict typical values of 10-20 mK² for the variance of the 21 cm brightness temperature at redshifts z = 7-10 at scales accessible to ongoing and upcoming experiments (k ≲ 1 h cMpc⁻¹). This is within a factor of a few of the sensitivity claimed to have already been reached by ongoing experiments in the signal rms value. Our different models for the reionization history make markedly different predictions for the redshift evolution, and thus frequency dependence, of the 21 cm power spectrum and should be easily discernible by the Low-Frequency Array (and later the Hydrogen Epoch of Reionization Array and the Square Kilometre Array) at their design sensitivity. Our simulations have sufficient resolution to assess the effect of high-density Lyman limit systems that can self-shield against ionizing radiation and stay 21 cm bright even if the hydrogen in their surroundings is highly ionized. Our simulations predict that including the effect of the self-shielded gas in highly ionized regions reduces the large-scale 21 cm power by about 30 per cent.

  7. The Evolution Of 21 cm Structure (EOS): public, large-scale simulations of Cosmic Dawn and reionization

    NASA Astrophysics Data System (ADS)

    Mesinger, Andrei; Greig, Bradley; Sobacchi, Emanuele

    2016-07-01

    We introduce the Evolution Of 21 cm Structure (EOS) project: providing periodic, public releases of the latest cosmological 21 cm simulations. 21 cm interferometry is set to revolutionize studies of the Cosmic Dawn (CD) and Epoch of Reionization (EoR). Progress will depend on sophisticated data analysis pipelines, initially tested on large-scale mock observations. Here we present the 2016 EOS release: 1024³, 1.6 Gpc, 21 cm simulations of the CD and EoR, calibrated to the Planck 2015 measurements. We include calibrated, sub-grid prescriptions for inhomogeneous recombinations and photoheating suppression of star formation in small-mass galaxies. Leaving the efficiency of supernova feedback as a free parameter, we present two runs which bracket the contribution from faint unseen galaxies. From these two extremes, we predict that the duration of reionization (defined as the change in the mean neutral fraction from 0.9 to 0.1) should be 2.7 ≲ Δz_re ≲ 5.7. The large-scale 21 cm power during the advanced EoR stages can differ by up to a factor of ˜10, depending on the model. This difference has comparable contributions from (i) the typical bias of sources and (ii) a more efficient negative feedback in models with an extended EoR driven by faint galaxies. We also present detectability forecasts. With a 1000 h integration, the Hydrogen Epoch of Reionization Array and the Square Kilometre Array phase 1 (SKA1) should achieve a signal-to-noise of a few to hundreds throughout the EoR/CD. We caution that our ability to clean foregrounds determines the relative performance of narrow/deep versus wide/shallow surveys expected with SKA1. Our 21-cm power spectra, simulation outputs and visualizations are publicly available.

  8. SENSITIVE 21 cm OBSERVATIONS OF NEUTRAL HYDROGEN IN THE LOCAL GROUP NEAR M31

    SciTech Connect

    Wolfe, Spencer A.; Pisano, D. J.; Lockman, Felix J. E-mail: DJPisano@mail.wvu.edu

    2016-01-10

    Very sensitive 21 cm H i measurements have been made at several locations around the Local Group galaxy M31 using the Green Bank Telescope at an angular resolution of 9.1′, with a 5σ detection level of N_HI = 3.9 × 10¹⁷ cm⁻² for a 30 km s⁻¹ line. Most of the H i in a 12 square-degree area almost equidistant between M31 and M33 is contained in nine discrete clouds that have a typical size of a few kpc and an H i mass of 10⁵ M⊙. Their velocities in the Local Group Standard of Rest lie between −100 and +40 km s⁻¹, comparable to the systemic velocities of M31 and M33. The clouds appear to be isolated kinematically and spatially from each other. The total H i mass of all nine clouds is 1.4 × 10⁶ M⊙ for an adopted distance of 800 kpc, with perhaps another 0.2 × 10⁶ M⊙ in smaller clouds or more diffuse emission. The H i mass of each cloud is typically three orders of magnitude less than the dynamical (virial) mass needed to bind the cloud gravitationally. Although they have the size and H i mass of dwarf galaxies, the clouds are unlikely to be part of the satellite system of the Local Group, as they lack stars. To the north of M31, sensitive H i measurements on a coarse grid find emission that may be associated with an extension of the M31 high-velocity cloud (HVC) population to projected distances of ∼100 kpc. An extension of the M31 HVC population at a similar distance to the southeast, toward M33, is not observed.

  9. Light-cone anisotropy in the 21 cm signal from the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Zawada, Karolina; Semelin, Benoît; Vonlanthen, Patrick; Baek, Sunghye; Revaz, Yves

    2014-04-01

    Using a suite of detailed numerical simulations, we estimate the level of anisotropy generated by the time evolution along the light cone of the 21 cm signal from the epoch of reionization. Our simulations include the physics necessary to model the signal during both the late emission regime and the early absorption regime, namely X-ray and Lyman band 3D radiative transfer in addition to the usual dynamics and ionizing UV transfer. The signal is analysed using correlation functions perpendicular and parallel to the line of sight. We reproduce general findings from previous theoretical studies: the overall amplitude of the correlations and the fact that the light-cone anisotropy is visible only on large scales (100 comoving Mpc). However, the detailed behaviour is different. We find that, at three different epochs, the amplitudes of the correlations along and perpendicular to the line of sight differ from each other, indicating anisotropy. We show that these three epochs are associated with three events of the global reionization history: the overlap of ionized bubbles, the onset of mild heating by X-rays in regions around the sources, and the onset of efficient Lyman α coupling in regions around the sources. We find that a 20 × 20 deg² survey area may be necessary to mitigate sample variance when we use the directional correlation functions. On a 100 Mpc (comoving) scale, we show that the light-cone anisotropy dominates over the anisotropy generated by peculiar velocity gradients computed in the linear regime. By modelling instrumental noise and limited resolution, we find that the anisotropy should be easily detectable by the Square Kilometre Array, assuming perfect foreground removal, the limiting factor being a large enough survey size. In the case of the Low-Frequency Array for radio astronomy, it is likely that only one anisotropy episode (ionized bubble overlap) will fall in the observing frequency range. This episode will be detectable only if sample
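The directional correlation analysis described above can be mimicked on a toy brightness cube; a hedged NumPy sketch in which a random field is smoothed along the line-of-sight axis only, so the parallel and perpendicular correlations measurably differ (the field, kernel, and lags are illustrative, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 64))
# Mimic light-cone evolution: smooth along the line-of-sight (last) axis only,
# which induces extra correlation in that direction.
kernel = np.ones(5) / 5.0
cube = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 2, cube)
cube -= cube.mean()

def correlation(field, lag, axis):
    """Two-point correlation at a given pixel lag along one axis."""
    a = np.take(field, range(field.shape[axis] - lag), axis=axis)
    b = np.take(field, range(lag, field.shape[axis]), axis=axis)
    return np.mean(a * b) / np.var(field)

xi_par = correlation(cube, 2, axis=2)   # along the line of sight
xi_perp = correlation(cube, 2, axis=0)  # transverse
print(xi_par, xi_perp)
```

For this anisotropic field xi_par clearly exceeds xi_perp; comparing the two estimators on observed cubes is the basic test the paper applies at each epoch.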

  10. A FLUX SCALE FOR SOUTHERN HEMISPHERE 21 cm EPOCH OF REIONIZATION EXPERIMENTS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Parsons, Aaron R.; Ali, Zaki; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, Dave H. E.; Gugliucci, Nicole E.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2013-10-20

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.

  11. Intensity Mapping of Molecular Gas at High Redshift

    NASA Astrophysics Data System (ADS)

    Bower, Geoffrey; Keating, Garrett; Marrone, Dan; DeBoer, David; Chang, Tzu-Ching; Chen, Ming-Tang; Jiang, Homin; Koch, Patrick; Kubo, Derek; Li, Chao-Te; Lin, K. Y.; Srinivasan, Ranjani; Darling, Jeremy

    2015-08-01

    The origin and evolution of structure in the Universe is one of the major challenges of observational astronomy. How and when did the first stars and galaxies form? How does baryonic structure trace the underlying dark matter? A multi-wavelength, multi-tool approach is necessary to provide the complete story of the evolution of structure in the Universe. Intensity mapping, which relies on the ability to detect many objects at once through their integrated emission rather than direct detection of individual objects, is a critical part of this mosaic. Intensity mapping provides a window on lower luminosity objects that cannot be detected individually but that collectively drive important processes. In particular, our understanding of the molecular gas component of massive galaxies is being revolutionized by ALMA and the EVLA, but the population of smaller, star-forming galaxies, which provides the bulk of star formation, cannot be individually probed by these instruments. In this talk, I will summarize two intensity mapping experiments to detect molecular gas through the carbon monoxide (CO) rotational transitions. We are currently completing sensitive observations with the Sunyaev-Zel'dovich Array (SZA) telescope at a wavelength of 1 cm that are sensitive to emission at redshifts 2.3 to 3.3. The SZA experiment sets strong limits on models for the CO emission and demonstrates the ability to reject foregrounds and telescope systematics in very deep integrations. I also describe the development of an intensity mapping capability for the Y.T. Lee Array, a 13-element interferometer located on Mauna Loa. In its first phase, this project focuses on detection of CO at redshifts 2.3-3.3, with detection via the power spectrum and cross-correlation with other surveys. The project includes a major technical upgrade, a new digital correlator and IF electronics, to be deployed in 2015/2016. The Y.T. Lee Array observations will be more sensitive and extend to larger angular scales.

  12. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to them. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or the failure of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.

  13. On Removing Interloper Contamination from Intensity Mapping Power Spectrum Measurements

    NASA Astrophysics Data System (ADS)

    Lidz, Adam; Taylor, Jessie

    2016-07-01

    Line intensity mapping experiments seek to trace large-scale structures by measuring the spatial fluctuations in the combined emission, in some convenient spectral line, from individually unresolved galaxies. An important systematic concern for these surveys is line confusion from foreground or background galaxies emitting in other lines that happen to lie at the same observed frequency as the “target” emission line of interest. We develop an approach to separate this “interloper” emission at the power spectrum level. If one adopts the redshift of the target emission line in mapping from observed frequency and angle on the sky to co-moving units, the interloper emission is mapped to the wrong co-moving coordinates. Because the mapping is different in the line-of-sight and transverse directions, the interloper contribution to the power spectrum becomes anisotropic, especially if the interloper and target emission are at widely separated redshifts. This distortion is analogous to the Alcock-Paczynski test, but here the warping arises from assuming the wrong redshift rather than an incorrect cosmological model. We apply this to the case of a hypothetical [C ii] emission survey at z ∼ 7 and find that the distinctive interloper anisotropy can, in principle, be used to separate strong foreground CO emission fluctuations. In our models, however, a significantly more sensitive instrument than currently planned is required, although there are large uncertainties in forecasting the high-redshift [C ii] emission signal. With upcoming surveys, it may nevertheless be useful to apply this approach after first masking pixels suspected of containing strong interloper contamination.
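The warping arises because interloper emission is projected using the target's redshift, and the transverse and line-of-sight stretch factors follow from background cosmology alone: D_C(z_i)/D_C(z_t) transverse and H(z_t)(1+z_i)/[H(z_i)(1+z_t)] along the line of sight. A sketch under an assumed flat ΛCDM with Ωm = 0.31 (our illustrative choice, not the paper's exact parameters), for a [C ii] target at z = 7 confused with CO(4-3):

```python
import math

OMEGA_M = 0.31  # assumed flat LCDM matter density (illustrative)

def efunc(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

def comoving_distance(z, steps=2000):
    """Comoving distance D_C(z) in units of the Hubble distance c/H0 (trapezoid rule)."""
    zs = [i * z / steps for i in range(steps + 1)]
    f = [1.0 / efunc(zz) for zz in zs]
    return sum((f[i] + f[i + 1]) / 2 for i in range(steps)) * z / steps

# [C ii] (1900.54 GHz rest) target at z_t = 7; a CO(4-3) (461.04 GHz rest)
# interloper at the same observed frequency sits at a much lower redshift.
nu_obs = 1900.54 / (1 + 7.0)
z_i = 461.04 / nu_obs - 1
perp = comoving_distance(z_i) / comoving_distance(7.0)   # transverse rescaling
par = efunc(7.0) * (1 + z_i) / (efunc(z_i) * (1 + 7.0))  # line-of-sight rescaling
print(z_i, perp, par)
```

Because the two rescaling factors differ (one below unity, one above), the projected interloper power spectrum is anisotropic, which is the separation handle exploited above.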

  14. Hydrogen and the First Stars: First Results from the SCI-HI 21-cm all-sky spectrum experiment

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; Peterson, Jeffrey; Lopez-Cruz, Omar; Jauregui-Garcia, Jose-Miguel; SCI-HI Experiment Team

    2015-01-01

    The 'Sonda Cosmologica de las Islas para la Deteccion de Hidrogeno Neutro' (SCI-HI) experiment is an all-sky 21-cm brightness temperature spectrum experiment studying the cosmic dawn (z~15-35). The experiment is a collaboration between Carnegie Mellon University (CMU) and Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) in Mexico. Initial deployment of the SCI-HI experiment occurred in June 2013 on Guadalupe, a small island about 250 km off the Pacific coast of Baja California in Mexico. Preliminary measurements from this deployment have placed the first observational constraints on the 21-cm all-sky spectrum around 70 MHz (z~20); see Voytek et al. (2014). Neutral hydrogen (HI) is found throughout the universe in the cold gas that makes up the intergalactic medium (IGM). HI can be observed through its spectral line at 21 cm (1.4 GHz), which arises from hyperfine structure. Expansion of the universe stretches the wavelength of this spectral line at a rate set by the redshift z, leading to a signal that can be followed through time. The strength of the 21-cm signal in the IGM depends on only a small number of variables: the temperature and density of the IGM, the amount of HI in the IGM, the UV energy density in the IGM, and the redshift. This means that 21-cm measurements teach us about the history and structure of the IGM. The SCI-HI experiment focuses on the spatially averaged 21-cm spectrum, looking at the temporal evolution of the IGM during the cosmic dawn before reionization. Although the SCI-HI experiment placed first constraints with preliminary data, that data was limited to a narrow frequency regime around 60-85 MHz. This limitation was caused by instrumental difficulties and the presence of residual radio frequency interference (RFI) in the FM radio band (~88-108 MHz). The SCI-HI experiment is currently undergoing improvements and we plan to have another deployment soon. This deployment would be to Socorro and Clarion, two
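The redshift-frequency mapping behind these band choices is just the cosmologically stretched 21-cm rest frequency; a small sketch (1420.406 MHz rest frequency; function names are ours):

```python
REST_MHZ = 1420.406  # 21-cm hyperfine rest frequency

def z_of_freq(freq_mhz):
    """Redshift at which the 21-cm line is observed at freq_mhz."""
    return REST_MHZ / freq_mhz - 1

def freq_of_z(z):
    """Observed frequency (MHz) of the 21-cm line emitted at redshift z."""
    return REST_MHZ / (1 + z)

# SCI-HI's 60-85 MHz window corresponds to roughly z ~ 16-23,
# bracketing the z ~ 20 constraint quoted above.
print(z_of_freq(70.0))
print(z_of_freq(85.0), z_of_freq(60.0))
```

The same relation explains why the FM band (~88-108 MHz) contaminates precisely the z ~ 12-15 end of the target range.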

  15. Can 21-cm observations discriminate between high-mass and low-mass galaxies as reionization sources?

    NASA Astrophysics Data System (ADS)

    Iliev, Ilian T.; Mellema, Garrelt; Shapiro, Paul R.; Pen, Ue-Li; Mao, Yi; Koda, Jun; Ahn, Kyungjin

    2012-07-01

    The prospect of detecting the first galaxies by observing their impact on the intergalactic medium (IGM) as they reionized it during the first billion years leads us to ask whether such indirect observations are capable of diagnosing which types of galaxies were most responsible for reionization. We attempt to answer this with new large-scale radiative transfer simulations of reionization including the entire mass range of atomically cooling haloes (M > 10⁸ M⊙). We divide these haloes into two groups, high-mass, atomically cooling haloes, or HMACHs (M > 10⁹ M⊙), and low-mass, atomically cooling haloes, or LMACHs (10⁸ < M < 10⁹ M⊙), the latter being susceptible to negative feedback due to Jeans mass filtering in ionized regions, which leads to a process we refer to as self-regulation. We focus here on predictions of the redshifted 21-cm emission, to see if upcoming observations are capable of distinguishing a universe ionized primarily by HMACHs from one in which both HMACHs and LMACHs are responsible, and to see how these results depend upon the uncertain source efficiencies. We find that 21-cm fluctuation power spectra observed by the first-generation Epoch of Reionization 21-cm radio interferometer arrays should be able to distinguish the case of reionization by HMACHs alone from that by both HMACHs and LMACHs, together. Some reionization scenarios, e.g. one with abundant low-efficiency sources versus one with self-regulation, yield very similar power spectra and rms evolution and thus can only be discriminated by their different mean reionization history and 21-cm probability distribution function (PDF) distributions. We also find that the skewness of the 21-cm PDF distribution smoothed with Low Frequency Array (LOFAR)-like resolution shows a clear feature correlated with the rise of the rms due to patchiness. This is independent of the reionization scenario and thus provides a new approach for detecting the rise of large-scale patchiness. The peak epoch

  16. Factor analysis as a tool for spectral line component separation 21cm emission in the direction of L1780

    NASA Technical Reports Server (NTRS)

    Toth, L. V.; Mattila, K.; Haikala, L.; Balazs, L. G.

    1992-01-01

    The spectra of the 21cm HI radiation from the direction of L1780, a small high-galactic latitude dark/molecular cloud, were analyzed by multivariate methods. Factor analysis was performed on HI (21cm) spectra in order to separate the different components responsible for the spectral features. The rotated, orthogonal factors explain the spectra as a sum of radiation from the background (an extended HI emission layer), and from the L1780 dark cloud. The coefficients of the cloud-indicator factors were used to locate the HI 'halo' of the molecular cloud. Our statistically derived 'background' and 'cloud' spectral profiles, as well as the spatial distribution of the HI halo emission distribution were compared to the results of a previous study which used conventional methods analyzing nearly the same data set.
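The component-separation idea can be mimicked with a rank decomposition of a matrix of synthetic spectra; a toy sketch using SVD as a stand-in for the rotated factor analysis (the 'background' and 'cloud' profiles and mixing weights below are our own synthetic inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.linspace(-30, 30, 128)                  # velocity channels (km/s)
background = np.exp(-(v / 15.0) ** 2)          # broad emission-layer profile
cloud = np.exp(-((v - 5.0) / 3.0) ** 2)        # narrow cloud profile

# 200 sightlines: varying mixtures of the two profiles plus noise, mimicking
# a map in which cloud emission appears only toward some positions.
w_bg = rng.uniform(0.5, 1.5, size=200)
w_cl = rng.uniform(0.0, 1.0, size=200)
spectra = np.outer(w_bg, background) + np.outer(w_cl, cloud)
spectra += 0.01 * rng.normal(size=spectra.shape)

# Singular value decomposition of the mean-subtracted spectra: two singular
# values dominate, i.e. the data are effectively rank 2 (background + cloud).
u, s, vt = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
print(s[:4])
```

In the paper's actual analysis the factors are rotated to an interpretable basis and the per-sightline coefficients of the cloud factor are mapped spatially to locate the HI halo; the SVD above only illustrates why a small number of components suffices.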

  17. GIANT METREWAVE RADIO TELESCOPE DETECTION OF TWO NEW H I 21 cm ABSORBERS AT z ≈ 2

    SciTech Connect

    Kanekar, N.

    2014-12-20

    I report the detection of H I 21 cm absorption in two high column density damped Lyα absorbers (DLAs) at z ≈ 2 using new wide-band 250-500 MHz receivers on board the Giant Metrewave Radio Telescope. The integrated H I 21 cm optical depths are 0.85 ± 0.16 km s⁻¹ (TXS1755+578) and 2.95 ± 0.15 km s⁻¹ (TXS1850+402). For the z = 1.9698 DLA toward TXS1755+578, the difference in the H I 21 cm and C I profiles and the weakness of the radio core suggest that the H I 21 cm absorption arises toward radio components in the jet, and that the optical and radio sightlines are not the same. This precludes an estimate of the DLA spin temperature. For the z = 1.9888 DLA toward TXS1850+402, the absorber covering factor is likely to be close to unity, as the background source is extremely compact, with the entire 5 GHz emission arising from a region ≤1.4 mas in size. This yields a DLA spin temperature of T_s = (372 ± 18) × (f/1.0) K, lower than typical T_s values in high-z DLAs. This low spin temperature and the relatively high metallicity of the z = 1.9888 DLA ([Zn/H] = −0.68 ± 0.04) are consistent with the anti-correlation between metallicity and spin temperature found earlier in damped Lyα systems.
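The spin temperature follows from the standard 21-cm relation N_HI = 1.823 × 10¹⁸ (T_s/f) ∫τ dv; a sketch (the column density below is our assumed, illustrative input — the abstract quotes only the resulting temperature, not N_HI):

```python
def spin_temperature(n_hi, tau_integral, covering=1.0):
    """T_s in K from N_HI = 1.823e18 * (T_s / f) * integral(tau dv).

    n_hi: H I column density in cm^-2; tau_integral: integrated 21-cm
    optical depth in km/s; covering: absorber covering factor f.
    """
    return covering * n_hi / (1.823e18 * tau_integral)

# Hypothetical N_HI ~ 2.0e21 cm^-2 with the measured 2.95 km/s integrated
# optical depth gives T_s ~ 370 K, of the order of the quoted (372 +/- 18) K.
print(spin_temperature(2.0e21, 2.95))
```

The linear dependence on f is why the compactness of TXS1850+402 matters: only when the covering factor is pinned near unity does the optical depth translate into a well-determined T_s.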

  18. LOFAR insights into the epoch of reionization from the cross-power spectrum of 21 cm emission and galaxies

    NASA Astrophysics Data System (ADS)

    Wiersma, R. P. C.; Ciardi, B.; Thomas, R. M.; Harker, G. J. A.; Zaroubi, S.; Bernardi, G.; Brentjens, M.; de Bruyn, A. G.; Daiboo, S.; Jelic, V.; Kazemi, S.; Koopmans, L. V. E.; Labropoulos, P.; Martinez, O.; Mellema, G.; Offringa, A.; Pandey, V. N.; Schaye, J.; Veligatla, V.; Vedantham, H.; Yatawatta, S.

    2013-07-01

    Using a combination of N-body simulations, semi-analytic models and radiative transfer calculations, we have estimated the theoretical cross-power spectrum between galaxies and the 21 cm emission from neutral hydrogen during the epoch of reionization. In accordance with previous studies, we find that the 21 cm emission is initially correlated with haloes on large scales (≳30 Mpc), anticorrelated on intermediate (˜5 Mpc) and uncorrelated on small (≲3 Mpc) scales. This picture quickly changes as reionization proceeds and the two fields become anticorrelated on large scales. The normalization of the cross-power spectrum can be used to set constraints on the average neutral fraction in the intergalactic medium and its shape can be a powerful tool to study the topology of reionization. When we apply a drop-out technique to select galaxies and add to the 21 cm signal the noise expected from the LOw Frequency ARray (LOFAR) telescope, we find that while the normalization of the cross-power spectrum remains a useful tool for probing reionization, its shape becomes too noisy to be informative. On the other hand, for a Lyα Emitter (LAE) survey both the normalization and the shape of the cross-power spectrum are suitable probes of reionization. A closer look at a specific planned LAE observing program using Subaru Hyper-Suprime Cam reveals concerns about the strength of the 21 cm signal at the planned redshifts. If the ionized fraction at z ∼ 7 is lower than the one estimated here, then using the cross-power spectrum may be a useful exercise given that at higher redshifts and neutral fractions it is able to distinguish between two toy models with different topologies.

  19. PRECISE MEASUREMENT OF THE REIONIZATION OPTICAL DEPTH FROM THE GLOBAL 21 cm SIGNAL ACCOUNTING FOR COSMIC HEATING

    SciTech Connect

    Fialkov, Anastasia; Loeb, Abraham E-mail: aloeb@cfa.harvard.edu

    2016-04-10

    As a result of our limited data on reionization, the total optical depth for electron scattering, τ, limits precision measurements of cosmological parameters from the Cosmic Microwave Background (CMB). It was recently shown that the predicted 21 cm signal of neutral hydrogen contains enough information to reconstruct τ with sub-percent accuracy, assuming that the neutral gas was much hotter than the CMB throughout the entire epoch of reionization (EoR). Here we relax this assumption and use the global 21 cm signal alone to extract τ for realistic X-ray heating scenarios. We test our model-independent approach using mock data for a wide range of ionization and heating histories and show that an accurate measurement of the reionization optical depth at a sub-percent level is possible in most of the considered scenarios even when heating is not saturated during the EoR, assuming that the foregrounds are mitigated. However, we find that in cases where heating sources had hard X-ray spectra and their luminosity was close to or lower than what is predicted based on low-redshift observations, the global 21 cm signal alone is not a good tracer of the reionization history.

  20. Upper limits on the 21 cm power spectrum at z = 5.9 from quasar absorption line spectroscopy

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan C.; Greig, Bradley; Mesinger, Andrei

    2016-11-01

    We present upper limits on the 21 cm power spectrum at z = 5.9 calculated from the model-independent limit on the neutral fraction of the intergalactic medium of x_HI < 0.06 + 0.05 (1σ) derived from dark pixel statistics of quasar absorption spectra. Using 21CMMC, a Markov chain Monte Carlo Epoch of Reionization analysis code, we explore the probability distribution of 21 cm power spectra consistent with this constraint on the neutral fraction. We present 99 per cent confidence upper limits of Δ²(k) < 10-20 mK² over a range of k from 0.5 to 2.0 h Mpc⁻¹, with the exact limit dependent on the sampled k mode. This limit can be used as a null test for 21 cm experiments: a detection of power at z = 5.9 in excess of this value is highly suggestive of residual foreground contamination or other systematic errors affecting the analysis.

  1. Line-of-Sight Anisotropies in the Cosmic Dawn and Epoch of Reionization 21-cm Power Spectrum

    NASA Astrophysics Data System (ADS)

    Majumdar, Suman; Datta, Kanan K.; Ghara, Raghunath; Mondal, Rajesh; Choudhury, T. Roy; Bharadwaj, Somnath; Ali, Sk. Saiyad; Datta, Abhirup

    2016-12-01

    The line-of-sight direction in the redshifted 21-cm signal coming from the cosmic dawn and the epoch of reionization is unique in many ways compared to any other cosmological signal. Several distinctive effects, such as the evolution history of the signal and the non-linear peculiar velocities of matter, imprint their signatures along the line-of-sight axis of the observed signal. One of the major goals of the future SKA-LOW radio interferometer is to observe the cosmic dawn and the epoch of reionization through this 21-cm signal. It is thus important to understand how these various effects affect the signal, for both its detection and its proper interpretation. For more than one and a half decades, various groups in India have been actively trying to understand and quantify the different line-of-sight effects present in this signal through analytical models and simulations. In many ways, the importance of this sub-field of 21-cm cosmology has been identified, highlighted and pushed forward by the Indian community. In this article, we briefly describe their contributions and the implications of these effects in the context of the future surveys of the cosmic dawn and the epoch of reionization that will be conducted by SKA-LOW.

  2. Precise Measurement of the Reionization Optical Depth from the Global 21 cm Signal Accounting for Cosmic Heating

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Loeb, Abraham

    2016-04-01

    As a result of our limited data on reionization, the total optical depth for electron scattering, τ, limits precision measurements of cosmological parameters from the Cosmic Microwave Background (CMB). It was recently shown that the predicted 21 cm signal of neutral hydrogen contains enough information to reconstruct τ with sub-percent accuracy, assuming that the neutral gas was much hotter than the CMB throughout the entire epoch of reionization (EoR). Here we relax this assumption and use the global 21 cm signal alone to extract τ for realistic X-ray heating scenarios. We test our model-independent approach using mock data for a wide range of ionization and heating histories and show that an accurate measurement of the reionization optical depth at a sub-percent level is possible in most of the considered scenarios even when heating is not saturated during the EoR, assuming that the foregrounds are mitigated. However, we find that in cases where heating sources had hard X-ray spectra and their luminosity was close to or lower than what is predicted based on low-redshift observations, the global 21 cm signal alone is not a good tracer of the reionization history.

  3. e-MERLIN 21 cm constraints on the mass-loss rates of OB stars in Cyg OB2

    NASA Astrophysics Data System (ADS)

    Morford, J. C.; Fenech, D. M.; Prinja, R. K.; Blomme, R.; Yates, J. A.

    2016-11-01

    We present e-MERLIN 21 cm (L-band) observations of single luminous OB stars in the Cygnus OB2 association, from the Cyg OB2 Radio Survey Legacy programme. The radio observations potentially offer the most straightforward, least model-dependent determinations of mass-loss rates, and can be used to help resolve current discrepancies in mass-loss rates via clumped and structured hot star winds. We report here that the 21 cm flux densities of O3 to O6 supergiant and giant stars are less than ˜70 μJy. These fluxes may be translated to `smooth' wind mass-loss upper limits of ˜4.4-4.8 × 10⁻⁶ M⊙ yr⁻¹ for O3 supergiants and ≲2.9 × 10⁻⁶ M⊙ yr⁻¹ for B0 to B1 supergiants. The first ever resolved 21 cm detections of the hypergiant (and luminous blue variable candidate) Cyg OB2 #12 are discussed; for multiple observations separated by 14 d, we detect an ˜69 per cent increase in its flux density. Our constraints on the upper limits for the mass-loss rates of evolved OB stars in Cyg OB2 support the model that the inner wind region close to the stellar surface (where Hα forms) is more clumped than the very extended geometric region sampled by our radio observations.

  4. Large scale maps of cropping intensity in Asia from MODIS

    NASA Astrophysics Data System (ADS)

    Gray, J. M.; Friedl, M. A.; Frolking, S. E.; Ramankutty, N.; Nelson, A.

    2013-12-01

    for linear regressions estimated for local windows, and constrained by the EVI amplitude and length of the crop cycles that are identified. The procedure can be used to map seasonal or long-term average cropping strategies, and to characterize changes in cropping intensity over longer time periods. The datasets produced using this method therefore provide information related to global cropping systems and, more broadly, important information required to ensure sustainable management of Earth's resources and food security. To test our algorithm, we applied it to time series of MODIS EVI images over Asia from 2000-2012. Our results demonstrate the utility of multi-temporal remote sensing for characterizing multi-cropping practices in some of the most important and intensely agricultural regions in the world. To evaluate our approach, we compared results from MODIS to field-scale survey data at the pixel scale, and to agricultural inventory statistics at sub-national scales. We then mapped changes in multi-cropped area in Asia from the early MODIS period (2001-2004) to the present (2009-2012), and characterize the magnitude and location of changes in cropping intensity over the last 12 years. We conclude with a discussion of the challenges, future improvements, and broader impacts of this work.
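Cropping intensity from an EVI time series ultimately reduces to counting green-up cycles per year; a toy sketch of threshold-based peak counting (the threshold and the synthetic series are illustrative, not the paper's actual algorithm, which also fits local linear regressions and constrains cycle length):

```python
import math

def count_crop_cycles(evi, min_amplitude=0.2):
    """Count local maxima in an EVI time series above a threshold,
    as a crude proxy for the number of crop cycles."""
    peaks = 0
    for i in range(1, len(evi) - 1):
        if evi[i] > evi[i - 1] and evi[i] >= evi[i + 1] and evi[i] > min_amplitude:
            peaks += 1
    return peaks

# Synthetic double-cropped pixel: two green-up cycles across 46 composites
# (one year of 8-day MODIS composites), on a 0.15 soil/background baseline.
year = [0.15 + 0.4 * max(0.0, math.sin(2 * math.pi * 2 * t / 46)) for t in range(46)]
print(count_crop_cycles(year))
```

A real pipeline would additionally smooth the series and reject spurious peaks from clouds or snow, but the cycle count is the quantity that maps directly to single-, double-, or triple-cropping.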

  5. Observational challenges in Lyα intensity mapping

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Yue, B.; Ferrara, A.

    2016-12-01

    Intensity mapping (IM) is sensitive to the cumulative line emission of galaxies. As such, it represents a promising technique for statistical studies of galaxies fainter than the limiting magnitude of traditional galaxy surveys. The strong hydrogen Lyα line is the primary target for such an experiment, as its intensity is linked to star formation activity and the physical state of the interstellar and intergalactic medium. However, to extract meaningful information, one has to solve the confusion problem caused by interloping lines from foreground galaxies. We discuss here the challenges for a Lyα IM experiment targeting z > 4 sources. We find that the Lyα power spectrum can, in principle, be easily (marginally) obtained with a 40 cm space telescope in a few days of observing time up to z ≲ 8 (z ˜ 10), assuming that the interloping lines (e.g. Hα, [O II], [O III]) can be efficiently removed. We show that interlopers can be removed by using an ancillary photometric galaxy survey with limiting AB mag ˜26 in the near-infrared bands (Y, J, H, or K). This would enable detection of the Lyα signal from faint sources at 5 < z < 9. However, if a [C II] IM experiment is feasible, cross-correlating the Lyα with the [C II] signal decreases the required depth of the galaxy survey to AB mag ˜24. This would bring the detection within the reach of future facilities working in close synergy.

  6. From Darkness to Light: Observing the First Stars and Galaxies with the Redshifted 21-cm Line using the Dark Ages Radio Explorer

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Lazio, Joseph; Bowman, Judd D.; Bradley, Richard F.; Datta, Abhirup; Furlanetto, Steven; Jones, Dayton L.; Kasper, Justin; Loeb, Abraham; Harker, Geraint

    2015-01-01

    The Dark Ages Radio Explorer (DARE) will reveal when the first stars, black holes, and galaxies formed in the early Universe and will define their characteristics, from the Dark Ages (z = 35) to the Cosmic Dawn (z = 11). This epoch of the Universe has never been directly observed. The DARE science instrument is composed of electrically short biconical dipole antennas, a correlation receiver, and a digital spectrometer that measures the sky-averaged, low-frequency (40-120 MHz) spectral features from the highly redshifted 21-cm HI line that surrounds the first objects. These observations are possible because DARE will orbit the Moon at an altitude of 125 km and take data when it is above the radio-quiet, ionosphere-free, solar-shielded lunar farside. DARE executes the small-scale mission described in the NASA Astrophysics Roadmap (p. 83): 'mapping the Universe's hydrogen clouds using 21-cm radio wavelengths via lunar orbiter from the farside of the Moon'. This mission will address four key science questions: (1) When did the first stars form and what were their characteristics? (2) When did the first accreting black holes form and what was their characteristic mass? (3) When did reionization begin? (4) What surprises emerged from the Dark Ages (e.g., Dark Matter decay)? DARE uniquely complements other major telescopes including Planck, JWST, and ALMA by bridging the gap between the smooth Universe seen via the CMB and the rich web of galaxy structures seen with optical/IR/mm telescopes. Support for the development of this mission concept was provided by the Office of the Director, NASA Ames Research Center and by JPL/Caltech.

  7. Simulations for single-dish intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Bigot-Sazy, M.-A.; Dickinson, C.; Battye, R. A.; Browne, I. W. A.; Ma, Y.-Z.; Maffei, B.; Noviello, F.; Remazeilles, M.; Wilkinson, P. N.

    2015-12-01

    H I intensity mapping is an emerging tool to probe dark energy. Observations of the redshifted H I signal will be contaminated by instrumental noise, and by atmospheric and Galactic foregrounds. The Galactic foreground is expected to be four orders of magnitude brighter than the H I emission we wish to detect. We present a simulation of single-dish observations including an instrumental noise model with 1/f and white noise, and sky emission with a diffuse Galactic foreground and H I emission. We consider two foreground cleaning methods: spectral parametric fitting and principal component analysis. For a smooth frequency spectrum of the foreground and instrumental effects, we find that the parametric fitting method provides residuals that are still contaminated by foreground and 1/f noise, but the principal component analysis can remove this contamination down to the thermal noise level. This method is robust for a range of different models of foreground and noise, and so constitutes a promising way to recover the H I signal from the data. However, it induces a leakage of the cosmological signal into the subtracted foreground of around 5 per cent. The efficiency of the component separation methods depends heavily on the smoothness of the frequency spectrum of the foreground and the 1/f noise. We find that as long as the spectral variations over the band are slow compared to the channel width, the foreground cleaning method still works.
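
    A minimal sketch of the principal component cleaning idea, on entirely synthetic data: spectrally smooth foregrounds occupy the few leading eigenmodes of the frequency-frequency covariance, and projecting those modes out leaves a residual near the H I plus noise level. The spectral shape, amplitudes, and map sizes below are made up for the example.

```python
# PCA-based foreground removal sketch for single-dish HI intensity maps.
# Smooth foregrounds live in the leading spectral eigenmodes; project them out.
import numpy as np

def pca_clean(maps, n_modes=3):
    """maps: (n_freq, n_pix) array. Remove the n_modes leading spectral modes."""
    cov = maps @ maps.T / maps.shape[1]           # frequency-frequency covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    fg_modes = eigvecs[:, -n_modes:]              # leading (foreground) modes
    return maps - fg_modes @ (fg_modes.T @ maps)  # project them out

rng = np.random.default_rng(0)
nu = np.linspace(0.95, 1.05, 32)[:, None]             # normalized frequency axis
fg = 1e4 * nu ** -2.7 * rng.lognormal(0, 0.5, 256)    # smooth, bright foreground
hi = rng.normal(0, 1.0, (32, 256))                    # mock HI fluctuations
cleaned = pca_clean(fg + hi, n_modes=2)
print(np.std(cleaned), np.std(fg))   # residual is near the HI level, not the foreground
```

    Because this toy foreground has a single spectral shape, two modes remove it almost perfectly; real data need more modes, which is where the signal leakage discussed above comes from.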

  8. A SENSITIVITY AND ARRAY-CONFIGURATION STUDY FOR MEASURING THE POWER SPECTRUM OF 21 cm EMISSION FROM REIONIZATION

    SciTech Connect

    Parsons, Aaron; Pober, Jonathan; McQuinn, Matthew; Jacobs, Daniel; Aguirre, James

    2012-07-01

    Telescopes aiming to measure 21 cm emission from the Epoch of Reionization must toe a careful line, balancing the need for raw sensitivity against the stringent calibration requirements for removing bright foregrounds. It is unclear what the optimal design is for achieving both of these goals. Via a pedagogical derivation of an interferometer's response to the power spectrum of 21 cm reionization fluctuations, we show that even under optimistic scenarios first-generation arrays will yield low-signal-to-noise detections, and that different compact array configurations can substantially alter sensitivity. We explore the sensitivity gains of array configurations that yield high redundancy in the uv plane, configurations that have been largely ignored since the advent of self-calibration for high-dynamic-range imaging. We first introduce a mathematical framework to generate optimal minimum-redundancy configurations for imaging. We contrast the sensitivity of such configurations with high-redundancy configurations, finding that high-redundancy configurations can improve power-spectrum sensitivity by more than an order of magnitude. We explore how high-redundancy array configurations can be tuned to various angular scales, enabling array sensitivity to be directed away from regions of the uv plane (such as the origin) where foregrounds are brighter and instrumental systematics are more problematic. We demonstrate that a 132-antenna deployment of the Precision Array for Probing the Epoch of Reionization observing for 120 days in a high-redundancy configuration will, under ideal conditions, have the requisite sensitivity to detect the power spectrum of the 21 cm signal from reionization at a 3σ level at k < 0.25 h Mpc⁻¹ in a bin of Δ ln k = 1. We discuss the tradeoffs of low- versus high-redundancy configurations.

  9. INTERPRETING THE GLOBAL 21-cm SIGNAL FROM HIGH REDSHIFTS. II. PARAMETER ESTIMATION FOR MODELS OF GALAXY FORMATION

    SciTech Connect

    Mirocha, Jordan; Burns, Jack O.; Harker, Geraint J. A.

    2015-11-01

    Following our previous work, which related generic features in the sky-averaged (global) 21-cm signal to properties of the intergalactic medium, we now investigate the prospects for constraining a simple galaxy formation model with current and near-future experiments. Markov-Chain Monte Carlo fits to our synthetic data set, which includes a realistic galactic foreground, a plausible model for the signal, and noise consistent with 100 hr of integration by an ideal instrument, suggest that a simple four-parameter model that links the production rate of Lyα, Lyman-continuum, and X-ray photons to the growth rate of dark matter halos can be well-constrained (to ∼0.1 dex in each dimension) so long as all three spectral features expected to occur between 40 ≲ ν/MHz ≲ 120 are detected. Several important conclusions follow naturally from this basic numerical result, namely that measurements of the global 21-cm signal can in principle (i) identify the characteristic halo mass threshold for star formation at all redshifts z ≳ 15, (ii) extend z ≲ 4 upper limits on the normalization of the X-ray luminosity star formation rate (L_X–SFR) relation out to z ∼ 20, and (iii) provide joint constraints on stellar spectra and the escape fraction of ionizing radiation at z ∼ 12. Though our approach is general, the importance of a broadband measurement renders our findings most relevant to the proposed Dark Ages Radio Explorer, which will have a clean view of the global 21-cm signal from ∼40 to 120 MHz from its vantage point above the radio-quiet, ionosphere-free lunar far-side.

  10. INVISIBLE ACTIVE GALACTIC NUCLEI. II. RADIO MORPHOLOGIES AND FIVE NEW H i 21 cm ABSORPTION LINE DETECTORS

    SciTech Connect

    Yan, Ting; Stocke, John T.; Darling, Jeremy; Momjian, Emmanuel; Sharma, Soniya; Kanekar, Nissim

    2016-03-15

    This is the second paper directed toward finding new highly redshifted atomic and molecular absorption lines at radio frequencies. To this end, we selected a sample of 80 candidates for obscured radio-loud active galactic nuclei (AGNs) and presented their basic optical/near-infrared (NIR) properties in Paper I. In this paper, we present both high-resolution radio continuum images for all of these sources and H i 21 cm absorption spectroscopy for a few selected sources in this sample. A-configuration 4.9 and 8.5 GHz Very Large Array continuum observations find that 52 sources are compact or have substantial compact components with size <0.″5 and flux densities >0.1 Jy at 4.9 GHz. The 36 most compact sources were then observed with the Very Long Baseline Array at 1.4 GHz. One definite and 10 candidate Compact Symmetric Objects (CSOs) are newly identified, a CSO detection rate ∼3 times higher than that previously found in purely flux-limited samples. Based on possessing compact components with high flux densities, 60 of these sources are good candidates for absorption-line searches. Twenty-seven sources were observed for H i 21 cm absorption at their photometric or spectroscopic redshifts with only six detections (five definite and one tentative). However, five of these were from a small subset of six CSOs with pure galaxy optical/NIR spectra (i.e., any AGN emission is obscured) and for which accurate spectroscopic redshifts place the redshifted 21 cm line in a radio frequency interference (RFI)-free spectral “window” (i.e., the percentage of H i 21 cm absorption-line detections could be as high as ∼90% in this sample). It is likely that the presence of ubiquitous RFI and the absence of accurate spectroscopic redshifts preclude H i detections in similar sources (only 1 detection out of the remaining 22 sources observed, 13 of which have only photometric redshifts); that is, H i absorption may well be present but is masked by

  11. Multi-redshift limits on the 21cm power spectrum from PAPER 64: XRays in the early universe

    NASA Astrophysics Data System (ADS)

    Kolopanis, Matthew; Jacobs, Danny; PAPER Collaboration

    2016-06-01

    Here we present new constraints on 21 cm emission from cosmic reionization from the 64-element deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). These results extend the single-redshift (z = 8.4) result presented in Ali et al. (2015) to include redshifts from 7.3 to 10.9. These new limits offer as much as a factor of 4 improvement in sensitivity compared to the previous 32-element PAPER results of Jacobs et al. (2015). Using these limits, we place constraints on a parameterized model of heating due to X-rays emitted by early collapsed objects.

  12. The TMS Map Scales with Increased Stimulation Intensity and Muscle Activation.

    PubMed

    van de Ruit, Mark; Grey, Michael J

    2016-01-01

    One way to study cortical organisation, or its reorganisation, is to use transcranial magnetic stimulation (TMS) to construct a map of corticospinal excitability. TMS maps are reported to be acquired with a wide variety of stimulation intensities and levels of muscle activation. Whilst motor evoked potentials (MEPs) are known to increase with both stimulation intensity and muscle activation, it remains to be established what effect these factors have on the map's centre of gravity (COG), area, volume and shape. Therefore, the objective of this study was to systematically examine the effect of stimulation intensity and muscle activation on these four key map outcome measures. In a first experiment, maps were acquired with a stimulation intensity of 110, 120 and 130% of resting threshold. In a second experiment, maps were acquired at rest and at 5, 10, 20 and 40% of maximum voluntary contraction. Map area and map volume increased with both stimulation intensity (P < 0.01) and muscle activation (P < 0.01). Neither the COG nor the map shape changed with either stimulation intensity or muscle activation (P > 0.09 in all cases). This result indicates that the map simply scales with stimulation intensity and muscle activation.
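
    The map outcome measures can be made concrete with a small sketch: area, volume, and amplitude-weighted COG computed from a hypothetical 5 × 5 grid of mean MEP amplitudes (the grid, units, and threshold are invented; shape metrics are omitted).

```python
# Illustrative computation of TMS map measures (area, volume, COG)
# from a made-up grid of mean MEP amplitudes.
import numpy as np

def map_measures(mep, x, y, threshold=0.05):
    """mep: (ny, nx) mean MEP amplitude at each stimulation site (mV)."""
    active = mep > threshold
    area = int(active.sum())                     # number of active sites
    volume = mep[active].sum()                   # summed amplitude over the map
    xx, yy = np.meshgrid(x, y)
    cog_x = (xx * mep).sum() / mep.sum()         # amplitude-weighted centre
    cog_y = (yy * mep).sum() / mep.sum()
    return area, volume, (cog_x, cog_y)

x = y = np.arange(5)                             # 5 x 5 grid of sites
mep = np.zeros((5, 5))
mep[1:4, 1:4] = [[0.1, 0.2, 0.1],
                 [0.2, 0.8, 0.2],
                 [0.1, 0.2, 0.1]]
area, volume, cog = map_measures(mep, x, y)
print(area, volume, cog)                         # COG sits at the grid centre (2, 2)
```

    A "scaling" map, as reported above, would show area and volume growing with intensity while the COG stays fixed.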

  13. Modified Mercalli Intensity Maps for the 1868 Hayward Earthquake Plotted in ShakeMap Format

    USGS Publications Warehouse

    Boatwright, John; Bundock, Howard

    2008-01-01

    To construct the Modified Mercalli Intensity (MMI) ShakeMap for the 1868 Hayward earthquake, we started with two sets of damage descriptions and felt reports. The first set of 100 sites was compiled by A.A. Bullock in the Lawson (1908) report on the 1906 San Francisco earthquake. The second set of 45 sites was compiled by Toppozada et al. (1981) from an extensive search of newspaper archives. We supplemented these two sets of reports with new observations from 30 sites using surveys of cemetery damage, reports of damage to historic adobe structures, pioneer narratives, and reports from newspapers that Toppozada et al. (1981) did not retrieve. The Lawson (1908) and Toppozada et al. (1981) compilations and our contributions are assembled in the Site List.

  14. What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization

    SciTech Connect

    Pober, Jonathan C.; Morales, Miguel F.; Liu, Adrian; McQuinn, Matthew; Parsons, Aaron R.; Dillon, Joshua S.; Hewitt, Jacqueline N.; Tegmark, Max; Aguirre, James E.; Bowman, Judd D.; Jacobs, Daniel C.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Werthimer, Dan J.

    2014-02-20

    A number of experiments are currently working toward a measurement of the 21 cm signal from the epoch of reionization (EoR). Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by the next generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We focus primarily on an instrument modeled after the ∼0.1 km² collecting area Hydrogen Epoch of Reionization Array concept design and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described 'wedge' footprint in k space. Uncertainties in the reionization history are accounted for using a series of simulations that vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the intergalactic medium. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that 0.1 km² of collecting area is enough to ensure a very high significance (≳30σ) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented.

  15. Simulating the z = 3.35 HI 21-cm Visibility Signal for the Ooty Wide Field Array (OWFA)

    NASA Astrophysics Data System (ADS)

    Chatterjee, Suman; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2017-03-01

    The upcoming Ooty Wide Field Array (OWFA) will operate at 326.5 MHz, which corresponds to the redshifted 21-cm signal from neutral hydrogen (HI) at z = 3.35. We present two different prescriptions to simulate this signal and calculate the visibilities expected in radio-interferometric observations with OWFA. In the first method we use an input model for the expected 21-cm power spectrum to directly simulate different random realizations of the brightness temperature fluctuations and calculate the visibilities. This method, which models the HI signal entirely as diffuse radiation, is completely oblivious to the discrete nature of the astrophysical sources which host the HI. While each discrete source subtends an angle that is much smaller than the angular resolution of OWFA, the velocity structure of the HI inside the individual sources is well within the reach of OWFA's frequency resolution, and this is expected to have an impact on the observed HI signal. The second prescription is based on cosmological N-body simulations. Here we identify each simulation particle with a source that hosts the HI, and we have the freedom to implement any desired line profile for the HI emission from the individual sources. Implementing a simple model for the line profile, we have generated several random realizations of the complex visibilities. Correlations between the visibilities measured at different baselines and channels provide a unique method to quantify the statistical properties of the HI signal. We have used this to quantify the results of our simulations, and to explore the relation between the expected visibility correlations and the underlying HI power spectrum.
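
    The first prescription amounts to drawing Gaussian random realizations of the brightness-temperature field from an input power spectrum. A 2D toy version, with an arbitrary power-law P(k) and only a rough normalization (the real simulations are 3D and tied to OWFA's geometry), might look like:

```python
# Sketch: Gaussian random realization of a field with a given power spectrum,
# built by scaling white noise in Fourier space. Normalization is schematic.
import numpy as np

def gaussian_field(n, box, pk, seed=0):
    """Random 2D field realization with (approximate) power spectrum pk(k)."""
    rng = np.random.default_rng(seed)
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
    k = np.sqrt(kx**2 + ky**2)
    amp = np.zeros_like(k)
    amp[k > 0] = np.sqrt(pk(k[k > 0]) / 2.0)      # zero out the k = 0 (mean) mode
    white = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amp * white).real * n / box   # rough normalization

field = gaussian_field(64, 100.0, lambda k: k**-2.0)
print(field.shape, np.isfinite(field).all())
```

    Visibilities would then follow by multiplying such a realization by the primary beam and Fourier transforming to the baselines of interest.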

  16. Spatially Extended 21 cm Signal from Strongly Clustered Uv and X-Ray Sources in the Early Universe

    NASA Astrophysics Data System (ADS)

    Ahn, Kyungjin; Xu, Hao; Norman, Michael L.; Alvarez, Marcelo A.; Wise, John H.

    2015-03-01

    We present our prediction for the local 21 cm differential brightness temperature (δTb) from a set of strongly clustered sources of Population III (Pop III) and II (Pop II) objects in the early universe, by a numerical simulation of their formation and radiative feedback. These objects are located inside a highly biased environment, which is a rare, high-density peak (“Rarepeak”) extending to ∼7 comoving Mpc. We study the impact of ultraviolet and X-ray photons on the intergalactic medium (IGM) and the resulting δTb, when Pop III stars are assumed to emit X-ray photons by forming X-ray binaries very efficiently. We parameterize the rest-frame spectral energy distribution of X-ray photons, which regulates X-ray photon-trapping, IGM-heating, secondary Lyα pumping and the resulting morphology of δTb. A combination of emission (δTb > 0) and absorption (δTb < 0) regions appears in varying amplitudes and angular scales. The boost of the signal by the high-density environment (δ ∼ 0.64) and on a relatively large scale combines to make Rarepeak a discernible, spatially extended (θ ∼ 10′) object for 21 cm observation at 13 ≲ z ≲ 17, which is found to be detectable as a single object by SKA with integration time of ∼1000 hr. Power spectrum analysis by some of the SKA precursors (Low Frequency Array, Murchison Widefield Array, Precision Array for Probing the Epoch of Reionization) of such rare peaks is found to be difficult due to the rarity of these peaks, and the contribution only by these rare peaks to the total power spectrum remains subdominant compared to that by all astrophysical sources.

  17. Mapping tillage intensity by integrating multiple remote sensing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tillage practices play an important role in the sustainable agriculture system. Conservative tillage practice can help to reduce soil erosion, increase soil fertility and improve water quality. Tillage practices could be applied at different times with different intensity depending on the local weat...

  18. A Search for Mass Loss on the Cepheid Instability Strip using H i 21 cm Line Observations

    NASA Astrophysics Data System (ADS)

    Matthews, L. D.; Marengo, M.; Evans, N. R.

    2016-12-01

    We present the results of a search for H i 21 cm line emission from the circumstellar environments of four Galactic Cepheids (RS Pup, X Cyg, ζ Gem, and T Mon) based on observations with the Karl G. Jansky Very Large Array. The observations were aimed at detecting gas associated with previous or ongoing mass loss. Near the long-period Cepheid T Mon, we report the detection of a partial shell-like structure whose properties appear consistent with originating from an earlier epoch of Cepheid mass loss. At the distance of T Mon, the nebula would have a mass (H i+He) of ˜0.5 M⊙, or ˜6% of the stellar mass. Assuming that one-third of the nebular mass comprises swept-up interstellar gas, we estimate an implied mass-loss rate of Ṁ ˜ (0.6–2) × 10⁻⁵ M⊙ yr⁻¹. No clear signatures of circumstellar emission were found toward ζ Gem, RS Pup, or X Cyg, although in each case, line-of-sight confusion compromised portions of the spectral band. For the undetected stars, we derive model-dependent 3σ upper limits on the mass-loss rates, averaged over their lifetimes on the instability strip, of ≲(0.3–6) × 10⁻⁶ M⊙ yr⁻¹ and estimate the total amount of mass lost to be less than a few percent of the stellar mass.

  19. Effects of Antenna Beam Chromaticity on Redshifted 21 cm Power Spectrum and Implications for Hydrogen Epoch of Reionization Array

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Parsons, Aaron R.; DeBoer, David R.; Bowman, Judd D.; Ewall-Wice, Aaron M.; Neben, Abraham R.; Patra, Nipanjana

    2016-07-01

    Unaccounted for systematics from foregrounds and instruments can severely limit the sensitivity of current experiments from detecting redshifted 21 cm signals from the Epoch of Reionization (EoR). Upcoming experiments are faced with a challenge to deliver more collecting area per antenna element without degrading the data with systematics. This paper and its companions show that dishes are viable for achieving this balance using the Hydrogen Epoch of Reionization Array (HERA) as an example. Here, we specifically identify spectral systematics associated with the antenna power pattern as a significant detriment to all EoR experiments which causes the already bright foreground power to leak well beyond ideal limits and contaminate the otherwise clean EoR signal modes. A primary source of this chromaticity is reflections in the antenna-feed assembly and between structures in neighboring antennas. Using precise foreground simulations taking wide-field effects into account, we provide a generic framework to set cosmologically motivated design specifications on these reflections to prevent further EoR signal degradation. We show that HERA will not be impeded by such spectral systematics and demonstrate that even in a conservative scenario that does not perform removal of foregrounds, HERA will detect the EoR signal in line-of-sight k-modes, k∥ ≳ 0.2 h Mpc⁻¹, with high significance. Under these conditions, all baselines in a 19-element HERA layout are capable of detecting EoR over a substantial observing window on the sky.

  20. Calibration of the EDGES High-band Receiver to Observe the Global 21 cm Signature from the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Monsalve, Raul A.; Rogers, Alan E. E.; Bowman, Judd D.; Mozdzen, Thomas J.

    2017-01-01

    The EDGES High-Band experiment aims to detect the sky-average brightness temperature of the 21 cm signal from the epoch of reionization in the redshift range 14.8 ≳ z ≳ 6.5. To probe this redshifted signal, EDGES High-Band conducts single-antenna measurements in the frequency range 90–190 MHz from the Murchison Radio-astronomy Observatory in Western Australia. In this paper, we describe the current strategy for calibration of the EDGES High-Band receiver and report calibration results for the instrument used in the 2015–2016 observational campaign. We propagate uncertainties in the receiver calibration measurements to the antenna temperature using a Monte Carlo approach. We define a performance objective of 1 mK residual rms after modeling foreground subtraction from a fiducial temperature spectrum using a five-term polynomial. Most of the calibration uncertainties yield residuals of 1 mK or less at 95% confidence. However, current uncertainties in the antenna and receiver reflection coefficients can lead to residuals of up to 20 mK even in low-foreground sky regions. These dominant residuals could be reduced by (1) improving the accuracy in reflection measurements, especially their phase, (2) improving the impedance match at the antenna-receiver interface, and (3) decreasing the changes with frequency of the antenna reflection phase.
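
    The 1 mK performance objective can be illustrated with a toy version of the residual test: fit a five-term polynomial to a simulated spectrum and take the rms of what remains. The foreground model, its normalization, and the noise level below are invented, and the fit is done in log-log space as one common choice of foreground form.

```python
# Sketch: five-term polynomial foreground fit and residual rms, on a
# made-up power-law foreground plus 1 mK thermal noise.
import numpy as np

nu = np.linspace(90, 190, 200)                     # MHz, EDGES High-Band range
t_fg = 300.0 * (nu / 140.0) ** -2.5                # smooth foreground (K), illustrative
rng = np.random.default_rng(1)
t_sky = t_fg + rng.normal(0, 0.001, nu.size)       # 1 mK thermal noise

# Five-term polynomial fit in log-log space (degree 4 = 5 coefficients).
coef = np.polyfit(np.log(nu), np.log(t_sky), 4)
model = np.exp(np.polyval(coef, np.log(nu)))
resid_mk = 1e3 * (t_sky - model)
print(f"residual rms = {np.std(resid_mk):.2f} mK")
```

    With a perfectly smooth foreground the residual sits at the noise level; the calibration uncertainties discussed above add structure that such a polynomial cannot absorb.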

  1. A Giant Metrewave Radio Telescope search for associated H I 21 cm absorption in high-redshift flat-spectrum sources

    NASA Astrophysics Data System (ADS)

    Aditya, J. N. H. S.; Kanekar, Nissim; Kurapati, Sushma

    2016-02-01

    We report results from a Giant Metrewave Radio Telescope search for `associated' redshifted H I 21 cm absorption from 24 active galactic nuclei (AGNs), at 1.1 < z < 3.6, selected from the Caltech-Jodrell Bank Flat-spectrum (CJF) sample. 22 out of 23 sources with usable data showed no evidence of absorption, with typical 3σ optical depth detection limits of ≈0.01 at a velocity resolution of ≈30 km s⁻¹. A single tentative absorption detection was obtained at z ≈ 3.530 towards TXS 0604+728. If confirmed, this would be the highest redshift at which H I 21 cm absorption has ever been detected. Including 29 CJF sources with searches for redshifted H I 21 cm absorption in the literature, mostly at z < 1, we construct a sample of 52 uniformly selected flat-spectrum sources. A Peto-Prentice two-sample test for censored data finds (at ≈3σ significance) that the strength of H I 21 cm absorption is weaker in the high-z sample than in the low-z sample; this is the first statistically significant evidence for redshift evolution in the strength of H I 21 cm absorption in a uniformly selected AGN sample. However, the two-sample test also finds that the H I 21 cm absorption strength is higher in AGNs with low ultraviolet or radio luminosities, at ≈3.4σ significance. The fact that the higher luminosity AGNs of the sample typically lie at high redshifts implies that it is currently not possible to break the degeneracy between AGN luminosity and redshift evolution as the primary cause of the low H I 21 cm opacities in high-redshift, high-luminosity AGNs.

  2. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA - 4/07

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  3. Mapping and analysing cropland use intensity from a NPP perspective

    NASA Astrophysics Data System (ADS)

    Niedertscheider, Maria; Kastner, Thomas; Fetzel, Tamara; Haberl, Helmut; Kroisleitner, Christine; Plutzar, Christoph; Erb, Karl-Heinz

    2016-01-01

    Meeting expected surges in global biomass demand while protecting pristine ecosystems likely requires intensification of current croplands. Yet many uncertainties relate to the potentials for cropland intensification, mainly because conceptualizing and measuring land use intensity is intricate, particularly at the global scale. We present a spatially explicit analysis of global cropland use intensity, following an ecological energy flow perspective. We analyze (a) changes of net primary production (NPP) from the potential system (i.e. assuming undisturbed vegetation) to croplands around 2000 and relate these changes to (b) inputs of (N) fertilizer and irrigation and (c) to biomass outputs, allowing for a three dimensional focus on intensification. Globally the actual NPP of croplands, expressed as per cent of their potential NPP (NPPact%), amounts to 77%. A mix of socio-economic and natural factors explains the high spatial variation which ranges from 22.6% to 416.0% within the inner 95 percentiles. NPPact% is well below NPPpot in many developing, (Sub-) Tropical regions, while it massively surpasses NPPpot on irrigated drylands and in many industrialized temperate regions. The interrelations of NPP losses (i.e. the difference between NPPact and NPPpot), agricultural inputs and biomass harvest differ substantially between biogeographical regions. Maintaining NPPpot was particularly N-intensive in forest biomes, as compared to cropland in natural grassland biomes. However, much higher levels of biomass harvest occur in forest biomes. We show that fertilization loads correlate with NPPact% linearly, but the relation gets increasingly blurred beyond a level of 125 kgN ha-1. Thus, large potentials exist to improve N-efficiency at the global scale, as only 10% of global croplands are above this level. Reallocating surplus N could substantially reduce NPP losses by up to 80% below current levels and at the same time increase biomass harvest by almost 30%. However, we

  4. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.
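
    The interpolation step can be sketched with a toy inverse-distance weighting scheme. Real ShakeMaps use ground-motion relations and frequency- and amplitude-dependent site corrections, so the stations, values, and amplification factors below are purely illustrative.

```python
# Toy spatial interpolation of station ground motions onto grid points,
# with a multiplicative site correction. All numbers are invented.
import numpy as np

def idw(stations, values, grid, power=2.0):
    """Inverse-distance-weighted interpolation onto grid points."""
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                       # avoid divide-by-zero at stations
    w = d ** -power
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # station coords (km)
pga = np.array([0.30, 0.10, 0.20])                # recorded peak ground accel. (g)
grid = np.array([[0.0, 0.0], [5.0, 5.0]])         # map grid points
site_amp = np.array([1.0, 1.4])                   # geology-based site factors
print(idw(stations, pga, grid) * site_amp)
```

    A grid point on top of a station reproduces that station's value; points between stations get a distance-weighted blend before the site correction is applied.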

  5. H I 21-cm absorption survey of quasar-galaxy pairs: distribution of cold gas around z < 0.4 galaxies

    NASA Astrophysics Data System (ADS)

    Dutta, R.; Srianand, R.; Gupta, N.; Momjian, E.; Noterdaeme, P.; Petitjean, P.; Rahmani, H.

    2017-02-01

    We present the results from our survey of H I 21-cm absorption, using the Giant Metrewave Radio Telescope, Very Large Array and Westerbork Synthesis Radio Telescope, in a sample of 55 z < 0.4 galaxies towards radio sources with impact parameters (b) in the range ˜0-35 kpc. In our primary sample (defined for statistical analyses) of 40 quasar-galaxy pairs, probed by 45 sightlines, we have found seven H I 21-cm absorption detections, two of which are reported here for the first time. Combining our primary sample with measurements having similar optical depth sensitivity (∫τdv ≤ 0.3 km s⁻¹) from the literature, we find a weak anti-correlation (rank correlation coefficient = -0.20 at 2.42σ level) between ∫τdv and b, consistent with previous literature results. The covering factor of H I 21-cm absorbers (C21) is estimated to be 0.24^{+0.12}_{-0.08} at b ≤ 15 kpc and 0.06^{+0.09}_{-0.04} at b = 15-35 kpc. ∫τdv and C21 show a similar declining trend with radial distance along the galaxy's major axis and with distances scaled by the effective H I radius. There is also a tentative indication that most of the H I 21-cm absorbers could be co-planar with the extended H I discs. No significant dependence of ∫τdv and C21 on galaxy luminosity, stellar mass, colour and star formation rate is found, though the H I 21-cm absorbing gas cross-section may be larger for the luminous galaxies. The higher detection rate (by a factor of ˜4) of H I 21-cm absorption in z < 1 damped Lyman-α systems compared to the quasar-galaxy pairs points to a small covering factor and a patchy distribution of cold gas clouds around low-z galaxies.
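
    The quoted anti-correlation is a rank statistic. A sketch with scipy on synthetic (∫τdv, b) pairs looks like the following; note that censored upper limits are simply ignored here, unlike in the survival-analysis (Peto-Prentice style) treatment such samples normally require, and all values are invented.

```python
# Sketch: Spearman rank anti-correlation between integrated optical depth
# and impact parameter, on synthetic data with a built-in declining trend.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
b = rng.uniform(0, 35, 45)                            # impact parameter (kpc)
tau = np.exp(-0.05 * b) * rng.lognormal(0, 0.8, 45)   # mock integrated optical depth
rho, p = spearmanr(tau, b)
print(f"rank correlation = {rho:.2f}, p = {p:.3g}")
```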

  6. The Effects of the Ionosphere on Ground-based Detection of the Global 21 cm Signal from the Cosmic Dawn and the Dark Ages

    NASA Astrophysics Data System (ADS)

    Datta, Abhirup; Bradley, Richard; Burns, Jack O.; Harker, Geraint; Komjathy, Attila; Lazio, T. Joseph W.

    2016-11-01

    Detection of the global H I 21 cm signal from the Cosmic Dawn and the Epoch of Reionization is the key science driver for several ongoing ground-based and future ground-/space-based experiments. The crucial spectral features in the global 21 cm signal (turning points) occur at low radio frequencies, ≲100 MHz. In addition to the human-generated radio frequency interference, Earth’s ionosphere drastically corrupts low-frequency radio observations from the ground. In this paper, we examine the effects of time-varying ionospheric refraction, absorption, and thermal emission at these low radio frequencies and their combined effect on any ground-based global 21 cm experiment. It should be noted that this is the first study of the effect of a dynamic ionosphere on global 21 cm experiments. The fluctuations in the ionosphere are influenced by solar activity with flicker noise characteristics. The same characteristics are reflected in the ionospheric corruption to any radio signal passing through the ionosphere. As a result, any ground-based observations of the faint global 21 cm signal are corrupted by flicker noise (or 1/f noise, where f is the dynamical frequency), which scales as ν^-2 (where ν is the frequency of radio observation) in the presence of a bright Galactic foreground (∝ ν^-s, where s is the radio spectral index). Hence, the calibration of the ionosphere for any such experiment is critical. Any attempt to calibrate the ionospheric effects will be subject to the inaccuracies in current ionospheric measurements from Global Positioning System (GPS) observations, riometer measurements, ionospheric soundings, etc. Even considering an optimistic improvement in the accuracy of GPS total electron content measurements, we conclude that Earth’s ionosphere poses a significant challenge in the absolute detection of the global 21 cm signal below 100 MHz.
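The flicker-noise behaviour described above can be illustrated numerically. The sketch below (not from the paper) generates 1/f noise by shaping white Gaussian noise in Fourier space and then verifies that its power spectrum falls as f^-1; the series length and frequency band are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def flicker_noise(n, rng):
    """1/f (flicker) noise: shape white noise so power ~ 1/f."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at DC
    spec /= np.sqrt(f)               # power ~ 1/f  =>  amplitude ~ f^-1/2
    return np.fft.irfft(spec, n)

n = 2**16
noise = flicker_noise(n, rng)

# Measure the spectral slope of the generated series: should be close to -1
f = np.fft.rfftfreq(n)[1:]
power = np.abs(np.fft.rfft(noise))[1:] ** 2
band = (f > 1e-3) & (f < 1e-1)       # mid-band, away from the spectrum edges
slope = np.polyfit(np.log(f[band]), np.log(power[band]), 1)[0]
print(f"fitted power-spectrum slope: {slope:.2f}")
```

A log-log fit over many bins pins the slope tightly, which is why a single realization suffices here.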

  7. Continuous intensity map optimization (CIMO): a novel approach to leaf sequencing in step and shoot IMRT.

    PubMed

    Cao, Daliang; Earl, Matthew A; Luan, Shuang; Shepard, David M

    2006-04-01

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle3 treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
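The abstract's central idea, simultaneously optimizing aperture weights against a target intensity map with simulated annealing, can be sketched on a toy one-dimensional problem. This is a minimal illustration, not the CIMO algorithm itself; the target map, aperture shapes, and annealing schedule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "intensity map" (one leaf track) and three fixed aperture shapes
# (1 = open, 0 = blocked); the target is reproduced exactly at weights (1, 1, 1).
target = np.array([0., 1., 3., 3., 2., 2., 1., 0.])
apertures = np.array([
    [0, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
], dtype=float)

def cost(w):
    """Squared difference between delivered and target intensity maps."""
    return np.sum((w @ apertures - target) ** 2)

# Simulated annealing over the aperture weights
w = np.zeros(len(apertures))
best_w, best_c = w.copy(), cost(w)
temp = 1.0
for _ in range(5000):
    trial = np.clip(w + rng.normal(0.0, 0.1, size=w.shape), 0.0, None)
    dc = cost(trial) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / temp):
        w = trial
    if cost(w) < best_c:
        best_w, best_c = w.copy(), cost(w)
    temp *= 0.999                    # geometric cooling schedule

print("weights:", np.round(best_w, 2), " residual:", round(best_c, 3))
```

The real algorithm also searches over aperture shapes under multileaf-collimator constraints; this sketch only anneals the weights.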

  8. A Fourth H I 21 cm Absorption System in the Sight Line of MG J0414+0534: A Record for Intervening Absorbers

    NASA Astrophysics Data System (ADS)

    Tanna, A.; Curran, S. J.; Whiting, M. T.; Webb, J. K.; Bignell, C.

    2013-08-01

    We report the detection of a strong H I 21 cm absorption system at z = 0.5344, as well as a candidate system at z = 0.3389, in the sight line toward the z = 2.64 quasar MG J0414+0534. This, in addition to the absorption at the host redshift and the other two intervening absorbers, takes the total to four (possibly five). The previous maximum number of 21 cm absorbers detected along a single sight line is two and so we suspect that this number of gas-rich absorbers is in some way related to the very red color of the background source. Despite this, no molecular gas (through OH absorption) has yet been detected at any of the 21 cm redshifts, although, from the population of 21 cm absorbers as a whole, there is evidence for a weak correlation between the atomic line strength and the optical-near-infrared color. In either case, the fact that so many gas-rich galaxies (likely to be damped Lyα absorption systems) have been found along a single sight line toward a highly obscured source may have far-reaching implications for the population of faint galaxies not detected in optical surveys, a possibility which could be addressed through future wide-field absorption line surveys with the Square Kilometer Array.

  9. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    SciTech Connect

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
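A quick numerical illustration of the foreground-subtraction step: a low-order polynomial fit in log ν absorbs a spectrally smooth power-law foreground (here given an assumed running spectral index) to far below the ~100 mK level of the expected signal. The foreground model and numbers are illustrative, not those of the paper.

```python
import numpy as np

# Synchrotron-like foreground with a running spectral index (assumed toy model):
# smooth in log-log space, so a low-order polynomial in log(nu) should absorb it.
nu = np.linspace(40.0, 120.0, 500)                    # MHz, band around z ~ 20
lognu = np.log(nu / 60.0)
T_fg = 300.0 * np.exp(-2.5 * lognu - 0.1 * lognu**2)  # K; ~10^3-10^4 x the signal

# Fit a fifth-order polynomial in log(nu), as in the paper's foreground model
coeffs = np.polyfit(lognu, np.log(T_fg), 5)
resid = T_fg - np.exp(np.polyval(coeffs, lognu))

rms_mK = 1e3 * np.sqrt(np.mean(resid**2))
print(f"rms foreground residual: {rms_mK:.2e} mK")
```

The residual is negligible compared to a ∼100 mK signal; the paper's point is that the real difficulty comes from the antenna's chromatic response, not from the intrinsic smoothness of the foreground.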

  10. Effect of sound intensity on tonotopic fMRI maps in the unanesthetized monkey.

    PubMed

    Tanji, Kazuyo; Leopold, David A; Ye, Frank Q; Zhu, Charles; Malloy, Megan; Saunders, Richard C; Mishkin, Mortimer

    2010-01-01

    The monkey's auditory cortex includes a core region on the supratemporal plane (STP) made up of the tonotopically organized areas A1, R, and RT, together with a surrounding belt and a lateral parabelt region. The functional studies that yielded the tonotopic maps and corroborated the anatomical division into core, belt, and parabelt typically used low-amplitude pure tones that were often restricted to threshold-level intensities. Here we used functional magnetic resonance imaging in awake rhesus monkeys to determine whether, and if so how, the tonotopic maps and the pattern of activation in core, belt, and parabelt are affected by systematic changes in sound intensity. Blood oxygenation level-dependent (BOLD) responses to groups of low- and high-frequency pure tones 3-4 octaves apart were measured at multiple sound intensity levels. The results revealed tonotopic maps in the auditory core that reversed at the putative areal boundaries between A1 and R and between R and RT. Although these reversals of the tonotopic representations were present at all intensity levels, the lateral spread of activation depended on sound amplitude, with increasing recruitment of the adjacent belt areas as the intensities increased. Tonotopic organization along the STP was also evident in frequency-specific deactivation (i.e. "negative BOLD"), an effect that was intensity-specific as well. Regions of positive and negative BOLD were spatially interleaved, possibly reflecting lateral inhibition of high-frequency areas during activation of adjacent low-frequency areas, and vice versa. These results, which demonstrate the strong influence of tonal amplitude on activation levels, identify sound intensity as an important adjunct parameter for mapping the functional architecture of auditory cortex.

  11. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  12. Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography.

    PubMed

    Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A; White, Thomas A; Spence, John C H; Chapman, Henry N; Barty, Anton

    2014-07-17

    Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure 'three-dimensional merging'. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies.
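The mapping of each detector pixel into three-dimensional reciprocal space follows the Ewald-sphere construction, q = (s_out − s_in)/λ. A minimal sketch, with an assumed detector geometry (wavelength, sample-to-detector distance, and pixel pitch are illustrative, not tied to any instrument):

```python
import numpy as np

wavelength = 1.3e-10      # m
det_dist = 0.1            # m, sample-to-detector distance
pix = 110e-6              # m, pixel pitch

def pixel_to_q(ix, iy, cx=512, cy=512):
    """Reciprocal-space coordinate (1/m) of detector pixel (ix, iy)."""
    x, y = (ix - cx) * pix, (iy - cy) * pix
    r = np.sqrt(x**2 + y**2 + det_dist**2)
    s_out = np.array([x, y, det_dist]) / r    # unit scattered-beam direction
    s_in = np.array([0.0, 0.0, 1.0])          # unit incident-beam direction
    return (s_out - s_in) / wavelength        # Ewald-sphere construction

q = pixel_to_q(812, 512)
print(f"|q| = {np.linalg.norm(q):.3e} 1/m")
```

Accumulating intensities from many single-crystal snapshots into voxels of q (after rotating each q by the indexed crystal orientation) is the "three-dimensional merging" the abstract describes.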

  13. Infrared mapping of ultrasound fields generated by medical transducers: Feasibility of determining absolute intensity levels

    PubMed Central

    Khokhlova, Vera A.; Shmeleva, Svetlana M.; Gavrilov, Leonid R.; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam

    2013-01-01

    Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm2 to at least 50 W/cm2. Significantly higher intensities could be measured simply by reducing the duty cycle. PMID:23927199
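The calibration step, correlating hydrophone-measured intensity with IR-measured temperature rise to get an overall conversion factor, can be sketched as a least-squares fit through the origin. The paired measurements and the "true" factor below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired measurements at the same field points: free-field
# intensity from a hydrophone scan (W/cm^2) and the temperature rise the IR
# camera sees on the thin absorber after a short burst (K).
intensity = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 50.0])
true_factor = 0.8                                  # K per (W/cm^2), assumed
temp_rise = true_factor * intensity * (1 + 0.03 * rng.standard_normal(7))

# Least-squares calibration factor for a line through the origin
factor = np.sum(temp_rise * intensity) / np.sum(intensity**2)

# Convert a fresh IR temperature map (K) into an intensity map (W/cm^2)
temp_map = np.array([[0.08, 0.4], [4.1, 16.0]])
intensity_map = temp_map / factor
print(f"calibration factor: {factor:.3f} K per W/cm^2")
```

With the factor in hand, every pixel of a temperature image converts directly to local time-average intensity, which is the quantitative step the paper adds over earlier qualitative IR mapping.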

  14. Infrared mapping of ultrasound fields generated by medical transducers: feasibility of determining absolute intensity levels.

    PubMed

    Khokhlova, Vera A; Shmeleva, Svetlana M; Gavrilov, Leonid R; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam

    2013-08-01

    Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm2 to at least 50 W/cm2. Significantly higher intensities could be measured simply by reducing the duty cycle.

  15. Holographic beam mapping of the CHIME pathfinder array

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Newburgh, Laura B.; Amiri, Mandana; Bandura, Kevin; Cliche, Jean-François; Connor, Liam; Deng, Meiling; Denman, Nolan; Dobbs, Matt; Fandino, Mateus; Gilbert, Adam J.; Good, Deborah; Halpern, Mark; Hanna, David; Hincks, Adam D.; Hinshaw, Gary; Höfer, Carolin; Johnson, Andre M.; Landecker, Tom L.; Masui, Kiyoshi W.; Mena Parra, Juan; Oppermann, Niels; Pen, Ue-Li; Peterson, Jeffrey B.; Recnik, Andre; Robishaw, Timothy; Shaw, J. Richard; Siegel, Seth; Sigurdson, Kris; Smith, Kendrick; Storer, Emilie; Tretyakov, Ian; Van Gassen, Kwinten; Vanderlinde, Keith; Wiebe, Donald

    2016-08-01

    The Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder radio telescope is currently surveying the northern hemisphere between 400 and 800 MHz. By mapping the large-scale structure of neutral hydrogen through its redshifted 21 cm line emission between z ≈ 0.8-2.5, CHIME will contribute to our understanding of Dark Energy. Bright astrophysical foregrounds must be separated from the neutral hydrogen signal, a task which requires precise characterization of the polarized telescope beams. Using the DRAO John A. Galt 26 m telescope, we have developed a holography instrument and technique for mapping the CHIME Pathfinder beams. We report the status of the instrument and initial results of this effort.

  16. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
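The multi-key idea, running several algorithms in one job by prefixing each intermediate key with an algorithm identifier, can be sketched with a toy in-memory MapReduce. The function names and composite-key scheme here are illustrative, not MRPack's actual API.

```python
from collections import defaultdict

# Two unrelated algorithms sharing one "job": each mapper tags its output with
# an algorithm identifier, so intermediate data never collide in the shuffle.

def wordcount_map(record):
    for w in record.split():
        yield ("wordcount", w), 1

def charcount_map(record):
    yield ("charcount", "chars"), len(record)

def reduce_sum(values):
    return sum(values)

records = ["a b a", "b c"]
intermediate = defaultdict(list)
for rec in records:                                   # single pass over the data
    for mapper in (wordcount_map, charcount_map):     # one job, many algorithms
        for key, value in mapper(rec):
            intermediate[key].append(value)           # shuffle by composite key

results = {key: reduce_sum(vals) for key, vals in intermediate.items()}
print(results)
```

The saving mirrored in the paper comes from reading the input once for all algorithms rather than once per algorithm, at the cost of managing the larger, mixed intermediate data set.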

  17. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  18. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally-derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near real-time intensity maps and compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The using of innovative streaming and pipelining computing paradigms through IBM InfoSphere Streams platform made it possible to read input data in real-time for low-latency computing of combined intensity level and production of combined intensity maps in near-real time. The results compare three types of intensity maps created based on physical, social and combined data sources. 

  19. Giant Metrewave Radio Telescope detection of associated H I 21-cm absorption at z = 1.2230 towards TXS 1954+513

    NASA Astrophysics Data System (ADS)

    Aditya, J. N. H. S.; Kanekar, Nissim; Prochaska, J. Xavier; Day, Brandon; Lynam, Paul; Cruz, Jocelyn

    2017-03-01

    We have used the 610-MHz receivers of the Giant Metrewave Radio Telescope (GMRT) to detect associated H I 21-cm absorption from the z = 1.2230 blazar TXS 1954+513. The GMRT H I 21-cm absorption is likely to arise against either the milliarcsecond-scale core or the one-sided milliarcsecond-scale radio jet, and is blueshifted by ≈328 km s-1 from the blazar redshift. This is consistent with a scenario in which the H I cloud giving rise to the absorption is being driven outwards by the radio jet. The integrated H I 21-cm optical depth is (0.716 ± 0.037) km s-1, implying a high H I column density, N(H I) = (1.305 ± 0.067) × (T_s/100 K) × 10^20 cm-2, for an assumed H I spin temperature of 100 K. We use Nickel Telescope photometry of TXS 1954+513 to infer a high rest-frame 1216 Å luminosity of (4.1 ± 1.2) × 10^23 W Hz-1. The z = 1.2230 absorber towards TXS 1954+513 is only the fifth case of a detection of associated H I 21-cm absorption at z > 1, and is also the first case of such a detection towards an active galactic nucleus (AGN) with a rest-frame ultraviolet (UV) luminosity ≫10^23 W Hz-1, demonstrating that neutral hydrogen can survive in AGN environments in the presence of high UV luminosities.
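The quoted column density follows from the standard optically thin 21-cm relation, N(H I) = 1.823 × 10^18 (T_s/f) ∫τdv, with the spin temperature T_s in K, ∫τdv in km s^-1, and covering factor f. Plugging in the paper's numbers (with f assumed to be unity):

```python
# Standard optically thin 21-cm absorption relation:
#   N_HI [cm^-2] = 1.823e18 * (T_s / f) * integral(tau dv) [km/s]
tau_dv = 0.716      # km/s, integrated optical depth from the paper
T_spin = 100.0      # K, assumed spin temperature
f_cov = 1.0         # covering factor, assumed unity

N_HI = 1.823e18 * (T_spin / f_cov) * tau_dv
print(f"N_HI = {N_HI:.3e} cm^-2")   # matches the quoted 1.305e20 * (T_s/100 K)
```

Note the linear degeneracy: the measurement constrains only N_HI × f/T_s, which is why the result is quoted scaled to T_s = 100 K.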

  20. Constraining the population of radio-loud active galactic nuclei at high redshift with the power spectrum of the 21 cm Forest

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Mesinger, Andrei; Hewitt, Jacqueline N.

    2014-06-01

    The 21 cm forest, the absorption by the intergalactic medium (IGM) towards a high-redshift radio-loud source, is a probe of the thermal state of the IGM. To date, the literature has focused on line-of-sight spectral studies of a single quasar known to have a large redshift. We instead examine many sources in a wide field of view, and show that the imprint from the 21 cm forest absorption of these sources is detectable in the power spectrum. The properties of the power spectrum can reveal information on the population of the earliest radio-loud sources that may have existed during the pre-reionization epoch at z > 10. Using semi-numerical simulations of the IGM and a semi-empirical source population, we show that the 21 cm forest dominates, in a distinctive region of Fourier space, the brightness temperature power spectrum that many contemporary experiments aim to measure. In particular, the forest dominates the diffuse emission on smaller spatial scales along the line of sight. Exploiting this separation, one may constrain the IGM thermal history, such as heating by the first X-ray sources, on large spatial scales and the absorption of radio-loud active galactic nuclei on small ones. Using realistic simulations of noise and foregrounds, we show that planned instruments on the scale of the Hydrogen Epoch of Reionization Array (HERA), with a collecting area of one tenth of a square kilometer, can detect the 21 cm forest in this small-spatial-scale region with high signal-to-noise. We develop an analytic toy model for the signal and explore its detectability over a large range of thermal histories and potential high-redshift source scenarios.

  1. COMPLETE IONIZATION OF THE NEUTRAL GAS: WHY THERE ARE SO FEW DETECTIONS OF 21 cm HYDROGEN IN HIGH-REDSHIFT RADIO GALAXIES AND QUASARS

    SciTech Connect

    Curran, S. J.; Whiting, M. T.

    2012-11-10

    From the first published z ≳ 3 survey of 21 cm absorption within the hosts of radio galaxies and quasars, Curran et al. found an apparent dearth of cool neutral gas at high redshift. From a detailed analysis of the photometry, each object is found to have a λ = 1216 Å continuum luminosity in excess of L_1216 ≈ 10^23 W Hz^-1, a critical value above which 21 cm has never been detected at any redshift. At these wavelengths, and below, hydrogen is excited above the ground state so that it cannot absorb in 21 cm. In order to apply the equation of photoionization equilibrium, we demonstrate that this critical value also applies to the ionizing (λ ≤ 912 Å) radiation. We use this to show, for a variety of gas density distributions, that upon placing a quasar within a galaxy of gas, there is always an ultraviolet luminosity above which all of the large-scale atomic gas is ionized. While in this state, the hydrogen cannot be detected or engage in star formation. Applying the mean ionizing photon rate of all of the sources searched, we find, using canonical values for the gas density and recombination rate coefficient, that the observed critical luminosity gives a scale length (3 kpc) similar to that of the neutral hydrogen (H I) in the Milky Way, a large spiral galaxy. Thus, this simple yet physically motivated model can explain the critical luminosity (L_912 ≈ L_1216 ≈ 10^23 W Hz^-1), above which neutral gas is not detected. This indicates that the non-detection of 21 cm absorption is not due to the sensitivity limits of current radio telescopes, but rather that the lines of sight to the quasars, and probably the bulk of the host galaxies, are devoid of neutral gas.
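The scale-length argument can be checked at the order-of-magnitude level with a Strömgren-type radius from photoionization equilibrium, R = [3Q/(4π n² α_B)]^(1/3). The photon rate and density below are assumed "canonical" values for illustration, not numbers taken from the paper:

```python
import math

Q = 1e56            # ionizing photon rate, s^-1 (assumed, luminous quasar)
n = 10.0            # hydrogen number density, cm^-3 (assumed)
alpha_B = 2.6e-13   # case-B recombination coefficient, cm^3 s^-1 (10^4 K gas)

# Stromgren radius: photon production balances recombinations in the sphere
R_cm = (3.0 * Q / (4.0 * math.pi * n**2 * alpha_B)) ** (1.0 / 3.0)
R_kpc = R_cm / 3.086e21
print(f"ionized region scale: {R_kpc:.1f} kpc")
```

With these assumed inputs the ionized region comes out at roughly 3 kpc, comparable to the H I scale length the abstract cites for the Milky Way; the conclusion scales as Q^(1/3) n^(-2/3), so it is insensitive to modest changes in either value.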

  2. Effect of stimulus intensity level on auditory middle latency response brain maps in human adults.

    PubMed

    Tucker, D A; Dietrich, S; McPherson, D L; Salamat, M T

    2001-05-01

    Auditory middle latency response (AMLR) brain maps were obtained in 11 young adults with normal hearing. AMLR waveforms were elicited with monaural clicks presented at three stimulus intensity levels (50, 70, and 90 dB nHL). Recordings were made for right and left ear stimulus presentations. All recordings were obtained in an eyes open/awake status for each subject. Peak-to-peak amplitudes and absolute latencies of the AMLR Pa and Pb waveforms were measured at the Cz electrode site. Pa and Pb waveforms were present 100 percent of the time in response to the 90 dB nHL presentation. The prevalence of Pa and Pb to the 70 dB nHL presentation varied from 86 to 95 percent. The prevalence of Pa and Pb to the 50 dB nHL stimulus never reached 100 percent, ranging in prevalence from 77 to 68 percent. No significant ear effect was seen for amplitude or latency measures of Pa or Pb. AMLR brain maps of the voltage field distributions of Pa and Pb waveforms showed different topographic features. Scalp topography of the Pa waveform was altered by a reduction in stimulus intensity level. At 90 dB nHL, the Pa brain map showed a large positivity midline over the frontal and central scalp areas. At lower stimulus intensity levels, frontal positivity was reduced, and scalp negativity over occipital regions was increased. Pb scalp topography was also altered by a reduction in stimulus intensity level. Varying the stimulus intensity significantly altered Pa and Pb distributions of amplitude and latency measures. Pa and Pb distributions were skewed regardless of stimulus intensity.

  3. Magnitude, Location, and Ground Motion Estimates Derived From the Community Internet Intensity Maps

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Hattori, M. F.; Ebel, J. E.

    2002-12-01

    As is typical for stable continental region events, the 2002 Au Sable Forks, NY, and Evansville, IN, earthquakes had a dearth of ground motion recordings. In contrast, the USGS collected over 9300 and 6600 Internet responses for these two events, respectively, through the Community Internet Intensity Map (CIIM) Web pages, providing a valuable collection of intensity data. CIIM is an automatic system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. These intensities (CII) have been shown to be comparable to USGS Modified Mercalli Intensities (MMI). Given the CII for an event, we have developed tools to make it possible to generate ground motion estimates in the absence of data from seismic instruments. We compare mean ground motion estimates based on the ShakeMap instrumental intensity relations with values computed from a Bayesian approach, based on combining probabilities of ground motion amplitudes for a given intensity with those for regionally appropriate attenuation relationships. We also present a method for deriving earthquake magnitude and location automatically, updated as a function of time, from online responses based on the algorithm of Bakun and Wentworth. We perform a grid search centered on the area with the highest intensity responses, treat each node as a 'trial epicenter', and determine the magnitude and intensity centroid that best fit the CII observation points according to a region-dependent intensity-distance attenuation relation. We use the M4.9 2002 Gilroy, CA, event to test all these new tools since it was well recorded by strong motion instruments and had an impressive CIIM response. We show that the epicenter and ground motions determined from the CIIM data correlate well with instrumentally derived parameters. We then apply these methods to the Au Sable Forks, NY, and Evansville, IN, events.
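A minimal sketch of the Bakun-Wentworth style grid search described above: each grid node is a 'trial epicenter', the best-fit magnitude is solved per node under an intensity-attenuation relation, and the node with the smallest misfit wins. The attenuation coefficients and observations below are invented for the example, not a calibrated regional relation.

```python
import numpy as np

def predicted_mmi(mag, dist_km):
    """Toy intensity-distance attenuation relation (coefficients assumed)."""
    return 1.0 + 1.5 * mag - 3.0 * np.log10(np.maximum(dist_km, 1.0))

# Synthetic intensity observations from a "true" M5.0 event at (0, 0); km coords.
obs_xy = np.array([[10., 0.], [0., 30.], [-50., 20.], [40., -40.]])
true_m = 5.0
obs_mmi = predicted_mmi(true_m, np.hypot(obs_xy[:, 0], obs_xy[:, 1]))

best = None
for x in np.linspace(-60, 60, 25):           # grid of trial epicenters
    for y in np.linspace(-60, 60, 25):
        dist = np.hypot(obs_xy[:, 0] - x, obs_xy[:, 1] - y)
        # least-squares magnitude at this trial epicenter (invert the relation)
        m = np.mean((obs_mmi - 1.0 + 3.0 * np.log10(np.maximum(dist, 1.0))) / 1.5)
        misfit = np.sum((obs_mmi - predicted_mmi(m, dist)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x, y, m)

misfit, x, y, m = best
print(f"epicenter ~ ({x:.0f}, {y:.0f}) km, M ~ {m:.1f}")
```

The operational method also weights observations and maps misfit contours into confidence regions for the intensity centroid; this sketch keeps only the core search.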

  4. Managing Hardware Configurations and Data Products for the Canadian Hydrogen Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hincks, A. D.; Shaw, J. R.; Chime Collaboration

    2015-09-01

    The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is an ambitious new radio telescope project for measuring cosmic expansion and investigating dark energy. Keeping good records of both the physical configuration of its 1280 antennas and their analogue signal chains, and of the ˜100 TB of data produced daily by its correlator, will be essential to the success of CHIME. In these proceedings we describe the database-driven software we have developed to manage this complexity.

  5. Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly

    DOE PAGES

    Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; ...

    2016-01-29

    Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at $z\\sim 2.4$–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the ${L}_{\\mathrm{IR}}$–${L}_{\\mathrm{CO}}$ relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.

  6. Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly

    SciTech Connect

    Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; Church, Sarah E.

    2016-01-29

    Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at $z\\sim 2.4$–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the ${L}_{\\mathrm{IR}}$–${L}_{\\mathrm{CO}}$ relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.

  7. Computing spatial correlation of ground motion intensities for ShakeMap

    NASA Astrophysics Data System (ADS)

    Verros, Sarah A.; Wald, David J.; Worden, C. Bruce; Hearne, Mike; Ganesh, Mahadevan

    2017-02-01

    Modeling the spatial correlation of ground motion residuals, caused by coherent contributions from source, path, and site, can provide valuable loss and hazard information, as well as a more realistic depiction of ground motion intensities. The U.S. Geological Survey (USGS) software package, ShakeMap, utilizes a deterministic empirical approach to estimate median ground shaking in conjunction with observed seismic data. ShakeMap-based shaking estimates are used in concert with loss estimation algorithms to estimate fatalities and economic losses after significant seismic events around the globe. Incorporating the spatial correlation of ground motion residuals has been shown to improve seismic loss estimates. In particular, Park, Bazzurro, and Baker (Applications of Statistics and Probability in Civil Engineering, 2007) investigated computing spatially correlated random fields of residuals. However, for large-scale ShakeMap grids, the computational requirements of the method are prohibitive. In this work, a memory-efficient algorithm is developed to compute the random fields and implemented using the ShakeMap framework. This new, iterative parallel algorithm is based on decay properties of an associated ground motion correlation function and is shown to significantly reduce computational requirements associated with adding spatial variability to the ShakeMap ground motion estimates. Further, we demonstrate and quantify the impact of adding peak ground motion spatial variability on resulting earthquake loss estimates.
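    For small grids, spatially correlated residual fields like those discussed above can be drawn directly with a dense Cholesky factorization; the cost of exactly this approach on large ShakeMap grids is what motivates the paper's memory-efficient algorithm. A sketch with an assumed isotropic exponential correlation model:

```python
import numpy as np

def correlated_residual_field(n, dx, corr_len, sigma=1.0, seed=0):
    """Draw one realization of spatially correlated ground-motion residuals on
    an n x n grid (spacing dx) with correlation exp(-h/corr_len). Dense
    Cholesky is exact but O(n^6) in the grid dimension, which is why it does
    not scale to large ShakeMap grids."""
    xs, ys = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = sigma ** 2 * np.exp(-h / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return (L @ rng.standard_normal(n * n)).reshape(n, n)
```

Nearby pixels come out strongly correlated and distant ones nearly independent, which is the property that matters for realistic loss estimates.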

  8. Crop Frequency Mapping for Land Use Intensity Estimation During Three Decades

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Tindall, Dan

    2016-08-01

    Crop extent and frequency maps are an important input to inform the debate around land value and competitive land uses, food security and sustainability of agricultural practices. Such spatial datasets are likely to support decisions on natural resource management, planning and policy. The complete Landsat Time Series (LTS) archive for 23 Landsat footprints in western Queensland from 1987 to 2015 was used in a multi-temporal mapping approach. Spatial, spectral and temporal information was combined in multiple crop-modelling steps, supported by on-ground training data sampled across space and time for the classes Crop and No-Crop. Temporal information within summer and winter growing seasons for each year was summarised, and combined with various vegetation indices and band ratios computed from a mid-season spectral-composite image. All available temporal information was spatially aggregated to the scale of image segments in the mid-season composite for each growing season and used to train a random forest classifier for a Crop and No-Crop classification. Validation revealed that the predictive accuracy varied by growing season and region within κ = 0.88 to 0.97, and the maps are thus suitable for mapping current and historic cropping activity. Crop frequency maps were produced for all regions at different time intervals. The crop frequency maps were validated separately with a historic crop information time series. Different land-use intensities and conversions, e.g. from agriculture to pasture, are apparent, and potential drivers of these conversions are discussed.
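    The per-season Crop/No-Crop classifications combine into a crop-frequency map, and the quoted agreement measures are kappa statistics; both are simple to compute. A minimal sketch (the helper names are ours, not from the paper):

```python
import numpy as np

def crop_frequency(season_maps):
    """Per-pixel crop frequency: the number of growing seasons in which a pixel
    was classified Crop. season_maps is a (n_seasons, rows, cols) boolean
    stack of per-season Crop/No-Crop classifications."""
    return np.asarray(season_maps, dtype=bool).sum(axis=0)

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for a binary Crop/No-Crop validation sample, the
    chance-corrected agreement measure quoted above (kappa = 0.88 to 0.97)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    p_obs = np.mean(y_true == y_pred)  # observed agreement
    p_exp = (y_true.mean() * y_pred.mean()
             + (1 - y_true.mean()) * (1 - y_pred.mean()))  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```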

  9. The effect of non-Gaussianity on error predictions for the Epoch of Reionization (EoR) 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman; Bera, Apurba; Acharyya, Ayan

    2015-04-01

    The Epoch of Reionization (EoR) 21-cm signal is expected to become increasingly non-Gaussian as reionization proceeds. We have used seminumerical simulations to study how this affects the error predictions for the EoR 21-cm power spectrum. We expect SNR = √N_k for a Gaussian random field, where N_k is the number of Fourier modes in each k bin. We find that non-Gaussianity is important at high SNR, where it imposes an upper limit [SNR]_l. For a fixed volume V, it is not possible to achieve SNR > [SNR]_l even if N_k is increased. The value of [SNR]_l falls as reionization proceeds, dropping from ˜500 at x̄_HI = 0.8-0.9 to ˜10 at x̄_HI = 0.15 for a [150.08 Mpc]³ simulation. We show that it is possible to interpret [SNR]_l in terms of the trispectrum, and we expect [SNR]_l ∝ √V if the volume is increased. For SNR ≪ [SNR]_l we find SNR = √N_k/A with A ˜ 0.95-1.75, roughly consistent with the Gaussian prediction. We present a fitting formula for the SNR as a function of N_k, with two parameters A and [SNR]_l that have to be determined using simulations. Our results are relevant for predicting the sensitivity of different instruments to measure the EoR 21-cm power spectrum, which to date has largely been based on the Gaussian assumption.
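    A two-parameter model with the limits described above (Gaussian growth √N_k/A at small N_k, saturation at [SNR]_l at large N_k) can be written as a simple harmonic combination. This is an illustrative interpolation with the right limiting behaviour, not necessarily the authors' exact fitting function:

```python
import math

def snr_model(n_k, A, snr_l):
    """Illustrative SNR model interpolating between the Gaussian limit
    sqrt(n_k)/A (small n_k) and the non-Gaussian saturation snr_l (large n_k):
    1/SNR^2 = A^2/n_k + 1/snr_l^2."""
    return 1.0 / math.sqrt(A ** 2 / n_k + 1.0 / snr_l ** 2)
```

Whatever its exact form, any such fit has to be calibrated against simulations, as the abstract notes.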

  10. On using large scale correlation of the Ly-α forest and redshifted 21-cm signal to probe HI distribution during the post reionization era

    SciTech Connect

    Sarkar, Tapomoy Guha; Datta, Kanan K. E-mail: kanan.physics@presiuniv.ac.in

    2015-08-01

    We investigate the possibility of detecting the 3D cross-correlation power spectrum of the Ly-α forest and the HI 21 cm signal from the post-reionization epoch. The cross-correlation signal is directly dependent on the dark matter power spectrum and is sensitive to the 21-cm brightness temperature and Ly-α forest biases; these bias parameters dictate the strength of anisotropy in redshift space. We find that the cross-correlation power spectrum can be detected on large scales, using the linear bias model, with 400 hrs of observation with SKA-mid (phase 1) and a futuristic BOSS-like experiment with a quasar (QSO) density of 30 deg⁻², at a peak SNR of 15 for a single-field experiment at redshift z = 2.5. We also study the possibility of constraining various bias parameters using the cross power spectrum. We find that with the same experiment the 1σ (conditional) errors on the 21-cm linear redshift-space distortion parameter β_T and on β_F corresponding to the Ly-α forest are ∼2.7% and ∼1.4% respectively for 10 independent pointings of SKA-mid (phase 1). This prediction indicates a significant improvement over existing measurements. We claim that the detection of the 3D cross-correlation power spectrum will not only ascertain the cosmological origin of the signal in the presence of astrophysical foregrounds but will also provide stringent constraints on large-scale HI biases. This provides an independent probe towards understanding cosmological structure formation.
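    The 3D cross-correlation the authors target can be estimated on simulated grids with a few FFTs. A minimal sketch of a spherically averaged cross power spectrum estimator (no gridding-window deconvolution, shot-noise subtraction, or foreground treatment; the function name is ours):

```python
import numpy as np

def cross_power(field_a, field_b, box_size, n_bins=8):
    """Spherically averaged cross power spectrum of two fields on an n^3 grid
    in a cubic box of side box_size (e.g. a Ly-alpha flux grid and a 21-cm
    brightness grid). Returns bin-centre k values and binned cross power."""
    n = field_a.shape[0]
    vol = box_size ** 3
    fa = np.fft.fftn(field_a)
    fb = np.fft.fftn(field_b)
    cross = np.real(fa * np.conj(fb)) * vol / n ** 6  # per-mode estimator
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    edges = np.linspace(k1[1] * 0.5, kmag.max(), n_bins + 1)  # excludes k=0
    k_cen, p_cross = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (kmag >= lo) & (kmag < hi)
        if sel.any():
            k_cen.append(kmag[sel].mean())
            p_cross.append(cross[sel].mean())
    return np.array(k_cen), np.array(p_cross)
```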

  11. New limits on 21 cm epoch of reionization from paper-32 consistent with an x-ray heated intergalactic medium at z = 7.7

    SciTech Connect

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, David H. E.; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2014-06-20

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.
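    The delay-spectrum approach mentioned above amounts to Fourier transforming each baseline's visibility along frequency; spectrally smooth foregrounds then stay confined to low delays, leaving higher delays for the EoR signal. A toy illustration (the Blackman taper is a generic stand-in for the array's actual choice of spectral window):

```python
import numpy as np

def delay_spectrum(vis_freq, window=None):
    """Delay transform of a single-baseline visibility spectrum: taper, FFT
    along frequency, and return power per delay mode (fftshifted so zero delay
    sits at the centre). The taper suppresses sidelobes that would otherwise
    scatter smooth-spectrum foreground power to high delays."""
    n = vis_freq.size
    w = np.blackman(n) if window is None else window
    d = np.fft.fftshift(np.fft.fft(vis_freq * w))
    return np.abs(d) ** 2
```

For a perfectly smooth (flat) spectrum, essentially all the power lands in the central delay bins, which is the separation the delay-spectrum technique exploits.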

  12. Alternative Stimulation Intensities for Mapping Cortical Motor Area with Navigated TMS.

    PubMed

    Kallioniemi, Elisa; Julkunen, Petro

    2016-05-01

    Navigated transcranial magnetic stimulation (nTMS) is becoming a popular tool in pre-operative mapping of functional motor areas. The stimulation intensities used in the mapping are commonly suprathreshold intensities with respect to the patient's resting motor threshold (rMT). There is no consensus on which suprathreshold intensity should be used nor on the optimal criteria for selecting the appropriate stimulation intensity (SI). In this study, the left motor cortices of 12 right-handed volunteers (8 males, age 24-61 years) were mapped using motor evoked potentials with an SI of 110 and 120 % of rMT and with an upper threshold (UT) estimated by the Mills-Nithi algorithm. The UT was significantly lower than 120 % of rMT (p < 0.001), while no significant difference was observed between UT and 110 % of rMT (p = 0.112). The representation sizes followed a similar trend, i.e. areas computed based on UT (5.9 cm²) and 110 % of rMT (5.0 cm²) being smaller than that of 120 % of rMT (8.8 cm²) (p ≤ 0.001). There was no difference in representation sizes between 110 % of rMT and UT. The variance in representation size was found to be significantly lower with UT compared to 120 % of rMT (p = 0.048, uncorrected), while there was no difference between 110 % of rMT and UT or 120 % of rMT. Indications of lowest inter-individual variation in representation size were observed with UT; this is possibly due to the fact that it takes into account the individual input-output characteristics of the motor cortex. Therefore, the UT seems to be a good option for SI in motor mapping applications to outline functional motor areas with nTMS and it could potentially reduce the inter-individual variation caused by the selection of SI in motor mapping in pre-surgical applications and radiosurgery planning.

  13. The Properties of Primordial Stars and Galaxies measured from the 21-cm Global Spectrum using the Dark Ages Radio Explorer (DARE)

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Bowman, Judd D.; Bradley, Richard F.; Fialkov, Anastasia; Furlanetto, Steven R.; Jones, Dayton L.; Kasper, Justin; Loeb, Abraham; Mirocha, Jordan; Monsalve, Raul A.; Rapetti, David; Tauscher, Keith; Wollack, Edward

    2017-01-01

    DARE is a mission concept designed to observe the formation of primordial stars, black holes, and galaxies (z=11-35) by measuring their spectral effects on the redshifted 21-cm hydrogen line. The UV and X-ray radiation emitted by these first objects ionized and heated the intergalactic medium and imprinted characteristic features in the 21-cm spectrum. The 1.4 GHz signal is redshifted into the radio band 40-120 MHz. DARE will take advantage of the quietest RF environment in the inner solar system by using the Moon as a shield from human radio frequency interference and solar emissions via observations on the lunar farside. DARE’s science objectives are to determine: when the first stars turned on and their properties, when the first black holes began accreting and their masses, the reionization history of the early Universe, and if evidence exists for exotic physics in the Dark Ages such as Dark Matter decay. Wideband crossed-dipole antennas, pilot-tone-stabilized radiometric receivers, a polarimeter, and a digital spectrometer constitute the science instrument. DARE’s radiometer is precisely calibrated with a featureless spectral response, controlled systematics, and heritage from CMB missions. Models for the instrument main beam and sidelobes, antenna reflection coefficient, gain variations, and calibrations will be validated with electromagnetic simulations, laboratory and anechoic chamber measurements, and verified on-orbit. The unique frequency structure of the 21-cm spectrum, its uniformity over large angular scales, and its unpolarized state are unlike the spectrally featureless, spatially-varying, polarized emission of the bright Galactic foreground, allowing the signal to be cleanly separated from the foreground. The 21-cm signal will be extracted in the presence of foregrounds using a Bayesian framework with a Markov Chain Monte Carlo (MCMC) numerical inference technique. The DARE data analysis pipeline enables efficient, simultaneous, and self

  14. Mapping cropland-use intensity across Europe using MODIS NDVI time series

    NASA Astrophysics Data System (ADS)

    Estel, Stephan; Kuemmerle, Tobias; Levers, Christian; Baumann, Matthias; Hostert, Patrick

    2016-02-01

    Global agricultural production will likely need to increase in the future due to population growth, changing diets, and the rising importance of bioenergy. Intensifying already existing cropland is often considered more sustainable than converting more natural areas. Unfortunately, our understanding of cropping patterns and intensity is weak, especially at broad geographic scales. We characterized and mapped cropping systems in Europe, a region containing diverse cropping systems, using four indicators: (a) cropping frequency (number of cropped years), (b) multi-cropping (number of harvests per year), (c) fallow cycles, and (d) crop duration ratio (actual time under crops) based on the MODIS Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2012. Second, we used these cropping indicators and self-organizing maps to identify typical cropping systems. The resulting six clusters correspond well with other indicators of agricultural intensity (e.g., nitrogen input, yields) and reveal substantial differences in cropping intensity across Europe. Cropping intensity was highest in Germany, Poland, and the eastern European Black Earth regions, characterized by high cropping frequency, multi-cropping and a high crop duration ratio. By contrast, we found the lowest cropping intensity in eastern Europe outside the Black Earth region, characterized by longer fallow cycles. Our approach highlights how satellite image time series can help to characterize spatial patterns in cropping intensity, information that is rarely surveyed on the ground and commonly not included in agricultural statistics. Our clustering approach also shows a way forward to reduce complexity when measuring multiple indicators. The four cropping indicators we used could become part of continental-scale agricultural monitoring in order to identify target regions for sustainable intensification, where trade-offs between intensification and the environment should be explored.

  15. Intensity distribution and isoseismal maps for the Nisqually, Washington, earthquake of 28 February 2001

    USGS Publications Warehouse

    Dewey, James W.; Hopper, Margaret G.; Wald, David J.; Quitoriano, Vincent; Adams, Elizabeth R.

    2002-01-01

    We present isoseismal maps, macroseismic intensities, and community summaries of damage for the MW=6.8 Nisqually, Washington, earthquake of 28 February, 2001. For many communities, two types of macroseismic intensity are assigned, the traditional U.S. Geological Survey Modified Mercalli Intensities (USGS MMI) and a type of intensity newly introduced with this paper, the USGS Reviewed Community Internet Intensity (RCII). For most communities, the RCII is a reviewed version of the Community Internet Intensity (CII) of Wald and others (1999). For some communities, RCII is assigned from such non-CII sources as press reports, engineering reports, and field reconnaissance observations. We summarize differences between procedures used to assign RCII and USGS MMI, and we show that the two types of intensity are nonetheless very similar for the Nisqually earthquake. We do not see evidence for systematic differences between RCII and USGS MMI that would approach one intensity unit, at any level of shaking, but we document a tendency for the RCII to be slightly lower than MMI in regions of low intensity and slightly higher than MMI in regions of high intensity. The highest RCII calculated for the Nisqually earthquake is 7.6, calculated for zip code 98134, which includes the 'south of downtown' (Sodo) area of Seattle and Harbor Island. By comparison, we assigned a traditional USGS MMI 8 to the Sodo area of Seattle. In all, RCII of 6.5 and higher were assigned to 58 zip-code regions. At the lowest intensities, the Nisqually earthquake was felt over an area of approximately 350,000 square km (approximately 135,000 square miles) in Washington, Oregon, Idaho, Montana, and southern British Columbia, Canada. On the basis of macroseismic effects, we infer that shaking in the southern Puget Sound region was somewhat less for the 2001 Nisqually earthquake than for the Puget Sound earthquake of April 13, 1949, which had nearly the same hypocenter and magnitude.
Allowing for differences

  16. The high-redshift star formation history from carbon-monoxide intensity maps

    NASA Astrophysics Data System (ADS)

    Breysse, Patrick C.; Kovetz, Ely D.; Kamionkowski, Marc

    2016-03-01

    We demonstrate how cosmic star formation history can be measured with one-point statistics of carbon-monoxide intensity maps. Using a P(D) analysis, the luminosity function of CO-emitting sources can be inferred from the measured one-point intensity PDF. The star formation rate density (SFRD) can then be obtained, at several redshifts, from the CO luminosity density. We study the effects of instrumental noise, line foregrounds, and target redshift, and obtain constraints on the CO luminosity density of the order of 10 per cent. We show that the SFRD uncertainty is dominated by that of the model connecting CO luminosity and star formation. For pessimistic estimates of this model uncertainty, we obtain an error of the order of 50 per cent on SFRD for surveys targeting redshifts between two and seven with reasonable noise and foregrounds included. However, comparisons between intensity maps and galaxies could substantially reduce this model uncertainty. In this case, our constraints on SFRD at these redshifts improve to roughly 5-10 per cent, which is highly competitive with current measurements.
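    As a toy version of the P(D) idea, one can forward-simulate the one-point intensity PDF from an assumed luminosity function: each map pixel receives a Poisson number of unresolved sources whose luminosities sum to the pixel intensity. The exponential luminosity distribution and function name below are purely illustrative:

```python
import numpy as np

def simulated_pd_histogram(n_pix, n_bar, l_mean, bins=50, seed=0):
    """Monte-Carlo sketch of a one-point intensity PDF: each of n_pix pixels
    gets a Poisson(n_bar) number of sources with exponentially distributed
    luminosities (mean l_mean); the pixel intensity is their sum. Returns the
    per-pixel intensities and a normalized histogram (the simulated P(D))."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(n_bar, size=n_pix)
    intensity = np.array([rng.exponential(l_mean, size=c).sum() for c in counts])
    hist, edges = np.histogram(intensity, bins=bins, density=True)
    return intensity, hist, edges
```

Inverting this mapping, from a measured P(D) back to the luminosity function, is the hard part the paper addresses.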

  17. Using ShakeMap to Map MMI Intensity for the 1868 Hayward, the 1898 Mendocino, and the 1906 San Francisco Earthquakes

    NASA Astrophysics Data System (ADS)

    Bundock, H.; Boatwright, J.

    2007-12-01

    The utility of the ShakeMap format for depicting ground motion and shaking intensity in recent well-recorded earthquakes has led to its use in mapping MMI intensity for large historic earthquakes. But while the mechanics of incorporating intensity sites as ground motion data in ShakeMap are trivial, the constraints of ShakeMap's native MMI scale and the hazards of ShakeMap's interpolation scheme can make implementation for historic earthquakes difficult. We illustrate these difficulties using the MMI ShakeMaps for the 1868 Hayward, the 1898 Mendocino, and the 1906 San Francisco earthquakes as case studies. To compare historic earthquakes with modern earthquakes, it is critical for ShakeMap to be fixed to a single intensity scale. Wald et al. (1997) base ShakeMap intensities on Stover and Coffman's (1993) revision of the MMI scale. Thus, intensities determined by Lawson (1908) and Toppozada et al. (1981) for these historic earthquakes must be re-evaluated before they can be incorporated in ShakeMap. To interpolate intensities in areas without stations, ShakeMap interposes "phantom" stations where the ground motions are estimated from Boore et al.'s (1997) attenuation relations. This hybrid interpolation leads to two problems. First, the ground motions are only regressed for a restricted distance from the fault (rJB ≤ 70 km, where rJB is the Joyner-Boore distance). To extend these relations to rJB ≤ 300 km, we modify these ground motion predictions using the exponential falloff exp(-0.0035 rJB). Surprisingly, this term fits the variation of intensity with distance for all three earthquakes. Second, the source strength can vary significantly along the fault for large earthquakes: interposing an "average" value often produces an artificial variation of the intensity. We damp these variations by minimizing the number of phantom stations, increasing both the threshold distance for the phantom stations and the total number of intensity sites in these areas by using

  18. Dynamic T2-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    NASA Astrophysics Data System (ADS)

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-01

    Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1°C) and dynamic (<5s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T2, since T2 increases linearly in fat during heating. T2-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T2. Calibration of T2-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T2 and temperature with a thermocouple. A positive T2 temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T2-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
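    With the reported linear sensitivity of roughly 20 ms/°C, converting a measured T2 change into a marrow temperature is a one-line calibration. A sketch (the helper name and baseline values are ours, for illustration):

```python
def marrow_temperature(t2_ms, t2_ref_ms, temp_ref_c, dt2_dt=20.0):
    """Convert a measured T2 (ms) to temperature (degC) using a linear
    calibration: dt2_dt is the T2 temperature sensitivity (the paper reports
    ~20 ms/degC for yellow bone marrow), and t2_ref_ms is the T2 measured at
    the known baseline temperature temp_ref_c."""
    return temp_ref_c + (t2_ms - t2_ref_ms) / dt2_dt
```

In a treatment setting the calibration slope and baseline would come from the thermocouple experiment described above.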

  19. Cosmic Structure and Galaxy Evolution through Intensity Mapping of Molecular Gas

    NASA Astrophysics Data System (ADS)

    Bower, Geoffrey C.; Keating, Garrett K.; Marrone, Daniel P.; YT Lee Array Team, SZA Team

    2016-01-01

    The origin and evolution of structure in the Universe is one of the major challenges of observational astronomy. How does baryonic structure trace the underlying dark matter? How have galaxies evolved to produce the present day Universe? A multi-wavelength, multi-tool approach is necessary to provide the complete story of the evolution of structure in the Universe. Intensity mapping, which relies on the ability to detect many objects at once through their integrated emission rather than direct detection of individual objects, is a critical part of this mosaic. In particular, our understanding of the molecular gas component of massive galaxies is being revolutionized by ALMA and EVLA, but the population of smaller, star-forming galaxies, which provide the bulk of star formation, cannot be individually probed by these instruments. In this talk, I will summarize two intensity mapping experiments to detect molecular gas through the carbon monoxide (CO) rotational transition. We have completed sensitive observations with the Sunyaev-Zel'dovich Array (SZA) telescope at a wavelength of 1 cm that are sensitive to emission at redshifts 2.3 to 3.3. The SZA experiment sets strong limits on models for the CO emission and demonstrates the ability to reject foregrounds and telescope systematics in very deep integrations. I also describe the development of an intensity mapping capability for the Y.T. Lee Array, a 13-element interferometer located on Mauna Loa. In its first phase, this project focuses on detection of CO at redshifts 2.4 - 3.0 with detection via power spectrum and cross-correlation with other surveys. The project includes a major technical upgrade, a new digital correlator and IF electronics component to be deployed in 2015/2016. The Y.T. Lee Array observations will be more sensitive and extend to larger angular scales than the SZA observations.

  20. Coastal and estuarine habitat mapping, using LIDAR height and intensity and multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Chust, Guillem; Galparsoro, Ibon; Borja, Ángel; Franco, Javier; Uriarte, Adolfo

    2008-07-01

    The airborne laser scanning LIDAR (LIght Detection And Ranging) provides high-resolution Digital Terrain Models (DTM) that have been applied recently to the characterization, quantification and monitoring of coastal environments. This study assesses the contribution of LIDAR altimetry and intensity data, topographically-derived features (slope and aspect), and multi-spectral imagery (three visible and a near-infrared band), to map coastal habitats in the Bidasoa estuary and its adjacent coastal area (Basque Country, northern Spain). The performance of high-resolution data sources was individually and jointly tested, with the maximum likelihood algorithm classifier in a rocky shore and a wetland zone; thus, including some of the most extended Cantabrian Sea littoral habitats, within the Bay of Biscay. The results show that reliability of coastal habitat classification was more enhanced with LIDAR-based DTM, compared with the other data sources: slope, aspect, intensity or near-infrared band. The addition of the DTM, to the three visible bands, produced gains of between 10% and 27% in the agreement measures, between the mapped and validation data (i.e. mean producer's and user's accuracy) for the two test sites. Raw LIDAR intensity images are only of limited value here, since they appeared heterogeneous and speckled. However, the enhanced Lee smoothing filter, applied to the LIDAR intensity, improved the overall accuracy measurements of the habitat classification, especially in the wetland zone; here, there were gains up to 7.9% in mean producer's and 11.6% in mean user's accuracy. This suggests that LIDAR can be useful for habitat mapping, when few data sources are available. The synergy between the LIDAR data, with multi-spectral bands, produced high accurate classifications (mean producer's accuracy: 92% for the 16 rocky habitats and 88% for the 11 wetland habitats). Fusion of the data enabled discrimination of intertidal communities, such as Corallina elongata

  1. Fisher Matrix-based Predictions for Measuring the z = 3.35 Binned 21-cm Power Spectrum using the Ooty Wide Field Array (OWFA)

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Ali, Sk. Saiyad

    2017-03-01

    We use the Fisher matrix formalism to predict the prospects of measuring the redshifted 21-cm power spectrum in different k-bins using observations with the upcoming Ooty Wide Field Array (OWFA), which will operate at 326.5 MHz. This corresponds to neutral hydrogen (HI) at z = 3.35, and a measurement of the 21-cm power spectrum provides a unique method to probe the large-scale structures at this redshift. Our analysis indicates that a 5σ detection of the binned power spectrum is possible in the k range 0.05 ≤ k ≤ 0.3 Mpc-1 with 1000 hours of observation. We find that the signal-to-noise ratio (SNR) peaks in the k range 0.1-0.2 Mpc-1, where a 10σ detection is possible with 2000 hours of observation. Our analysis also indicates that it is not very advantageous to observe beyond 1000 hours in a single field-of-view, as the SNR increases rather slowly beyond this in many of the small k-bins. The entire analysis reported here assumes that the foregrounds have been completely removed.
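    The qualitative behaviour reported above, SNR growing with observing time but with diminishing returns once the signal term dominates the per-mode variance, can be captured by a toy model in which the noise power averages down with time. The scalings below are illustrative stand-ins, not the OWFA Fisher-matrix numbers:

```python
import math

def bin_snr(p_signal, p_noise, n_modes, t_obs_hr, t0_hr=1000.0):
    """Toy per-bin detection significance: the noise power p_noise (quoted at
    reference time t0_hr) averages down as 1/t_obs, and the bin SNR follows
    sqrt(n_modes/2) * P / (P + P_N). Once P >> P_N, extra hours add little,
    mirroring the saturation the abstract describes."""
    p_n = p_noise * (t0_hr / t_obs_hr)
    return math.sqrt(n_modes / 2.0) * p_signal / (p_signal + p_n)
```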

  2. Expected constraints on models of the epoch of reionization with the variance and skewness in redshifted 21 cm-line fluctuations

    NASA Astrophysics Data System (ADS)

    Kubota, Kenji; Yoshiura, Shintaro; Shimabukuro, Hayato; Takahashi, Keitaro

    2016-08-01

    The redshifted 21 cm-line signal from neutral hydrogen in the intergalactic medium (IGM) gives a direct probe of the epoch of reionization (EoR). In this paper, we investigate the potential of the variance and skewness of the probability distribution function of the 21 cm brightness temperature for constraining EoR models. These statistical quantities are simple, easy to calculate from the observed visibility, and thus suitable for the early exploration of the EoR with current telescopes such as the Murchison Widefield Array (MWA) and LOw Frequency ARray (LOFAR). We show, by performing Fisher analysis, that the variance and skewness at z = 7-9 are complementary to each other to constrain the EoR model parameters such as the minimum virial temperature of halos which host luminous objects, ionizing efficiency, and mean free path of ionizing photons in the IGM. Quantitatively, the constraining power highly depends on the quality of the foreground subtraction and calibration. We give a best case estimate of the constraints on the parameters, neglecting the systematics other than the thermal noise.
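    The two statistics are indeed simple to form once a brightness-temperature map is in hand; a toy example on a mock cube (the lognormal field is purely illustrative, not an EoR simulation):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Toy 21 cm brightness-temperature cube (mK); a real map would be imaged
    # from the observed visibilities.
    dT = rng.lognormal(mean=0.0, sigma=0.5, size=(64, 64, 64)) - 1.0

    variance = dT.var()
    # Third standardized moment; positive for this right-skewed toy field.
    skewness = np.mean((dT - dT.mean()) ** 3) / dT.std() ** 3
    ```

    During reionization the growing ionized bubbles drive the one-point distribution away from Gaussianity, which is why the skewness carries information complementary to the variance.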

  3. Squidpops: A Simple Tool to Crowdsource a Global Map of Marine Predation Intensity

    PubMed Central

    Duffy, J. Emmett; Ziegler, Shelby L.; Campbell, Justin E.; Bippus, Paige M.; Lefcheck, Jonathan S.

    2015-01-01

    We present a simple, standardized assay, the squidpop, for measuring the relative feeding intensity of generalist predators in aquatic systems. The assay consists of a 1.3-cm diameter disk of dried squid mantle tethered to a rod, which is either inserted in the sediment in soft-bottom habitats or secured to existing structure. Each replicate squidpop is scored as present or absent after 1 and 24 hours, and the data for analysis are proportions of replicate units consumed at each time. Tests in several habitats of the temperate southeastern USA (Virginia and North Carolina) and tropical Central America (Belize) confirmed the assay’s utility for measuring variation in predation intensity among habitats, among seasons, and along environmental gradients. In Belize, predation intensity varied strongly among habitats, with reef > seagrass = mangrove > unvegetated bare sand. Quantitative visual surveys confirmed that assayed feeding intensity increased with abundance and species richness of fishes across sites, with fish abundance and richness explaining up to 45% and 70% of the variation in bait loss respectively. In the southeastern USA, predation intensity varied seasonally, being highest during summer and declining in late autumn. Deployments in marsh habitats generally revealed a decline in mean predation intensity from fully marine to tidal freshwater sites. The simplicity, economy, and standardization of the squidpop assay should facilitate engagement of scientists and citizens alike, with the goal of constructing high-resolution maps of how top-down control varies through space and time in aquatic ecosystems, and addressing a broad array of long-standing hypotheses in macro- and community ecology. PMID:26599815

  4. 3D leaf water content mapping using terrestrial laser scanner backscatter intensity with radiometric correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Wang, Tiejun; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Niemann, K. Olaf

    2015-12-01

    Leaf water content (LWC) plays an important role in agriculture and forestry management. It can be used to assess drought conditions and wildfire susceptibility. Terrestrial laser scanner (TLS) data have been widely used in forested environments for retrieving geometrically-based biophysical parameters. Recent studies have also shown the potential of using radiometric information (backscatter intensity) for estimating LWC. However, the usefulness of backscatter intensity data has been limited by leaf surface characteristics, and incidence angle effects. To explore the idea of using LiDAR intensity data to assess LWC we normalized (for both angular effects and leaf surface properties) shortwave infrared TLS data (1550 nm). A reflectance model describing both diffuse and specular reflectance was applied to remove strong specular backscatter intensity at a perpendicular angle. Leaves with different surface properties were collected from eight broadleaf plant species for modeling the relationship between LWC and backscatter intensity. Reference reflectors (Spectralon from Labsphere, Inc.) were used to build a look-up table to compensate for incidence angle effects. Results showed that before removing the specular influences, there was no significant correlation (R2 = 0.01, P > 0.05) between the backscatter intensity at a perpendicular angle and LWC. After the removal of the specular influences, a significant correlation emerged (R2 = 0.74, P < 0.05). The agreement between measured and TLS-derived LWC demonstrated a significant reduction of RMSE (root mean square error, from 0.008 to 0.003 g/cm2) after correcting for the incidence angle effect. We show that it is possible to use TLS to estimate LWC for selected broadleaved plants with an R2 of 0.76 (significance level α = 0.05) at leaf level. Further investigations of leaf surface and internal structure will likely result in improvements of 3D LWC mapping for studying physiology and ecology in vegetation.
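    The look-up-table compensation of incidence-angle effects can be sketched as below; the table values are invented for illustration, not the Spectralon panel measurements used in the study:

    ```python
    import numpy as np

    # Hypothetical look-up table from a reference panel scanned at several
    # incidence angles: relative backscatter intensity vs angle (degrees).
    angles_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
    relative_I = np.array([1.00, 0.96, 0.86, 0.70, 0.50])

    def compensate_incidence(I_raw, theta_deg):
        """Divide out the angular falloff interpolated from the reference LUT."""
        return I_raw / np.interp(theta_deg, angles_deg, relative_I)

    # A leaf return at 45 deg incidence, normalized to normal incidence.
    I_norm = compensate_incidence(I_raw=0.43, theta_deg=45.0)
    ```

    An empirical table sidesteps assuming any particular reflectance law for the leaf surface, which matters here because the specular component violates a simple cosine model.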

  5. Inverse identification of intensity distributions from multiple flux maps in concentrating solar applications

    NASA Astrophysics Data System (ADS)

    Erickson, Ben; Petrasch, Jörg

    2012-06-01

    Radiative flux measurements at the focal plane of solar concentrators are typically performed using digital cameras in conjunction with Lambertian targets. To accurately predict flux distributions on arbitrary receiver geometries, directional information about the radiation is required. Currently, the directional characteristics of solar concentrating systems are predicted via ray tracing simulations; no direct experimental technique to determine the intensities of concentrating solar systems is available. In the current paper, multiple parallel flux measurements at varying distances from the focal plane, together with a linear inverse method and Tikhonov regularization, are used to identify the directional and spatial intensity distribution at the solution plane. The directional binning feature of an in-house Monte Carlo ray tracing program is used to provide a reference solution. The method has been successfully applied to two-dimensional concentrators, namely parabolic troughs and elliptical troughs, using forward Monte Carlo ray tracing simulations that provide the flux maps as well as consistent, associated intensity distributions for validation. In the two-dimensional case, intensity distributions obtained from the inverse method approach the Monte Carlo forward solution. In contrast, the method has not been successful for three-dimensional, circularly symmetric concentrator geometries.
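    The core linear inverse step, Tikhonov-regularized least squares, can be sketched as follows; the forward operator here is a random stand-in for the actual flux-map response matrix:

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Solve argmin ||A x - b||^2 + lam^2 ||x||^2 via the normal equations
        (A^T A + lam^2 I) x = A^T b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

    # Toy forward operator mapping a discretized intensity distribution to
    # flux samples on several measurement planes (stacked as rows of A).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((120, 40))
    x_true = rng.standard_normal(40)
    b = A @ x_true + 0.01 * rng.standard_normal(120)   # noisy flux maps

    x_rec = tikhonov_solve(A, b, lam=0.1)
    ```

    The regularization parameter lam trades fidelity to the noisy flux maps against smoothness of the recovered intensity distribution; it is the ill-conditioning of the 3D circularly symmetric case that defeats this inversion in the paper.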

  6. Measuring Galaxy Clustering and the Evolution of [C II] Mean Intensity with Far-IR Line Intensity Mapping during 0.5 < z < 1.5

    NASA Astrophysics Data System (ADS)

    Uzgil, Bade; Aguirre, James E.; Bradford, Charles; Lidz, Adam

    2016-01-01

    Infrared fine-structure emission lines from trace metals are powerful diagnostics of the interstellar medium in galaxies. We explore the possibility of studying the redshifted far-IR fine-structure line emission using the three-dimensional (3D) power spectra obtained with an imaging spectrometer. The intensity mapping approach measures the spatio-spectral fluctuations due to line emission from all galaxies, including those below the individual detection threshold. The technique provides 3D measurements of galaxy clustering and moments of the galaxy luminosity function. Furthermore, the linear portion of the power spectrum can be used to measure the total line emission intensity including all sources through cosmic time with redshift information naturally encoded. As a case study, we consider measurement of [C II] autocorrelation in the 0.5 < z < 1.5 epoch, where interloper lines are minimized, using far-IR/submillimeter balloon-borne and future space-borne instruments with moderate and high sensitivity, respectively. In this context, we compare the intensity mapping approach to blind galaxy surveys based on individual detections. We find that intensity mapping is nearly always the best way to obtain the total line emission because blind, wide-field galaxy surveys lack sufficient depth and deep pencil beams do not observe enough galaxies in the requisite luminosity and redshift bins. Also, intensity mapping is often the most efficient way to measure the power spectrum shape, depending on the details of the luminosity function and the telescope aperture.

  7. A LiDAR intensity correction model for vertical geological mapping

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Humair, Florian; Matasci, Battista; Abellan, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2015-04-01

    Ground-based LiDAR has been traditionally used for surveying purposes via 3D point clouds. In addition to XYZ coordinates, an intensity attribute is also recorded by LiDAR devices, but this parameter is rarely used for geological applications. The intensity of the backscattered signal can be a significant source of information in different geological applications, such as geological remote mapping of vertical surfaces, mineral exploration, stratigraphy, engineering, etc. However, the intensity value recorded by the LiDAR is a function of several external parameters, so the raw intensity must be corrected before it can be used. This study proposes an intensity correction model which takes into account the range, the incidence angle and the surface roughness, based on the Oren-Nayar reflectance model (Oren and Nayar, 1994). The Oren-Nayar reflectance model is based on the idea that a surface is composed of micro-facets of various slope angles. The simplified version of this model requires only one parameter to characterize a surface: the standard deviation of the slope angle of the facets. Different discrete pulse laser scanners of Optech's ILRIS category were used to understand how the backscattered intensity evolves as a function of range and incidence angle. This was done by carrying out different indoor and outdoor experiments using the following targets: 1) a mobile 2 m² board covered by black/white paper, 2) white plaster corridor walls and 3) natural outcrops. First, we carried out a simple experiment by placing the mobile board at distances ranging from 10 to 1000 meters. Analysis of these datasets revealed that, as expected, the intensity of the backscattered signal decreases with the square of the range to the target. However, both for the walls and the natural outcrops, the influence of the incidence angle appears to be more complex than the classical cosine law due to the roughness
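    A minimal sketch of the first two corrections (range and incidence angle), assuming a simple 1/r² falloff and, in place of the Oren-Nayar model used in the study, a plain Lambertian cosine term; the function name and reference range are illustrative:

    ```python
    import numpy as np

    def correct_intensity(I_raw, r, incidence, r_ref=100.0):
        """First-order radiometric correction of a LiDAR return:
        undo the 1/r^2 range falloff (normalized to r_ref) and a
        Lambertian cos(theta) incidence-angle factor. A rough surface
        needs the Oren-Nayar model instead of the bare cosine."""
        return I_raw * (r / r_ref) ** 2 / np.cos(incidence)

    # A return from 400 m at 60 deg incidence, referred to the 100 m range.
    I_corr = correct_intensity(I_raw=12.0, r=400.0,
                               incidence=np.deg2rad(60.0))
    ```

    The cosine term is exactly the "classical cosine law" the abstract says breaks down on rough outcrops, which motivates the micro-facet correction.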

  8. Intensity mapping cross-correlations: connecting the largest scales to galaxy evolution

    NASA Astrophysics Data System (ADS)

    Wolz, L.; Tonini, C.; Blake, C.; Wyithe, J. S. B.

    2016-05-01

    Intensity mapping of neutral hydrogen (H I) is a new observational tool to efficiently map the large-scale structure over wide redshift ranges. The cross-correlation of intensity maps with galaxy surveys is a robust measure of the cosmological power spectrum and the H I content of galaxies which diminishes systematics caused by instrumental effects and foreground removal. We examine the cross-correlation signature at redshift 0.9 using a semi-analytical galaxy formation model in order to model the H I gas of galaxies as well as their optical magnitudes. We determine the scale-dependent clustering of the cross-correlation power for different types of galaxies determined by their colours, which act as a proxy for their star formation activity. We find that the cross-correlation coefficient with H I density for red quiescent galaxies falls off more quickly on smaller scales k > 0.2 h Mpc-1 than for blue star-forming galaxies. Additionally, we create a mock catalogue of highly star-forming galaxies to mimic the WiggleZ Dark Energy Survey, and use this to predict existing and future measurements using data from the Green Bank and Parkes telescopes. We find that the cross-power of highly star-forming galaxies shows a higher clustering on small scales than any other galaxy type and that this significantly alters the power spectrum shape on scales k > 0.2 h Mpc-1. We show that the cross-correlation coefficient is not negligible when interpreting the cosmological cross-power spectrum and additionally contains information about the H I content of the optically selected galaxies.
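    The cross-correlation coefficient discussed above, r = P_ab/√(P_aa P_bb), can be estimated from two gridded overdensity fields; this sketch collapses all Fourier modes into a single bin for brevity, whereas a real analysis bins in k:

    ```python
    import numpy as np

    def cross_coefficient(delta_a, delta_b):
        """Single-bin cross-correlation coefficient of two 3-D overdensity
        fields on the same grid: r = P_ab / sqrt(P_aa * P_bb)."""
        fa, fb = np.fft.rfftn(delta_a), np.fft.rfftn(delta_b)
        P_ab = (fa * np.conj(fb)).real
        P_aa = np.abs(fa) ** 2
        P_bb = np.abs(fb) ** 2
        # Sum over all modes, skipping the k = 0 (mean) mode.
        return (P_ab.ravel()[1:].sum()
                / np.sqrt(P_aa.ravel()[1:].sum() * P_bb.ravel()[1:].sum()))

    # Two mock tracers sharing a common density field plus independent noise.
    rng = np.random.default_rng(1)
    common = rng.standard_normal((32, 32, 32))
    a = common + 0.3 * rng.standard_normal((32, 32, 32))
    b = common + 0.3 * rng.standard_normal((32, 32, 32))
    r = cross_coefficient(a, b)   # close to 1 for strongly correlated fields
    ```

    The Cauchy-Schwarz inequality bounds r at 1; stochasticity between the H I and the optically selected galaxies pulls it below 1, which is exactly the scale-dependent signature the paper measures.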

  9. Three-dimensional Hydrodynamic Simulations of Multiphase Galactic Disks with Star Formation Feedback. II. Synthetic H I 21 cm Line Observations

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Goo; Ostriker, Eve C.; Kim, Woong-Tae

    2014-05-01

    We use three-dimensional numerical hydrodynamic simulations of the turbulent, multiphase atomic interstellar medium (ISM) to construct and analyze synthetic H I 21 cm emission and absorption lines. Our analysis provides detailed tests of 21 cm observables as physical diagnostics of the atomic ISM. In particular, we construct (1) the "observed" spin temperature, T_s,obs(v_ch) ≡ T_B(v_ch)/[1 − e^(−τ(v_ch))], and its optical-depth weighted mean T_s,obs; (2) the absorption-corrected "observed" column density, N_H,obs ∝ ∫ dv_ch T_B(v_ch) τ(v_ch)/[1 − e^(−τ(v_ch))]; and (3) the "observed" fraction of cold neutral medium (CNM), f_c,obs ≡ T_c/T_s,obs, for T_c the CNM temperature; we compare each observed parameter with true values obtained from line-of-sight (LOS) averages in the simulation. Within individual velocity channels, T_s,obs(v_ch) is within a factor 1.5 of the true value up to τ(v_ch) ~ 10. As a consequence, N_H,obs and T_s,obs are, respectively, within 5% and 12% of the true values for 90% and 99% of LOSs. The optically thin approximation significantly underestimates N_H for τ > 1. Provided that T_c is constrained, an accurate observational estimate of the CNM mass fraction can be obtained down to 20%. We show that T_s,obs cannot be used to distinguish the relative proportions of warm and thermally unstable atomic gas, although the presence of thermally unstable gas can be discerned from 21 cm lines with 200 K ≲ T_s,obs(v_ch) ≲ 1000 K. Our mock observations successfully reproduce and explain the observed distribution of the brightness temperature, optical depth, and spin temperature in Roy et al. The threshold column density for CNM seen in observations is also reproduced by our mock observations. We explain this observed threshold behavior in terms of vertical equilibrium in the local Milky Way's ISM disk.
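    The two "observed" quantities defined in this record can be evaluated channel by channel as follows; a sketch, with the standard optically thin 21 cm conversion constant assumed for the column density normalization:

    ```python
    import numpy as np

    def observed_spin_temperature(T_B, tau):
        """Per-channel 'observed' spin temperature,
        T_s,obs(v) = T_B(v) / (1 - exp(-tau(v)))."""
        return T_B / (1.0 - np.exp(-tau))

    def corrected_column_density(T_B, tau, dv, C=1.823e18):
        """Absorption-corrected H I column density (cm^-2),
        N_H,obs = C * sum(dv * T_B * tau / (1 - exp(-tau))),
        with C the standard 21 cm constant (cm^-2 per K km/s);
        reduces to the optically thin C * sum(dv * T_B) as tau -> 0."""
        return C * np.sum(dv * T_B * tau / (1.0 - np.exp(-tau)))

    # Consistency check: for a single isothermal slab, T_B = T_s (1 - e^-tau),
    # so the correction recovers T_s exactly in every channel.
    tau = np.array([0.1, 1.0, 3.0])
    T_B = 80.0 * (1.0 - np.exp(-tau))
    T_s_obs = observed_spin_temperature(T_B, tau)   # 80 K in all channels
    ```

    For a multiphase line of sight the recovery is no longer exact, which is precisely the accuracy the simulations quantify (factor 1.5 per channel up to τ ~ 10).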

  10. When intensions do not map onto extensions: Individual differences in conceptualization.

    PubMed

    Hampton, James A; Passanisi, Alessia

    2016-04-01

    Concepts are represented in the mind through knowledge of their extensions (the class of items to which the concept applies) and intensions (features that distinguish that class of items). A common assumption among theories of concepts is that the 2 aspects are intimately related. Hence if there is systematic individual variation in concept representation, the variation should correlate between extensional and intensional measures. A pair of individuals with similar extensional beliefs about a given concept should also share similar intensional beliefs. To test this notion, exemplars (extensions) and features (intensions) of common categories were rated for typicality and importance respectively across 2 occasions. Within-subject consistency was greater than between-subjects consensus on each task, providing evidence for systematic individual variation. Furthermore, the similarity structure between individuals for each task was stable across occasions. However, across 5 samples, similarity between individuals for extensional judgments did not map onto similarity between individuals for intensional judgments. The results challenge the assumption common to many theories of conceptual representation that intensions determine extensions and support a hybrid view of concepts where there is a disconnection between the conceptual resources that are used for the 2 tasks.

  11. Measuring galaxy clustering and the evolution of [C II] mean intensity with far-IR line intensity mapping during 0.5 < z < 1.5

    SciTech Connect

    Uzgil, B. D.; Aguirre, J. E.; Lidz, A.; Bradford, C. M.

    2014-10-01

    Infrared fine-structure emission lines from trace metals are powerful diagnostics of the interstellar medium in galaxies. We explore the possibility of studying the redshifted far-IR fine-structure line emission using the three-dimensional (3D) power spectra obtained with an imaging spectrometer. The intensity mapping approach measures the spatio-spectral fluctuations due to line emission from all galaxies, including those below the individual detection threshold. The technique provides 3D measurements of galaxy clustering and moments of the galaxy luminosity function. Furthermore, the linear portion of the power spectrum can be used to measure the total line emission intensity including all sources through cosmic time with redshift information naturally encoded. Total line emission, when compared to the total star formation activity and/or other line intensities, reveals evolution of the interstellar conditions of galaxies in aggregate. As a case study, we consider measurement of [C II] autocorrelation in the 0.5 < z < 1.5 epoch, where interloper lines are minimized, using far-IR/submillimeter balloon-borne and future space-borne instruments with moderate and high sensitivity, respectively. In this context, we compare the intensity mapping approach to blind galaxy surveys based on individual detections. We find that intensity mapping is nearly always the best way to obtain the total line emission because blind, wide-field galaxy surveys lack sufficient depth and deep pencil beams do not observe enough galaxies in the requisite luminosity and redshift bins. Also, intensity mapping is often the most efficient way to measure the power spectrum shape, depending on the details of the luminosity function and the telescope aperture.

  12. Dynamics of Hollow Atom Formation in Intense X-Ray Pulses Probed by Partial Covariance Mapping

    NASA Astrophysics Data System (ADS)

    Frasinski, L. J.; Zhaunerchyk, V.; Mucke, M.; Squibb, R. J.; Siano, M.; Eland, J. H. D.; Linusson, P.; v. d. Meulen, P.; Salén, P.; Thomas, R. D.; Larsson, M.; Foucar, L.; Ullrich, J.; Motomura, K.; Mondal, S.; Ueda, K.; Osipov, T.; Fang, L.; Murphy, B. F.; Berrah, N.; Bostedt, C.; Bozek, J. D.; Schorb, S.; Messerschmidt, M.; Glownia, J. M.; Cryan, J. P.; Coffee, R. N.; Takahashi, O.; Wada, S.; Piancastelli, M. N.; Richter, R.; Prince, K. C.; Feifel, R.

    2013-08-01

    When exposed to ultraintense x-radiation sources such as free electron lasers (FELs) the innermost electronic shell can efficiently be emptied, creating a transient hollow atom or molecule. Understanding the femtosecond dynamics of such systems is fundamental to achieving atomic resolution in flash diffraction imaging of noncrystallized complex biological samples. We demonstrate the capacity of a correlation method called “partial covariance mapping” to probe the electron dynamics of neon atoms exposed to intense 8 fs pulses of 1062 eV photons. A complete picture of ionization processes competing in hollow atom formation and decay is visualized with unprecedented ease and the map reveals hitherto unobserved nonlinear sequences of photoionization and Auger events. The technique is particularly well suited to the high counting rate inherent in FEL experiments.
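    In its simplest form, partial covariance mapping subtracts the correlations induced by a monitored fluctuating parameter (here a mock FEL pulse energy); this is an illustrative sketch, not the authors' analysis code:

    ```python
    import numpy as np

    def partial_covariance_map(X, I):
        """Partial covariance map of shot-resolved spectra X (n_shots, n_bins),
        removing correlations induced by a fluctuating monitor I:
        pcov = cov(X, X) - cov(X, I) cov(I, X) / var(I)."""
        Xc = X - X.mean(axis=0)
        Ic = I - I.mean()
        n = X.shape[0]
        cov_xx = Xc.T @ Xc / n
        cov_xi = Xc.T @ Ic / n            # shape (n_bins,)
        return cov_xx - np.outer(cov_xi, cov_xi) / Ic.var()

    # Mock data: every spectral bin rides on a common pulse-energy fluctuation.
    rng = np.random.default_rng(7)
    pulse = rng.normal(1.0, 0.2, size=2000)
    X = np.outer(pulse, np.ones(8)) + 0.1 * rng.standard_normal((2000, 8))
    pcov = partial_covariance_map(X, pulse)   # common-mode term suppressed
    ```

    The plain covariance map of X is dominated by the common-mode pulse-energy term; the partial map removes it, leaving only the genuine shot-to-shot correlations, which is what lets the method expose correlated ionization events at FEL counting rates.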

  13. Dissecting the High-z Interstellar Medium through Intensity Mapping Cross-correlations

    NASA Astrophysics Data System (ADS)

    Serra, Paolo; Doré, Olivier; Lagache, Guilaine

    2016-12-01

    We explore the detection, with upcoming spectroscopic surveys, of three-dimensional power spectra of emission line fluctuations produced in different phases of the interstellar medium (ISM) by forbidden transitions of ionized carbon [C ii] (157.7 μm), ionized nitrogen [N ii] (121.9 and 205.2 μm), and neutral oxygen [O i] (145.5 μm) at redshift z > 4. These lines are important coolants of both the neutral and the ionized medium, and probe multiple phases of the ISM. In the framework of the halo model, we compute predictions of the three-dimensional power spectra for two different surveys, showing that they have the required sensitivity to detect cross-power spectra between the [C ii] line and both the [O i] line and the [N ii] lines with sufficient signal-to-noise ratio. The importance of cross-correlating multiple lines with the intensity mapping technique is twofold. On the one hand, we will have multiple probes of the different phases of the ISM, which is key to understanding the interplay between energetic sources, and the gas and dust at high redshift. This kind of study will be useful for a next-generation space observatory such as the NASA Far-IR Surveyor, which will probe the global star formation and the ISM of galaxies from the peak of star formation to the epoch of reionization. On the other hand, emission lines from external galaxies are an important foreground when measuring spectral distortions of the cosmic microwave background spectrum with future space-based experiments like PIXIE; measuring fluctuations in the intensity mapping regime will help constrain the mean amplitude of these lines, and will allow us to better handle this important foreground.

  14. Comparing USGS national seismic hazard maps with internet-based macroseismic intensity observations

    NASA Astrophysics Data System (ADS)

    Mak, Sum; Schorlemmer, Danijel

    2016-04-01

    Verifying a nationwide seismic hazard assessment using data collected after the assessment has been made (i.e., prospective data) is a direct consistency check of the assessment. We directly compared the predicted rate of ground motion exceedance by the four available versions of the USGS national seismic hazard map (NSHMP, 1996, 2002, 2008, 2014) with the actual observed rate during 2000-2013. The data were prospective with respect to the two earlier versions of the NSHMP. We used two sets of somewhat independent data, namely 1) the USGS "Did You Feel It?" (DYFI) intensity reports, 2) instrumental ground motion records extracted from ShakeMap stations. Although both are observed data, they come with different degrees of accuracy. Our results indicated that for California, the predicted and observed hazards were very comparable. The two sets of data gave consistent results, implying robustness. The consistency also encourages the use of DYFI data for hazard verification in the Central and Eastern US (CEUS), where instrumental records are lacking. The result showed that the observed ground-motion exceedance was also consistent with that predicted in the CEUS. The primary value of this study is to demonstrate the usefulness of DYFI data, originally designed for community communication rather than scientific analysis, for the purpose of hazard verification.

  15. The use of multibeam backscatter intensity data as a tool for mapping glacial deposits in the Central North Sea, UK

    NASA Astrophysics Data System (ADS)

    Stewart, Heather; Bradwell, Tom

    2014-05-01

    Multibeam backscatter intensity data acquired offshore eastern Scotland and north-eastern England have been used to map drumlin fields, large arcuate moraine ridges, smaller scale moraine ridges, and incised channels on the sea floor. The study area includes the catchments of the previously proposed, but only partly mapped, Strathmore, Forth-Tay, and Tweed palaeo-ice streams. The ice sheet glacial landsystem is extremely well preserved on the sea bed and comprehensive mapping of the seafloor geomorphology has been undertaken. The authors demonstrate the value of utilising not only digital terrain models (both NEXTMap- and multibeam-bathymetry-derived) in geomorphological mapping, but also of examining the backscatter intensity data that are often overlooked. Backscatter intensity maps were generated using FM Geocoder by the British Geological Survey. FM Geocoder corrects the backscatter intensities registered by the multibeam echosounder system, and then geometrically corrects and positions each acoustic sample in a backscatter mosaic. The backscatter intensity data were gridded at the best resolution per dataset (between 2 and 5 m). The strength of the backscattering is dependent upon sediment type, grain size, survey conditions, sea-bed roughness, compaction and slope. A combination of manual interpretation and semi-automated classification of the backscatter intensity data (a predictive method for mapping variations in surficial sea-bed sediments) has been undertaken in the study area. The combination of the two methodologies has produced a robust glacial geomorphological map for the study area. Four separate drumlin fields have been mapped in the study area, indicative of fast-flowing and persistent ice-sheet flow configurations. A number of individual drumlins are also identified outside the fields. The drumlins appear as areas of high backscatter intensity compared to the surrounding sea bed, indicating that the drumlins comprise mixed sediments of

  16. Predicting the intensity mapping signal for multi-J CO lines

    SciTech Connect

    Mashian, Natalie; Loeb, Abraham; Sternberg, Amiel E-mail: amiel@wise.tau.ac.il

    2015-11-01

    We present a novel approach to estimating the intensity mapping signal of any CO rotational line emitted during the Epoch of Reionization (EoR). Our approach is based on large velocity gradient (LVG) modeling, a radiative transfer modeling technique that generates the full CO spectral line energy distribution (SLED) for a specified gas kinetic temperature, volume density, velocity gradient, molecular abundance, and column density. These parameters, which drive the physics of CO transitions and ultimately dictate the shape and amplitude of the CO SLED, can be linked to the global properties of the host galaxy, mainly the star formation rate (SFR) and the SFR surface density. By further employing an empirically derived SFR−M relation for high redshift galaxies, we can express the LVG parameters, and thus the specific intensity of any CO rotational transition, as functions of the host halo mass M and redshift z. Integrating over the range of halo masses expected to host CO-luminous galaxies, we predict a mean CO(1-0) brightness temperature ranging from ∼0.6 μK at z = 6 to ∼0.03 μK at z = 10, with brightness temperature fluctuations of Δ²_CO ∼ 0.1 and 0.005 μK, respectively, at k = 0.1 Mpc⁻¹. In this model, the CO emission signal remains strong for higher rotational levels at z = 6, with ⟨T_CO⟩ ∼ 0.3 and 0.05 μK for the CO J = 6→5 and CO J = 10→9 transitions, respectively. Including the effects of CO photodissociation in these molecular clouds, especially at low metallicities, results in an overall reduction in the amplitude of the CO signal, with the low- and high-J lines weakening by 2-20% and 10-45%, respectively, over the redshift range 4 < z < 10.

  17. Clustering of quintessence on horizon scales and its imprint on HI intensity mapping

    NASA Astrophysics Data System (ADS)

    Duniya, Didam G. A.; Bertacca, Daniele; Maartens, Roy

    2013-10-01

    Quintessence can cluster only on horizon scales. What is the effect on the observed matter distribution? To answer this, we need a relativistic approach that goes beyond the standard Newtonian calculation and deals properly with large scales. Such an approach has recently been developed for the case when dark energy is vacuum energy, which does not cluster at all. We extend this relativistic analysis to deal with dynamical dark energy. Using three quintessence potentials as examples, we compute the angular power spectrum for the case of an HI intensity map survey. Compared to the concordance model with the same small-scale power at z = 0, quintessence boosts the angular power by up to ~ 15% at high redshifts, while power in the two models converges at low redshifts. The difference is mainly due to the background evolution, driven mostly by the normalization of the power spectrum today. The dark energy perturbations make only a small contribution on the largest scales, and a negligible contribution on smaller scales. Ironically, the dark energy perturbations remove the false boost of large-scale power that arises if we impose the (unphysical) assumption that the dark energy is smooth.

  18. Clustering of quintessence on horizon scales and its imprint on HI intensity mapping

    SciTech Connect

    Duniya, Didam G.A.; Bertacca, Daniele; Maartens, Roy E-mail: daniele.bertacca@gmail.com

    2013-10-01

    Quintessence can cluster only on horizon scales. What is the effect on the observed matter distribution? To answer this, we need a relativistic approach that goes beyond the standard Newtonian calculation and deals properly with large scales. Such an approach has recently been developed for the case when dark energy is vacuum energy, which does not cluster at all. We extend this relativistic analysis to deal with dynamical dark energy. Using three quintessence potentials as examples, we compute the angular power spectrum for the case of an HI intensity map survey. Compared to the concordance model with the same small-scale power at z = 0, quintessence boosts the angular power by up to ∼ 15% at high redshifts, while power in the two models converges at low redshifts. The difference is mainly due to the background evolution, driven mostly by the normalization of the power spectrum today. The dark energy perturbations make only a small contribution on the largest scales, and a negligible contribution on smaller scales. Ironically, the dark energy perturbations remove the false boost of large-scale power that arises if we impose the (unphysical) assumption that the dark energy is smooth.

  19. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning.

    PubMed

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-11-07

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity, as an alternative approach to improve dose homogeneity in the target volumes and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single-processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (> 95%), target homogeneity (< 10% overdosing and < 7% underdosing) and organ sparing using at least one of the two models.
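    The piecewise-linear trick can be sketched with a toy epigraph LP: a convex objective (here x², standing in for a clinical objective on the fluence) is replaced by the maximum of a few tangent lines, and that maximum is minimized subject to a simple bound; all numbers are illustrative, not from the paper:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Tangents of f(x) = x^2 at a few support points give the piecewise
    # linear under-approximation f(x) ~ max_k (a_k x + b_k).
    support = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    a = 2.0 * support            # tangent slopes
    b = -support ** 2            # tangent intercepts

    # Variables [x, t]; minimize t subject to a_k x - t <= -b_k
    # (the epigraph constraints t >= a_k x + b_k) and x >= 1.5,
    # a stand-in for a minimum-dose constraint.
    c = [0.0, 1.0]
    A_ub = np.column_stack([a, -np.ones_like(a)])
    b_ub = -b
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(1.5, 4.0), (None, None)])
    x_opt, t_opt = res.x
    ```

    Because the pieces under-approximate the true convex objective, adding more tangents tightens the LP value toward the exact convex optimum, which is how the authors keep general convex objectives inside an LP.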

  20. Global statistical maps of extreme-event magnetic observatory 1 min first differences in horizontal intensity

    USGS Publications Warehouse

    Love, Jeffrey J.; Coisson, Pierdavide; Pulkkinen, Antti

    2016-01-01

    Analysis is made of the long-term statistics of three different measures of ground level, storm time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square of 10 consecutive 1 min differences S, and the ramp change R over 10 min. Geomagnetic latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic 55°, we estimate once-per-century ΔBh, S, and R exceedances and a site-to-site, proportional, 1 standard deviation range [1 σ, lower and upper] to be, respectively, 1000, [690, 1450]; 500, [350, 720]; and 200, [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1 σ values to be 200, [140, 290]; 100, [70, 140]; and 40, [30, 60] nT/min.
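    The return-level idea behind these maps can be illustrated with a purely empirical order-statistic estimate: the level exceeded on average once per return period over the record. The synthetic log-normal series below is an illustrative stand-in, not observatory data, and the paper's once-per-century values additionally require fitting and extrapolating the tail of the distribution rather than reading off an order statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-min horizontal-intensity differences |dBh| (nT/min) spanning
# a 10-year record; heavy-tailed, purely illustrative values.
years = 10.0
dbh = rng.lognormal(mean=0.0, sigma=1.5, size=int(years * 365.25 * 24 * 60))

def exceedance_threshold(once_per_years, samples, record_years):
    """Empirical level exceeded on average once per `once_per_years`:
    the k-th largest sample, with k = record length / return period."""
    k = max(int(round(record_years / once_per_years)), 1)
    return np.sort(samples)[-k]

per_year = exceedance_threshold(1.0, dbh, years)    # exceeded ~10x in record
per_decade = exceedance_threshold(10.0, dbh, years)  # record maximum
```

Rarer events necessarily get higher thresholds, which is the monotone structure the paper's per-year / per-decade / per-century maps encode.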

  1. Global Statistical Maps of Extreme-Event Magnetic Observatory 1 Min First Differences in Horizontal Intensity

    NASA Technical Reports Server (NTRS)

    Love, Jeffrey J.; Coïsson, Pierdavide; Pulkkinen, Antti

    2016-01-01

    Analysis is made of the long-term statistics of three different measures of ground level, storm time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square of 10 consecutive 1 min differences S, and the ramp change R over 10 min. Geomagnetic latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic 55°, we estimate once-per-century ΔBh, S, and R exceedances and a site-to-site, proportional, 1 standard deviation range [1σ, lower and upper] to be, respectively, 1000, [690, 1450]; 500, [350, 720]; and 200, [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1σ values to be 200, [140, 290]; 100, [70, 140]; and 40, [30, 60] nT/min.

  2. Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2016-08-01

    We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ~ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ~ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ~10^-17 W/Hz^1/2. This proceeding presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.

  3. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, Alan M.; Edwards, William R.

    1983-01-01

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  4. Five Years of Citizen Science: Macroseismic Data Collection with the USGS Community Internet Intensity Maps (``Did You Feel It?'')

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Dewey, J. W.; Hopper, M.; Tarr, A.

    2003-12-01

    The U.S. Geological Survey Community Internet Intensity Map (CIIM) is an automatic Web-based system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. The data collection procedure is fundamentally Citizen Science. The vast majority of data are contributed by non-specialists describing their own experiences of earthquakes. Internet data contributed by the public have profoundly changed the approach, coverage and usefulness of intensity observation in the U.S. We now typically receive thousands of individual questionnaire responses for widely felt earthquakes. After five years, these total over 350,000 individual entries nationwide, including entries from all 50 States, the District of Columbia, and the territories of Guam, the Virgin Islands and Puerto Rico. The widespread access and use of online felt reports have added unanticipated but welcome capacities to USGS earthquake reporting. We can more easily validate earthquake occurrence in poorly instrumented regions, identify and locate sonic booms, and readily gauge the societal importance of earthquakes by the nature of the response. In some parts of the U.S., CIIM provides constraints on earthquake magnitudes and focal depths beyond those provided by instrumental data, and the data are robust enough to test regionalized models of ground-motion attenuation. CIIM evokes an enthusiastic response from members of the public who contribute to it; it clearly provides an important opportunity for public education and outreach. In this paper we provide background on the advantages and limitations of online data collection and explore recent developments and improvements to the CIIM system, including improved quality assurance using a relational database and greater data availability for scientific and sociological studies.
We also describe a number of post-processing tools and applications that make use

  5. Pressure pain mapping of the wrist extensors after repeated eccentric exercise at high intensity.

    PubMed

    Delfa de la Morena, José M; Samani, Afshin; Fernández-Carnero, Josué; Hansen, Ernst A; Madeleine, Pascal

    2013-11-01

    The purpose of this study was to investigate adaptation mechanisms after 2 test rounds of eccentric exercise using pressure pain imaging of the wrist extensors. Pressure pain thresholds (PPTs) were assessed over 12 points forming a 3 × 4 matrix over the dominant elbow in 12 participants. From the PPT assessments, pressure pain maps were computed. Delayed onset muscle soreness was induced in an initial test round of high-intensity eccentric exercise. The second test round, performed 7 days later, aimed at resulting in adaptation. The PPTs were assessed before, immediately after, and 24 hours after the 2 test rounds of eccentric exercise. For the first test round, the mean PPT was significantly lower 24 hours after exercise compared with before exercise (389.5 ± 64.1 vs. 500.5 ± 66.4 kPa, respectively; p = 0.02). For the second test round, the PPT was similar before and 24 hours after (447.7 ± 51.3 vs. 458.0 ± 73.1 kPa, respectively; p = 1.0). This study demonstrated adaptive effects in the wrist extensors monitored by a pain imaging technique in healthy untrained humans. A lack of hyperalgesia, i.e., no decrease in PPT, underscored adaptation after the second test round of eccentric exercise performed 7 days after the initial test round. The present findings showed for the first time that repeated eccentric exercise performed twice over 2 weeks protects the wrist extensor muscles from developing exacerbated pressure pain sensitivity. Thus, the addition of eccentric components to training regimens should be considered to induce protective adaptation.

  6. Mapping the spatial patterns of field traffic and traffic intensity to predict soil compaction risks at the field scale

    NASA Astrophysics Data System (ADS)

    Duttmann, Rainer; Kuhwald, Michael; Nolde, Michael

    2015-04-01

    Soil compaction is one of the main threats to cropland soils today. In contrast to easily visible phenomena of soil degradation, however, soil compaction is obscured by other signals such as reduced crop yield, delayed crop growth, and the ponding of water, which makes it difficult to recognize and locate areas impacted by soil compaction directly. Although trafficking intensity is known to be a key factor for soil compaction, until now only modest work has been concerned with mapping the spatially distributed patterns of field traffic and with the visual representation of the loads and pressures applied by farm traffic within single fields. A promising method for the spatial detection and mapping of soil compaction risks in individual fields is to process dGPS data collected from vehicle-mounted GPS receivers and to compare the soil stress induced by farm machinery with the load-bearing capacity derived from given soil map data. The use of position-based machinery data enables the mapping of vehicle movements over time as well as the assessment of trafficking intensity. It also facilitates the calculation of the trafficked area and the modeling of the loads and pressures applied to the soil by individual vehicles. This paper focuses on modeling and mapping the spatial patterns of traffic intensity in silage maize fields during harvest, considering the spatio-temporal changes in wheel load and ground contact pressure along the loading sections. In addition to scenarios calculated for varying mechanical soil strengths, an example of visualizing the three-dimensional stress propagation inside the soil is given, using the Visualization Toolkit (VTK) to construct 2D and 3D maps that support decision making for sustainable field traffic management.
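    The trafficking-intensity step can be sketched as rasterizing GPS fixes, widened by an assumed machine footprint, into a per-cell pass-count map. Everything here (grid, footprint width, track) is hypothetical, and consecutive fixes falling in the same cell are counted separately for simplicity, which the real workflow would have to deduplicate per pass:

```python
import numpy as np

def traffic_intensity(track_xy, width_m, cell_m, extent):
    """Rasterize vehicle GPS fixes into a per-cell pass count.
    `extent` is (x0, y0, x1, y1) in metres; `width_m` approximates
    the wheeled footprint centred on each fix."""
    x0, y0, x1, y1 = extent
    nx = int(np.ceil((x1 - x0) / cell_m))
    ny = int(np.ceil((y1 - y0) / cell_m))
    count = np.zeros((ny, nx), dtype=int)
    half = width_m / 2.0
    for x, y in track_xy:
        # Index range of cells touched by the footprint around this fix.
        i0 = int((y - y0 - half) // cell_m)
        i1 = int((y - y0 + half) // cell_m)
        j0 = int((x - x0 - half) // cell_m)
        j1 = int((x - x0 + half) // cell_m)
        count[max(i0, 0):i1 + 1, max(j0, 0):j1 + 1] += 1
    return count
```

Comparing such a pass-count (or load-weighted) map cell by cell against a soil-strength map is then what flags compaction-risk areas.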

  7. SRS 2010 Vegetation Inventory GeoStatistical Mapping Results for Custom Reaction Intensity and Total Dead Fuels.

    SciTech Connect

    Edwards, Lloyd A.; Paresol, Bernard

    2014-09-01

    This report presents the geostatistical analysis results for the fire fuels response variables, custom reaction intensity and total dead fuels, as part of the SRS 2010 vegetation inventory project. For a detailed description of the project, theory and background, including sample design, methods, and results, please refer to the USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report” (Edwards & Parresol 2013).

  8. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1982-03-23

    A long-lifetime light source is discussed with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  9. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1983-10-11

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision is disclosed. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode. 1 fig.

  10. The USGS "Did You Feel It?" Macroseismic Intensity Maps: Lessons Learned from a Decade of Citizen-Empowered Seismology

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, C. B.; Quitoriano, V. R.; Dewey, J. W.

    2012-12-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity (MI) data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2012). The internet-based interface allows for a two-way path of communication between seismic data providers (scientists) and earthquake information recipients (citizens) by swapping roles: users looking for information from the USGS become data providers to the USGS. This role-reversal presents opportunities for data collection, generation of good will, and further communication and education. In addition, online MI collecting systems like DYFI have greatly expanded the range of quantitative analyses possible with MI data and taken the field of MI in important new directions. The maps are made more quickly, usually provide more complete coverage at higher resolution, and allow data collection at rates and quantities never before considered. Scrutiny of the USGS DYFI data indicates that one-decimal precision is warranted, and web-based geocoding services now permit precise locations. The high-quality, high-resolution, densely sampled MI assignments allow for peak ground motion (PGM) versus MI analyses well beyond earlier studies. For instance, Worden et al. (2011) used large volumes of data to confirm low standard deviations for multiple, proximal DYFI reports near a site, and they used the DYFI observations with PGM data to develop bidirectional, ground motion-intensity conversion equations. Likewise, Atkinson and Wald (2007) and Allen et al. (2012) utilized DYFI data to derive intensity prediction equations directly without intermediate conversion of ground-motion prediction equation metrics to intensity. Both types of relations are important for robust historic and real-time ShakeMaps, among other uses. 
In turn, ShakeMap and DYFI afford ample opportunities to

  11. HI Intensity Mapping: Parkes-2dFGRS and BAO science

    NASA Astrophysics Data System (ADS)

    Li, Yi-Chao; Staveley-Smith, Lister; Pen, Ue-Li; Chang, Tzu-Ching; Peterson, Jeff; Bandura, Kevin; Chen, Xuelei; Wang, Xin; Price, Danny; Montero-Castano, Maria; Anderson, Christopher; Voytek, Tabitha; Masui, Kiyoshi; Switzer, Eric; Wu, Feng-Quan; Timbie, Peter; Liao, Yu-Wei Victor; Li, zhigang

    2012-10-01

    We propose to scan the 2dF survey field with Parkes multibeam in driftscan mode to make a map of cosmological large scale structure. This allows a statistical detection of HI large scale structure out to z=0.15. In this cross correlation, the HI in ALL galaxies contributes, not only the bright ones, which significantly boosts the sensitivity.

  12. First dose-map measured with a polycrystalline diamond 2D dosimeter under an intensity modulated radiotherapy beam

    NASA Astrophysics Data System (ADS)

    Scaringella, M.; Zani, M.; Baldi, A.; Bucciolini, M.; Pace, E.; de Sio, A.; Talamonti, C.; Bruzzi, M.

    2015-10-01

    A prototype bidimensional dosimeter made on a 2.5×2.5 cm² active area polycrystalline Chemical Vapour Deposited (pCVD) diamond film, equipped with a matrix of 12×12 contacts connected to the read-out electronics, has been used to evaluate a dose map under Intensity Modulated Radiation Therapy (IMRT) fields for a possible application in pre-treatment verification of cancer treatments. Tests have been performed under 6-10 MV X-ray beams with IMRT fields for prostate and breast cancer. Measurements have been taken by reading the 144 pixels in different positions, obtained by shifting the device along the x/y axes to span a total map of 14.4×10 cm². Results show that absorbed doses measured by our pCVD diamond device are consistent with those calculated by the Treatment Planning System (TPS).
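    The scanning scheme, covering a wide field by shifting a 12×12 pixel detector and merging the readings, amounts to tile stitching with averaging in overlaps. A minimal sketch, with made-up tile values and offsets rather than measured doses:

```python
import numpy as np

def stitch_dose_map(tiles, offsets, full_shape):
    """Merge repeated 12x12 detector readings taken at known (row, col)
    pixel shifts into one dose map; overlapping pixels are averaged and
    uncovered pixels are left as NaN."""
    acc = np.zeros(full_shape)
    hits = np.zeros(full_shape)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        acc[r:r + h, c:c + w] += tile
        hits[r:r + h, c:c + w] += 1
    return np.divide(acc, hits, out=np.full(full_shape, np.nan),
                     where=hits > 0)

# Two hypothetical 12x12 readings, the second shifted 6 pixels right:
tiles = [np.ones((12, 12)), 3 * np.ones((12, 12))]
stitched = stitch_dose_map(tiles, [(0, 0), (0, 6)], (12, 18))
```

Averaging overlapping readings also gives a rough consistency check between neighbouring detector positions before comparing against the TPS dose.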

  13. When Intensions Do Not Map onto Extensions: Individual Differences in Conceptualization

    ERIC Educational Resources Information Center

    Hampton, James A.; Passanisi, Alessia

    2016-01-01

    Concepts are represented in the mind through knowledge of their extensions (the class of items to which the concept applies) and intensions (features that distinguish that class of items). A common assumption among theories of concepts is that the 2 aspects are intimately related. Hence if there is systematic individual variation in concept…

  14. Imaging-intensive guidance with confirmatory physiological mapping for neurosurgery of movement disorders

    NASA Astrophysics Data System (ADS)

    Nauta, Haring J.; Bonnen, J. G.; Soukup, V. M.; Gonzalez, A.; Schiess, Mya C.

    1998-06-01

    Stereotactic surgery for movement disorders is typically performed using both imaging and physiologic guidance. However, different neurosurgical centers vary in the emphasis placed on either the imaging or the physiological mapping used to locate the target in the brain. The relative ease with which imaging data is acquired currently and the relative complexity and invasiveness associated with physiologic mapping prompted an evaluation of a method that seeks to maximize the imaging component of the guidance in order to minimize the need for the physiologic mapping. The evaluation was carried out in 37 consecutive stereotactic procedures for movement disorders in 28 patients. Imaging was performed with the patients in a stereotactic head frame. Imaging data from MRI in three planes, CT and positive contrast ventriculography was all referenced to this headframe and combined in a stereotactic planning computer. Physiologic definition of the target was performed by macroelectrode stimulation. Any discrepancy between the coordinates of the imaging predicted target and physiologically defined target was measured. The imaging- predicted target coordinates allowed the physiologically defined target to be reached on the first electrode penetration in 70% of procedures and within two penetrations in 92%. The mean error between imaging predicted and physiologically defined target position was 1.24 mm. Lesion location was confirmed by postoperative MRI. There were no permanent complications in this series. Functional outcomes were comparable to those achieved by centers mapping with multiple microelectrode penetrations. The findings suggest that while physiologic guidance remains necessary, the extent to which it is needed can be reduced by acquiring as much imaging information as possible in the initial stages of the procedure. These data can be combined and prioritized in a stereotactic planning computer such that the surgeon can take full advantage of the most reliable

  15. Discovery of 21-cm absorption in a zabs = 2.289 damped Lyman α system towards TXS 0311+430: the first low spin temperature absorber at z > 1

    NASA Astrophysics Data System (ADS)

    York, Brian A.; Kanekar, Nissim; Ellison, Sara L.; Pettini, Max

    2007-11-01

    We report the detection of HI 21-cm absorption from the z = 2.289 damped Lyman α system (DLA) towards TXS 0311+430 with the Green Bank Telescope. The 21-cm absorption has a velocity spread (between nulls) of ~110 km s-1 and an integrated optical depth of . We also present new Giant Metrewave Radio Telescope 602-MHz imaging of the radio continuum. TXS 0311+430 is unresolved at this frequency, indicating that the covering factor of the DLA is likely to be high. Combining the integrated optical depth with the DLA HI column density of yields a spin temperature of Ts = (138 +/- 36) K, assuming a covering factor of unity. This is the first case of a low spin temperature (<350 K) in a z > 1 DLA and is among the lowest temperatures ever measured in any DLA. Indeed, the Ts measured for this DLA is similar to values measured in the Milky Way and local disc galaxies. We also determine a lower limit (Si/H) >~ 1/3 solar for the DLA metallicity, amongst the highest abundances measured in DLAs at any redshift. Based on low-redshift correlations, the low Ts, large 21-cm absorption width and high metallicity all suggest that the z = 2.289 DLA is likely to arise in a massive, luminous disc galaxy.
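    The spin-temperature estimate follows from the standard optically thin 21-cm relation N_HI = 1.823×10^18 (Ts/f) ∫τ dv. Since the measured column density and integrated optical depth are elided in the abstract above, the inputs below are illustrative stand-ins chosen only so the result lands near the reported Ts ≈ 138 K:

```python
# Standard 21-cm relation: N_HI [cm^-2] = 1.823e18 * (Ts [K] / f) * ∫ tau dv [km/s],
# inverted for the spin temperature given a covering factor f.
def spin_temperature(n_hi_cm2, tau_dv_kms, covering_factor=1.0):
    return covering_factor * n_hi_cm2 / (1.823e18 * tau_dv_kms)

# Illustrative inputs (the abstract's measured values are elided here):
ts = spin_temperature(n_hi_cm2=2.0e20, tau_dv_kms=0.8)  # ~137 K
```

Note that Ts scales linearly with the assumed covering factor, which is why the GMRT compactness argument for f ≈ 1 matters for the low-Ts conclusion.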

  16. A special kind of local structure in the CMB intensity maps: duel peak structure

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Ti-Pei

    2009-03-01

    We study the local structure of Cosmic Microwave Background (CMB) temperature maps released by the Wilkinson Microwave Anisotropy Probe (WMAP) team, and find a new kind of structure, which can be described as follows: a peak (or valley) of average temperature is often followed by a peak of temperature fluctuation that is 4° away. This structure is important for the following reasons: both the well known cold spot detected by Cruz et al. and the hot spot detected by Vielva et al. with the same technology (the third spot in their article) have such structure; more spots that are similar to them can be found on CMB maps and they also tend to be significant cold/hot spots; if we change the 4° characteristic into an artificial one, such as 3° or 5°, there will be fewer 'similar spots', and the temperature peaks or valleys will be less significant. The presented 'similar spots' have passed a strict consistency test which requires them to be significant on at least three different CMB temperature maps. We hope that this article will arouse some interest in the relationship of average temperature with temperature fluctuation in local areas; meanwhile, we are also trying to find an explanation for it, which might be important to CMB observation and theory.

  17. Tsunami Intensity Mapping Along the Coast of Tamilnadu (India) During the Deadliest Indian Ocean Tsunami of December 26, 2004

    NASA Astrophysics Data System (ADS)

    Narayan, J. P.; Sharma, M. L.; Maheshwari, B. K.

    2006-07-01

    This paper presents tsunami intensity mapping and damage patterns along the surveyed coast of Tamilnadu (India) for the deadly Indian Ocean tsunami of December 26, 2004. The tsunami caused severe damage and claimed many victims in the coastal areas of eleven countries bordering the Indian Ocean. A twelve-stage tsunami intensity scale proposed by Papadopoulos and Imamura (2001) was followed to assign the intensity at the visited localities. Along the coast of the Indian mainland, damage was caused exclusively by the tsunami. The most severe damage was observed in Nagapattinam Beach, Nabiyarnagar, Vellaipalyam, and the Nagapattinam Port of Nagapattinam District on the east coast and Keelamanakudy village of Kanyakumari District on the western coast of Tamilnadu. The maximum assigned tsunami intensity was X+ at these localities. The minimum intensity, V+, was assigned along the coast of Thanjavur, Puddukkotai and Ramnathpuram Districts on the Palk Strait. The general observation reported by many people was that the first arrival was a tsunami crest. The largest tsunami waves were the first arrivals on the eastern coast and the second arrivals on the western coast. Along the coast, people were unaware of the tsunami, and no anomalous behavior of ocean animals was reported. Good correlation was observed between the severity of damage and the presence of the shadow zone of Sri Lanka, reflected waves from Sri Lanka and the Maldives Islands, variation in the width of the continental shelf, elevation of the coast, and the presence of breakwaters. The presence of medu (naturally elevated landmass very close to the sea shore and elongated parallel to the coast) reduced the impact of the tsunami on the built environment.

  18. From Recollisions to the Knee: A Road Map for Double Ionization in Intense Laser Fields

    SciTech Connect

    Mauger, F.; Chandre, C.; Uzer, T.

    2010-01-29

    We examine the nature and statistical properties of electron-electron collisions in the recollision process in a strong laser field. The separation of the double ionization yield into sequential and nonsequential components leads to a bell-shaped curve for the nonsequential probability and a monotonically rising one for the sequential process. We identify key features of the nonsequential process and connect our findings in a simplified model which reproduces the knee shape for the probability of double ionization with laser intensity and associated trends.

  19. Dynamic T2-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    SciTech Connect

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-28

    Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1 °C) and dynamic (<5 s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T2, since T2 increases linearly in fat during heating. T2-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T2. Calibration of T2-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T2 and temperature with a thermocouple. A positive T2 temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T2-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
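    Given the reported calibration slope of ~20 ms/°C and assuming linearity over the heated range, converting a dynamic T2 measurement into a temperature change is a per-voxel rescaling. The T2 values below are hypothetical:

```python
# Linear T2-based thermometry for fatty tissue such as yellow marrow,
# using the reported calibration slope of ~20 ms/°C (assumed constant
# over the heating range).
T2_SLOPE_MS_PER_C = 20.0

def temperature_rise(t2_ms, t2_baseline_ms, slope=T2_SLOPE_MS_PER_C):
    """Temperature change (°C) inferred from the change in T2 (ms)."""
    return (t2_ms - t2_baseline_ms) / slope

delta_t = temperature_rise(t2_ms=180.0, t2_baseline_ms=100.0)  # 4.0 °C
```

In a full pipeline the same rescaling would be applied voxel-wise to each dynamic T2 map against a pre-sonication baseline map.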

  20. Mapping Farming Practices in Belgian Intensive Cropping Systems from Sentinel-1 SAR Time Series

    NASA Astrophysics Data System (ADS)

    Chome, G.; Baret, P. V.; Defourny, P.

    2016-08-01

    The environmental impact of the so-called conventional farming system calls for new farming practices that reduce negative externalities. Emerging farming practices such as no-till and new inter-cropping management are promising tracks. The development of methods to characterize crop management across an entire region and to understand its spatial dimension offers opportunities to accompany the transition towards a more sustainable agriculture. This research takes advantage of the unmatched polarimetric and temporal resolutions of Sentinel-1 SAR C-band to develop a method to identify farming practices at the parcel level. To this end, the detection of changes in backscattering due to surface roughness modification (tillage, inter-crop cover destruction, ...) is used to detect the farming management. The final results are compared to a reference dataset collected through an intensive field campaign. Finally, the performances are discussed in the perspective of monitoring cropping-system practices through remote sensing.
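    The core detection step, flagging acquisition dates where a parcel's backscatter jumps because tillage or cover destruction changed the surface roughness, can be sketched as a threshold on consecutive differences. Both the 2 dB threshold and the time series are hypothetical, not values from the study:

```python
import numpy as np

def detect_roughness_events(sigma0_db, threshold_db=2.0):
    """Return indices of acquisitions whose parcel-mean backscatter (dB)
    differs from the previous acquisition by more than the threshold,
    i.e. candidate tillage / cover-destruction dates."""
    jumps = np.abs(np.diff(sigma0_db)) > threshold_db
    return np.flatnonzero(jumps) + 1  # +1: index of the later acquisition

# Hypothetical parcel-mean VH time series (dB) over four acquisitions:
events = detect_roughness_events([-15.0, -15.2, -11.0, -11.1])  # -> [2]
```

A real implementation would work on speckle-filtered, parcel-averaged Sentinel-1 time series and tune the threshold per polarization and crop.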

  1. Correlation mapping: rapid method for retrieving microcirculation morphology from optical coherence tomography intensity images

    NASA Astrophysics Data System (ADS)

    Jonathan, E.; Enfield, J.; Leahy, M. J.

    2011-03-01

    The microcirculation plays a critical role in maintaining organ health and function by serving as the vascular bed where trophic exchanges between blood and tissue take place. To facilitate regular assessment in vivo, noninvasive microcirculation imagers are required in clinics. Among this group of clinical devices are those that render microcirculation morphology, such as nailfold capillaroscopy, a common device for early diagnosis and monitoring of microangiopathies. However, depth ambiguity disqualifies this and other similar techniques in medical tomography, where, due to the 3-D nature of biological organs, imagers that support depth-resolved 2-D imaging and 3-D image reconstruction are required. Here, we introduce correlation map OCT (cmOCT), a promising technique for microcirculation morphology imaging that combines standard optical coherence tomography with image-analysis software based on a correlation statistic. Promising results are presented for microcirculation morphology images of the brain region of a small animal model, as well as measurements of vessel geometry at bifurcations, such as vessel diameters and branch angles. These data will be useful for obtaining cardiovascular-related characteristics such as volumetric flow, velocity profile and vessel-wall shear stress for the circulatory and respiratory systems.
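    The correlation-mapping idea, computing a windowed Pearson correlation between two successive B-scan intensity frames so that decorrelation marks moving blood, can be sketched as below. The window size and frames are arbitrary, and the published cmOCT pipeline additionally masks low-SNR regions, which this sketch omits:

```python
import numpy as np

def correlation_map(frame_a, frame_b, win=3):
    """Pearson correlation of co-located win x win patches from two
    consecutive OCT B-scans; low values (decorrelation) indicate moving
    scatterers, i.e. microcirculatory flow."""
    h, w = frame_a.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            a = frame_a[i:i + win, j:j + win].ravel()
            b = frame_b[i:i + win, j:j + win].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a @ a) * (b @ b))
            out[i, j] = (a @ b) / denom if denom > 0 else 0.0
    return out
```

Static tissue yields correlation near 1 between repeated frames, while vessels decorrelate between acquisitions, so thresholding the map segments the vascular morphology.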

  2. Mapping Global Urban Extent and Intensity for Environmental Monitoring and Modeling

    NASA Astrophysics Data System (ADS)

    Schneider, A.; Friedl, M. A.

    2007-05-01

    The human dimensions of global environmental change have received increased attention in policy, decision- making, research, and even the media. However, the influence of urban areas in global change processes is still often assumed to be negligible. Although local environmental conditions such as the urban heat island effect are well-documented, little or no work has focused on cross-scale interactions, or the ways in which local urban processes cumulatively impact global changes. Given the rapid rates of rural-urban migration, economic development and urban spatial expansion, it is becoming increasingly clear that the `ecological footprint' of cities may play a critical role in environmental changes at regional and global scales. Our understanding of the cumulative impacts of urban areas on natural systems has been limited foremost by a lack of reliable, accurate data on current urban form and extent at the global scale. The data sets that have emerged to fill this gap (LandScan, GRUMP, nighttime lights) suffer from a number of limitations that prevent widespread use. Building on our early efforts with MODIS data, our current work focuses on: (1) completing a new, validated map of global urban extent; and (2) developing methods to estimate the subpixel fraction of impervious surface, vegetation, and other land cover types within urbanized areas using coarse resolution satellite imagery. For the first task, a technique called boosting is used to improve classification accuracy and provides a means to integrate 500 m resolution MODIS data with ancillary data sources. For the second task, we present an approach for estimating percent cover that relies on continuous training data for a full range of city types. These exemplars are used as inputs to fuzzy neural network and regression tree algorithms to predict fractional amounts of land cover types with increased accuracy. 
Preliminary results for a global sample of 100 cities (which vary in population size, level of

  3. Maps showing petroleum exploration intensity and production in major Cambrian to Ordovician reservoir rocks in the Anadarko Basin

    USGS Publications Warehouse

    Henry, Mitch; Hester, Tim

    1996-01-01

    The Anadarko basin is a large, deep, two-stage Paleozoic basin (Feinstein, 1981) that is petroleum rich and generally well explored. The Anadarko basin province, a geographic area used here mostly for the convenience of mapping and data management, is defined by political boundaries that include the Anadarko basin proper. The boundaries of the province are identical to those used by the U.S. Geological Survey (USGS) in the 1995 National Assessment of United States Oil and Gas Resources. The data in this report, also identical to those used in the national assessment, are from several computerized databases including Nehring Research Group (NRG) Associates Inc., Significant Oil and Gas Fields of the United States (1992); Petroleum Information (PI), Inc., Well History Control System (1991); and Petroleum Information (PI), Inc., Petro-ROM: Production data on CD-ROM (1993). Although generated mostly in response to the national assessment, the data presented here are grouped differently and are displayed and described in greater detail. In addition, the stratigraphic sequences discussed may not necessarily correlate with the "plays" of the 1995 national assessment. This report uses computer-generated maps to show drilling intensity, producing wells, major fields, and other geologic information relevant to petroleum exploration and production in the lower Paleozoic part of the Anadarko basin province as defined for the U.S. Geological Survey's 1995 national petroleum assessment. Hydrocarbon accumulations must meet a minimum standard of 1 million barrels of oil (MMBO) or 6 billion cubic feet of gas (BCFG) estimated ultimate recovery to be included in this report as a major field or reservoir. Mapped strata in this report include the Upper Cambrian to Lower Ordovician Arbuckle and Lower Ordovician Ellenburger Groups, the Middle Ordovician Simpson Group, and the Middle to Upper Ordovician Viola Group.

  4. Genome-wide association mapping of phenotypic traits subject to a range of intensities of natural selection in Timema cristinae.

    PubMed

    Comeault, Aaron A; Soria-Carrasco, Víctor; Gompert, Zach; Farkas, Timothy E; Buerkle, C Alex; Parchman, Thomas L; Nosil, Patrik

    2014-05-01

    The genetic architecture of adaptive traits can reflect the evolutionary history of populations and also shape divergence among populations. Despite this central role in evolution, relatively little is known regarding the genetic architecture of adaptive traits in nature, particularly for traits subject to known selection intensities. Here we quantitatively describe the genetic architecture of traits that are subject to known intensities of differential selection between host plant species in Timema cristinae stick insects. Specifically, we used phenotypic measurements of 10 traits and 211,004 single-nucleotide polymorphisms (SNPs) to conduct multilocus genome-wide association mapping. We identified a modest number of SNPs that were associated with traits and sometimes explained a large proportion of trait variation. These SNPs varied in their strength of association with traits, and both major and minor effect loci were discovered. However, we found no relationship between variation in levels of divergence among traits in nature and variation in parameters describing the genetic architecture of those same traits. Our results provide a first step toward identifying loci underlying adaptation in T. cristinae. Future studies will examine the genomic location, population differentiation, and response to selection of the trait-associated SNPs described here.

  5. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

    Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRICtm model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in the METRIC application related to the estimation of vegetation temperature, momentum roughness length, and sensible heat flux (H) for tall vegetation must be considered. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function, based on leaf area index and tree canopy architecture and associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fractions of vegetation (fc), shadow (fshadow), and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature for the canopy was estimated by solving the three-source condition equation for Tc. To evaluate METRIC performance to estimate ET over the olive grove, several parameters derived from the
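    The three-source equation is linear in Tc, so the effective canopy temperature follows by direct inversion. A minimal sketch, with purely illustrative fractions and temperatures (the function name and values are assumptions, not from the METRIC code):

```python
def canopy_temperature(ts, f_c, f_shadow, f_sunlit, t_shadow, t_sunlit):
    """Solve Ts = fc*Tc + fshadow*Tshadow + fsunlit*Tsunlit for Tc.

    Fractions must sum to 1; temperatures are in kelvin.
    """
    assert abs(f_c + f_shadow + f_sunlit - 1.0) < 1e-6, "fractions must sum to 1"
    return (ts - f_shadow * t_shadow - f_sunlit * t_sunlit) / f_c

# Illustrative: radiometric Ts of 305 K over 40% canopy, 25% shadow, 35% sunlit soil
tc = canopy_temperature(305.0, 0.4, 0.25, 0.35, t_shadow=295.0, t_sunlit=315.0)
# tc = 302.5 K
```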

  6. 21 CM searches for DIM galaxies

    NASA Astrophysics Data System (ADS)

    Disney, Mike; Banks, Gareth

    1997-04-01

    We review very strong selection effects which operate against the detection of dim (i.e. low surface brightness) galaxies. The Parkes multibeam instrument offers a wonderful opportunity to turn up new populations of such galaxies. However, to explore the newly accessible parameter space, it will be necessary to survey both a very deep patch (10^5 s/pointing, limiting N_HI ~ 10^18 cm^-2) and a deep patch (10^4 s/pointing, limiting N_HI ~ 3 × 10^18 cm^-2) in carefully selected areas, and we outline the case to do this.
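    The two quoted depths are consistent with radiometer scaling, where the limiting column density falls as t^-1/2 (ten times more integration buys a factor of roughly 3.2 in depth). A quick check, with an illustrative helper function (not from the survey pipeline):

```python
import math

def limiting_nhi(n_ref, t_ref, t_obs):
    """Limiting HI column density assuming pure radiometer scaling, N ∝ t^-1/2."""
    return n_ref * math.sqrt(t_ref / t_obs)

# Scale from the very deep patch (10^5 s, ~1e18 cm^-2) to the 10^4 s patch
n_shallow = limiting_nhi(1e18, 1e5, 1e4)  # ~3.16e18 cm^-2, matching the quoted 3e18
```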

  7. Ultrasound line-by-line scanning method of spatial-temporal active cavitation mapping for high-intensity focused ultrasound.

    PubMed

    Ding, Ting; Zhang, Siyuan; Fu, Quanyou; Xu, Zhian; Wan, Mingxi

    2014-01-01

    This paper presented an ultrasound line-by-line scanning method of spatial-temporal active cavitation mapping applicable to liquids or liquid-filled tissue cavities exposed to high-intensity focused ultrasound (HIFU). Scattered signals from cavitation bubbles were obtained in a scan line immediately after one HIFU exposure, followed by a 2 s waiting time, long enough to allow the liquid to return to its original state. As this pattern extended, an image was built up by sequentially measuring a series of such lines. The acquisition of the beamformed radiofrequency (RF) signals for a scan line was synchronized with HIFU exposure. The duration of HIFU exposure, as well as the delay of the interrogating pulse relative to the moment HIFU was turned off, could vary from microseconds to seconds. The feasibility of this method was demonstrated in tap water and in a tap-water-filled cavity in a tissue-mimicking gelatin-agar phantom; it was capable of observing the temporal evolution of a cavitation bubble cloud with a temporal resolution of several microseconds and lateral and axial resolutions of 0.50 mm and 0.29 mm, respectively. The dissolution process of the cavitation bubble cloud and the spatial distribution affected by previously generated cavitation were also investigated. Although the application is limited by the requirement for a gassy fluid (e.g. tap water) that allows replenishment of nuclei between HIFU exposures, the technique may be a useful tool in spatial-temporal cavitation mapping for HIFU with high precision and resolution, providing a reference for clinical therapy.

  8. Fiber-bundle microendoscopy with sub-diffuse reflectance spectroscopy and intensity mapping for multimodal optical biopsy of stratified epithelium.

    PubMed

    Greening, Gage J; James, Haley M; Powless, Amy J; Hutcheson, Joshua A; Dierks, Mary K; Rajaram, Narasimhan; Muldoon, Timothy J

    2015-12-01

    Early detection of structural or functional changes in dysplastic epithelia may be crucial for improving long-term patient care. Recent work has explored myriad non-invasive or minimally invasive "optical biopsy" techniques for diagnosing early dysplasia, such as high-resolution microendoscopy, a method to resolve sub-cellular features of apical epithelia, as well as broadband sub-diffuse reflectance spectroscopy, a method that evaluates bulk health of a small volume of tissue. We present a multimodal fiber-based microendoscopy technique that combines high-resolution microendoscopy, broadband (450-750 nm) sub-diffuse reflectance spectroscopy (sDRS) at two discrete source-detector separations (374 and 730 μm), and sub-diffuse reflectance intensity mapping (sDRIM) using a 635 nm laser. Spatial resolution, magnification, field-of-view, and sampling frequency were determined. Additionally, the ability of the sDRS modality to extract optical properties over a range of depths is reported. Following this, proof-of-concept experiments were performed on tissue-simulating phantoms made with poly(dimethylsiloxane) as a substrate material with cultured MDA-MB-468 cells. Then, all modalities were demonstrated on a human melanocytic nevus from a healthy volunteer and on resected colonic tissue from a murine model. Qualitative in vivo image data is correlated with reduced scattering and absorption coefficients.

  9. Fiber-bundle microendoscopy with sub-diffuse reflectance spectroscopy and intensity mapping for multimodal optical biopsy of stratified epithelium

    PubMed Central

    Greening, Gage J.; James, Haley M.; Powless, Amy J.; Hutcheson, Joshua A.; Dierks, Mary K.; Rajaram, Narasimhan; Muldoon, Timothy J.

    2015-01-01

    Early detection of structural or functional changes in dysplastic epithelia may be crucial for improving long-term patient care. Recent work has explored myriad non-invasive or minimally invasive “optical biopsy” techniques for diagnosing early dysplasia, such as high-resolution microendoscopy, a method to resolve sub-cellular features of apical epithelia, as well as broadband sub-diffuse reflectance spectroscopy, a method that evaluates bulk health of a small volume of tissue. We present a multimodal fiber-based microendoscopy technique that combines high-resolution microendoscopy, broadband (450-750 nm) sub-diffuse reflectance spectroscopy (sDRS) at two discrete source-detector separations (374 and 730 μm), and sub-diffuse reflectance intensity mapping (sDRIM) using a 635 nm laser. Spatial resolution, magnification, field-of-view, and sampling frequency were determined. Additionally, the ability of the sDRS modality to extract optical properties over a range of depths is reported. Following this, proof-of-concept experiments were performed on tissue-simulating phantoms made with poly(dimethylsiloxane) as a substrate material with cultured MDA-MB-468 cells. Then, all modalities were demonstrated on a human melanocytic nevus from a healthy volunteer and on resected colonic tissue from a murine model. Qualitative in vivo image data is correlated with reduced scattering and absorption coefficients. PMID:26713207

  10. Mapping seismic intensity using twitter data; A Case study: The February 26th, 2014 M5.9 Kefallinia (Greece) earthquake

    NASA Astrophysics Data System (ADS)

    Arapostathis, Stathis; Parcharidis, Isaak; Kalogeras, Ioannis; Drakatos, George

    2015-04-01

    In this paper we present an innovative approach for the development of seismic intensity maps in a minimal time frame. As a case study, a recent earthquake that occurred in Western Greece (Kefallinia Island, on February 26, 2014) is used. The magnitude of the earthquake was M=5.9 (Institute of Geodynamics - National Observatory of Athens). The earthquake's effects comprised damage to property and changes to the physical environment of the area. The innovative part of this research is that we use crowdsourcing, drawn from Twitter content, as a source for assessing macroseismic intensity. Twitter, as a social media service with micro-blogging characteristics, a semantic structure that allows the storage of spatial content, and a high volume of user-generated content, is a suitable source from which to obtain and extract knowledge related to macroseismic intensity in different geographic areas over short time periods. Moreover, the speed at which Twitter content is generated makes it possible to obtain accurate results only a few hours after the occurrence of the earthquake. The method used to extract, evaluate, and map the intensity-related information is described briefly in this paper. First, we pick out all the tweets posted within the first 48 hours that include information related to intensity and refer to a geographic location. These tweets were then geo-referenced and associated with an intensity grade on the European Macroseismic Scale (EMS98), based on the information contained in their text. Finally, we apply various spatial statistics and GIS methods, and we interpolate the values to cover all the appropriate geographic areas. The final output contains macroseismic intensity maps for the Lixouri area (Kefallinia Island), produced from Twitter data posted in the first six, twelve, twenty-four and forty-eight hours after the earthquake occurrence.
Results are compared with other intensity maps for same
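    The pipeline described (filter geo-referenced tweets within a time window, assign an EMS98 grade, then interpolate spatially) can be sketched with simple inverse-distance weighting; the tweet tuples, coordinates, and function below are purely illustrative, not the authors' implementation:

```python
from math import hypot

# Hypothetical pre-processed tweets: (lon, lat, EMS98 grade, hours after the quake)
tweets = [
    (20.43, 38.20, 6, 3.0),
    (20.48, 38.25, 5, 5.5),
    (20.55, 38.18, 4, 10.0),
]

def idw_intensity(lon, lat, points, hours_cutoff=48, power=2):
    """Inverse-distance-weighted macroseismic intensity at (lon, lat)."""
    num = den = 0.0
    for p_lon, p_lat, ems, hours in points:
        if hours > hours_cutoff:  # honour the analysis time window
            continue
        d = hypot(lon - p_lon, lat - p_lat)
        if d == 0.0:              # query point coincides with an observation
            return float(ems)
        w = 1.0 / d ** power
        num += w * ems
        den += w
    return num / den

estimate = idw_intensity(20.45, 38.21, tweets)  # dominated by the nearest tweet
```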

  11. VizieR Online Data Catalog: CMB intensity map from WMAP and Planck PR2 data (Bobin+, 2016)

    NASA Astrophysics Data System (ADS)

    Bobin, J.; Sureau, F.; Starck, J.-L.

    2016-05-01

    This paper presents a novel estimation of the CMB map reconstructed from the Planck 2015 data (PR2) and the WMAP nine-year data (Bennett et al., 2013ApJS..208...20B), which updates the CMB map we published in (Bobin et al., 2014A&A...563A.105B). This new map is based on the sparse component separation method L-GMCA (Bobin et al., 2013A&A...550A..73B). Additionally, the map benefits from the latest advances in this field (Bobin et al., 2015, IEEE Transactions on Signal Processing, 63, 1199), which allows us to accurately discriminate between correlated components. In this update to our previous work, we show that this new map presents significant improvements with respect to the available CMB map estimates. (3 data files).

  12. Intensive Training Course on Microplanning and School Mapping (Arusha, United Republic of Tanzania, March 8-26, 1982). Report.

    ERIC Educational Resources Information Center

    Caillods, F.; Heyman, S.

    This manual contains documentation of a 3-week course conducted jointly in March 1982 by the Tanzanian Ministry of Education and the International Institute for Educational Planning on the subject of the school map (or micro-plan). Prepared at the regional or subregional level, the school map aims at equalizing educational opportunities and…

  13. Delineating producing trends within plays by the use of computer-generated drill intensity maps, Denver basin, Colorado, Nebraska, and Wyoming

    SciTech Connect

    Higley, D.K.; Mast, R.F.; Gautier, D.L.

    1986-08-01

    Computer-generated exploration intensity maps were constructed for the Lower Cretaceous J and D sandstones of the Dakota Group in the Denver basin as part of the US Geological Survey's Federal Lands Assessment Program (FLAP). These maps illustrate producing and non-producing areas, distribution of hydrocarbon shows, and explored areas. They were compared with existing or generated maps of depositional environment, structure, thermal maturity, core porosity and permeability, and production data to delineate trends and to assess oil and gas resources for the Denver basin. Data from more than 36,000 drill holes in the Denver basin were entered into drill intensity programs, developed by the US Geological Survey, which tabulate show and production data for drill holes within 0.5 mi² grid cells. Primary production in the Denver basin is from stratigraphic traps of the J and D sandstones of the Dakota Group. Production and shows within the Dakota Group are present in northeast-southwest-trending zones on the eastern flank of the basin in Colorado. Based on the incorporation of maps of depositional environment, porosity, and permeability, the trends may represent distributary-channel systems in this portion of the basin. Thermal maturation studies of J and D sandstone hydrocarbon source rocks indicate that much of the oil and gas present in the Dakota Group was generated deeper in the Denver basin. Hydrocarbon migration pathways from deep in the basin may also be indicated by these northeastern trends. Using drill intensity maps to illustrate zones of production is useful for delineating large-scale trends within plays and, therefore, for helping assess petroleum resources within the Denver basin. It may also be used to outline potential exploration targets by extending and analyzing the trends.
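    The core tabulation step, counting wells per 0.5 mi² cell, amounts to binning well coordinates on a square grid. A generic sketch, not the USGS program itself; coordinates are assumed to be in a local projected frame measured in miles:

```python
from collections import Counter

def drill_intensity(wells, cell_size=0.5):
    """Count wells per square grid cell.

    wells: iterable of (x_mi, y_mi) locations in a local projected frame (miles).
    Returns a Counter mapping (col, row) cell indices to well counts.
    """
    counts = Counter()
    for x, y in wells:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

cells = drill_intensity([(0.1, 0.2), (0.4, 0.3), (0.9, 0.2), (1.6, 1.7)])
# {(0, 0): 2, (1, 0): 1, (3, 3): 1}
```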

  14. Kinematics of the Local Universe. XIV. Measurements from the 21 cm line and the HI mass function from a homogeneous catalog gathered with the Nançay radio telescope

    NASA Astrophysics Data System (ADS)

    Theureau, G.; Coudreau, N.; Hallet, N.; Hanski, M. O.; Poulain, M.

    2017-03-01

    Aims: This paper presents 828 new 21 cm neutral hydrogen line measurements carried out with the FORT receiver of the meridian transit Nançay radio telescope (NRT) in the years 2000-2007. Methods: This observational program was part of a larger project aimed at collecting an exhaustive and magnitude-complete HI extragalactic catalog for Tully-Fisher applications. Through five massive data releases, the KLUN series has collected a homogeneous sample of 4876 HI-spectra of spiral galaxies, complete down to a flux of 5 Jy km s-1 and with declination δ > -40°. Results: We publish here the last release of the KLUN HI observational program, corresponding to the faint end of the survey, with HI masses ranging from 5 × 108 to 5 × 1010 solar masses. The size of this final sample is comparable to the catalogs based on the Arecibo and Parkes radio telescope campaigns, and it allows general HI mass distribution studies from a set of homogeneous radio measurements. Full Tables 2 and 3, together with HI profiles in ascii format, are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A104

  15. Sea floor maps showing topography, sun-illuminated topography, and backscatter intensity of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, P.C.; Middleton, T.J.; Fuller, S.J.

    2000-01-01

    This data set contains the sea floor topographic contours, sun-illuminated topographic imagery, and backscatter intensity generated from a multibeam sonar survey of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts, an area of approximately 1100 square nautical miles. The Stellwagen Bank NMS Mapping Project is designed to provide detailed maps of the Stellwagen Bank region's environments and habitats and the first complete multibeam topographic and sea floor characterization maps of a significant region of the shallow EEZ. Data were collected on four cruises over a two-year period from the fall of 1994 to the fall of 1996. The surveys were conducted aboard the Canadian Hydrographic Service vessel Frederick G. Creed, a SWATH (Small Waterplane Area Twin Hull) ship that surveys at speeds of 16 knots. The multibeam data were collected utilizing a Simrad Subsea EM 1000 Multibeam Echo Sounder (95 kHz) that is permanently installed in the hull of the Creed.

  16. Update on the Mapping of Prevalence and Intensity of Infection for Soil-Transmitted Helminth Infections in Latin America and the Caribbean: A Call for Action

    PubMed Central

    Saboyá, Martha Idalí; Catalá, Laura; Nicholls, Rubén Santiago; Ault, Steven Kenyon

    2013-01-01

    It is estimated that in Latin America and the Caribbean (LAC) at least 13.9 million preschool age and 35.4 million school age children are at risk of infections by soil-transmitted helminths (STH): Ascaris lumbricoides, Trichuris trichiura and hookworms (Necator americanus and Ancylostoma duodenale). Although infections caused by this group of parasites are associated with chronic deleterious effects on nutrition and growth, iron and vitamin A status and cognitive development in children, few countries in the LAC Region have implemented nationwide surveys on prevalence and intensity of infection. The aim of this study was to identify gaps on the mapping of prevalence and intensity of STH infections based on data published between 2000 and 2010 in LAC, and to call for including mapping as part of action plans against these infections. A total of 335 published data points for STH prevalence were found for 18 countries (11.9% data points for preschool age children, 56.7% for school age children and 31.3% for children from 1 to 14 years of age). We found that 62.7% of data points showed prevalence levels above 20%. Data on the intensity of infection were found for seven countries. The analysis also highlights that there is still an important lack of data on prevalence and intensity of infection to determine the burden of disease based on epidemiological surveys, particularly among preschool age children. This situation is a challenge for LAC given that adequate planning of interventions such as deworming requires information on prevalence to determine the frequency of needed anthelmintic drug administration and to conduct monitoring and evaluation of progress in drug coverage. PMID:24069476

  17. Mapping the Parameter Space of tDCS and Cognitive Control via Manipulation of Current Polarity and Intensity.

    PubMed

    Karuza, Elisabeth A; Balewski, Zuzanna Z; Hamilton, Roy H; Medaglia, John D; Tardiff, Nathan; Thompson-Schill, Sharon L

    2016-01-01

    In the cognitive domain, enormous variation in methodological approach prompts questions about the generalizability of behavioral findings obtained from studies of transcranial direct current stimulation (tDCS). To determine the impact of common variations in approach, we systematically manipulated two key stimulation parameters-current polarity and intensity-and assessed their impact on a task of inhibitory control (the Eriksen Flanker). Ninety participants were randomly assigned to one of nine experimental groups: three stimulation conditions (anode, sham, cathode) crossed with three intensity levels (1.0, 1.5, 2.0 mA). As participants performed the Flanker task, stimulation was applied over left dorsolateral prefrontal cortex (DLPFC; electrode montage: F3-RSO). The behavioral impact of these manipulations was examined using mixed effects linear regression. Results indicate a significant effect of stimulation condition (current polarity) on the magnitude of the interference effect during the Flanker; however, this effect was specific to the comparison between anodal and sham stimulation. Inhibitory control was therefore improved by anodal stimulation over the DLPFC. In the present experimental context, no reliable effect of stimulation intensity was observed, and we found no evidence that inhibitory control was impeded by cathodal stimulation. Continued exploration of the stimulation parameter space, particularly with more robustly powered sample sizes, is essential to facilitating cross-study comparison and ultimately working toward a reliable model of tDCS effects.

  18. Attention Filtering in the Design of Electronic Map Displays: A Comparison of Color-Coding, Intensity Coding, and Decluttering Techniques

    DTIC Science & Technology

    2000-06-01

    Yeh and Christopher D. Wickens Technical Report ARL-00-4/FED-LAB-00-2 June 2000 Prepared for U.S. Army Research Laboratory Interactive...highlighting is "valid" (Fisher, Coury, Tengs, and Duffy, 1989; Fisher and Tan, 1989). The advantages for color versus intensity coding are not clear...features and objects (Technical Report ARL-89-8/AHEL-89-4). Savoy, IL: Aviation Research Laboratory. Fisher, D.L., Coury, B.G., Tengs, T.O., and Duffy

  19. Mapping the Parameter Space of tDCS and Cognitive Control via Manipulation of Current Polarity and Intensity

    PubMed Central

    Karuza, Elisabeth A.; Balewski, Zuzanna Z.; Hamilton, Roy H.; Medaglia, John D.; Tardiff, Nathan; Thompson-Schill, Sharon L.

    2016-01-01

    In the cognitive domain, enormous variation in methodological approach prompts questions about the generalizability of behavioral findings obtained from studies of transcranial direct current stimulation (tDCS). To determine the impact of common variations in approach, we systematically manipulated two key stimulation parameters—current polarity and intensity—and assessed their impact on a task of inhibitory control (the Eriksen Flanker). Ninety participants were randomly assigned to one of nine experimental groups: three stimulation conditions (anode, sham, cathode) crossed with three intensity levels (1.0, 1.5, 2.0 mA). As participants performed the Flanker task, stimulation was applied over left dorsolateral prefrontal cortex (DLPFC; electrode montage: F3-RSO). The behavioral impact of these manipulations was examined using mixed effects linear regression. Results indicate a significant effect of stimulation condition (current polarity) on the magnitude of the interference effect during the Flanker; however, this effect was specific to the comparison between anodal and sham stimulation. Inhibitory control was therefore improved by anodal stimulation over the DLPFC. In the present experimental context, no reliable effect of stimulation intensity was observed, and we found no evidence that inhibitory control was impeded by cathodal stimulation. Continued exploration of the stimulation parameter space, particularly with more robustly powered sample sizes, is essential to facilitating cross-study comparison and ultimately working toward a reliable model of tDCS effects. PMID:28082886

  20. Characterization of three-dimensional spatial aggregation and association patterns of brown rot symptoms within intensively mapped sour cherry trees

    PubMed Central

    Everhart, Sydney E.; Askew, Ashley; Seymour, Lynne; Holb, Imre J.; Scherm, Harald

    2011-01-01

    Background and Aims Characterization of spatial patterns of plant disease can provide insights into important epidemiological processes such as sources of inoculum, mechanisms of dissemination, and reproductive strategies of the pathogen population. Whilst two-dimensional patterns of disease (among plants within fields) have been studied extensively, there is limited information on three-dimensional patterns within individual plant canopies. Reported here are the detailed mapping of different symptom types of brown rot (caused by Monilinia laxa) in individual sour cherry tree (Prunus cerasus) canopies, and the application of spatial statistics to the resulting data points to determine patterns of symptom aggregation and association. Methods A magnetic digitizer was utilized to create detailed three-dimensional maps of three symptom types (blossom blight, shoot blight and twig canker) in eight sour cherry tree canopies during the green fruit stage of development. The resulting point patterns were analysed for aggregation (within a given symptom type) and pairwise association (between symptom types) using a three-dimensional extension of nearest-neighbour analysis. Key Results Symptoms of M. laxa infection were generally aggregated within the canopy volume, but there was no consistent pattern for one symptom type to be more or less aggregated than the other. Analysis of spatial association among symptom types indicated that previous year's twig cankers may play an important role in influencing the spatial pattern of current year's symptoms. This observation provides quantitative support for the epidemiological role of twig cankers as sources of primary inoculum within the tree. Conclusions Presented here is a new approach to quantify spatial patterns of plant disease in complex fruit tree canopies using point pattern analysis. This work provides a framework for quantitative analysis of three-dimensional spatial patterns within the finite tree canopy, applicable to
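    A nearest-neighbour aggregation test of this kind compares the observed mean nearest-neighbour distance with its expectation under complete spatial randomness (CSR); in 3D the CSR expectation is E[d] = 0.55396·λ^(-1/3) for point density λ. The sketch below is a Clark-Evans-style illustration under that assumption, not the exact statistic used in the study:

```python
import math

def nn_aggregation_index(points):
    """Clark-Evans-style aggregation index extended to 3D.

    Ratio of the observed mean nearest-neighbour distance to its CSR
    expectation in the bounding-box volume, E[d] = 0.55396 * lambda**(-1/3).
    Values well below 1 indicate aggregation.
    """
    n = len(points)
    mean_obs = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    mins = [min(p[k] for p in points) for k in range(3)]
    maxs = [max(p[k] for p in points) for k in range(3)]
    volume = math.prod(maxs[k] - mins[k] for k in range(3))
    lam = n / volume
    return mean_obs / (0.55396 * lam ** (-1 / 3))

# Two tight symptom clusters in a large canopy volume -> strongly aggregated
cluster_a = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1)]
cluster_b = [(10, 10, 10), (10.1, 10, 10), (10, 10.1, 10), (10, 10, 10.1)]
idx = nn_aggregation_index(cluster_a + cluster_b)  # well below 1
```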

  1. Evaluation of Ground-Motion Modeling Techniques for Use in Global ShakeMap - A Critique of Instrumental Ground-Motion Prediction Equations, Peak Ground Motion to Macroseismic Intensity Conversions, and Macroseismic Intensity Predictions in Different Tectonic Settings

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.

    2009-01-01

    Regional differences in ground-motion attenuation have long been thought to add uncertainty in the prediction of ground motion. However, a growing body of evidence suggests that regional differences in ground-motion attenuation may not be as significant as previously thought and that the key differences between regions may be a consequence of limitations in ground-motion datasets over incomplete magnitude and distance ranges. Undoubtedly, regional differences in attenuation can exist owing to differences in crustal structure and tectonic setting, and these can contribute to differences in ground-motion attenuation at larger source-receiver distances. Herein, we examine the use of a variety of techniques for the prediction of several ground-motion metrics (peak ground acceleration and velocity, response spectral ordinates, and macroseismic intensity) and compare them against a global dataset of instrumental ground-motion recordings and intensity assignments. The primary goal of this study is to determine whether existing ground-motion prediction techniques are applicable for use in the U.S. Geological Survey's Global ShakeMap and Prompt Assessment of Global Earthquakes for Response (PAGER). We seek the most appropriate ground-motion predictive technique, or techniques, for each of the tectonic regimes considered: shallow active crust, subduction zone, and stable continental region.

  2. Spatial-temporal three-dimensional ultrasound plane-by-plane active cavitation mapping for high-intensity focused ultrasound in free field and pulsatile flow.

    PubMed

    Ding, Ting; Hu, Hong; Bai, Chen; Guo, Shifang; Yang, Miao; Wang, Supin; Wan, Mingxi

    2016-07-01

    Cavitation plays important roles in almost all high-intensity focused ultrasound (HIFU) applications. However, current two-dimensional (2D) cavitation mapping can only provide cavitation activity in one plane. This study proposed a three-dimensional (3D) ultrasound plane-by-plane active cavitation mapping (3D-UPACM) for HIFU in free field and pulsatile flow. The acquisition of channel-domain raw radio-frequency (RF) data in 3D space was performed by sequential plane-by-plane 2D ultrafast active cavitation mapping. Between two adjacent unit locations, there was a waiting time to allow the cavitation nuclei distribution of the liquid to return to its original state. The 3D cavitation map, equivalent to one detected at a single time over the entire volume, could be reconstructed by the Marching Cubes algorithm. Minimum variance (MV) adaptive beamforming was combined with coherence factor (CF) weighting (MVCF) or a compressive sensing (CS) method (MVCS) to process the raw RF data for improved beamforming or more rapid data processing. The feasibility of 3D-UPACM was demonstrated in tap water and a phantom vessel with pulsatile flow. The time interval between temporal evolutions of the cavitation bubble cloud could be several microseconds. The MVCF beamformer had a signal-to-noise ratio (SNR) 14.17 dB higher and lateral and axial resolutions 2.88 times and 1.88 times those of B-mode active cavitation mapping, respectively. The MVCS beamformer incurred only 14.94% of the time penalty of the MVCF beamformer. This 3D-UPACM technique employs the linear array of a current ultrasound diagnosis system rather than a 2D array transducer to decrease the cost of the instrument. Moreover, although the application is limited by the requirement for a gassy fluid medium or a constant supply of new cavitation nuclei that allows replenishment of nuclei between HIFU exposures, this technique may be a useful tool in 3D cavitation mapping for HIFU with high speed, precision and resolution.
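    The coherence factor used in CF weighting is conventionally defined as the ratio of coherent to total channel power at each image point; a minimal generic sketch of that definition (not the authors' implementation):

```python
def coherence_factor(channels):
    """CF = |sum of delay-aligned channel samples|^2 / (N * sum of |sample|^2).

    channels: per-element RF samples at one image point after receive delays.
    Returns a value in [0, 1]; 1 means perfectly coherent across the aperture.
    """
    n = len(channels)
    coherent = abs(sum(channels)) ** 2
    incoherent = n * sum(abs(s) ** 2 for s in channels)
    return coherent / incoherent if incoherent else 0.0

cf_coherent = coherence_factor([1.0, 1.0, 1.0, 1.0])  # fully coherent -> 1.0
cf_mixed = coherence_factor([1.0, -1.0, 1.0, -1.0])   # alternating signs -> 0.0
```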

  3. dispel4py : An Open Source Python Framework for Encoding, Mapping and Reusing Seismic Continuous Data Streams: Intensive Analysis and Data Mining.

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Krause, A.; Atkinson, M.; Spinuso, A.; Klampanos, I.; Magnoni, F.; Casarotti, E.; Vilotte, J. P.

    2015-12-01

    Scientific workflows are needed by many scientific communities, such as seismology, as they enable easy composition and execution of applications, enabling scientists to focus on their research without being distracted by arranging computation and data management. However, there are challenges to be addressed. In many systems users have to adapt their codes and data movement as they change from one HPC-architecture to another. They still need to be aware of the computing architectures available for achieving the best application performance. We present dispel4py, an open-source framework presented as a Python library for encoding and automating data-intensive scientific methods as a graph of operations coupled together by data-streams. It enables scientists to develop and experiment with their own data-intensive applications using their familiar work environment. These are then automatically mapped to a variety of HPC-architectures, i.e., MPI, multiprocessing, Storm and Spark frameworks, increasing the chances to reuse their applications in different computing resources. dispel4py comes with data provenance, as shown in the screenshot, and with an information registry that can be accessed transparently from within workflows. dispel4py has been enhanced with a new run-time adaptive compression strategy to reduce the data stream volume and a diagnostic tool which monitors workflow performance and computes the most efficient parallelisation to use. dispel4py has been used by seismologists in the project VERCE for seismic ambient noise cross-correlation applications and for orchestrated HPC wave simulation and data misfit analysis workflows; two data-intensive problems that are common in today's research practice. Both have been tested in several local computing resources and later submitted to a variety of European PRACE HPC-architectures (e.g. SuperMUC & CINECA) for longer runs without change. Results show that dispel4py is an easy tool for developing, sharing and

  4. The ALMA Spectroscopic Survey in the Hubble Ultra Deep Field: Implications for Spectral Line Intensity Mapping at Millimeter Wavelengths and CMB Spectral Distortions

    NASA Astrophysics Data System (ADS)

    Carilli, C. L.; Chluba, J.; Decarli, R.; Walter, F.; Aravena, M.; Wagg, J.; Popping, G.; Cortes, P.; Hodge, J.; Weiss, A.; Bertoldi, F.; Riechers, D.

    2016-12-01

    We present direct estimates of the mean sky brightness temperature in observing bands around 99 and 242 GHz due to line emission from distant galaxies. These values are calculated from the summed line emission observed in a blind, deep survey for spectral line emission from high redshift galaxies using ALMA (the ALMA spectral deep field observations “ASPECS” survey). In the 99 GHz band, the mean brightness will be dominated by rotational transitions of CO from intermediate and high redshift galaxies. In the 242 GHz band, the emission could be a combination of higher order CO lines, and possibly [C ii] 158 μm line emission from very high redshift galaxies (z ˜ 6-7). The mean line surface brightness is a quantity that is relevant to measurements of spectral distortions of the cosmic microwave background, and as a potential tool for studying large-scale structures in the early universe using intensity mapping. While the cosmic volume and the number of detections are admittedly small, this pilot survey provides a direct measure of the mean line surface brightness, independent of conversion factors, excitation, or other galaxy formation model assumptions. The mean surface brightness in the 99 GHz band is TB = 0.94 ± 0.09 μK. In the 242 GHz band, the mean brightness is TB = 0.55 ± 0.033 μK. These should be interpreted as lower limits on the average sky signal, since we only include lines detected individually in the blind survey, while in a low resolution intensity mapping experiment, there will also be the summed contribution from lower luminosity galaxies that cannot be detected individually in the current blind survey.
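
The conversion behind such an estimate is the standard Rayleigh-Jeans relation between summed line flux, surveyed solid angle, and mean brightness temperature. The sketch below uses that textbook relation with invented flux and solid-angle values, not the ASPECS measurements.

```python
# Sketch of the Rayleigh-Jeans conversion behind a mean line brightness:
# T_B = c^2 * S / (2 * k_B * nu^2 * Omega), where S is the summed line
# flux and Omega the surveyed solid angle. The numbers passed in below
# are purely illustrative, not the survey's values.
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K

def mean_brightness_temp(flux_jy, solid_angle_sr, freq_hz):
    """Mean Rayleigh-Jeans brightness temperature in kelvin."""
    intensity = flux_jy * 1e-26 / solid_angle_sr   # W m^-2 Hz^-1 sr^-1
    return C**2 * intensity / (2.0 * K_B * freq_hz**2)

t_b = mean_brightness_temp(flux_jy=1.0e-3, solid_angle_sr=1.0e-7, freq_hz=99e9)
```

Since the relation is linear in flux, including only individually detected lines (as the abstract notes) can only bias T_B low, which is why the reported values are lower limits.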

  5. The USGS ``Did You Feel It?'' Internet-based Macroseismic Intensity Maps: Lessons Learned from a Decade of Online Data Collection (Invited)

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Quitoriano, V. R.; Hopper, M.; Mathias, S.; Dewey, J. W.

    2010-12-01

    Over the past decade, the U.S. Geological Survey’s “Did You Feel It?” (DYFI) system has automatically collected shaking and damage reports from Internet users immediately following earthquakes. This 10-yr stint of citizen-based science preceded the recently in-vogue notion of "crowdsourcing" by nearly a decade. DYFI is a rapid and vast source of macroseismic data, providing quantitative and qualitative information about shaking intensities for earthquakes in the US and around the globe. Statistics attest to the abundance and rapid availability of these Internet-based macroseismic data: over 1.8 million entries have been logged over the decade, and there are 30 events each with over 10,000 responses (230 events have over 1,000 entries). The greatest number of responses to date for an earthquake is over 78,000 for the April 2010, M7.2 Baja California, Mexico, event. Questionnaire response rates have reached 62,000 per hour (1,000 per min!), obviously requiring substantial web resource allocation and capacity. Outside the US, DYFI has gathered over 189,000 entries in 9,500 cities covering 140 countries since its global inception in late 2004. The rapid intensity data are automatically used in the Global ShakeMap (GSM) system, providing intensity constraints near population centers and in places without instrumental coverage (most of the world), and allowing for bias correction to the empirical prediction equations employed. ShakeMap has also been recently refined to automatically use macroseismic input data in their native form, and treat their uncertainties rigorously in concert with ground-motion data. Recent DYFI system improvements include a graphical user interface that allows seismic analysts to perform common functions, including map triggering and resizing, as well as sorting, searching, geocoding, and flagging entries. New web-based geolocation and geocoding services are being incorporated into DYFI for improving the accuracy of the users’ locations

  6. Mapping Water Stress Incidence and Intensity, Optimal Plant Populations, and Cultivar Duration for African Groundnut Productivity Enhancement

    PubMed Central

    Vadez, Vincent; Halilou, Oumarou; Hissene, Halime M.; Sibiry-Traore, Pierre; Sinclair, Thomas R.; Soltani, Afshin

    2017-01-01

    Groundnut production is limited in Sub-Saharan Africa and water deficit or “drought,” is often considered as the main yield-limiting factor. However, no comprehensive study has assessed the extent and intensity of “drought”-related yield decreases, nor has it explored avenues to enhance productivity. Hence, crop simulation modeling with SSM (Simple Simulation Modeling) was used to address these issues. To compensate for the lack of reliable weather data as input to the model, the validity of weather data generated by Marksim, a weather generator, was tested. Marksim provided good weather representation across a large gradient of rainfall, representative of the region, and although rainfall generated by Marksim was above observations, run-off from Marksim data was also higher, and consequently simulations using observed or Marksim weather agreed closely across this gradient of weather conditions (root mean square error = 99 g m-2; R2 = 0.81 for pod yield). More importantly, simulations of yield changes upon agronomic or genetic alterations in the model were equally well predicted with Marksim weather. A 1° × 1° grid of weather data was generated. “Drought”-related yield reductions were limited to latitudes above 12–13° North in West Central Africa (WCA) and to the Eastern fringes of Tanzania and Mozambique in East South Africa (ESA). Simulation and experimental trials also showed that doubling the sowing density of Spanish cultivars from 20 to 40 plants m-2 would increase yield dramatically in both WCA and ESA. However, increasing density would require growers to invest in more seeds and likely additional labor. If these trade-offs cannot be alleviated, genetic improvement would then need to re-focus on a plant type that is adapted to the current low sowing density, like a runner rather than a bush plant type, which currently receives most of the genetic attention. Genetic improvement targeting “drought” adaptation should also be restricted to areas

  7. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue color for dose differences >3% or ≤ −3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
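
The projection step described above can be sketched in a few lines: threshold the 3D dose-difference volume into overdose and underdose volumes, weight each voxel by a depth code, and compress with a maximal intensity projection. The toy volume and the simple normalized-axis depth encoding below are illustrative assumptions, not the paper's exact distance transformation.

```python
import numpy as np

# Sketch of a DD-MIP: classify a 3D dose-difference volume with a +/-3%
# threshold, encode depth along the projection axis (a stand-in for the
# paper's skin-distance transform), then take a maximal intensity
# projection separately for overdose and underdose.

def dd_mip(dose_diff, threshold=0.03, axis=0):
    depth = np.arange(dose_diff.shape[axis], dtype=float)
    depth /= depth.max() if depth.max() > 0 else 1.0
    shape = [1, 1, 1]
    shape[axis] = dose_diff.shape[axis]
    depth = depth.reshape(shape)

    over = np.where(dose_diff > threshold, depth, 0.0)    # overdose volume
    under = np.where(dose_diff < -threshold, depth, 0.0)  # underdose volume
    # In the prototype these two 2D maps get red and blue lookup tables.
    return over.max(axis=axis), under.max(axis=axis)

vol = np.zeros((4, 5, 5))
vol[2, 1, 1] = 0.05    # 5% overdose at depth index 2
vol[1, 3, 3] = -0.06   # 6% underdose at depth index 1
over_mip, under_mip = dd_mip(vol)
```

Because deeper voxels carry larger codes, a bright pixel in the projection tells the user the discrepancy lies at depth rather than on the skin surface, which is the point of the spatial encoding.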

  8. Using H/V Spectral Ratio Analysis to Map Sediment Thickness and to Explain Macroseismic Intensity Variation of a Low-Magnitude Seismic Swarm in Central Belgium

    NASA Astrophysics Data System (ADS)

    Van Noten, K.; Lecocq, T.; Camelbeeck, T.

    2013-12-01

    Between 2008 and 2010, the Royal Observatory of Belgium received numerous 'Did You Feel It?' reports related to a 2-year lasting earthquake swarm at Court-Saint-Etienne, a small town in a hilly area 20 km SE of Brussels, Belgium. These small-magnitude events (-0.7 ≤ ML ≤ 3.2, n = c. 300 events) were recorded both by the permanent seismometer network in Belgium and by a locally installed temporary seismic network deployed in the epicentral area. Relocation of the hypocenters revealed that the seismic swarm can be related to the reactivation of a NW-SE strike-slip fault at 3 to 6 km depth in the basement rocks of the Lower Palaeozoic London-Brabant Massif. This sequence caused considerable public concern in the region because more than 60 events were felt by the local population. Given the small magnitudes of the seismic swarm, most events were more often heard than felt by the respondents, which is indicative of a local high-frequency earthquake source. At places where the bedrock is at the surface or where it is covered by thin alluvial sediments (<10 m), such as in incised river valleys and on hill slopes, reported macroseismic intensities are higher than those on hill tops where respondents live on a thicker Quaternary and Cenozoic sedimentary cover (>30 m). In those river valleys that have a considerable alluvial sedimentary cover, macroseismic intensities are again lower. To explain this variation in macroseismic intensity we present a macroseismic analysis of all DYFI reports related to the 2008-2010 seismic swarm and a pervasive H/V spectral ratio (HVSR) analysis of ambient noise measurements to model the thickness of sediments covering the London-Brabant Massif. The HVSR method is a very powerful tool to map the basement morphology, particularly in regions of unknown subsurface structure. By calculating the soil's fundamental frequency above boreholes, we calibrated the power-law relationship between the fundamental frequency, shear wave velocity and the thickness
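
The calibration step mentioned at the end, fitting a power law h = a·f0^b between fundamental frequency and sediment thickness at borehole control points, can be sketched as a linear fit in log-log space. The (f0, h) pairs below are synthetic, not the Belgian borehole data, and the fitted coefficients are therefore illustrative only.

```python
import numpy as np

# Sketch of calibrating the HVSR power law h = a * f0**b against borehole
# control points, then using it to predict sediment thickness. Synthetic
# data: an exact h = 80 / f0 relation for demonstration.
f0 = np.array([1.0, 2.0, 4.0, 8.0])      # fundamental frequency, Hz
h = np.array([80.0, 40.0, 20.0, 10.0])   # known thickness at boreholes, m

b, log_a = np.polyfit(np.log(f0), np.log(h), 1)  # linear fit in log space
a = np.exp(log_a)

def thickness(freq):
    """Predict sediment thickness (m) from a measured fundamental frequency (Hz)."""
    return a * freq ** b
```

Once calibrated, a grid of ambient-noise f0 measurements maps directly to a basement-depth map, which is how the study converts HVSR measurements into sediment thickness.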

  9. Exposure of young dairy cattle to Mycobacterium avium subsp. paratuberculosis (MAP) through intensive grazing of contaminated pastures in a herd positive for Johne’s disease

    PubMed Central

    Fecteau, Marie-Eve; Whitlock, Robert H.; Buergelt, Claus D.; Sweeney, Raymond W.

    2010-01-01

    This study investigated the susceptibility of 1- to 2-year-old cattle to Mycobacterium avium subsp. paratuberculosis (MAP) on pasture previously grazed by infected cattle. The exposure of yearling cattle to pastures contaminated with MAP resulted in infection with MAP, showing that age resistance to infection can be overcome by pressure of infection. PMID:20436867

  10. Redshifted 21cm Line Absorption by Intervening Galaxies

    NASA Astrophysics Data System (ADS)

    Briggs, F. H.

    The present generation of radio telescopes, combined with powerful new spectrometers, is opening a new age of redshifted radio absorption-line studies. Outfitting arrays of antennas, such as the European VLBI Network and the upgraded VLA, with flexibly tuned receivers will allow measurement of the sizes and kinematics of intervening galaxies as a function of cosmic time.

  11. Maps Showing Sea Floor Topography, Sun-Illuminated Sea Floor Topography, and Backscatter Intensity of Quadrangles 1 and 2 in the Great South Channel Region, Western Georges Bank

    USGS Publications Warehouse

    Valentine, Page C.; Middleton, Tammie J.; Malczyk, Jeremy T.; Fuller, Sarah J.

    2002-01-01

    The Great South Channel separates the western part of Georges Bank from Nantucket Shoals and is a major conduit for the exchange of water between the Gulf of Maine to the north and the Atlantic Ocean to the south. Water depths range mostly between 65 and 80 m in the region. A minimum depth of 45 m occurs in the east-central part of the mapped area, and a maximum depth of 100 m occurs in the northwest corner. The channel region is characterized by strong tidal and storm currents that flow dominantly north and south. Major topographic features of the seabed were formed by glacial and postglacial processes. Ice containing rock debris moved from north to south, sculpting the region into a broad shallow depression and depositing sediment to form the irregular depressions and low gravelly mounds and ridges that are visible in parts of the mapped area. Many other smaller glacial features probably have been eroded by waves and currents at work since the time when the region, formerly exposed by lowered sea level or occupied by ice, was invaded by the sea. The low, irregular and somewhat lumpy fabric formed by the glacial deposits is obscured in places by drifting sand and by the linear, sharp fabric formed by modern sand features. Today, sand transported by the strong north-south-flowing tidal and storm currents has formed large, east-west-trending dunes. These bedforms (ranging between 5 and 20 m in height) contrast strongly with, and partly mask, the subdued topography of the older glacial features.

  12. Mapping sound intensities by seating position in a university concert band: A risk of hearing loss, temporary threshold shifts, and comparisons with standards of OSHA and NIOSH

    NASA Astrophysics Data System (ADS)

    Holland, Nicholas Vedder, III

    Exposure to loud sounds is one of the leading causes of hearing loss in the United States. The purpose of the current research was to measure the sound pressure levels generated within a university concert band and determine if those levels exceeded permissible sound limits for exposure according to criteria set by the Occupational Safety and Health Administration (OSHA) and the National Institute of Occupational Safety and Health (NIOSH). Time-weighted averages (TWA) were obtained via a dosimeter during six rehearsals for nine members of the ensemble (plus the conductor), who were seated in frontal proximity to "instruments of power" (trumpets, trombones, and percussion; Backus, 1977). Subjects received audiometer tests prior to and after each rehearsal to determine any temporary threshold shifts (TTS). Single-sample t tests were calculated to compare TWA means and the maximum sound intensity exposures set by OSHA and NIOSH. Correlations were calculated between TWAs and TTSs, as well as TTSs and the number of semesters subjects reported being seated in proximity to instruments of power. The TWA-OSHA mean of 90.2 dBA was not significantly greater than the specified OSHA maximum standard of 90.0 dBA (p > .05). The TWA-NIOSH mean of 93.1 dBA was, however, significantly greater than the NIOSH specified maximum standard of 85.0 dBA (p < .05). The correlation between TWAs and TTSs was considered weak (r = .21 for OSHA, r = .20 for NIOSH); the correlation between TTSs and semesters of proximity to instruments of power was also considered weak (r = .13). TWAs cumulatively exceeded both associations' sound exposure limits at 11 specified locations (nine subjects and both ears of the conductor) throughout the concert band's rehearsals. In addition, hearing acuity, as determined by TTSs, was substantially affected negatively by the intensities produced in the concert band. The researcher concluded that conductors, as well as their performers, must be aware of possible
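
The single-sample t test used above compares a set of measured TWAs against a fixed exposure limit. The sketch below shows the computation with invented dBA values (the study reports only the means, 90.2 dBA vs the OSHA 90.0 dBA limit and 93.1 dBA vs the NIOSH 85.0 dBA limit, not the raw data).

```python
import math
import statistics

# Sketch of a single-sample t test: H0 is that the mean TWA equals the
# exposure limit. The sample values below are illustrative, not the
# study's dosimeter readings.

def one_sample_t(samples, limit):
    """t statistic for H0: mean(samples) == limit."""
    n = len(samples)
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)          # sample standard deviation
    return (mean - limit) / (sd / math.sqrt(n))

twa_niosh = [91.0, 94.5, 92.8, 93.9, 93.2, 94.0]   # hypothetical TWAs, dBA
t_stat = one_sample_t(twa_niosh, limit=85.0)
```

The resulting t statistic is compared against the critical value for n − 1 degrees of freedom; a large positive value, as here, corresponds to the study's finding that the NIOSH limit was significantly exceeded.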

  13. Data-Intensive Benchmarking Suite

    SciTech Connect

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  14. The response of the inductively coupled argon plasma to solvent plasma load: spatially resolved maps of electron density obtained from the intensity of one argon line

    NASA Astrophysics Data System (ADS)

    Weir, D. G. J.; Blades, M. W.

    1994-12-01

    A survey of spatially resolved electron number density (ne) in the tail cone of the inductively coupled argon plasma (ICAP) is presented: all of the results of the survey have been radially inverted by numerical, asymmetric Abel inversion. The survey extends over the entire volume of the plasma beyond the exit of the ICAP torch; it extends over distances of z = 5-25 mm downstream from the induction coil, and over radial distances of ± 8 mm from the discharge axis. The survey also explores a range of inner argon flow rates (QIN), solvent plasma load (Qspl) and r.f. power: moreover, it explores loading by water, methanol and chloroform. Throughout the survey, ne was determined from the intensity of one, optically thin argon line, by a method which assumes that the atomic state distribution function (ASDF) for argon lies close to local thermal equilibrium (LTE). The validity of this assumption is reviewed. Also examined are the discrepancies between ne from this method and ne from Stark broadening measurements. With the error taken into account, the results of the survey reveal how time-averaged values of ne in the ICAP respond over an extensive, previously unexplored range of experimental parameters. Moreover, the spatial information lends insight into how the thermal conditions and the transport of energy respond. Overall, the response may be described in terms of energy consumption along the axial channel and thermal pinch within the induction region. The predominating effect depends on the solvent plasma load, the solvent composition, the robustness of the discharge, and the distribution of solvent material over the argon stream.
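
The radial inversion step referred to above can be illustrated with the textbook symmetric case. The onion-peeling sketch below assumes constant emission within concentric annuli and peels chords from the outside in; the paper's asymmetric Abel inversion is more elaborate.

```python
import numpy as np

# Sketch of numerical Abel inversion by onion peeling: recover a radial
# profile f(r) from its line-of-sight projection P(y). Annulus j spans
# [r_j, r_{j+1}]; the chord at offset y_i = r_i crosses annuli j >= i.

def abel_invert(projection, dr=1.0):
    n = len(projection)
    r = np.arange(n + 1) * dr                      # annulus boundaries
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # peel from the outside in
        y = r[i]
        outer = sum(
            2.0 * (np.sqrt(r[j + 1]**2 - y**2) - np.sqrt(r[j]**2 - y**2)) * f[j]
            for j in range(i + 1, n)
        )
        path_ii = 2.0 * np.sqrt(r[i + 1]**2 - y**2)  # chord length in annulus i
        f[i] = (projection[i] - outer) / path_ii
    return f

# Check against a uniform disk of radius R, whose analytic projection
# is P(y) = 2 * sqrt(R^2 - y^2); the inversion should recover f = 1.
R, n = 10.0, 10
y = np.arange(n) * (R / n)
proj = 2.0 * np.sqrt(R**2 - y**2)
radial = abel_invert(proj, dr=R / n)
```

In practice the measured lateral intensity profile of the argon line plays the role of `proj`, and the recovered radial profile is what gets converted to electron density under the near-LTE assumption.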

  15. Handmade Multitextured Maps.

    ERIC Educational Resources Information Center

    Trevelyan, Simon

    1984-01-01

    Tactile maps for visually impaired persons can be made by drawing lines with an aqueous adhesive solution, dusting with thermoengraving powder, and exposing the card to a source of intense heat (such as a heat gun or microwave oven). A raised line map results. (CL)

  16. Genetic Mapping

    MedlinePlus

    What is genetic mapping? How do researchers ... genetic map? What are genetic markers? Among the main goals of the Human ...

  17. Planetary maps

    USGS Publications Warehouse

    ,

    1992-01-01

    An important goal of the USGS planetary mapping program is to systematically map the geology of the Moon, Mars, Venus, and Mercury, and the satellites of the outer planets. These geologic maps are published in the USGS Miscellaneous Investigations (I) Series. Planetary maps on sale at the USGS include shaded-relief maps, topographic maps, geologic maps, and controlled photomosaics. Controlled photomosaics are assembled from two or more photographs or images using a network of points of known latitude and longitude. The images used for most of these planetary maps are electronic images, obtained from orbiting television cameras and various optical-mechanical systems. Photographic film was used only to map Earth's Moon.

  18. Seabed maps showing topography, ruggedness, backscatter intensity, sediment mobility, and the distribution of geologic substrates in Quadrangle 6 of the Stellwagen Bank National Marine Sanctuary Region offshore of Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, Page C.; Gallea, Leslie B.

    2015-11-10

    The U.S. Geological Survey (USGS), in cooperation with the National Oceanic and Atmospheric Administration's National Marine Sanctuary Program, has conducted seabed mapping and related research in the Stellwagen Bank National Marine Sanctuary (SBNMS) region since 1993. The area is approximately 3,700 square kilometers (km2) and is subdivided into 18 quadrangles. Seven maps, at a scale of 1:25,000, of quadrangle 6 (211 km2) depict seabed topography, backscatter, ruggedness, geology, substrate mobility, mud content, and areas dominated by fine-grained or coarse-grained sand. Interpretations of bathymetric and seabed backscatter imagery, photographs, video, and grain-size analyses were used to create the geology-based maps. In all, data from 420 stations were analyzed, including sediment samples from 325 locations. The seabed geology map shows the distribution of 10 substrate types ranging from boulder ridges to immobile, muddy sand to mobile, rippled sand. Mapped substrate types are defined on the basis of sediment grain-size composition, surface morphology, sediment layering, the mobility or immobility of substrate surfaces, and water depth range. This map series is intended to portray the major geological elements (substrates, topographic features, processes) of environments within quadrangle 6. Additionally, these maps will be the basis for the study of the ecological requirements of invertebrate and vertebrate species that utilize these substrates and guide seabed management in the region.

  19. Contour Mapping

    NASA Technical Reports Server (NTRS)

    1995-01-01

    In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, the users can map an area from the sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.

  20. GIS-mapping of environmental assessment of the territories in the region of intense activity for the oil and gas complex for achievement the goals of the Sustainable Development (on the example of Russia)

    NASA Astrophysics Data System (ADS)

    Yermolaev, Oleg

    2014-05-01

    A uniform system of comprehensive scientific-reference ecological-geographical mapping should serve as the basis for implementing the Sustainable Development (SD) concept in the territories of the Russian Federation's subjects or certain regions. In this case, the assessment of the ecological situation in the regions can be achieved by conjoining two interrelated systems: the mapping system and the geoinformation system. The report discusses the methodological aspects of Atlas mapping for the purposes of SD in the regions of Russia. The Republic of Tatarstan is viewed as a model territory, where the large-scale oil-gas complex "Tatneft" PLC operates. The company has functioned for more than 60 years. Oil fields occupy an area of more than 38,000 km2; about 40,000 oil wells and more than 55,000 km of pipelines are placed in its territory; more than 3 billion tons of oil have been extracted. Approaches to the structure and requirements for the Atlas's content are outlined, and the approaches to mapping "an ecological dominant" of SD are conceptually substantiated following the pattern of a large region of Russia. Several thematic mapping directions are distinguished in the Atlas's structure: • The background history of oil-field working; • Nature-preservation technologies during oil extraction; • The assessment of natural conditions for human vital activity; • Unfavorable and dangerous natural processes and phenomena; • Anthropogenic effects and change of the environmental surroundings; • Social-economical processes and phenomena; • Medical-ecological and geochemical processes and phenomena. Within these groups numerous other subgroups can be distinguished. The maps of unfavorable and dangerous processes and phenomena are subdivided in accordance with the types of processes, of endogenous and exogenous origin. Among the maps of anthropogenic effects on the natural surroundings, one can differentiate the maps of the influence on different spheres of nature

  1. USGS maps

    USGS Publications Warehouse

    ,

    2005-01-01

    Discover a small sample of the millions of maps produced by the U.S. Geological Survey (USGS) in its mission to map the Nation and survey its resources. This booklet gives a brief overview of the types of maps sold and distributed by the USGS through its Earth Science Information Centers (ESIC) and also available from business partners located in most States. The USGS provides a wide variety of maps, from topographic maps showing the geographic relief and thematic maps displaying the geology and water resources of the United States, to special studies of the moon and planets.

  2. Tactile localization depends on stimulus intensity.

    PubMed

    Steenbergen, Peter; Buitenweg, Jan R; Trojan, Jörg; Veltink, Peter H

    2014-02-01

    Few experimental data are available about the influence of stimulus intensity on localization of cutaneous stimuli. The localization behavior of an individual as function of the veridical stimulus sites can be represented in the form of a perceptual map. It is unknown how the intensity of cutaneous stimuli influences these perceptual maps. We investigated the effect of stimulus intensity on trial-to-trial localization variability and on perceptual maps. We applied non-painful electrocutaneous stimuli of three different intensities through seven surface electrodes on the lower arm of healthy participants. They localized the stimuli on a tablet monitor mounted directly above their arm, on which a photograph of this arm was presented. The length of the arm over which the stimuli were localized was contracted when compared to the real electrode positions. This length increased toward veridical with increasing stimulus intensity. The trial-to-trial variance of the localizations dropped significantly with increasing intensity. Furthermore, localization biases of individual stimulus positions were shown to decrease with increasing stimulus intensity. We conclude that tactile stimuli are localized closer to veridical with increasing intensity in two respects: the localizations become more consistent and more accurate.
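
The two quantities compared across intensities, trial-to-trial variance and the bias of the perceptual map relative to veridical electrode positions, can be computed straightforwardly. The trial data below are invented for illustration; only the analysis pattern follows the study's description.

```python
import statistics

# Sketch of building a perceptual map from localization trials: the mean
# localized position per electrode, its bias relative to the veridical
# position, and the trial-to-trial variance. All numbers are invented.
veridical = {"e1": 2.0, "e2": 6.0}            # electrode positions, cm
trials = {
    "e1": [1.6, 1.8, 1.7, 1.9],               # localized positions, cm
    "e2": [5.1, 5.4, 5.0, 5.3],
}

perceptual_map = {k: statistics.fmean(v) for k, v in trials.items()}
bias = {k: perceptual_map[k] - veridical[k] for k in veridical}
variance = {k: statistics.variance(v) for k, v in trials.items()}
```

The study's finding corresponds to both dictionaries shrinking toward zero as stimulus intensity increases: smaller `variance` (more consistent) and smaller `bias` (more accurate).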

  3. Virtual micro-intensity modulated radiation therapy.

    PubMed

    Siochi, R A

    2000-11-01

    Virtual micro-intensity modulated radiation therapy (VMIMRT) combines a 10 x 5 mm2 intensity map with a 5 x 10 mm2 intensity map, delivered at orthogonal collimator settings. The superposition of these component maps (CM) yields a 5 x 5 mm2 virtual micro-intensity map (VMIM) that can be delivered with a 1 cm leaf width MLC. A pair of CMs with optimal delivery efficiency and quality must be chosen, since a given VMIM can be delivered using several different pairs. This is possible since, for each group of four VMIM cells that can be covered by an MLC leaf in either collimator orientation, the minimum intensity can be delivered from either collimator setting. By varying the proportions of the minimum values that go into each CM, one can simultaneously minimize the number of potential junction effects and the number of segments required to deliver the VMIM. The minimization is achieved by reducing high leaf direction gradients in the CMs. Several pseudoclinical and random VMIMs were studied to determine the applicability of this new technique. A nine level boost map was also studied to investigate dosimetric and spatial resolution issues. Finally, clinical issues for this technique are discussed.
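
The superposition at the heart of VMIMRT, a map that is coarse along one axis plus a map that is coarse along the other, yielding finer effective resolution, can be demonstrated on a toy grid. The 4×4 grids of 5 mm cells below are illustrative; the paper's CM-splitting optimization (distributing the minimum values to reduce junction effects and segments) is not reproduced.

```python
import numpy as np

# Sketch of VMIM superposition: a component map constant over cell pairs
# along x (10 x 5 mm^2 cells) plus one constant over pairs along y
# (5 x 10 mm^2 cells) sum to a map that varies on the 5 x 5 mm^2 grid.
rng = np.random.default_rng(0)

cm_coarse_x = np.kron(rng.integers(0, 5, size=(4, 2)), np.ones((1, 2)))
cm_coarse_y = np.kron(rng.integers(0, 5, size=(2, 4)), np.ones((2, 1)))

vmim = cm_coarse_x + cm_coarse_y   # effective 5 x 5 mm^2 intensity map
```

Each component map is deliverable by a 1 cm leaf-width MLC at its own collimator orientation, which is why the sum achieves a resolution neither component has alone.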

  4. RICH MAPS

    EPA Science Inventory

    Michael Goodchild recently gave eight reasons why traditional maps are limited as communication devices, and how interactive internet mapping can overcome these limitations. In the past, many authorities in cartography, from Jenks to Bertin, have emphasized the importance of sim...

  5. Historical Mapping

    USGS Publications Warehouse

    ,

    1999-01-01

    Maps become out of date over time. Maps that are out of date, however, can be useful to historians, attorneys, environmentalists, genealogists, and others interested in researching the background of a particular area. Local historians can compare a series of maps of the same area compiled over a long period of time to learn how the area developed. A succession of such maps can provide a vivid picture of how a place changed over time.

  6. H I maps of S0 galaxies with polar rings

    SciTech Connect

    Van gorkom, J.H.; Schechter, P.L.; Kristian, J.

    1987-03-01

    VLA maps in the 21 cm line of neutral hydrogen have been obtained for three S0 galaxies with polar rings, and an upper limit on H I has been obtained for a fourth system. Polar rings span a continuum, ranging from those in which the H I seems to be in a relatively stable configuration, producing stars throughout its extent, to those in which the H I is very asymmetric, with stars forming only at the inner edge of an H I disk. A deep CCD image of MCG -5-7-1 shows arcs and filaments, some of which coincide with the likewise chaotic H I. If the system formed as the result of the merger of a gas-rich system with an S0 galaxy, the gas-rich system must have included considerable numbers of stars. 25 references.

  7. Topographic mapping

    USGS Publications Warehouse

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  8. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

    Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  9. Adding Context to James Webb Space Telescope Surveys with Current and Future 21 cm Radio Observations

    NASA Astrophysics Data System (ADS)

    Beardsley, A. P.; Morales, M. F.; Lidz, A.; Malloy, M.; Sutter, P. M.

    2015-02-01

    Infrared and radio observations of the Epoch of Reionization promise to revolutionize our understanding of the cosmic dawn, and major efforts with the JWST, MWA, and HERA are underway. While measurements of the ionizing sources with infrared telescopes and the effect of these sources on the intergalactic medium with radio telescopes should be complementary, to date the wildly disparate angular resolutions and survey speeds have made connecting proposed observations difficult. In this paper we develop a method to bridge the gap between radio and infrared studies. While the radio images may not have the sensitivity and resolution to identify individual bubbles with high fidelity, by leveraging knowledge of the measured power spectrum we are able to separate regions that are likely ionized from largely neutral, providing context for the JWST observations of galaxy counts and properties in each. By providing the ionization context for infrared galaxy observations, this method can significantly enhance the science returns of JWST and other infrared observations.

  10. Low noise parametric amplifiers for radio astronomy observations at 18-21 cm wavelength

    NASA Technical Reports Server (NTRS)

    Kanevskiy, B. Z.; Veselov, V. M.; Strukov, I. A.; Etkin, V. S.

    1974-01-01

    The principal characteristics and use of SHF parametric amplifiers for radiometer input devices are explored. Balanced parametric amplifiers (BPA) are considered as the SHF signal amplifiers allowing production of the amplifier circuit without a special filter to achieve decoupling. Formulas to calculate the basic parameters of a BPA are given. A modulator based on coaxial lines is discussed as the input element of the SHF. Results of laboratory tests of the receiver section and long-term stability studies of the SHF sector are presented.

  11. How Ewen and Purcell discovered the 21-cm interstellar hydrogen line.

    NASA Astrophysics Data System (ADS)

    Stephan, K. D.

    1999-02-01

    The story of how Harold Irving Ewen and Edward Mills Purcell detected the first spectral line ever observed in radio astronomy, in 1951, has been told for general audiences by Robert Buderi (1996). The present article has a different purpose. The technical roots of Ewen and Purcell's achievement reveal much about the way science often depends upon "borrowed" technologies, which were not developed with the needs of science in mind. The design and construction of the equipment is described in detail. As Ewen's photographs, records, and recollections show, he and Purcell had access to an unusual combination of scientific knowledge, engineering know-how, critical hardware, and technical assistance at Harvard, in 1950 and 1951. This combination gave them a competitive edge over similar research groups in Holland and Australia, who were also striving to detect the hydrogen line, and who succeeded only weeks after the Harvard researchers did. The story also shows that Ewen and Purcell did their groundbreaking scientific work in the "small-science" style that prevailed before World War II, while receiving substantial indirect help from one of the first big-science projects at Harvard.

  12. Will nonlinear peculiar velocity and inhomogeneous reionization spoil 21 cm cosmology from the epoch of reionization?

    PubMed

    Shapiro, Paul R; Mao, Yi; Iliev, Ilian T; Mellema, Garrelt; Datta, Kanan K; Ahn, Kyungjin; Koda, Jun

    2013-04-12

    The 21 cm background from the epoch of reionization is a promising cosmological probe: line-of-sight velocity fluctuations distort redshift, so brightness fluctuations in Fourier space depend upon angle, which linear theory shows can separate cosmological from astrophysical information. Nonlinear fluctuations in ionization, density, and velocity change this, however. The validity and accuracy of the separation scheme are tested here for the first time, by detailed reionization simulations. The scheme works reasonably well early in reionization (≲40% ionized), but not late (≳80% ionized).
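
    The separation scheme the abstract refers to is, in linear theory, the expansion of the redshift-space 21 cm power spectrum in powers of μ = k∥/k (a standard convention; the notation below is not taken from this record):

```latex
% Linear-theory decomposition of the redshift-space 21 cm brightness
% power spectrum, with mu = k_parallel / k:
P_{\Delta T}(k,\mu) = P_{\mu^0}(k) + P_{\mu^2}(k)\,\mu^2 + P_{\mu^4}(k)\,\mu^4
```

    Because the μ⁴ coefficient is proportional to the matter power spectrum alone, fitting the angular dependence can in principle isolate cosmology from the astrophysics contained in the lower-order terms; the simulations described here test how badly nonlinear velocity and patchy ionization break that separation.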

  13. An intensity scale for riverine flooding

    USGS Publications Warehouse

    Fulford, J.M.; ,

    2004-01-01

    Recent advances in the availability and accuracy of multi-dimensional flow models, the advent of precise elevation data for floodplains (LIDAR), and geographic information systems (GIS) allow the creation of hazard maps that more correctly reflect the varying levels of flood-damage risk across a floodplain when inundated by floodwaters. Modeled on the intensity scales used for wind damage, an equivalent water-damage flow intensity scale has been developed that ranges from 1 (minimal effects) to 10 (major damage to most structures). This flow intensity scale, FIS, is portrayed on a map as color-coded areas of increasing flow intensity. It should prove to be a valuable tool for assessing relative risk to people and property in known flood-hazard areas.

  14. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
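
    The partial-covariance idea described above can be sketched in a few lines: subtract from the plain covariance map the part of each channel's fluctuation that tracks a measured machine parameter. The array shapes and the use of pulse energy as the fluctuating parameter are illustrative assumptions, not details from this review:

```python
import numpy as np

def partial_covariance_map(shots, pulse_energy):
    """Partial covariance map of shot-resolved spectra.

    shots        : (n_shots, n_channels) array of single-shot spectra
                   (e.g. ion time-of-flight traces), one row per laser shot.
    pulse_energy : (n_shots,) fluctuating machine parameter (e.g. FEL pulse
                   energy) whose common-mode contribution is subtracted.
    """
    X = shots - shots.mean(axis=0)          # centre each channel
    I = pulse_energy - pulse_energy.mean()  # centre the fluctuating parameter
    n = shots.shape[0]
    cov_xy = X.T @ X / n                    # simple covariance map
    cov_xi = X.T @ I / n                    # channel-vs-parameter covariance
    # Remove the rank-one contribution driven by the parameter fluctuations
    return cov_xy - np.outer(cov_xi, cov_xi) / np.var(pulse_energy)
```

    On synthetic data in which two channels both track the pulse energy, the simple map shows a strong spurious correlation while the partial map suppresses it, which is exactly why the partial variant is favoured at x-ray free electron lasers.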

  15. Mapping Van

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A NASA Center for the Commercial Development of Space (CCDS) - developed system for satellite mapping has been commercialized for the first time. Global Visions, Inc. maps an area while driving along a road in a sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. Data is fed into a computerized geographic information system (GIS). The resulting maps can be used for tax assessment purposes, emergency dispatch vehicles and fleet delivery companies as well as other applications.

  16. Genome mapping

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genome maps can be thought of much like road maps except that, instead of traversing across land, they traverse across the chromosomes of an organism. Genetic markers serve as landmarks along the chromosome and provide researchers information as to how close they may be to a gene or region of inter...

  17. Undersea Mapping.

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    1991-01-01

    Presented is a cooperative learning activity in which students assume different roles in an effort to produce a relief map of the ocean floor. Materials, procedures, definitions, student roles, and questions are discussed. A reproducible map for the activity is provided. (CW)

  18. Question Mapping

    ERIC Educational Resources Information Center

    Martin, Josh

    2012-01-01

    After accepting the principal position at Farmersville (TX) Junior High, the author decided to increase instructional rigor through question mapping because of the success he saw using this instructional practice at his prior campus. Teachers are the number one influence on student achievement (Marzano, 2003), so question mapping provides a…

  19. Map Adventures.

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet about maps, with seven accompanying lessons, is appropriate for students in grades K-3. Students learn basic concepts for visualizing objects from different perspectives and how to understand and use maps. Lessons in the packet center on a story about a little girl, Nikki, who rides in a hot-air balloon that gives her, and…

  20. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  1. Collection Mapping.

    ERIC Educational Resources Information Center

    Harbour, Denise

    2002-01-01

    Explains collection mapping for library media collections. Discusses purposes for creating collection maps, including helping with selection and weeding decisions, showing how the collection supports the curriculum, and making budget decisions; and methods of data collection, including evaluating a collaboratively taught unit with the classroom…

  2. A symbiotic approach to SETI observations: use of maps from the Westerbork Synthesis Radio Telescope.

    PubMed

    Tarter, J C; Israel, F P

    1982-01-01

    High spatial resolution continuum radio maps produced by the Westerbork Synthesis Radio Telescope (WSRT) of The Netherlands at frequencies near the 21 cm HI line have been examined for anomalous sources of emission coincident with the locations of nearby bright stars. From a total of 542 stellar positions investigated, no candidates for radio stars or ETI signals were discovered to formal limits on the minimum detectable signal ranging from 7.7 x 10(-22) W/m2 to 6.4 x 10(-24) W/m2. This preliminary study has verified that data collected by radio astronomers at large synthesis arrays can profitably be analysed for SETI signals (in a non-interfering manner) provided only that the data are available in the form of a more or less standard two-dimensional map format.

  3. A symbiotic approach to SETI observations: use of maps from the Westerbork Synthesis Radio Telescope

    NASA Technical Reports Server (NTRS)

    Tarter, J. C.; Israel, F. P.

    1982-01-01

    High spatial resolution continuum radio maps produced by the Westerbork Synthesis Radio Telescope (WSRT) of The Netherlands at frequencies near the 21 cm HI line have been examined for anomalous sources of emission coincident with the locations of nearby bright stars. From a total of 542 stellar positions investigated, no candidates for radio stars or ETI signals were discovered to formal limits on the minimum detectable signal ranging from 7.7 x 10(-22) W/m2 to 6.4 x 10(-24) W/m2. This preliminary study has verified that data collected by radio astronomers at large synthesis arrays can profitably be analysed for SETI signals (in a non-interfering manner) provided only that the data are available in the form of a more or less standard two-dimensional map format.

  4. Mapping Children--Mapping Space.

    ERIC Educational Resources Information Center

    Pick, Herbert L., Jr.

    Research is underway concerning the way the perception, conception, and representation of spatial layout develops. Three concepts are important here--space itself, frame of reference, and cognitive map. Cognitive map refers to a form of representation of the behavioral space, not paired associate or serial response learning. Other criteria…

  5. Mapping Biodiversity.

    ERIC Educational Resources Information Center

    World Wildlife Fund, Washington, DC.

    This document features a lesson plan that examines how maps help scientists protect biodiversity and how plants and animals are adapted to specific ecoregions by comparing biome, ecoregion, and habitat. Samples of instruction and assessment are included. (KHR)

  6. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps.
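
    The abstract's point about urban boundaries can be made concrete with the common urban-minus-rural definition of surface UHI intensity (one definition among several; the masks and temperatures in the usage note are illustrative, not data from this study):

```python
import numpy as np

def uhi_intensity(lst, urban_mask, rural_mask):
    """Surface UHI intensity: mean land surface temperature (LST) inside
    the urban extent minus the mean LST over the surrounding rural area."""
    return lst[urban_mask].mean() - lst[rural_mask].mean()
```

    With an outdated urban extent, newly urbanized (warm) pixels are counted as "rural", so the same LST map yields a different intensity than with the concurrent extent; the direction and size of the bias depend on how warm the expansion ring is relative to the old core, matching the paper's point that the specific UHI definition matters.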

  7. Map Separates

    USGS Publications Warehouse

    ,

    2001-01-01

    U.S. Geological Survey (USGS) topographic maps are printed using up to six colors (black, blue, green, red, brown, and purple). To prepare your own maps or artwork based on maps, you can order separate black-and-white film positives or negatives for any color printed on a USGS topographic map, or for one or more of the groups of related features printed in the same color on the map (such as drainage and drainage names from the blue plate.) In this document, examples are shown with appropriate ink color to illustrate the various separates. When purchased, separates are black-and-white film negatives or positives. After you receive a film separate or composite from the USGS, you can crop, enlarge or reduce, and edit to add or remove details to suit your special needs. For example, you can adapt the separates for making regional and local planning maps or for doing many kinds of studies or promotions by using the features you select and then printing them in colors of your choice.

  8. Venus mapping

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Morgan, H. F.; Sucharski, Robert

    1991-01-01

    Semicontrolled image mosaics of Venus, based on Magellan data, are being compiled at 1:50,000,000, 1:10,000,000, 1:5,000,000, and 1:1,000,000 scales to support the Magellan Radar Investigator (RADIG) team. The mosaics are semicontrolled in the sense that data gaps were not filled and significant cosmetic inconsistencies exist. Contours are based on preliminary radar altimetry data that are subject to revision and improvement. Final maps to support geologic mapping and other scientific investigations, to be compiled as the dataset becomes complete, will be sponsored by the Planetary Geology and Geophysics Program and/or the Venus Data Analysis Program. All maps, both semicontrolled and final, will be published as I-maps by the United States Geological Survey. All of the mapping is based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control on planets where framing cameras were used, is not feasible with the radar images of Venus, although an eventual shift of coordinate system to a revised spin-axis location is anticipated. This is expected to be small enough that it will affect only large-scale maps.

  9. Hispanic Inpatient Pain Intensity.

    PubMed

    McDonald, Deborah Dillon; Ambrose, Margaret; Morey, Barbara

    2015-11-01

    Hispanic adults experience significant pain, but little is known about their pain during hospitalization. The purpose of this research was to describe Hispanic inpatients' pain intensity and compare their pain intensity with that of non-Hispanic patients. A post hoc descriptive design was used to examine 1,466 Hispanic inpatients' medical records (63.2% English speakers) and 12,977 non-Hispanic inpatients' medical records from one hospital for 2012. Mean documented pain intensity was mild for both Hispanic and non-Hispanic inpatients. Pain intensity was greater for English-speaking Hispanic patients than Spanish speakers. The odds of being documented with moderate or greater pain intensity decreased 30% for Spanish-speaking patients. Greater pain intensity documented for English-speaking Hispanic inpatients suggests underreporting of pain intensity by Spanish-speaking patients. Practitioners should use interpreter services when assessing and treating pain with patients who speak languages different from the practitioners' language(s).

  10. Ranking welding intensity in pyroclastic deposits

    NASA Astrophysics Data System (ADS)

    Quane, Steven L.; Russell, James K.

    2005-02-01

    Welding of pyroclastic deposits involves flattening of glassy pyroclasts under a compactional load at temperatures above the glass transition temperature. Progressive welding is recorded by changes in the petrographic (e.g., fabric) and physical (e.g., density) properties of the deposits. Mapping the intensity of welding can be integral to studies of pyroclastic deposits, but making systematic comparisons between deposits can be problematical. Here we develop a scheme for ranking welding intensity in pyroclastic deposits on the basis of petrographic textural observations (e.g., oblateness of pumice lapilli and micro-fabric orientation) and measurements of physical properties, including density, porosity, point load strength and uniaxial compressive strength. Our dataset comprises measurements on 100 samples collected from a single cooling unit of the Bandelier Tuff and parallel measurements on 8 samples of more densely welded deposits. The proposed classification comprises six ranks of welding intensity ranging from unconsolidated (Rank I) to obsidian-like vitrophyre (Rank VI) and should allow for reproducible mapping of subtle variations in welding intensity between different deposits. The application of the ranking scheme is demonstrated by using published physical property data on welded pyroclastic deposits to map the total accumulated strain and to reconstruct their pre-welding thicknesses.

  11. Intensive Care, Intense Conflict: A Balanced Approach.

    PubMed

    Paquette, Erin Talati; Kolaitis, Irini N

    2015-01-01

    Caring for a child in a pediatric intensive care unit is emotionally and physically challenging and often leads to conflict. Skilled mediators may not always be available to aid in conflict resolution. Careproviders at all levels of training are responsible for managing difficult conversations with families and can often prevent escalation of conflict. Bioethics mediators have acknowledged the important contribution of mediation training in improving clinicians' skills in conflict management. Familiarizing careproviders with basic mediation techniques is an important step towards preventing escalation of conflict. While training in effective communication is crucial, a sense of fairness and justice that may only come with the introduction of a skilled, neutral third party is equally important. For intense conflict, we advocate for early recognition, comfort, and preparedness through training of clinicians in de-escalation and optimal communication, along with the use of more formally trained third-party mediators, as required.

  12. Intensity standardisation of 7T MR images for intensity-based segmentation of the human hypothalamus.

    PubMed

    Schindler, Stephanie; Schreiber, Jan; Bazin, Pierre-Louis; Trampel, Robert; Anwander, Alfred; Geyer, Stefan; Schönknecht, Peter

    2017-01-01

    The high spatial resolution of 7T MRI enables us to identify subtle volume changes in brain structures, providing potential biomarkers of mental disorders. Most volumetric approaches require that similar intensity values represent similar tissue types across different persons. By applying colour-coding to T1-weighted MP2RAGE images, we found that the high measurement accuracy achieved by high-resolution imaging may be compromised by inter-individual variations in the image intensity. To address this issue, we analysed the performance of five intensity standardisation techniques in high-resolution T1-weighted MP2RAGE images. Twenty images with extreme intensities in the GM and WM were standardised to a representative reference image. We performed a multi-level evaluation with a focus on the hypothalamic region: analysing the intensity histograms as well as the actual MR images, and requiring that the correlation between the whole-brain tissue volumes and subject age be preserved during standardisation. The results were compared with T1 maps. Linear standardisation using subcortical ROIs of GM and WM provided good results for all evaluation criteria: it improved the histogram alignment within the ROIs and the average image intensity within the ROIs and the whole-brain GM and WM areas. This method reduced the inter-individual intensity variation of the hypothalamic boundary by more than half, outperforming all other methods, and kept the original correlation between the GM volume and subject age intact. Mixed results were obtained for the other four methods, which sometimes came at the expense of unwarranted changes in the age-related pattern of the GM volume. The mapping of the T1 relaxation time with the MP2RAGE sequence is advertised as being especially robust to bias field inhomogeneity. We found little evidence that substantiated the T1 map's theoretical superiority over the T1-weighted images regarding the inter-individual image intensity homogeneity.
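
    The "linear standardisation using subcortical ROIs of GM and WM" plausibly amounts to a two-point affine map that sends each subject's mean GM and WM ROI intensities onto reference values. The function below is a hypothetical sketch of that reading, not the authors' implementation:

```python
import numpy as np

def linear_standardise(img, roi_gm, roi_wm, ref_gm_mean, ref_wm_mean):
    """Affinely rescale image intensities so that the mean values inside the
    GM and WM ROIs land exactly on the reference image's values."""
    gm_mean = img[roi_gm].mean()
    wm_mean = img[roi_wm].mean()
    scale = (ref_wm_mean - ref_gm_mean) / (wm_mean - gm_mean)
    return ref_gm_mean + scale * (img - gm_mean)
```

    Because the transform is affine per image, within-image contrast relationships are preserved while the two ROI means are matched exactly, which is consistent with such a method leaving age-related volume patterns intact.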

  13. Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.

    2017-03-01

    We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations of two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.
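
    An "overall spectral index" like the quoted -2.73 is, under the usual convention T(ν) ∝ ν^α, the slope of a straight-line fit in log-log space across the band; the sketch below illustrates that convention and fitting choice, which are assumptions rather than details taken from the paper:

```python
import numpy as np

def spectral_index(freqs_mhz, temps):
    """Least-squares spectral index alpha in T(nu) ∝ nu**alpha,
    fitted as a straight line in log-log space."""
    alpha, _intercept = np.polyfit(np.log(freqs_mhz), np.log(temps), 1)
    return alpha
```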

  14. Light intensity compressor

    DOEpatents

    Rushford, Michael C.

    1990-01-01

    In a system for recording images having vastly differing light intensities over the face of the image, a light intensity compressor is provided that utilizes the properties of twisted nematic liquid crystals to compress the image intensity. A photoconductor or photodiode material that is responsive to the wavelength of radiation being recorded is placed adjacent to a layer of twisted nematic liquid crystal material. An electric potential is applied to a pair of electrodes disposed outside of the liquid crystal/photoconductor arrangement to provide an electric field in the vicinity of the liquid crystal material. The electrodes are substantially transparent to the form of radiation being recorded. A pair of crossed polarizers are provided on opposite sides of the liquid crystal. The front polarizer linearly polarizes the light, while the back polarizer cooperates with the front polarizer and the liquid crystal material to compress the intensity of a viewed scene. Light incident upon the intensity compressor activates the photoconductor in proportion to the intensity of the light, thereby varying the field applied to the liquid crystal. The increased field causes the liquid crystal to have less of a twisting effect on the incident linearly polarized light, which will cause an increased percentage of the light to be absorbed by the back polarizer. The intensity of an image may be compressed by forming an image on the light intensity compressor.

  15. Light intensity compressor

    DOEpatents

    Rushford, Michael C.

    1990-02-06

    In a system for recording images having vastly differing light intensities over the face of the image, a light intensity compressor is provided that utilizes the properties of twisted nematic liquid crystals to compress the image intensity. A photoconductor or photodiode material that is responsive to the wavelength of radiation being recorded is placed adjacent to a layer of twisted nematic liquid crystal material. An electric potential is applied to a pair of electrodes disposed outside of the liquid crystal/photoconductor arrangement to provide an electric field in the vicinity of the liquid crystal material. The electrodes are substantially transparent to the form of radiation being recorded. A pair of crossed polarizers are provided on opposite sides of the liquid crystal. The front polarizer linearly polarizes the light, while the back polarizer cooperates with the front polarizer and the liquid crystal material to compress the intensity of a viewed scene. Light incident upon the intensity compressor activates the photoconductor in proportion to the intensity of the light, thereby varying the field applied to the liquid crystal. The increased field causes the liquid crystal to have less of a twisting effect on the incident linearly polarized light, which will cause an increased percentage of the light to be absorbed by the back polarizer. The intensity of an image may be compressed by forming an image on the light intensity compressor.

  16. Pediatric intensive care.

    PubMed

    Macintire, D K

    1999-07-01

    To provide optimal care, a veterinarian in a pediatric intensive care situation for a puppy or kitten should be familiar with normal and abnormal vital signs, nursing care and monitoring considerations, and probable diseases. This article is a brief discussion of the pediatric intensive care commonly required to treat puppies or kittens in emergency situations and for canine parvovirus type 2 enteritis.

  17. Map projections

    USGS Publications Warehouse

    ,

    1993-01-01

    A map projection is used to portray all or part of the round Earth on a flat surface. This cannot be done without some distortion. Every projection has its own set of advantages and disadvantages. There is no "best" projection. The mapmaker must select the one best suited to the needs, reducing distortion of the most important features. Mapmakers and mathematicians have devised almost limitless ways to project the image of the globe onto paper. Scientists at the U. S. Geological Survey have designed projections for their specific needs—such as the Space Oblique Mercator, which allows mapping from satellites with little or no distortion. This document gives the key properties, characteristics, and preferred uses of many historically important projections and of those frequently used by mapmakers today.

  18. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset.
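
    The stated observation that characters are small connected objects while roads are elongated lines can be sketched with plain connected-component labeling: label the foreground, then route small components to a text layer and large ones to a road layer. The 4-connectivity and the size threshold below are illustrative choices, not parameters from the thesis:

```python
import numpy as np
from collections import deque

def split_text_and_roads(binary, max_text_size=20):
    """Split a binary raster layer into text-like (small) and road-like
    (large) components via breadth-first connected-component labeling."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    text = np.zeros((h, w), dtype=bool)
    roads = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:  # BFS over 4-connected foreground pixels
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                target = text if len(comp) <= max_text_size else roads
                for y, x in comp:
                    target[y, x] = True
    return text, roads
```

    A real pipeline would add an elongation test (roads are long and thin, not merely large) and reassemble broken characters, but the component-size split already captures the basic heuristic.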

  19. Genetic mapping in grapevine using a SNP microarray: intensity values

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genotyping microarrays are widely used for genome wide association studies, but in high-diversity organisms, the quality of SNP calls can be diminished by genetic variation near the assayed nucleotide. To address this limitation in grapevine, we developed a simple heuristic that uses hybridization i...

  20. Intensity Biased PSP Measurement

    NASA Technical Reports Server (NTRS)

    Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.

    2000-01-01

    The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub o)/I) and pressure ratio (P/P(sub o)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which otherwise is assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures; because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction for the output intensity due to light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in-situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
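
    The Stern-Volmer relation quoted above, I(sub o)/I = A + B(P/P(sub o)), inverts directly to recover pressure from a measured intensity ratio. The coefficients in the example are hypothetical calibration values, and real paints deviate from this line at low pressure exactly as the abstract describes:

```python
def pressure_from_intensity(I, I_ref, A, B, P0=14.7):
    """Invert the Stern-Volmer relation I_ref/I = A + B*(P/P0) for pressure.

    I     : wind-on PSP intensity (in practice ratioed against an NPSP
            reference channel, per the in-situ correction described above)
    I_ref : wind-off intensity at the reference pressure P0 (here in psia)
    A, B  : paint calibration coefficients (A + B = 1 when P = P0)
    """
    return P0 * (I_ref / I - A) / B
```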

  2. High solar intensity radiometer

    NASA Technical Reports Server (NTRS)

    Jack, J. R.; Spisz, E. W.

    1972-01-01

    Silicon solar cells are used to measure visible radiant energy and radiation intensities to 20 solar constants. Future investigations are planned for up to 100 solar constants. Radiometer is small, rugged, accurate and inexpensive.

  3. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-07

We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames per second.
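
The reconstruction step described here solves an underdetermined linear system under a sparsity prior. The sketch below uses generic iterative shrinkage-thresholding (ISTA), not the authors' solver, with a made-up random sensing matrix standing in for the single-pixel camera's projection patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene" of n pixels, measured with m < n incoherent linear projections
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, 0.5, 0.8]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # under-sampled measurements

# ISTA: iterate gradient descent on ||Ax - y||^2 plus soft-thresholding (L1 prior)
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                       # gradient of the data-fit term
    x = x - step * g
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

print(np.round(x[[5, 20, 41]], 2))              # recovered sparse amplitudes
```

With only three nonzero pixels and 32 measurements, the sparse scene is recovered from half as many measurements as pixels.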

  4. Approach to standardizing MR image intensity scale

    NASA Astrophysics Data System (ADS)

    Nyul, Laszlo G.; Udupa, Jayaram K.

    1999-05-01

Despite the many advantages of MR images, they lack a standard image intensity scale. MR image intensity ranges and the meaning of intensity values vary even for the same protocol (P) and the same body region (D). This causes many difficulties in image display and analysis. We propose a two-step method for standardizing the intensity scale in such a way that for the same P and D, similar intensities will have similar meanings. In the first step, the parameters of the standardizing transformation are 'learned' from an image set. In the second step, for each MR study, these parameters are used to map its histogram onto the standardized histogram. The method was tested quantitatively on 90 whole brain FSE T2, PD and T1 studies of MS patients and qualitatively on several other SE PD, T2 and SPGR studies of the brain and foot. Measurements using mean squared difference showed that the standardized image intensities have statistically significantly more consistent range and meaning than the originals. Fixed windows can be established for standardized images and used for display without the need of per case adjustment. Preliminary results also indicate that the method facilitates improving the degree of automation of image segmentation.
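
The two-step scheme can be illustrated with a toy piecewise-linear version in the same spirit: learn standard landmark positions from a training set, then map each study's own percentile landmarks onto them. The landmark choice and the synthetic "studies" below are invented for illustration, and this toy version clips intensities outside the outer landmarks.

```python
import numpy as np

LANDMARKS = [10, 50, 90]  # hypothetical percentile landmarks

def learn_standard_scale(images, lo=0.0, hi=100.0):
    """Step 1: 'learn' standard landmark positions as the mean of each image's
    percentile landmarks after mapping its [p10, p90] range onto [lo, hi]."""
    mapped = []
    for img in images:
        p = np.percentile(img, LANDMARKS)
        scale = (hi - lo) / (p[-1] - p[0])
        mapped.append(lo + (p - p[0]) * scale)
    return np.mean(mapped, axis=0)

def standardize(img, standard):
    """Step 2: piecewise-linear map of the image's own landmarks onto the
    learned standard landmarks (np.interp clamps beyond the outer landmarks)."""
    p = np.percentile(img, LANDMARKS)
    return np.interp(img, p, standard)

# Two synthetic "studies" with very different raw intensity scales
rng = np.random.default_rng(1)
img_a = rng.normal(100, 20, 10_000)
img_b = rng.normal(400, 80, 10_000)   # same tissue, 4x the scanner gain

std = learn_standard_scale([img_a, img_b])
out_a, out_b = standardize(img_a, std), standardize(img_b, std)
print(round(np.median(out_a)), round(np.median(out_b)))  # now comparable
```

After standardization the two studies' medians coincide on the common scale, which is what makes fixed display windows possible.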

  5. [Intensive medicine in Spain].

    PubMed

    2011-03-01

Intensive care medicine is a medical specialty that was officially established in our country in 1978, with a 5-year training program including two years of common core training followed by three years of specific training in an intensive care unit accredited for training. During this 32-year period, intensive care medicine has carried out an intense and varied activity, which has positioned it as an attractive specialty with a future in the hospital setting. This document summarizes the history of the specialty, its current situation, the key role played in the programs of organ donation and transplantation of the National Transplant Organization (after more than 20 years of mutual collaboration), its training activities with the development of the National Plan of Cardiopulmonary Resuscitation, with a trajectory of more than 25 years, and its interest in providing care based on quality and safety programs for the severely ill patient. It also describes the development of reference registries, driven by the need for reliable data on the care process for the most prevalent diseases, such as ischemic heart disease or ICU-acquired infections, based on long-term experience (more than 15 years), which makes available epidemiological information and characteristics of care that may affect practical patient care. Moreover, features of its scientific society (SEMICYUC) are reported, an organization that brings together the interests of more than 280 ICUs and more than 2700 intensivists, with reference to the journal Medicina Intensiva, the official journal of the society and of the Panamerican and Iberian Federation of Critical Medicine and Intensive Care Societies. Medicina Intensiva is indexed in the Thomson Reuters products of Science Citation Index Expanded (Scisearch(®)) and Journal Citation Reports, Science Edition.
The important contribution of the Spanish intensive care medicine to the scientific community is also analyzed, and in relation to

  6. Seismicity map of the state of Georgia

    USGS Publications Warehouse

    Reagor, B. Glen; Stover, C.W.; Algermissen, S.T.; Long, L.T.

    1991-01-01

This map is one of a series of seismicity maps produced by the U.S. Geological Survey that show earthquake data of individual states or groups of states at the scale of 1:1,000,000. This map shows only those earthquakes with epicenters located within the boundaries of Georgia, even though earthquakes in nearby states or countries may have been felt or may have caused damage in Georgia. The data in table 1 were used to compile the seismicity map; these data are a corrected, expanded, and updated (through 1987) version of the data used by Algermissen (1969) for a study of seismic risk in the United States. The locations and intensities of some earthquakes were revised, and intensities were assigned where none had been before. Many earthquakes were added to the original list from new data sources as well as from some old data sources that had not been previously used. The data in table 1 represent best estimates of the location of the epicenter, magnitude, and intensity of each earthquake on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed, but not all, especially for earthquakes that occurred before seismic instruments were universally used. The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the Arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.
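
The compilation procedure described (round epicenters to the nearest tenth of a degree, group identical locations, then record the count, maximum Modified Mercalli intensity, and latest year of that maximum) can be sketched as follows. The catalog entries are invented for illustration.

```python
from collections import defaultdict

# Hypothetical catalog entries: (lat, lon, modified_mercalli_intensity, year)
quakes = [
    (33.44, -82.31, 6, 1875),
    (33.41, -82.28, 4, 1902),   # rounds into the same 0.1-degree cell
    (34.92, -85.27, 5, 1913),
]

cells = defaultdict(lambda: {"count": 0, "max_mmi": 0, "year": None})
for lat, lon, mmi, year in quakes:
    key = (round(lat, 1), round(lon, 1))       # nearest tenth of a degree
    c = cells[key]
    c["count"] += 1
    # track the maximum intensity and the latest year it was recorded
    if mmi > c["max_mmi"] or (mmi == c["max_mmi"] and year > (c["year"] or 0)):
        c["max_mmi"], c["year"] = mmi, year

# One line per map triangle: location, count, max intensity, year
for key, c in sorted(cells.items()):
    print(key, c["count"], c["max_mmi"], c["year"])
```

Each grouped cell corresponds to one triangle on the published map, with the count to its right and the Roman-numeral intensity to its left.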

  7. High intensity neutrino beams

    SciTech Connect

    Ichikawa, A. K.

    2015-07-15

High-intensity proton accelerator complexes have enabled long baseline neutrino oscillation experiments with a precisely controlled neutrino beam. The beam power achieved so far is a few hundred kW, thanks to enormous efforts by accelerator physicists and engineers. However, to fully understand the lepton mixing structure, MW-class accelerators are desired. We describe the current intensity-frontier high-energy proton accelerators, their upgrade plans, and the technical challenges in the neutrino beamline facilities.

  8. The National Map: from geography to mapping and back again

    USGS Publications Warehouse

    Kelmelis, John A.; DeMulder, Mark L.; Ogrosky, Charles E.; Van Driel, J. Nicholas; Ryan, Barbara J.

    2003-01-01

    When the means of production for national base mapping were capital intensive, required large production facilities, and had ill-defined markets, Federal Government mapping agencies were the primary providers of the spatial data needed for economic development, environmental management, and national defense. With desktop geographic information systems now ubiquitous, source data available as a commodity from private industry, and the realization that many complex problems faced by society need far more and different kinds of spatial data for their solutions, national mapping organizations must realign their business strategies to meet growing demand and anticipate the needs of a rapidly changing geographic information environment. The National Map of the United States builds on a sound historic foundation of describing and monitoring the land surface and adds a focused effort to produce improved understanding, modeling, and prediction of land-surface change. These added dimensions bring to bear a broader spectrum of geographic science to address extant and emerging issues. Within the overarching construct of The National Map, the U.S. Geological Survey (USGS) is making a transition from data collector to guarantor of national data completeness; from producing paper maps to supporting an online, seamless, integrated database; and from simply describing the Nation’s landscape to linking these descriptions with increased scientific understanding. Implementing the full spectrum of geographic science addresses a myriad of public policy issues, including land and natural resource management, recreation, urban growth, human health, and emergency planning, response, and recovery. Neither these issues nor the science and technologies needed to deal with them are static. A robust research agenda is needed to understand these changes and realize The National Map vision. Initial successes have been achieved. These accomplishments demonstrate the utility of

  9. Rainfall intensity-duration conditions for mass movements in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Chi-Wen; Saito, Hitoshi; Oguchi, Takashi

    2015-12-01

Mass movements caused by rainfall events in Taiwan are analyzed during a 7-year period from 2006 to 2012. Data from the Taiwan Soil and Water Conservation Bureau reports were compiled for 263 mass movement events, including 156 landslides, 91 debris flows, and 16 events with both landslides and debris flows. Rainfall totals for each site location were obtained from interpolated rain gauge data. The rainfall intensity-duration (I-D) relationship was examined to establish a rainfall threshold for mass movements using random sampling: I = 18.10(±2.67)·D^(−0.17(±0.04)), where I is mean rainfall intensity (mm/h) and D is the time (h) between the beginning of a rainfall event and the resulting mass movement. Significant differences were found between rainfall intensities and thresholds for landslides and debris flows. For short-duration rainfall events, higher mean rainfall intensities were required to trigger debris flows. In contrast, for long-duration rainfall events, similar mean rainfall intensities triggered both landslides and debris flows. Mean rainfall intensity was rescaled by mean annual precipitation (MAP) to define a new threshold: I_MAP = 0.0060(±0.0009)·D^(−0.17(±0.04)), where I_MAP is rescaled rainfall intensity and MAP is the minimum for mountainous areas in Taiwan (3000 mm). Although the I-D threshold for Taiwan is high, the I_MAP-D threshold for Taiwan tends to be low relative to other areas around the world. Our results indicate that Taiwan is highly prone to rainfall-induced mass movements. This study also shows that most mass movements occur in high rainfall-intensity periods, but some events occur before or after the rainfall peak. Both antecedent and peak rainfall play important roles in triggering landslides, whereas debris flow occurrence is more related to peak rainfall than antecedent rainfall.
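
The fitted threshold can be applied directly to classify a storm. A small sketch using the central values of the reported relation (the helper names are mine, and the uncertainties on the coefficients are ignored here):

```python
def id_threshold(duration_h, a=18.10, b=-0.17):
    """Rainfall intensity-duration threshold I = a * D**b in mm/h,
    using the central values of the relation reported for Taiwan."""
    return a * duration_h ** b

def exceeds_threshold(mean_intensity_mm_h, duration_h):
    """True if a storm's mean intensity is at or above the I-D threshold."""
    return mean_intensity_mm_h >= id_threshold(duration_h)

# A 24-hour storm with a 15 mm/h mean intensity lies above the threshold
print(round(id_threshold(24.0), 2), exceeds_threshold(15.0, 24.0))
```

Because the exponent is negative, longer events need lower mean intensities to trigger mass movements, consistent with the discussion above.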

  10. Intensity standardisation of 7T MR images for intensity-based segmentation of the human hypothalamus

    PubMed Central

    Schreiber, Jan; Bazin, Pierre-Louis; Trampel, Robert; Anwander, Alfred; Geyer, Stefan; Schönknecht, Peter

    2017-01-01

The high spatial resolution of 7T MRI enables us to identify subtle volume changes in brain structures, providing potential biomarkers of mental disorders. Most volumetric approaches require that similar intensity values represent similar tissue types across different persons. By applying colour-coding to T1-weighted MP2RAGE images, we found that the high measurement accuracy achieved by high-resolution imaging may be compromised by inter-individual variations in the image intensity. To address this issue, we analysed the performance of five intensity standardisation techniques in high-resolution T1-weighted MP2RAGE images. Twenty images with extreme intensities in the GM and WM were standardised to a representative reference image. We performed a multi-level evaluation with a focus on the hypothalamic region—analysing the intensity histograms as well as the actual MR images, and requiring that the correlation between the whole-brain tissue volumes and subject age be preserved during standardisation. The results were compared with T1 maps. Linear standardisation using subcortical ROIs of GM and WM provided good results for all evaluation criteria: it improved the histogram alignment within the ROIs and the average image intensity within the ROIs and the whole-brain GM and WM areas. This method reduced the inter-individual intensity variation of the hypothalamic boundary by more than half, outperforming all other methods, and kept the original correlation between the GM volume and subject age intact. Mixed results were obtained for the other four methods, which sometimes came at the expense of unwarranted changes in the age-related pattern of the GM volume. The mapping of the T1 relaxation time with the MP2RAGE sequence is advertised as being especially robust to bias field inhomogeneity. We found little evidence that substantiated the T1 map’s theoretical superiority over the T1-weighted images regarding the inter-individual image intensity homogeneity.

  11. Error-detective one-dimensional mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Zhou, Shihao

    2017-02-01

The 1-D mapping is an intensity-based method used to estimate a projective transformation between two images. However, it lacks an intensity-invariant criterion for deciding whether two images can be aligned or not. The paper proposes a novel decision criterion and, thus, develops an error-detective 1-D mapping method. First, a multiple 1-D mapping scheme is devised for yielding redundant estimates of an image transformation. Then, a voting scheme is proposed for verifying these multiple estimates, in which the presence of at least one estimate that does not receive all the votes is taken as the criterion for rejecting a false match. Based on this decision criterion, an error-detective 1-D mapping algorithm is also constructed. Finally, the proposed algorithm is evaluated in registering real image pairs with a large range of projective transformations.

  12. Intensity Conserving Spectral Fitting

    NASA Astrophysics Data System (ADS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2016-01-01

    The detailed shapes of spectral-line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. We have developed an iterative procedure that corrects for this effect. It converges rapidly and is very flexible in that it can be used with any fitting function. We present examples of cubic-spline and Gaussian fits and give special attention to measurements of blue-red asymmetries of coronal emission lines.
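
A minimal numerical sketch of the iterative intensity-conserving idea: each iteration shifts the bin-averaged samples by the difference between the current model evaluated at the bin center and its average over the bin. The moment-based Gaussian "fit" below is a stand-in for whatever fitting function is actually used, and the bin layout is invented.

```python
import numpy as np

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def fit_moments(x, y):
    """Crude Gaussian 'fit' by moments (stand-in for any fitting routine)."""
    w = np.clip(y, 0, None)
    cen = np.sum(w * x) / np.sum(w)
    sig = np.sqrt(np.sum(w * (x - cen) ** 2) / np.sum(w))
    return y.max(), cen, sig

# True profile, observed only as averages over wide wavelength bins
amp0, cen0, sig0 = 1.0, 0.0, 1.0
edges = np.linspace(-4, 4, 9)                 # 8 bins, each 1 sigma wide
centers = 0.5 * (edges[:-1] + edges[1:])
fine = np.linspace(-4, 4, 8001)
truth = gauss(fine, amp0, cen0, sig0)
binned = np.array([truth[(fine >= lo) & (fine < hi)].mean()
                   for lo, hi in zip(edges[:-1], edges[1:])])

# Iterative correction: move each observed bin average toward the value the
# current model takes at the bin center, so the fit conserves intensity
y = binned.copy()
for _ in range(20):
    amp, cen, sig = fit_moments(centers, y)
    model_at_center = gauss(centers, amp, cen, sig)
    model_bin_avg = np.array(
        [gauss(fine[(fine >= lo) & (fine < hi)], amp, cen, sig).mean()
         for lo, hi in zip(edges[:-1], edges[1:])])
    y = binned + (model_at_center - model_bin_avg)

amp, cen, sig = fit_moments(centers, y)
print(round(sig, 2))  # width recovered from corrected samples
```

Fitting the raw bin averages directly overestimates the line width, because bin averaging broadens a curved profile; the corrected samples remove most of that bias.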

  13. High intensity hadron accelerators

    SciTech Connect

    Teng, L.C.

    1989-05-01

This rapporteur report consists mainly of two parts. Part I is an abridged review of the status of all High Intensity Hadron Accelerator projects in the world, in semi-tabulated form for quick reference and comparison. Part II is a brief discussion of the salient features of the different technologies involved. The discussion is based mainly on my personal experiences and opinions, tempered, I hope, by the discussions I participated in during the various parallel sessions of the workshop. In addition, appended at the end is my assessment of the merits of high intensity hadron accelerators as research facilities for nuclear and particle physics.

  14. Defect mapping system

    DOEpatents

    Sopori, Bhushan L.

    1995-01-01

Apparatus for detecting and mapping defects in the surfaces of polycrystalline materials in a manner that distinguishes dislocation pits from grain boundaries includes a laser for illuminating a wide spot on the surface of the material, a light integrating sphere with apertures for capturing light scattered by etched dislocation pits in an intermediate range away from specular reflection while allowing light scattered by etched grain boundaries in a near range from specular reflection to pass through, and optical detection devices for detecting and measuring intensities of the respective intermediate scattered light and near specular scattered light. A center blocking aperture or filter can be used to screen out specular reflected light, which would be reflected by nondefect portions of the polycrystalline material surface. An X-Y translation stage for mounting the polycrystalline material and signal processing and computer equipment accommodate raster mapping, recording, and displaying of respective dislocation and grain boundary defect densities. A special etch procedure is included, which prepares the polycrystalline material surface to produce distinguishable intermediate and near specular light scattering in patterns that have statistical relevance to the dislocation and grain boundary defect densities.

  15. Defect mapping system

    DOEpatents

    Sopori, B.L.

    1995-04-11

Apparatus for detecting and mapping defects in the surfaces of polycrystalline materials in a manner that distinguishes dislocation pits from grain boundaries includes a laser for illuminating a wide spot on the surface of the material, a light integrating sphere with apertures for capturing light scattered by etched dislocation pits in an intermediate range away from specular reflection while allowing light scattered by etched grain boundaries in a near range from specular reflection to pass through, and optical detection devices for detecting and measuring intensities of the respective intermediate scattered light and near specular scattered light. A center blocking aperture or filter can be used to screen out specular reflected light, which would be reflected by nondefect portions of the polycrystalline material surface. An X-Y translation stage for mounting the polycrystalline material and signal processing and computer equipment accommodate raster mapping, recording, and displaying of respective dislocation and grain boundary defect densities. A special etch procedure is included, which prepares the polycrystalline material surface to produce distinguishable intermediate and near specular light scattering in patterns that have statistical relevance to the dislocation and grain boundary defect densities. 20 figures.

  16. Diffuse gamma radiation. [intensity, energy spectrum and spatial distribution from SAS 2 observations

    NASA Technical Reports Server (NTRS)

    Fichtel, C. E.; Simpson, G. A.; Thompson, D. J.

    1978-01-01

    Results are reported for an investigation of the intensity, energy spectrum, and spatial distribution of the diffuse gamma radiation detected by SAS 2 away from the galactic plane in the energy range above 35 MeV. The gamma-ray data are compared with relevant data obtained at other wavelengths, including 21-cm emission, radio continuum radiation, and the limited UV and radio information on local molecular hydrogen. It is found that there are two quite distinct components to the diffuse radiation, one of which shows a good correlation with the galactic matter distribution and continuum radiation, while the other has a much steeper energy spectrum and appears to be isotropic at least on a coarse scale. The galactic component is interpreted in terms of its implications for both local and more distant regions of the Galaxy. The apparently isotropic radiation is discussed partly with regard to the constraints placed on possible models by the steep energy spectrum, the observed intensity, and an upper limit on the anisotropy.

  17. INTERPRETING THE UNRESOLVED INTENSITY OF COSMOLOGICALLY REDSHIFTED LINE RADIATION

    SciTech Connect

    Switzer, E. R.; Chang, T.-C.; Pen, U.-L.; Voytek, T. C.

    2015-12-10

Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically ∼10^2–10^3 times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.
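
A common concrete instance of "determining contaminated spectral modes from the data themselves" (not necessarily the exact estimator of this paper) is to project out the brightest eigenmodes of the empirical frequency-frequency covariance. The foreground amplitudes and mode shapes below are invented to mimic the ∼10^2 brightness contrast.

```python
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_pix = 32, 500

# Bright "foregrounds": two smooth power-law spectral modes, ~100x the signal
freqs = np.linspace(0.7, 1.0, n_freq)
fg = np.outer(freqs ** -2.7, rng.standard_normal(n_pix)) * 100
fg += np.outer(freqs ** -2.1, rng.standard_normal(n_pix)) * 30
signal = rng.standard_normal((n_freq, n_pix))   # spectrally rough signal
data = fg + signal

def remove_modes(d, n_modes):
    """Project out the n brightest spectral eigenmodes of the data covariance."""
    cov = d @ d.T / d.shape[1]                  # frequency-frequency covariance
    _, vecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    bright = vecs[:, -n_modes:]                 # dominant (foreground) modes
    return d - bright @ (bright.T @ d)

cleaned = remove_modes(data, 2)
print(round(np.std(data), 1), round(np.std(cleaned), 2))
```

Because the foregrounds occupy only a few bright spectral degrees of freedom while the signal is spread over all of them, removing two eigenmodes suppresses the foregrounds by orders of magnitude at the cost of a small, calculable signal attenuation (the bias the abstract says must be corrected).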

  18. Interpreting The Unresolved Intensity Of Cosmologically Redshifted Line Radiation

    NASA Technical Reports Server (NTRS)

    Switzer, E. R.; Chang, T.-C.; Masui, K. W.; Pen, U.-L.; Voytek, T. C.

    2016-01-01

Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically approximately 10^2–10^3 times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.

  19. Seismicity map of the state of Arizona

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1986-01-01

The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. The Roman numeral to the left of the triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes with epicenters at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  20. Seismicity map of the state of Vermont

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Highland, L.M.; Algermissen, S.T.

    1987-01-01

The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  1. Seismicity map of the state of Idaho

    USGS Publications Warehouse

    Stover, Carl W.; Reagor, B.G.; Algermissen, S.T.

    1991-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the Arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  2. Seismicity map of the state of Ohio

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1987-01-01

The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  3. Seismicity map of the state of Indiana

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  4. Intensive Vocabulary Training.

    ERIC Educational Resources Information Center

    Jackson, Jeanne R.; Dizney, Henry

    1963-01-01

    This study evaluated effects of a year-long intensive vocabulary program on the reading achievement of 12th-grade college-preparatory English students. A control class followed the regular course of study, and an experimental class supplemented it with completion of the "Harbrace Vocabulary Workshop" workbook, study of the use of footnotes and the…

  5. GHIGLS: H I Mapping at Intermediate Galactic Latitude Using the Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Martin, P. G.; Blagrave, K. P. M.; Lockman, Felix J.; Pinheiro Gonçalves, D.; Boothroyd, A. I.; Joncas, G.; Miville-Deschênes, M.-A.; Stephan, G.

    2015-08-01

This paper introduces and describes the data cubes from GHIGLS, deep Green Bank Telescope (GBT) surveys of the 21 cm line emission of H I in 37 targeted fields at intermediate Galactic latitude. The GHIGLS fields together cover over 1000 deg² at 9.55 arcmin spatial resolution. The H I spectra have an effective velocity resolution of about 1.0 km s⁻¹ and cover at least −450 < v_LSR < +250 km s⁻¹, extending to v_LSR < +450 km s⁻¹ for most fields. As illustrated with various visualizations of the H I data cubes, GHIGLS highlights that even at intermediate Galactic latitude the interstellar medium is very complex. Spatial structure of the H I is quantified through power spectra of maps of the integrated line emission or column density, N_HI. For our featured representative field, centered on the north ecliptic pole, the scaling exponents in power-law representations of the power spectra of N_HI maps for low-, intermediate-, and high-velocity gas components (LVC, IVC, and HVC) are −2.86 ± 0.04, −2.69 ± 0.04, and −2.59 ± 0.07, respectively. After Gaussian decomposition of the line profiles, N_HI maps were also made corresponding to the narrow-line and broad-line components in the LVC range; for the narrow-line map the exponent is −1.9 ± 0.1, reflecting more small-scale structure in the cold neutral medium (CNM). There is evidence that filamentary structure in the H I CNM is oriented parallel to the Galactic magnetic field. The power spectrum analysis also offers insight into the various contributions to uncertainty in the data, yielding values close to those obtained using diagnostics developed in our earlier independent analysis. The effect of 21 cm line opacity on the GHIGLS N_HI maps is estimated. Comparisons of the GBT data in a few of the GHIGLS fields with data from the EBHIS and GASS surveys explore potential issues in data reduction and calibration and reveal good agreement. The high
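
The power-spectrum analysis quoted above (fitting a power-law exponent to the spatial power spectrum of a column-density map) can be sketched generically. The synthetic map and its exponent below are illustrative only, not GHIGLS data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256

# Synthesize a map with a known power-law power spectrum P(k) ~ k^-2.8
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx ** 2 + ky ** 2)
k[0, 0] = 1.0                                   # avoid dividing by zero at k=0
amp = k ** (-2.8 / 2)                           # sqrt of the power spectrum
phase = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
field = np.fft.ifft2(amp * phase).real

# Measure: azimuthally average |FFT|^2 in log-spaced k bins, fit log-log slope
p2d = np.abs(np.fft.fft2(field)) ** 2
kf, pf = k.ravel(), p2d.ravel()
edges = np.logspace(np.log10(2.0 / n), np.log10(0.4), 12)
kc, pk = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (kf >= lo) & (kf < hi)
    kc.append(kf[sel].mean())
    pk.append(pf[sel].mean())
slope = np.polyfit(np.log(kc), np.log(pk), 1)[0]
print(round(slope, 1))                          # recovered scaling exponent
```

The recovered slope scatters around the input exponent of −2.8; the quoted GHIGLS uncertainties reflect exactly this kind of finite-mode scatter plus noise terms.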

  6. Human Mind Maps

    ERIC Educational Resources Information Center

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  7. Applying Performance Models to Understand Data-Intensive Computing Efficiency

    DTIC Science & Technology

    2010-05-01

    Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency. Excerpt: "Data-intensive scalable ... the writing of the output data to disk. In systems that replicate data across multiple nodes, such as the GFS [11] and HDFS [3] distributed file ... evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input from local ..."

  8. Remotely Sensed Tropical Cyclone Structure/Intensity Changes

    DTIC Science & Technology

    2016-06-07

    LONG-TERM GOALS: Routinely map the life cycle of a tropical cyclone's (TC) three-dimensional (3-D) structure and intensity changes via ... based Dvorak intensity estimates typically can't handle eyewall cycles and rapid intensification since IR data frequently gives no clue of eyewall change ... dynamics and eyewall cycle processes. The "microwave constellation" will permit us to understand much of the temporal changes each TC undergoes even ...

  9. Concept Mapping

    PubMed Central

    Brennan, Laura K.; Brownson, Ross C.; Kelly, Cheryl; Ivey, Melissa K.; Leviton, Laura C.

    2016-01-01

    Background From 2003 to 2008, 25 cross-sector, multidisciplinary community partnerships funded through the Active Living by Design (ALbD) national program designed, planned, and implemented policy and environmental changes, with complementary programs and promotions. This paper describes the use of concept-mapping methods to gain insights into promising active living intervention strategies based on the collective experience of community representatives implementing ALbD initiatives. Methods Using Concept Systems software, community representatives (n=43) anonymously generated actions and changes in their communities to support active living (183 original statements, 79 condensed statements). Next, respondents (n=26, from 23 partnerships) sorted the 79 statements into self-created categories, or active living intervention approaches. Respondents then rated statements based on their perceptions of the most important strategies for creating community changes (n=25, from 22 partnerships) and increasing community rates of physical activity (n=23, from 20 partnerships). Cluster analysis and multidimensional scaling were used to describe data patterns. Results ALbD community partnerships identified three active living intervention approaches with the greatest perceived importance to create community change and increase population levels of physical activity: changes to the built and natural environment, partnership and collaboration efforts, and land-use and transportation policies. The relative importance of intervention approaches varied according to subgroups of partnerships working with different populations. Conclusions Decision makers, practitioners, and community residents can incorporate what has been learned from the 25 community partnerships to prioritize active living policy, physical project, promotional, and programmatic strategies for work in different populations and settings. PMID:23079266

  10. Variable Sampling Mapping

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey, S.; Aronstein, David L.; Dean, Bruce H.; Lyon, Richard G.

    2012-01-01

    The performance of an optical system (for example, a telescope) is limited by the misalignments and manufacturing imperfections of the optical elements in the system. The impact of these misalignments and imperfections can be quantified by the phase variations imparted on light traveling through the system. Phase retrieval is a methodology for determining these variations. Phase retrieval uses images taken with the optical system of a light source of known shape and characteristics. Unlike interferometric methods, which require an optical reference for comparison, and unlike Shack-Hartmann wavefront sensors, which require special optical hardware at the optical system's exit pupil, phase retrieval is an in situ, image-based method for determining the phase variations of light at the system's exit pupil. Phase retrieval can be used both as an optical metrology tool (during fabrication of optical surfaces and assembly of optical systems) and as a sensor used in active, closed-loop control of an optical system, to optimize performance. One class of phase-retrieval algorithms is the iterative transform algorithm (ITA). ITAs estimate the phase variations by iteratively enforcing known constraints in the exit pupil and at the detector, determined from modeled or measured data. The Variable Sampling Mapping (VSM) technique is a new method for enforcing these constraints in ITAs. VSM is an open framework for addressing a wide range of issues that have previously been considered detrimental to high-accuracy phase retrieval, including undersampled images, broadband illumination, images taken at or near best focus, chromatic aberrations, jitter or vibration of the optical system or detector, and dead or noisy detector pixels. The VSM is a model-to-data mapping procedure.
In VSM, fully sampled electric fields at multiple wavelengths are modeled inside the phase-retrieval algorithm, and then these fields are mapped to intensities on the light detector, using the properties
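    The alternating constraint enforcement that defines an ITA can be sketched with the classic Gerchberg-Saxton iteration (a basic ITA, not VSM itself; the toy pupil and tilt aberration below are illustrative):

    ```python
    import numpy as np

    def gerchberg_saxton(meas_amp, support, n_iter=200, seed=0):
        """Basic iterative transform algorithm: alternate between enforcing
        the measured amplitude at the detector and unit amplitude inside
        the pupil support, keeping the phase estimate from each plane."""
        rng = np.random.default_rng(seed)
        pupil = support * np.exp(1j * rng.uniform(-np.pi, np.pi, meas_amp.shape))
        for _ in range(n_iter):
            field = np.fft.fft2(pupil)                       # pupil -> detector
            field = meas_amp * np.exp(1j * np.angle(field))  # detector amplitude constraint
            pupil = np.fft.ifft2(field)                      # detector -> pupil
            pupil = support * np.exp(1j * np.angle(pupil))   # pupil support constraint
        return np.angle(pupil)

    # simulate a small aberrated pupil and run the retrieval
    n = 64
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    support = (np.hypot(x, y) < n // 4).astype(float)
    truth = support * np.exp(1j * 0.5 * x / (n // 4))        # tilt aberration
    meas_amp = np.abs(np.fft.fft2(truth))                    # "measured" focal-plane amplitude
    rec_phase = gerchberg_saxton(meas_amp, support)
    ```

    VSM addresses exactly the cases where this textbook picture breaks down (undersampled or defocused images, broadband light, detector defects) by changing how the model fields are mapped to measured intensities.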

  11. Maps & minds : mapping through the ages

    USGS Publications Warehouse

    ,

    1984-01-01

    Throughout time, maps have expressed our understanding of our world. Human affairs have been influenced strongly by the quality of maps available to us at the major turning points in our history. "Maps & Minds" traces the ebb and flow of a few central ideas in the mainstream of mapping. Our expanding knowledge of our cosmic neighborhood stems largely from a small number of simple but grand ideas, vigorously pursued.

  12. Mapping: A Course.

    ERIC Educational Resources Information Center

    Whitmore, Paul M.

    1988-01-01

    Reviews the history of cartography. Describes the contributions of Strabo and Ptolemy in early maps. Identifies the work of Gerhard Mercator as the most important advancement in mapping. Discusses present mapping standards from history. (CW)

  13. Intense fusion neutron sources

    NASA Astrophysics Data System (ADS)

    Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.

    2010-04-01

    The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10^15-10^21 neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10^20 neutrons/s may radically change the structure of power industry and considerably influence the fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.

  14. NEUTRON FLUX INTENSITY DETECTION

    DOEpatents

    Russell, J.T.

    1964-04-21

    A method of measuring the instantaneous intensity of neutron flux in the core of a nuclear reactor is described. A target gas capable of being transmuted by neutron bombardment to a product having a resonance absorption line at a particular microwave frequency is passed through the core of the reactor. Frequency-modulated microwave energy is passed through the target gas and the attenuation of the energy due to the formation of the transmuted product is measured. (AEC)

  15. Intense ion beam generator

    DOEpatents

    Humphries, Jr., Stanley; Sudan, Ravindra N.

    1977-08-30

    Methods and apparatus for producing intense megavolt ion beams are disclosed. In one embodiment, a reflex triode-type pulsed ion accelerator is described which produces ion pulses of more than 5 kiloamperes current with a peak energy of 3 MeV. In other embodiments, the device is constructed so as to focus the beam of ions for high concentration and ease of extraction, and magnetic insulation is provided to increase the efficiency of operation.

  16. Water intensity of transportation.

    PubMed

    King, Carey W; Webber, Michael E

    2008-11-01

    As the need for alternative transportation fuels increases, it is important to understand the many effects of introducing fuels based upon feedstocks other than petroleum. Water intensity in "gallons of water per mile traveled" is one method to measure these effects on the consumer level. In this paper we investigate the water intensity for light duty vehicle (LDV) travel using selected fuels based upon petroleum, natural gas, unconventional fossil fuels, hydrogen, electricity, and two biofuels (ethanol from corn and biodiesel from soy). Fuels more directly derived from fossil fuels are less water intensive than those derived either indirectly from fossil fuels (e.g., through electricity generation) or directly from biomass. The lowest water consumptive (<0.15 gal H2O/mile) and withdrawal (<1 gal H2O/mile) rates are for LDVs using conventional petroleum-based gasoline and diesel, nonirrigated biofuels, hydrogen derived from methane or electrolysis via nonthermal renewable electricity, and electricity derived from nonthermal renewable sources. LDVs running on electricity and hydrogen derived from the aggregate U.S. grid (heavily based upon fossil fuel and nuclear steam-electric power generation) withdraw 5-20 times and consume nearly 2-5 times more water than by using petroleum gasoline. The water intensities (gal H2O/mile) of LDVs operating on biofuels derived from crops irrigated in the United States at average rates are 28 and 36 for corn ethanol (E85) for consumption and withdrawal, respectively. For soy-derived biodiesel the average consumption and withdrawal rates are 8 and 10 gal H2O/mile.
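    The per-mile figures above support quick comparisons with simple unit arithmetic; a sketch using the abstract's rounded numbers (illustrative only, not the paper's full accounting):

    ```python
    # representative water-intensity figures from the abstract (gal H2O per mile)
    gasoline = {"consumed": 0.15, "withdrawn": 1.0}   # stated upper bounds for petroleum fuels
    corn_e85 = {"consumed": 28.0, "withdrawn": 36.0}  # US-average irrigated corn ethanol

    # grid-powered electricity/hydrogen LDVs: ~2-5x the consumption and
    # ~5-20x the withdrawal of petroleum gasoline, per the abstract
    grid_consumed = [f * gasoline["consumed"] for f in (2, 5)]
    grid_withdrawn = [f * gasoline["withdrawn"] for f in (5, 20)]

    ratio = corn_e85["consumed"] / gasoline["consumed"]
    print(f"E85 consumes ~{ratio:.0f}x the water of gasoline per mile")
    print(f"grid-powered LDV: {grid_consumed[0]:.2f}-{grid_consumed[1]:.2f} gal consumed/mi, "
          f"{grid_withdrawn[0]:.0f}-{grid_withdrawn[1]:.0f} gal withdrawn/mi")
    ```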

  17. Optimization of Isocenter Location for Intensity Modulated Stereotactic Treatment of Small Intracranial Targets

    SciTech Connect

    Salter, Bill J. Fuss, Martin; Sarkar, Vikren; Wang, Brian; Rassiah-Szegedi, Prema; Papanikolaou, Niko; Hollingshaus, Scott; Shrieve, Dennis C.

    2009-02-01

    Purpose: To quantify the impact of isocenter location on treatment plan quality for intensity-modulated stereotactic treatment of small intracranial lesions. Methods and Materials: For 18 patients previously treated by stereotactic intensity-modulated radiosurgery (IMRS) or intensity-modulated radiation therapy (IMRT), a retrospective virtual planning study was conducted wherein the impact of isocenter location on plan quality was measured. Treatment indications studied included six arteriovenous malformations, six acoustic neuromas, and six intracranial metastases, ranging in volume from 0.71 to 3.21 cm{sup 3} (mean = 2.26 cm{sup 3}), 1.08 to 2.84 cm{sup 3} (mean = 1.73 cm{sup 3}), and 0.19 to 2.30 cm{sup 3} (mean = 0.79 cm{sup 3}), respectively. Varying the isocenter location alters the geometric grid of pencil beams into which the target is segmented for intensity-modulated treatment. The impact of this pencil-beam-grid redefinition on achievable conformity index was quantified for three collimators (Varian Millennium 120; BrainLab MM3; Nomos binary Mimic) and three treatment planning systems (TPS; Varian Eclipse v6.5; BrainLab BrainScan v5.31; Best-Nomos Corvus v6.2), resulting in the evaluation of 3,446 treatment plans. Results: For all patient, collimator, and TPS combinations studied, a significant variation in plan quality was observed as a function of isocenter and pencil-beam-grid relocation. Optimization of isocenter location resulted in treatment plan conformity variations as large as 109% (min = 15%, mean = 51%, max = 109%). Conclusion: Optimization of isocenter location for IMRT/IMRS treatment of small intracranial lesions, in which pencil-beam dimensions are comparable to target dimensions, can result in significant improvements in treatment plan quality.
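    The abstract reports conformity variations without defining the index; a common choice in radiosurgery is the Paddick conformity index. A sketch on boolean voxel masks (the index variant and toy geometry are assumptions here, not the study's actual data):

    ```python
    import numpy as np

    def paddick_ci(target, prescribed):
        """Paddick conformity index from boolean voxel masks:
        CI = TV_PIV^2 / (TV * PIV), where TV is the target volume,
        PIV the prescription isodose volume, and TV_PIV their overlap.
        CI = 1 is ideal; both under- and over-coverage lower it."""
        tv = target.sum()
        piv = prescribed.sum()
        tv_piv = np.logical_and(target, prescribed).sum()
        return tv_piv ** 2 / (tv * piv)

    # toy example: spherical target fully covered by a slightly larger isodose
    z, y, x = np.mgrid[-20:20, -20:20, -20:20]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    target = r < 10
    prescribed = r < 12
    print(round(paddick_ci(target, prescribed), 3))
    ```

    With full coverage the index reduces to TV/PIV, roughly (10/12)^3 here, which is why pure over-coverage still penalizes conformity.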

  18. Airborne infrared mineral mapping survey of Marysvale, Utah

    NASA Technical Reports Server (NTRS)

    Collins, W.; Chang, S. H.

    1982-01-01

    Infrared spectroradiometer survey results from flights over the Marysvale, Utah district show that hydrothermal alteration mineralogy can be mapped using very rapid and effective airborne techniques. The system detects alteration mineral absorption band intensities in the infrared spectral region with high sensitivity. The higher resolution spectral features and high spectral differences characteristic of the various clay and carbonate minerals are also readily identified by the instrument allowing the mineralogy to be mapped as well as the mineralization intensity.
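    Absorption band intensity in such spectra is commonly quantified by continuum-removed band depth; a generic sketch (standard technique, not the actual airborne pipeline), using a synthetic clay-like feature near 2.2 μm:

    ```python
    import numpy as np

    def band_depth(wavelengths, reflectance, left, center, right):
        """Band depth after linear continuum removal:
        D = 1 - R(center) / Rc(center), where Rc is the straight-line
        continuum between the band shoulders at `left` and `right`."""
        r_left, r_center, r_right = np.interp(
            [left, center, right], wavelengths, reflectance)
        # linear continuum evaluated at the band center
        t = (center - left) / (right - left)
        r_cont = r_left + t * (r_right - r_left)
        return 1.0 - r_center / r_cont

    # synthetic spectrum with a Gaussian absorption feature at 2.2 um
    wl = np.linspace(2.0, 2.4, 200)
    refl = 0.8 - 0.2 * np.exp(-((wl - 2.2) / 0.02) ** 2)
    print(round(band_depth(wl, refl, 2.1, 2.2, 2.3), 2))
    ```

    Mapping this depth pixel by pixel gives a mineralization-intensity image, while the shoulder and center wavelengths chosen discriminate among clay and carbonate species.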

  19. Stellar Temporal Intensity Interferometry

    NASA Astrophysics Data System (ADS)

    Kian, Tan Peng

    Stellar intensity interferometry was developed by Hanbury-Brown & Twiss [1954, 1956b, 1957, 1958] to bypass the diffraction limit of telescope apertures, with successful measurements including the determination of 32 stellar angular diameters using the Narrabri Stellar Intensity Interferometer [Hanbury-Brown et al., 1974]. This was achieved by measuring the intensity correlations between starlight received by a pair of telescopes separated by varying baselines b which, by invoking the van Cittert-Zernike theorem [van Cittert, 1934; Zernike, 1938], are related to the angular intensity distributions of the stellar light sources through a Fourier transformation of the equal-time complex degree of coherence gamma(b) between the two telescopes. This intensity correlation, or the second order correlation function g(2) [Glauber, 1963], can be described in terms of two-photoevent coincidence measurements [Hanbury-Brown, 1974] for our use of photon-counting detectors. The application of intensity interferometry in astrophysics has been largely restricted to the spatial domain and has not found widespread adoption owing to limitations of its signal-to-noise ratio [Davis et al., 1999; Foellmi, 2009; Jensen et al., 2010; LeBohec et al., 2008, 2010], although there is a growing movement to revive its use [Barbieri et al., 2009; Capraro et al., 2009; Dravins & Lagadec, 2014; Dravins et al., 2015; Dravins & LeBohec, 2007]. In this thesis, stellar intensity interferometry in the temporal domain is investigated instead. We present a narrowband spectral filtering scheme [Tan et al., 2014] that allows direct measurements of the Lorentzian temporal correlations, or photon bunching, from the Sun, with the preliminary Solar g(2)(tau = 0) = 1.3 +/- 0.1, limited mostly by the photon detector response [Ghioni et al., 2008], compared to the theoretical value of g(2)(0) = 2.
The measured temporal photon bunching signature of the Sun exceeded the previous records of g(2)(0) = 1.03 [Karmakar et al
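    The two-photoevent coincidence measurement behind g(2) can be sketched directly from detector timestamps. A minimal estimator (all parameters hypothetical; a real measurement must also handle detector response and dead time), checked against coherent light, for which g(2) = 1 at all lags:

    ```python
    import numpy as np

    def g2_estimate(t1, t2, tau_bin, n_bins, total_time):
        """Estimate g2 at lags [0, tau_bin, 2*tau_bin, ...) from photon
        arrival times (seconds) on two detectors, normalized so that
        uncorrelated Poissonian streams give g2 ~ 1."""
        counts = np.zeros(n_bins)
        window = n_bins * tau_bin
        for t in t1:
            j = np.searchsorted(t2, t)          # first partner event not earlier than t
            while j < len(t2) and t2[j] - t < window:
                counts[int((t2[j] - t) / tau_bin)] += 1
                j += 1
        # accidental-coincidence rate expected for uncorrelated streams
        expected = (len(t1) / total_time) * (len(t2) / total_time) * total_time * tau_bin
        return counts / expected

    # sanity check: two independent Poisson streams (coherent light) give
    # g2(tau) ~ 1; thermal light would instead show bunching, g2(0) -> 2
    rng = np.random.default_rng(1)
    T, rate = 5.0, 2e4                          # 5 s of data at 20 kcps per detector
    t1 = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
    t2 = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
    g2 = g2_estimate(t1, t2, 1e-6, 10, T)
    ```

    For thermal light the bunching signature only survives if the bin width is comparable to the coherence time, which is why the quoted solar measurement is limited by detector response.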

  20. Intense near-infrared emission of 1.23 μm in erbium-doped low-phonon-energy fluorotellurite glass.

    PubMed

    Zhou, Bo; Tao, Lili; Chan, Clarence Yat-Yin; Tsang, Yuen Hong; Jin, Wei; Pun, Edwin Yue-Bun

    2013-07-01

    Intense near-infrared emission located at 1.23 μm wavelength originating from the erbium (Er3+): 4S3/2 → 4I11/2 transition is observed in Er3+-doped fluorotellurite glasses. This emission is mainly attributed to the relatively low phonon energy of the fluorotellurite glass host (~776 cm-1). Judd-Ofelt analysis indicates a strong asymmetry and covalent environment between Er3+ ions and ligands in the host matrix. The emission cross-section was calculated to be 2.85 × 10-21 cm2 by the Füchtbauer-Ladenburg equation, and the population inversion is realized according to a simplified evaluation. The results suggest that the fluorotellurite glass system could be a promising candidate for the development of optical amplifiers and lasers operating at the relatively unexplored 1.2 μm wavelength region.
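    The Füchtbauer-Ladenburg calculation converts a measured emission spectrum into a cross-section. One standard textbook form of the relation (the paper's exact variant may differ) is:

    ```latex
    \sigma_{\mathrm{em}}(\lambda) =
      \frac{\lambda^{5}\, I(\lambda)}
           {8\pi c\, n^{2}\, \tau_{\mathrm{rad}} \int \lambda\, I(\lambda)\, \mathrm{d}\lambda}
    ```

    where I(λ) is the measured emission intensity, n the refractive index of the host, and τ_rad the radiative lifetime of the emitting level.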

  2. Depth Map Restoration From Undersampled Data.

    PubMed

    Mandal, Srimanta; Bhavsar, Arnav; Sao, Anil Kumar

    2017-01-01

    Depth maps sensed by low-cost active sensors are often limited in resolution, whereas depth information obtained from structure from motion or sparse depth scanning techniques may result in a sparse point cloud. Achieving a high-resolution (HR) depth map from a low-resolution (LR) depth map and densely reconstructing a sparse, non-uniformly sampled depth map are fundamentally similar problems with different upsampling requirements. The first problem involves upsampling on a uniform grid, whereas the second requires upsampling on a non-uniform grid. In this paper, we propose a new approach that addresses both issues in a unified framework based on sparse representation. Unlike most depth map restoration approaches, ours does not require an HR intensity image. Based on example depth maps, sub-dictionaries of exemplars are constructed and used to restore the HR/dense depth map. For uniform upsampling of an LR depth map, an edge-preserving constraint preserves the discontinuities present in the depth map, and a pyramidal reconstruction strategy is applied to handle higher upsampling factors. For upsampling of non-uniformly sampled sparse depth maps, we compute the missing information in local patches from similar exemplars. Furthermore, we also suggest an alternative method for reconstructing a dense depth map from very sparse, non-uniformly sampled depth data by sequentially cascading uniform and non-uniform upsampling techniques. We provide a variety of qualitative and quantitative results to demonstrate the efficacy of our approach for depth map restoration.
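    For intuition on the non-uniform upsampling problem, a naive baseline (not the paper's sparse-representation method) densifies scattered samples by inverse-distance weighting; the function and toy scene below are illustrative assumptions:

    ```python
    import numpy as np

    def idw_densify(samples, depths, shape, k=4, eps=1e-9):
        """Fill a dense depth map from sparse (row, col) samples by
        inverse-distance weighting of the k nearest samples per pixel."""
        rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
        grid = np.stack([rows.ravel(), cols.ravel()], 1).astype(float)
        # pairwise distances from every grid pixel to every sample
        d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
        idx = np.argsort(d, axis=1)[:, :k]                 # k nearest samples
        w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)
        dense = (w * depths[idx]).sum(1) / w.sum(1)
        return dense.reshape(shape)

    # toy scene: a planar depth ramp sampled at 100 random pixel locations
    rng = np.random.default_rng(0)
    shape = (32, 32)
    samples = rng.integers(0, 32, size=(100, 2)).astype(float)
    depths = 1.0 + 0.05 * samples[:, 1]                    # depth grows along columns
    dense = idw_densify(samples, depths, shape)
    ```

    Such smoothing blurs depth discontinuities, which is exactly the failure mode the paper's edge-preserving, exemplar-based reconstruction is designed to avoid.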

  3. Mapping of wildlife habitat in Farmington Bay, Utah

    NASA Technical Reports Server (NTRS)

    Jaynes, R. A.; Willie, R. D. (Principal Investigator)

    1982-01-01

    Mapping was accomplished through the interpretation of high-altitude color infrared photography. The feasibility of utilizing LANDSAT digital data to augment the analysis was explored; complex patterns of wildlife habitat and confusion of spectral classes resulted in the decision to make limited use of LANDSAT data in the analysis. The final product is a map which delineates wildlife habitat at a scale of 1:24,000. The map is registered to and printed on a screened U.S.G.S. quadrangle base map. Screened delineations of shoreline contours, mapped from a previous study, are also shown on the map. Intensive field checking of the map was accomplished for the Farmington Bay Waterfowl Management Area in August 1981; other areas on the map received only spot field checking.

  4. Intensity modulated proton therapy

    PubMed Central

    Grassberger, C

    2015-01-01

    Intensity modulated proton therapy (IMPT) implies the electromagnetic spatial control of well-circumscribed “pencil beams” of protons of variable energy and intensity. Proton pencil beams take advantage of the charged-particle Bragg peak—the characteristic peak of dose at the end of range—combined with the modulation of pencil beam variables to create target-local modulations in dose that achieves the dose objectives. IMPT improves on X-ray intensity modulated beams (intensity modulated radiotherapy or volumetric modulated arc therapy) with dose modulation along the beam axis as well as lateral, in-field, dose modulation. The clinical practice of IMPT further improves the healthy tissue vs target dose differential in comparison with X-rays and thus allows increased target dose with dose reduction elsewhere. In addition, heavy-charged-particle beams allow for the modulation of biological effects, which is of active interest in combination with dose “painting” within a target. The clinical utilization of IMPT is actively pursued but technical, physical and clinical questions remain. Technical questions pertain to control processes for manipulating pencil beams from the creation of the proton beam to delivery within the patient within the accuracy requirement. Physical questions pertain to the interplay between the proton penetration and variations between planned and actual patient anatomical representation and the intrinsic uncertainty in tissue stopping powers (the measure of energy loss per unit distance). Clinical questions remain concerning the impact and management of the technical and physical questions within the context of the daily treatment delivery, the clinical benefit of IMPT and the biological response differential compared with X-rays against which clinical benefit will be judged. It is expected that IMPT will replace other modes of proton field delivery. Proton radiotherapy, since its first practice 50 years ago, always required the
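    The core mechanism of IMPT, summing pencil beams of staggered range and optimized intensity into a uniform target dose, can be caricatured in 1D with a toy depth-dose shape (illustrative only, not a physical beam model; a clinical optimizer would add nonnegativity and healthy-tissue objectives):

    ```python
    import numpy as np

    def bragg_peak(z, r):
        """Toy pencil-beam depth-dose curve with range r: a low entrance
        plateau plus a sharp Bragg peak near r (illustrative shape only)."""
        plateau = 0.3 / (1.0 + np.exp((z - r) / 0.05))
        peak = 0.7 * np.exp(-((z - r) / 0.25) ** 2)
        return plateau + peak

    z = np.linspace(0.0, 12.0, 600)              # depth [cm, notional]
    ranges = np.linspace(8.0, 10.0, 9)           # energy layers spanning the target
    peaks = np.array([bragg_peak(z, r) for r in ranges])

    # least-squares weights so the summed dose is ~1 across the target,
    # i.e. a spread-out Bragg peak built from weighted pencil beams
    target = (z >= 8.0) & (z <= 10.0)
    w, *_ = np.linalg.lstsq(peaks[:, target].T, np.ones(target.sum()), rcond=None)
    sobp = w @ peaks
    ```

    The sharp distal falloff of the summed curve is what gives protons their healthy-tissue advantage over X-rays, and also why range uncertainty matters so much clinically.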

  5. THE 21 cm 'OUTER ARM' AND THE OUTER-GALAXY HIGH-VELOCITY CLOUDS: CONNECTED BY KINEMATICS, METALLICITY, AND DISTANCE

    SciTech Connect

    Tripp, Todd M.; Song Limin

    2012-02-20

    Using high-resolution ultraviolet spectra obtained with the Space Telescope Imaging Spectrograph on the Hubble Space Telescope and with the Far Ultraviolet Spectroscopic Explorer, we study the metallicity, kinematics, and distance of the gaseous 'outer arm' (OA) and the high-velocity clouds (HVCs) in the outer Galaxy. We detect the OA in a variety of absorption lines toward two QSOs, H1821+643 and HS0624+6907. We search for OA absorption toward eight Galactic stars and detect it in one case, which constrains the OA Galactocentric radius to 9 kpc

  6. Human Prostate Cancer Hallmarks Map.

    PubMed

    Datta, Dipamoy; Aftabuddin, Md; Gupta, Dinesh Kumar; Raha, Sanghamitra; Sen, Prosenjit

    2016-08-01

    Human prostate cancer is a complex heterogeneous disease that mainly affects the elderly male population of the western world, with a high rate of mortality. Acquisition of diverse sets of hallmark capabilities, along with aberrant functioning of androgen receptor signaling, is the central driving force behind prostatic tumorigenesis and its transition into metastatic castration-resistant disease. These hallmark capabilities arise from an intense orchestration of several crucial factors, including deregulation of vital cell physiological processes, inactivation of tumor suppressive activity, and disruption of prostate gland specific cellular homeostasis. The molecular complexity and redundancy of oncoprotein signaling in prostate cancer demand concurrent inhibition of multiple hallmark-associated pathways. Through extensive manual curation of the published biomedical literature, we have developed the Human Prostate Cancer Hallmarks Map (HPCHM), an onco-functional atlas of human prostate cancer associated signaling and events. It explores the molecular architecture of prostate cancer signaling at various levels, namely key protein components, a molecular connectivity map, an oncogenic signaling pathway map, a pathway-based functional connectivity map, etc. Here, we briefly present a systems-level understanding of the molecular mechanisms associated with prostate tumorigenesis by considering the individual molecular and cell biological events of this disease process.

  7. Ground-Based Sensing System for Weed Mapping in Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A ground-based weed mapping system was developed to measure weed intensity and distribution in a cotton field. The weed mapping system includes WeedSeeker® PhD600 sensor modules to indicate the presence of weeds between rows, a GPS receiver to provide spatial information, and a data acquisition and ...

  8. Depletion of intense fields

    NASA Astrophysics Data System (ADS)

    Bulanov, S. S.; Seipt, D.; Heinzl, T.; Marklund, M.

    2017-03-01

    The problem of backreaction of quantum processes on the properties of the background field still remains on the list of outstanding questions of high-intensity particle physics. Usually, photon emission by an electron or positron, photon decay into electron-positron pairs in strong electromagnetic fields, or electron-positron pair production by such fields are described in the framework of the external field approximation. It is assumed that the external field has infinite energy and is not affected by these processes. However, the above-mentioned processes have a multi-photon nature, i.e., they occur with the absorption of a significant number of field photons. As a result, the interaction of an intense electromagnetic field with either a highly charged electron bunch or a fast growing population of electrons, positrons, and gamma photons (as in the case of an electromagnetic cascade) may lead to a depletion of the field energy, thus making the external field approximation invalid. Taking the multi-photon Compton process as an example, we estimate the threshold of depletion and find it to become significant at field strengths (a0 ~ 10^3) and electron bunch charges of about tens of nC.

  9. French intensive truck garden

    SciTech Connect

    Edwards, T D

    1983-01-01

    The French Intensive approach to truck gardening has the potential to provide substantially higher yields and lower per-acre costs than conventional farming techniques. The intent of this grant was to show the potential to realize the gains that the French Intensive method has to offer. Locally grown food can greatly reduce transportation energy costs, and higher efficiencies also reduce energy costs through lower fertilizer and pesticide usage. As with any farming technique, there is a substantial time interval for complete soil recovery after substantial soil modifications are made. There were major crop improvements even though little time had passed since the soil had been greatly disturbed. This grant also had two other major objectives: first, the garden was managed under organic techniques, meaning that no chemical fertilizers or synthetic pesticides were used. Second, the garden was constructed so that a handicapped person in a wheelchair could manage it and gain a higher degree of self-sufficiency. Overall, I would say that the garden has taken the first step toward success and should improve each year.

  10. Mapping the Heart

    ERIC Educational Resources Information Center

    Hulse, Grace

    2012-01-01

    In this article, the author describes how her fourth graders made ceramic heart maps. The impetus for this project came from reading "My Map Book" by Sara Fanelli. This book is a collection of quirky, hand-drawn and collaged maps that diagram a child's world. There are maps of her stomach, her day, her family, and her heart, among others. The…

  11. Angola Seismicity MAP

    NASA Astrophysics Data System (ADS)

    Neto, F. A. P.; Franca, G.

    2014-12-01

    The purpose of this work was to study and document Angola's natural seismicity and to establish the first seismic database for the country, facilitating consultation and searches for information on seismic activity. The study was conducted from reports produced by the National Institute of Meteorology and Geophysics (INAMET) from 1968 to 2014, with emphasis on the work presented by Moreira (1968), which defined six seismogenic zones from macroseismic data. The most important is the Sá da Bandeira (Lubango)-Chibemba-Oncócua-Iona zone, covering the epicentral Quihita and Iona regions, geologically characterized by a transcontinental tectono-magmatic structure activated in the Mesozoic, with the emplacement of a wide variety of intrusive rocks of ultrabasic-alkaline, basic, and alkaline composition, kimberlites, and carbonatites, strongly marked by intense tectonism and cut by several faults and fractures (locally called the Lucapa corridor). The earthquake of May 9, 1948 reached intensity VI on the Mercalli-Sieberg scale (MCS) in the locality of Quihita, and in the Iona seismic event of January 15, 1964 the main shock reached grade VI-VII. Although the seismicity rate is not high, it cannot be neglected. The other five zones are: Cassongue-Ganda-Massano de Amorim; Lola-Quilengues-Caluquembe; Gago Coutinho; Cuima-Cachingues-Cambândua; and the Upper Zambezi zone. We also analyzed technical reports on the seismicity of the middle Kwanza region produced by Hidroproekt (GAMEK), as well as international seismic bulletins of the International Seismological Centre (ISC) and the United States Geological Survey (USGS); these data served for the instrumental location of the epicenters. All the compiled information made possible the creation of the first seismic database for Angola and the preparation of the seismicity map, reconfirming the main seismic zones defined by Moreira (1968) and identifying a new seismic zone.

  12. National Atlas maps

    USGS Publications Warehouse

    ,

    1991-01-01

    The National Atlas of the United States of America was published by the U.S. Geological Survey in 1970. Its 765 maps and charts are on 335 14- by 19-inch pages. Many of the maps span facing pages. It's worth a quick trip to the library just to leaf through all 335 pages of this book. Rapid scanning of its thematic maps yields rich insights into the geography of issues of continuing national interest. On most maps, the geographic patterns are still valid, though the data are not current. The atlas is out of print, but many of its maps can be purchased separately. Maps that span facing pages in the atlas are printed on one sheet. The maps dated after 1970 are either revisions of original atlas maps, or new maps published in atlas format. The titles of the separate maps are listed here.

  13. Google Maps: You Are Here

    ERIC Educational Resources Information Center

    Jacobsen, Mikael

    2008-01-01

    Librarians use online mapping services such as Google Maps, MapQuest, Yahoo Maps, and others to check traffic conditions, find local businesses, and provide directions. However, few libraries are using one of Google Maps most outstanding applications, My Maps, for the creation of enhanced and interactive multimedia maps. My Maps is a simple and…

  14. An intense radiation source

    NASA Astrophysics Data System (ADS)

    Mckeown, J.; Labrie, J.-P.; Funk, L. W.

    1985-05-01

    A 10 MeV linear accelerator operating at 100% duty factor has been designed for large radiation processing applications. A beam intensity of 50 mA has the capacity to irradiate up to 1.3 MGy-Mg/h (130 Mrad-tonne/h) making it suitable for emerging applications in bulk food irradiation and waste treatment. An ability to provide high dose rate makes on-line detoxification of industrial pollutants possible. The source can compete economically with steam-based processes, such as the degradation of cellulosic materials for the production of chemicals and liquid fuels, hence new industrial applications are expected. The paper describes the main machine components, the operating characteristics and a typical application.
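The quoted beam parameters and the quoted throughput can be cross-checked with some back-of-the-envelope arithmetic (this sanity check is mine, not the paper's):

```python
# Relate the quoted beam parameters (10 MeV, 50 mA, 100% duty factor)
# to the quoted throughput of 1.3 MGy-Mg/h.
energy_MeV = 10.0   # electron energy
current_mA = 50.0   # average beam current at 100% duty factor
beam_power_kW = energy_MeV * current_mA          # MeV x mA = kW -> 500 kW

beam_energy_per_hour_J = beam_power_kW * 1e3 * 3600.0   # 1.8e9 J/h
# 1.3 MGy-Mg/h = 1.3e6 J/kg delivered to 1000 kg each hour:
absorbed_per_hour_J = 1.3e6 * 1000.0                    # 1.3e9 J/h

# Implied fraction of beam energy absorbed in the product -- about 72%,
# a plausible utilization figure for a high-power processing accelerator.
utilization = absorbed_per_hour_J / beam_energy_per_hour_J
```

So the 1.3 MGy-Mg/h figure corresponds to most, but not all, of the 500 kW beam power ending up as absorbed dose, which is consistent for a bulk-processing geometry.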

  15. Map reading tools for map libraries.

    USGS Publications Warehouse

    Greenberg, G.L.

    1982-01-01

    Engineers, navigators and military strategists employ a broad array of mechanical devices to facilitate map use. A larger number of map users such as educators, students, tourists, journalists, historians, politicians, economists and librarians are unaware of the available variety of tools which can be used with maps to increase the speed and efficiency of their application and interpretation. This paper identifies map reading tools such as coordinate readers, protractors, dividers, planimeters, and symbol-templets according to a functional classification. Particularly, arrays of tools are suggested for use in determining position, direction, distance, area and form (perimeter-shape-pattern-relief). -from Author

  16. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted because the "Basic Act on Promotion of Utilization of Geographical Information", was enacted in May 2007. In particular, there is a high demand for road information that comprises a framework in this database. Therefore, road inventory mapping work has to be accurate and eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed require an efficient method for updating spatial data. Currently, we apply manual photogrammetry drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim in this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea in this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines. However, also note that this method processes every traffic marking. In this paper, we discuss a highly accurate and non-human-operator-dependent method that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher intensity points; (2) Generating a Triangulated Irregular Network (TIN) from higher intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points from the buffers using the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines using the extracted points; (7) Extracting lines from intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic marking mapping data.
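Step (7) of the pipeline, extracting lines from the intensity-changing points with a Hough transform, can be sketched as follows. The resolutions and vote threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def hough_lines(points, rho_res=0.05, theta_res=np.pi / 180, min_votes=10):
    """Detect straight lines among 2-D points (here: the local-intensity-
    changing points extracted along scan lines) by voting in (theta, rho)
    space, where rho = x*cos(theta) + y*sin(theta)."""
    pts = np.asarray(points, dtype=float)
    thetas = np.arange(0.0, np.pi, theta_res)
    # rho for every (point, angle) pair, shape (n_points, n_thetas)
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    rho_min = rhos.min()
    bins = np.round((rhos - rho_min) / rho_res).astype(int)
    votes = {}
    for ti in range(len(thetas)):
        for b in bins[:, ti]:
            votes[(ti, int(b))] = votes.get((ti, int(b)), 0) + 1
    return [(thetas[ti], rho_min + b * rho_res)
            for (ti, b), n in votes.items() if n >= min_votes]

# Toy check: 20 points along the edge of a lane marking at x = 2 m
edge_points = [(2.0, 0.1 * i) for i in range(20)]
detected = hough_lines(edge_points)
# detected contains (theta ~ 0, rho ~ 2.0): a vertical line at x = 2
```

A production version would accumulate votes in a dense array and suppress near-duplicate peaks, but the voting principle is the same.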

  17. Ordered restriction maps of Saccharomyces cerevisiae chromosomes constructed by optical mapping

    SciTech Connect

    Schwartz, D.C.; Li, Xiaojun; Hernandez, L.I.; Ramnarain, S.P.; Huff, E.J.; Wang, Y.K. )

    1993-10-01

    A light-microscope-based technique for rapidly constructing ordered physical maps of chromosomes has been developed. Restriction enzyme digestion of elongated individual DNA molecules (about 0.2 to 1.0 megabases in size) was imaged by fluorescence microscopy after fixation in agarose gel. The size of the resulting individual restriction fragments was determined by relative fluorescence intensity and apparent molecular contour length. Ordered restriction maps were then created from genomic DNA without reliance on cloned or amplified sequences for hybridization or analytical gel electrophoresis. Initial application of optical mapping is described for Saccharomyces cerevisiae chromosomes.
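The fragment-sizing principle, size from relative fluorescence intensity, reduces to a simple proportionality; a minimal sketch, with the function name and numbers invented for illustration:

```python
def fragment_sizes_from_intensity(intensities, molecule_size_kb):
    """Assign each restriction fragment a size proportional to its share
    of the molecule's total integrated fluorescence intensity (the
    function name and numbers are illustrative, not from the paper)."""
    total = sum(intensities)
    return [molecule_size_kb * i / total for i in intensities]

# A 300 kb molecule cut into three fragments whose integrated
# intensities are in the ratio 1:2:3:
sizes = fragment_sizes_from_intensity([100.0, 200.0, 300.0], 300.0)
# sizes -> [50.0, 100.0, 150.0]
```

In practice the intensity-based estimate is combined with apparent contour length, as the abstract notes, to reduce single-measurement error.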

  18. The Taxonomy of Intervention Intensity

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; Fuchs, Douglas; Malone, Amelia S.

    2016-01-01

    The purpose of this article is to describe the Taxonomy of Intervention Intensity, which articulates 7 dimensions for evaluating and building intervention intensity. We explain the Taxonomy's dimensions of intensity. In explaining the Taxonomy, we rely on a case study to illustrate how the Taxonomy can systematize the process by which special…

  19. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. 
The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  20. Burn Severities, Fire Intensities, and Impacts to Major Vegetation Types from the Cerro Grande Fire

    SciTech Connect

    Balice, Randy G.; Bennett, Kathryn D.; Wright, Marjorie A.

    2004-12-15

    The Cerro Grande Fire resulted in major impacts and changes to the ecosystems that were burned. To partially document these effects, we estimated the acreage of major vegetation types that were burned at selected burn severity levels and fire intensity levels. To accomplish this, we adopted independently developed burn severity and fire intensity maps, in combination with a land cover map developed for habitat management purposes, as a basis for the analysis. To provide a measure of confidence in the acreage estimates, the accuracies of these maps were also assessed. In addition, two other maps of comparable quality were assessed for accuracy: one that was developed for mapping fuel risk and a second map that resulted from a preliminary application of an evolutionary computation software system, called GENIE.

  1. Mapping Human Epigenomes

    PubMed Central

    Rivera, Chloe M.; Ren, Bing

    2013-01-01

    As the second dimension of the genome, the epigenome contains key information specific to every cell type. Thousands of human epigenome maps have been produced in recent years thanks to the rapid development of high-throughput epigenome mapping technologies. In this review, we discuss the current epigenome mapping toolkit and the utilities of epigenome maps. We focus particularly on the mapping of DNA methylation, chromatin modification states and chromatin structure, and emphasize the use of epigenome maps to delineate human gene regulatory sequences and developmental programs. We also provide a perspective on the progress of the epigenomics field and the challenges ahead. PMID:24074860

  2. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
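The uncertainty-weighted combination described above can be illustrated with a minimal inverse-variance sketch; this is not the actual ShakeMap code, and the example values are invented:

```python
import numpy as np

def weighted_estimate(values, sigmas):
    """Inverse-variance weighted mean of several ground-motion estimates
    at one grid point, plus the combined (total) uncertainty that falls
    out of the interpolation as a by-product."""
    v = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2  # weight = 1 / sigma^2
    mean = np.sum(w * v) / np.sum(w)
    total_sigma = np.sqrt(1.0 / np.sum(w))          # shrinks as data accumulate
    return mean, total_sigma

# One direct observation, one intensity-converted value, and one GMPE
# estimate at the same grid point (units of g; numbers are invented):
pga, sigma = weighted_estimate([0.30, 0.24, 0.18], [0.05, 0.10, 0.20])
# pga ~ 0.283 g, dominated by the best-constrained observation
```

Note how the combined sigma is smaller than any single input sigma, which is exactly the "total uncertainty at each point" by-product the abstract mentions.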

  3. Intensity Frontier Instrumentation

    SciTech Connect

    Kettell S.; Rameika, R.; Tshirhart, B.

    2013-09-24

    The fundamental origin of flavor in the Standard Model (SM) remains a mystery. Despite the roughly eighty years since Rabi asked “Who ordered that?” upon learning of the discovery of the muon, we have not understood the reason that there are three generations or, more recently, why the quark and neutrino mixing matrices and masses are so different. The solution to the flavor problem would give profound insights into physics beyond the Standard Model (BSM) and tell us about the couplings and the mass scale at which the next level of insight can be found. The SM fails to explain all observed phenomena: new interactions and yet unseen particles must exist. They may manifest themselves by causing SM reactions to differ from often very precise predictions. The Intensity Frontier (1) explores these fundamental questions by searching for new physics in extremely rare processes or those forbidden in the SM. This often requires massive and/or extremely finely tuned detectors.

  4. Emotionally Intense Science Activities

    NASA Astrophysics Data System (ADS)

    King, Donna; Ritchie, Stephen; Sandhu, Maryam; Henderson, Senka

    2015-08-01

    Science activities that evoke positive emotional responses make a difference to students' emotional experience of science. In this study, we explored 8th Grade students' discrete emotions expressed during science activities in a unit on Energy. Multiple data sources including classroom videos, interviews and emotion diaries completed at the end of each lesson were analysed to identify individual students' emotions. Results from two representative students are presented as case studies. Using a theoretical perspective drawn from theories of emotions founded in sociology, two assertions emerged. First, during the demonstration activity, students experienced the emotions of wonder and surprise; second, during a laboratory activity, students experienced the intense positive emotions of happiness/joy. Characteristics of these activities that contributed to students' positive experiences are highlighted. The study found that activities evoking strong positive emotional experiences focused students' attention on the phenomenon being studied and were recalled positively. Furthermore, such positive experiences may contribute to students' interest and engagement in science and longer-term memorability. Finally, implications for science teachers and pre-service teacher education are suggested.

  5. Spatial variability of "Did You Feel It?" intensity data: insights into sampling biases in historical earthquake intensity distributions

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    Recent parallel development of improved quantitative methods to analyze intensity distributions for historical earthquakes and of web‐based systems for collecting intensity data for modern earthquakes provides an opportunity to reconsider not only important individual historical earthquakes but also the overall characterization of intensity distributions for historical events. The focus of this study is a comparison between intensity distributions of historical earthquakes with those from modern earthquakes for which intensities have been determined by the U.S. Geological Survey “Did You Feel It?” (DYFI) website (see Data and Resources). As an example of a historical earthquake, I focus initially on the 1843 Marked Tree, Arkansas, event. Its magnitude has been previously estimated as 6.0–6.2. I first reevaluate the macroseismic effects of this earthquake, assigning intensities using a traditional approach, and estimate a preferred magnitude of 5.4. Modified Mercalli intensity (MMI) values for the Marked Tree earthquake are higher, on average, than those from the 2011 Mw 5.8 Mineral, Virginia, earthquake for distances ≤500  km but comparable or lower on average at larger distances, with a smaller overall felt extent. Intensity distributions for other moderate historical earthquakes reveal similar discrepancies; the discrepancy is even more pronounced using earlier published intensities for the 1843 earthquake. I discuss several hypotheses to explain the discrepancies, including the possibility that intensity values associated with historical earthquakes are commonly inflated due to reporting/sampling biases. A detailed consideration of the DYFI intensity distribution for the Mineral earthquake illustrates how reporting and sampling biases can account for historical earthquake intensity biases as high as two intensity units and for the qualitative difference in intensity distance decays for modern versus historical events. Thus, intensity maps for

  6. Seismicity map of the State of Maine

    USGS Publications Warehouse

    Stover, C.W.; Barnhard, L.M.; Reagor, B.G.; Algermissen, S.T.

    1981-01-01

    The earthquake data shown on this map and listed in table 1 are a list of earthquakes originally used in preparing the Seismic Risk Studies in the United States (Algermissen, 1969), recompiled and updated through 1977. These data have been reexamined, resulting in some revisions of epicenters and intensities as well as the assignment of intensities to earthquakes that previously had none assigned. Intensity values were updated from new and additional data sources that were not available at the time of the original compilation. Some epicenters were relocated on the basis of new information. The data shown in table 1 are estimates of the most accurate epicenter, magnitude, and intensity of each earthquake, on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed but are incomplete in many instances, especially for earthquakes that occurred before seismic instruments were in universal usage. Only earthquakes located within the borders of the State of Maine are listed. This map supersedes Miscellaneous Field Studies Map MF-845.

  7. Survey of whistler mode chorus intensity at Jupiter

    NASA Astrophysics Data System (ADS)

    Menietti, J. D.; Groene, J. B.; Averkamp, T. F.; Horne, R. B.; Woodfield, E. E.; Shprits, Y. Y.; Soria-Santacruz Pich, M.; Gurnett, D. A.

    2016-10-01

    Whistler mode chorus emission is important in the acceleration of electrons and filling of the radiation belts at Jupiter. In this work chorus magnetic intensity levels (frequency-integrated spectral density, PB) at Jupiter are comprehensively binned and parameterized. The frequency range of chorus under study extends from the lower hybrid frequency, flh, to fceq/2 and fceq/2 < f < 0.8 fceq, where fceq is the cyclotron frequency mapped to the magnetic equator. The goal is to obtain a quantized distribution of magnetic intensity for use in stochastic modeling efforts. Parametric fits of magnetic plasma wave intensity are obtained, including PB versus frequency, latitude, and L shell. The results indicate that Jupiter chorus occurrence probability and intensity are higher than those at Saturn, reaching values observed at Earth. Jovian chorus is observed over most local times, confined primarily to the range 8 < L < 15, outside the high densities of the Io torus. The largest intensity levels are seen on the dayside; however, the sampling of chorus on the nightside is much less than on the dayside. Peak intensities occur near the equator with a weak dependence on magnetic latitude, λ. We conclude that Jovian chorus average intensity levels are approximately an order of magnitude lower than those at Earth. In more isolated regions the intensities are comparable to those observed at Earth. The spatial range of the chorus emissions extends beyond that assumed in previous Jovian global diffusive models of wave-particle electron acceleration.
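The kind of quantized intensity distribution described, PB binned by L shell and magnetic latitude, can be sketched as follows; the bin edges and sample values are illustrative assumptions, not Jovian data:

```python
import numpy as np

def bin_intensity(L, lat, PB, L_edges, lat_edges):
    """Mean frequency-integrated spectral density PB in bins of L shell
    and magnetic latitude -- a minimal stand-in for the quantized
    distribution used by stochastic diffusion models."""
    out = np.full((len(L_edges) - 1, len(lat_edges) - 1), np.nan)
    Li = np.digitize(L, L_edges) - 1     # bin index along L
    lati = np.digitize(lat, lat_edges) - 1  # bin index along latitude
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            sel = (Li == i) & (lati == j)
            if sel.any():
                out[i, j] = PB[sel].mean()
    return out

# Toy samples inside 8 < L < 15, where Jovian chorus is mostly confined:
L = np.array([8.5, 9.5, 9.5, 12.0])
lat = np.array([1.0, 2.0, 3.0, 1.0])     # magnetic latitude, degrees
PB = np.array([1e-6, 2e-6, 4e-6, 8e-6])  # nT^2, illustrative only
grid = bin_intensity(L, lat, PB, [8, 10, 15], [0, 5, 10])
# grid[0, 0] averages the three low-L samples; empty bins stay NaN
```

Parametric fits (PB versus frequency, latitude, and L shell) would then be made against such binned averages rather than the raw samples.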

  8. Wetland inundation mapping and change monitoring using landsat and airborne LiDAR data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a new approach for mapping wetland inundation change using Landsat and LiDAR intensity data. In this approach, LiDAR data were used to derive highly accurate reference subpixel inundation percentage (SIP) maps at the 30-m resolution. The reference SIP maps were then used to est...

  9. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data for optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining spectral shape, enabled changing the integration time without manual calibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
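The map-guided adjustment of the reference intensity can be caricatured as a feedback loop: measure, compare with a target, and tilt the mirror toward the target. The proportional controller, gain, tolerance, and toy mirror model below are all assumptions standing in for the paper's algorithm:

```python
def optimize_reference(measure, target, tilt=0.0, gain=1e-3,
                       tol=0.01, max_iter=100):
    """Tilt the galvanometric mirror until the measured reference-arm
    intensity is within a fractional tolerance of `target`. A plain
    proportional loop; names and gains are illustrative assumptions."""
    value = measure(tilt)
    for _ in range(max_iter):
        error = target - value
        if abs(error) <= tol * target:
            break                    # close enough: stop adjusting
        tilt += gain * error         # tilt toward the target intensity
        value = measure(tilt)
    return tilt, value

# Toy mirror model: intensity falls off linearly with tilt off-center
mirror = lambda t: max(0.0, 1000.0 * (1.0 - abs(t)))
tilt, value = optimize_reference(mirror, target=600.0)
```

The target here would be chosen just below the detector's saturation level, which is how over-saturation and insufficient reference intensity are both avoided.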

  10. Building Better Volcanic Hazard Maps Through Scientific and Stakeholder Collaboration

    NASA Astrophysics Data System (ADS)

    Thompson, M. A.; Lindsay, J. M.; Calder, E.

    2015-12-01

    All across the world information about natural hazards such as volcanic eruptions, earthquakes and tsunami is shared and communicated using maps that show which locations are potentially exposed to hazards of varying intensities. Unlike earthquakes and tsunami, which typically produce one dominant hazardous phenomenon (ground shaking and inundation, respectively) volcanic eruptions can produce a wide variety of phenomena that range from near-vent (e.g. pyroclastic flows, ground shaking) to distal (e.g. volcanic ash, inundation via tsunami), and that vary in intensity depending on the type and location of the volcano. This complexity poses challenges in depicting volcanic hazard on a map, and to date there has been no consistent approach, with a wide range of hazard maps produced and little evaluation of their relative efficacy. Moreover, in traditional hazard mapping practice, scientists analyse data about a hazard, and then display the results on a map that is then presented to stakeholders. This one-way, top-down approach to hazard communication does not necessarily translate into effective hazard education, or, as tragically demonstrated by Nevado del Ruiz, Colombia, in 1985, its use in risk mitigation by civil authorities. Furthermore, messages taken away from a hazard map can be strongly influenced by its visual design. Thus, hazard maps are more likely to be useful, usable and used if relevant stakeholders are engaged during the hazard map process to ensure a) the map is designed in a relevant way and b) the map takes into account how users interpret and read different map features and designs. The IAVCEI Commission on Volcanic Hazards and Risk has recently launched a Hazard Mapping Working Group to collate some of these experiences in graphically depicting volcanic hazard from around the world, including Latin America and the Caribbean, with the aim of preparing some Considerations for Producing Volcanic Hazard Maps that may help map makers in the future.

  11. Creative Concept Mapping.

    ERIC Educational Resources Information Center

    Brown, David S.

    2002-01-01

    Recommends the use of concept mapping in science teaching and proposes that it be presented as a creative activity. Includes a sample lesson plan of a potato stamp concept mapping activity for astronomy. (DDR)

  12. RadMap

    EPA Pesticide Factsheets

    RadMap is an interactive desktop tool featuring a nationwide geographic information systems (GIS) map of long-term radiation monitoring locations across the United States with access to key information about the monitor and the area surrounding it.

  13. Using maps in genealogy

    USGS Publications Warehouse

    ,

    1994-01-01

    In genealogy, maps are most often used as clues to where public or other records about an ancestor are likely to be found. Searching for maps seldom begins until a newcomer to genealogy has mastered basic genealogical routines

  14. Active Fire Mapping Program

    MedlinePlus


  15. Riparian Wetlands: Mapping

    EPA Science Inventory

    Riparian wetlands are critical systems that perform functions and provide services disproportionate to their extent in the landscape. Mapping wetlands allows for better planning, management, and modeling, but riparian wetlands present several challenges to effective mapping due t...

  16. Oil Exploration Mapping

    NASA Technical Reports Server (NTRS)

    1994-01-01

    After concluding an oil exploration agreement with the Republic of Yemen, Chevron International needed detailed geologic and topographic maps of the area. Chevron's remote sensing team used imagery from Landsat and SPOT, combining images into composite views. The project was successfully concluded and resulted in greatly improved base maps and unique topographic maps.

  17. Adventures with Maps.

    ERIC Educational Resources Information Center

    Hofferber, Michael

    1989-01-01

    Orienteering--the game of following a map to find predetermined locations--can spark interest and develop skills in map making and map reading. This article gives background on orienteering; describes indoor and outdoor orienteering activities; offers suggestions for incorporating orienteering into science, math, and language arts; and provides a…

  18. Reading Angles in Maps

    ERIC Educational Resources Information Center

    Izard, Véronique; O'Donnell, Evan; Spelke, Elizabeth S.

    2014-01-01

    Preschool children can navigate by simple geometric maps of the environment, but the nature of the geometric relations they use in map reading remains unclear. Here, children were tested specifically on their sensitivity to angle. Forty-eight children (age 47:15-53:30 months) were presented with fragments of geometric maps, in which angle sections…

  19. Using maps in genealogy

    USGS Publications Warehouse

    ,

    1999-01-01

    Maps are one of many sources you may need to complete a family tree. In genealogical research, maps can provide clues to where our ancestors may have lived and where to look for written records about them. Beginners should master basic genealogical research techniques before starting to use topographic maps.

  20. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  1. What Do Maps Show?

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

  2. Mapping a Changing World.

    ERIC Educational Resources Information Center

    Stoltman, Joseph P.

    1992-01-01

    Addresses the importance of maps for instruction in both history and geography. Suggests that maps have gotten recent attention because of the rapid political changes occurring in Europe and the quincentenary of Columbus' voyage. Discusses different map projections and the importance of media and satellite display of real pictures of the world.…

  3. Mapping Sociological Concepts.

    ERIC Educational Resources Information Center

    Trepagnier, Barbara

    2002-01-01

    Focuses on the use of cognitive mapping within sociology. Describes an assignment where students created a cognitive map that focused on names of theorists and concepts related to them. Discusses sociological imagination in relation to cognitive mapping and the assessment of the assignment. (CMK)

  4. Adaptive Composite Map Projections.

    PubMed

    Jenny, B

    2012-12-01

    All major web mapping services use the web Mercator projection. This is a poor choice for maps of the entire globe or areas of the size of continents or larger countries because the Mercator projection shows medium and higher latitudes with extreme areal distortion and provides an erroneous impression of distances and relative areas. The web Mercator projection is also not able to show the entire globe, as polar latitudes cannot be mapped. When selecting an alternative projection for information visualization, rivaling factors have to be taken into account, such as map scale, the geographic area shown, the map's height-to-width ratio, and the type of cartographic visualization. It is impossible for a single map projection to meet the requirements for all these factors. The proposed composite map projection combines several projections that are recommended in cartographic literature and seamlessly morphs map space as the user changes map scale or the geographic region displayed. The composite projection adapts the map's geometry to scale, to the map's height-to-width ratio, and to the central latitude of the displayed area by replacing projections and adjusting their parameters. The composite projection shows the entire globe including poles; it portrays continents or larger countries with less distortion (optionally without areal distortion); and it can morph to the web Mercator projection for maps showing small regions.
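The areal distortion driving this argument is easy to make concrete: in the spherical (web) Mercator projection, local areas are exaggerated by sec²(latitude) relative to the equator, and the poles cannot be mapped at all. A minimal sketch:

```python
import math

R = 6378137.0  # web Mercator sphere radius, metres (WGS84 semi-major axis)

def web_mercator(lon_deg, lat_deg):
    """Forward spherical (web) Mercator projection."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    x = R * lam
    y = R * math.log(math.tan(math.pi / 4.0 + phi / 2.0))
    return x, y  # y diverges as phi -> +/-90: poles cannot be shown

def area_scale(lat_deg):
    """Local areal exaggeration relative to the equator: sec^2(latitude)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# At 60 degrees latitude, areas are inflated fourfold, which is why
# high-latitude regions look oversized on standard web maps.
print(round(area_scale(60.0), 6))  # prints 4.0
```

The composite projection proposed in the paper avoids exactly this behaviour at small scales by swapping in projections without (or with bounded) areal distortion.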

  5. Mapping Wildfires In Nearly Real Time

    NASA Technical Reports Server (NTRS)

    Nichols, Joseph D.; Parks, Gary S.; Denning, Richard F.; Ibbott, Anthony C.; Scott, Kenneth C.; Sleigh, William J.; Voss, Jeffrey M.

    1993-01-01

    Airborne infrared-sensing system flies over wildfire as infrared detector in system and navigation subsystem generate data transmitted to firefighters' camp. There, data plotted in form of map of fire, including approximate variations of temperature. System, called Firefly, reveals position of fires and approximate thermal intensities of regions within fires. Firefighters use information to manage and suppress fires. Used for other purposes with minor modifications, such as to spot losses of heat in urban areas and to map disease and pest infestation in vegetation.

  6. Intensity formulas for triplet bands

    NASA Technical Reports Server (NTRS)

    Budo, A.

    1982-01-01

Previous work in this area is surveyed and the mathematics involved in determining the quantitative intensity measurements in triplet bands is presented. Explicit expressions for the intensity distribution in the branches of the 3 Sigma-3 Pi and 1 Sigma-3 Pi bands, valid for all values of the coupling constant Y of the 3 Pi terms, are given. The intensity distribution calculated according to the given formulas is compared with measurements of PH, 3 Pi-3 Sigma. Good quantitative agreement is obtained.

  7. Map projections for larger-scale mapping

    NASA Technical Reports Server (NTRS)

    Snyder, J. P.

    1982-01-01

    For the U.S. Geological Survey maps at 1:1,000,000-scale and larger, the most common projections are conformal, such as the Transverse Mercator and Lambert Conformal Conic. Projections for these scales should treat the Earth as an ellipsoid. In addition, the USGS has conceived and designed some new projections, including the Space Oblique Mercator, the first map projection designed to permit low-distortion mapping of the Earth from satellite imagery, continuously following the groundtrack. The USGS has programmed nearly all pertinent projection equations for inverse and forward calculations. These are used to plot maps or to transform coordinates from one projection to another. The projections in current use are described.

  8. Leaf position error during conformal dynamic arc and intensity modulated arc treatments.

    PubMed

    Ramsey, C R; Spencer, K M; Alhakeem, R; Oliver, A L

    2001-01-01

Conformal dynamic arc (CD-ARC) and intensity modulated arc treatments (IMAT) are both treatment modalities where the multileaf collimator (MLC) can change leaf position dynamically during gantry rotation. These treatment techniques can be used to generate complex isodose distributions, similar to those used in fixed-gantry intensity modulation. However, a beam-hold delay cannot be used during CD-ARC or IMAT treatments to reduce spatial error. Consequently, a certain amount of leaf position error will have to be accepted in order to make the treatment deliverable. Measurements of leaf position accuracy were taken with leaf velocities ranging from 0.3 to 3.0 cm/s. The average and maximum leaf position errors were measured, and a least-squares linear regression analysis was performed on the measured data to determine the MLC velocity error coefficient. The average position errors range from 0.03 to 0.21 cm, with the largest deviations occurring at the maximum achievable leaf velocity (3.0 cm/s). The measured MLC velocity error coefficient was 0.0674 s for a collimator rotation of 0 degrees and 0.0681 s for a collimator rotation of 90 degrees. The difference in leaf position error between the 0 degree and 90 degree collimator rotations was within statistical uncertainty. A simple formula was developed based on these results for estimating the velocity-dependent dosimetric error. Using this technique, a dosimetric error index for plan evaluation can be calculated from the treatment time and the dynamic MLC leaf controller file.

  9. Sodium Velocity Maps on Mercury

    NASA Technical Reports Server (NTRS)

    Potter, A. E.; Killen, R. M.

    2011-01-01

The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line, corrected for the radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. Then, the spectrograph slit was stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensities, and Earthward velocities over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak earthward velocity occurs in the equatorial region, and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a velocity map that is similar.
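The velocity scale implied by the 7 milliAngstrom sampling can be checked with the standard Doppler relation v = c Δλ/λ. A small sketch, assuming the sodium D2 rest wavelength of about 5890 Å (the specific line used is an assumption here):

```python
C_KM_S = 2.998e5          # speed of light in km/s
NA_D2_ANGSTROM = 5889.95  # assumed rest wavelength of the sodium D2 line

def doppler_velocity(delta_lambda_angstrom, rest_angstrom=NA_D2_ANGSTROM):
    """Line-of-sight velocity (km/s) from a Doppler wavelength shift."""
    return C_KM_S * delta_lambda_angstrom / rest_angstrom

# One 7-milliAngstrom spectral sample corresponds to roughly:
print(round(doppler_velocity(7e-3), 3))  # ~0.356 km/s per sample
```

So each spectral sample resolves velocity shifts of a few hundred metres per second, which sets the precision floor for the centroid velocities in the maps.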

  10. Intensity attenuation in the Pannonian Basin

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Szanyi, Gyöngyvér

    2015-04-01

Ground motion prediction equations play a key role in seismic hazard assessment. Earthquake hazard has to be expressed in macroseismic intensities in the case of seismic risk estimations, where a direct relation to the damage associated with ground shaking is needed. It can also be necessary for shake map generation, where the map is used for prompt notification to the public, disaster management officers and insurance companies. Although only few instrumental strong motion data are recorded in the Pannonian Basin, there are numerous historical reports of past earthquakes since the 1763 Komárom earthquake. Knowing the intensity attenuation and comparing it with relations for other areas - where instrumental strong motion data also exist - can help us to choose from the existing instrumental ground motion prediction equations. The aim of this work is to determine an intensity attenuation formula for the inner part of the Pannonian Basin, which can be further used to find an adaptable ground motion prediction equation for the area. The crust below the Pannonian Basin is thin and warm, and it is overlain by thick sediments; thus the attenuation of seismic waves here differs from the attenuation in the Alp-Carpathian mountain belt. Therefore we have collected intensity data only from the inner part of the Pannonian Basin and defined the boundaries of the studied area by a crust thickness of 30 km (Windhoffer et al., 2005). 90 earthquakes from 1763 until 2014 have a sufficient number of macroseismic data. The magnitude of the events varies from 3.0 to 6.6. We have used individual intensity points to eliminate the subjectivity of drawing isoseismals; the number of available intensity data points is more than 3000. Careful quality control has been performed on the dataset. The different magnitude types of the earthquake catalogue used have been converted to local and moment magnitudes using relations determined for the Pannonian Basin. We applied the attenuation formula by Sorensen

  11. On genetic map functions

    SciTech Connect

    Zhao, Hongyu; Speed, T.P.

    1996-04-01

    Various genetic map functions have been proposed to infer the unobservable genetic distance between two loci from the observable recombination fraction between them. Some map functions were found to fit data better than others. When there are more than three markers, multilocus recombination probabilities cannot be uniquely determined by the defining property of map functions, and different methods have been proposed to permit the use of map functions to analyze multilocus data. If for a given map function, there is a probability model for recombination that can give rise to it, then joint recombination probabilities can be deduced from this model. This provides another way to use map functions in multilocus analysis. In this paper we show that stationary renewal processes give rise to most of the map functions in the literature. Furthermore, we show that the interevent distributions of these renewal processes can all be approximated quite well by gamma distributions. 43 refs., 4 figs.
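Two of the classical map functions the abstract alludes to are those of Haldane (no crossover interference) and Kosambi (moderate interference); both convert an observable recombination fraction r into a genetic distance in Morgans. A minimal sketch:

```python
import math

def haldane(r):
    """Haldane map function: distance (Morgans) assuming no interference."""
    return -0.5 * math.log(1.0 - 2.0 * r)

def kosambi(r):
    """Kosambi map function, which allows for moderate interference."""
    return 0.25 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

# The same recombination fraction maps to slightly different distances:
print(round(haldane(0.1), 4), round(kosambi(0.1), 4))
```

Both functions approach r itself for small r (where double crossovers are negligible) and diverge as r approaches the theoretical maximum of 0.5, where the distance estimate becomes unbounded.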

  12. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  13. Relationships between peak ground acceleration, peak ground velocity, and modified mercalli intensity in California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.

    1999-01-01

We have developed regression relationships between Modified Mercalli Intensity (Imm) and peak ground acceleration (PGA) and velocity (PGV) by comparing horizontal peak ground motions to observed intensities for eight significant California earthquakes. For the limited range of Modified Mercalli intensities (Imm), we find that for peak acceleration with V ≤ Imm ≤ VIII, Imm = 3.66 log(PGA) - 1.66, and for peak velocity with V ≤ Imm ≤ IX, Imm = 3.47 log(PGV) + 2.35. From comparison with observed intensity maps, we find that a combined regression based on peak velocity for intensity > VII and on peak acceleration for intensity < VII is most suitable for reproducing observed Imm patterns, consistent with high intensities being related to damage (proportional to ground velocity) and with lower intensities determined by felt accounts (most sensitive to higher-frequency ground acceleration). These new Imm relationships are significantly different from the Trifunac and Brady (1975) correlations, which have been used extensively in loss estimation.
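The two regressions and the combined scheme can be written out directly. A minimal sketch, assuming the conventional units of the original relationships (PGA in cm/s², PGV in cm/s, base-10 logarithms):

```python
import math

def mmi_from_pga(pga_cm_s2):
    """Intensity from peak ground acceleration; quoted range V <= Imm <= VIII."""
    return 3.66 * math.log10(pga_cm_s2) - 1.66

def mmi_from_pgv(pgv_cm_s):
    """Intensity from peak ground velocity; quoted range V <= Imm <= IX."""
    return 3.47 * math.log10(pgv_cm_s) + 2.35

def combined_mmi(pga_cm_s2, pgv_cm_s):
    """Combined estimate: velocity-based at high intensity, acceleration-based below.

    The threshold of VII follows the abstract; treating the acceleration-based
    value as the switch criterion is an implementation assumption.
    """
    i_acc = mmi_from_pga(pga_cm_s2)
    return mmi_from_pgv(pgv_cm_s) if i_acc >= 7.0 else i_acc
```

For example, 100 cm/s² of peak acceleration maps to Imm ≈ 5.7, so the combined estimate uses the acceleration branch there; strong shaking of several hundred cm/s² crosses VII and switches to the velocity branch.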

  14. MPEG-4 AVC saliency map computation

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.

    2014-02-01

A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements with minimal decoding operations. In this respect, an a priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities that appeal to visual attention. Secondly, the MPEG-4 AVC reference software is extended with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 AVC saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple-symbol Quantization Index Modulation) insertion method, PSNR average gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained from processing 2 hours of heterogeneous video content.

  15. Macroseismic Intensities from the 2015 Gorkha, Nepal, Earthquake

    NASA Astrophysics Data System (ADS)

    Martin, S. S.; Hough, S. E.; Gahalaut, V. K.; Hung, C.

    2015-12-01

The Mw 7.8 Gorkha, Nepal, earthquake, the largest central Himalayan earthquake in eighty-one years, yielded few instrumental recordings of strong motion. To supplement these, we collected 3800 detailed media and first-person accounts of macroseismic effects that included sufficiently detailed information to assign intensities. Our resultant macroseismic intensity map reveals the distribution of shaking in Nepal and the adjacent Gangetic basin. A key observation was that only in rare instances did near-field shaking intensities exceed intensity 8 on the European Macroseismic Scale (EMS), a level that corresponds with heavy damage or total collapse of many unengineered masonry structures. Within the Kathmandu Valley, intensities were generally 6-7 EMS, with generally lower intensities in the center of the valley than along the edges and foothills. This surprising (and fortunate) result can be explained by the nature of the mainshock ground motions, which were dominated by energy at periods significantly longer than the resonant periods of vernacular structures throughout Kathmandu. Outside the Kathmandu Valley the earthquake took a heavy toll on a number of remote villages, where many especially vulnerable masonry houses collapsed catastrophically in shaking equivalent to 7-8 EMS. Intensities were also generally higher along ridges and small hills, suggesting that topographic amplification played a significant role in controlling damage. The spatially rich intensity data set provides an opportunity to consider several key issues, including amplification of shaking in the Ganges basin, and the distribution of shaking across the rupture zone. Of note, relatively higher intensities within the near-field region are found to correlate with zones of enhanced high-frequency source radiation imaged by teleseismic back-projection (Avouac et al., 2015). 
We further reconsider intensities from a sequence of earthquakes on 26 August 1833, and conclude the largest of these ruptured

  16. An intense stratospheric jet on Jupiter.

    PubMed

    Flasar, F M; Kunde, V G; Achterberg, R K; Conrath, B J; Simon-Miller, A A; Nixon, C A; Gierasch, P J; Romani, P N; Bézard, B; Irwin, P; Bjoraker, G L; Brasunas, J C; Jennings, D E; Pearl, J C; Smith, M D; Orton, G S; Spilker, L J; Carlson, R; Calcutt, S B; Read, P L; Taylor, F W; Parrish, P; Barucci, A; Courtin, R; Coustenis, A; Gautier, D; Lellouch, E; Marten, A; Prangé, R; Biraud, Y; Fouchet, T; Ferrari, C; Owen, T C; Abbas, M M; Samuelson, R E; Raulin, F; Ade, P; Césarsky, C J; Grossman, K U; Coradini, A

    2004-01-08

    The Earth's equatorial stratosphere shows oscillations in which the east-west winds reverse direction and the temperatures change cyclically with a period of about two years. This phenomenon, called the quasi-biennial oscillation, also affects the dynamics of the mid- and high-latitude stratosphere and weather in the lower atmosphere. Ground-based observations have suggested that similar temperature oscillations (with a 4-5-yr cycle) occur on Jupiter, but these data suffer from poor vertical resolution and Jupiter's stratospheric wind velocities have not yet been determined. Here we report maps of temperatures and winds with high spatial resolution, obtained from spacecraft measurements of infrared spectra of Jupiter's stratosphere. We find an intense, high-altitude equatorial jet with a speed of approximately 140 m s(-1), whose spatial structure resembles that of a quasi-quadrennial oscillation. Wave activity in the stratosphere also appears analogous to that occurring on Earth. A strong interaction between Jupiter and its plasma environment produces hot spots in its upper atmosphere and stratosphere near its poles, and the temperature maps define the penetration of the hot spots into the stratosphere.

  17. Accelerators for Intensity Frontier Research

    SciTech Connect

    Derwent, Paul; /Fermilab

    2012-05-11

    In 2008, the Particle Physics Project Prioritization Panel identified three frontiers for research in high energy physics, the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. In this paper, I will describe how Fermilab is configuring and upgrading the accelerator complex, prior to the development of Project X, in support of the Intensity Frontier.

  18. High intensity solar cell radiometer

    NASA Technical Reports Server (NTRS)

    Brandhorst, H. W.; Spisz, E. W.

    1972-01-01

    Device can be employed under high intensity illumination conditions such as would occur in a close-solar-approach space mission or in monitoring high intensity lamps. Radiometer consists of silicon solar cells with thin semi-transparent coatings of aluminum deposited on the front surfaces to permit transmission of small percentage of light and reflect the remainder.

  19. Microwave ovens: mapping the electrical field distribution.

    PubMed

    Ng, K H

    1991-07-01

    Uniformity of electric field intensity of microwaves within the microwave oven cavity is necessary to ensure even load-heating, and is particularly important in pathology procedures where small volume irradiation is carried out. A simple and rapid method for mapping electric field distribution, using reversible thermographic paint, is described. Spatial heating patterns for various positions, and the effects of introducing dummy loads to modify heating distributions, have been obtained for a dedicated microwave processor, and comparison made with a domestic microwave oven.

  20. Comparing landslide inventory maps

    NASA Astrophysics Data System (ADS)

    Galli, Mirco; Ardizzone, Francesca; Cardinali, Mauro; Guzzetti, Fausto; Reichenbach, Paola

Landslide inventory maps are effective and easily understandable products for both experts, such as geomorphologists, and non-experts, including decision-makers, planners, and civil defense managers. Landslide inventories are essential to understand the evolution of landscapes, and to ascertain landslide susceptibility and hazard. Despite landslide maps being compiled every year in the world at different scales, limited efforts are made to critically compare landslide maps prepared using different techniques or by different investigators. Based on the experience gained in 20 years of landslide mapping in Italy, and on the limited literature on landslide inventory assessment, we propose a general framework for the quantitative comparison of landslide inventory maps. To test the proposed framework we exploit three inventory maps. The first map is a reconnaissance landslide inventory prepared for the Umbria region, in central Italy. The second map is a detailed geomorphological landslide map, also prepared for the Umbria region. The third map is a multi-temporal landslide inventory compiled for the Collazzone area, in central Umbria. Results of the experiment allow for establishing how well the individual inventories describe the location, type and abundance of landslides, to what extent the landslide maps can be used to determine the frequency-area statistics of the slope failures, and the significance of the inventory maps as predictors of landslide susceptibility. We further use the results obtained in the Collazzone area to estimate the quality and completeness of the two regional landslide inventory maps, and to outline general advantages and limitations of the techniques used to complete the inventories.
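One simple way to quantify agreement between two inventory maps is a set-based mismatch index over mapped landslide cells. The index form below (symmetric difference over union) is a common illustrative choice and is not necessarily the authors' exact metric:

```python
def mapping_error(map_a, map_b):
    """Mismatch index between two landslide inventories.

    Each inventory is represented as a set of landslide-affected grid
    cells; E = |union - intersection| / |union|, so E = 0 for identical
    maps and E = 1 for maps with no overlap at all.
    """
    union = map_a | map_b
    if not union:
        return 0.0
    return len(union - (map_a & map_b)) / len(union)

# Two toy inventories sharing two of four mapped cells:
a = {(0, 0), (0, 1), (1, 1)}
b = {(0, 1), (1, 1), (2, 2)}
print(mapping_error(a, b))  # 0.5: two of the four cells in the union disagree
```

The same idea extends to polygon inventories by rasterizing each map onto a common grid before comparing.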

  1. Individualized diffeomorphic mapping of brains with large cortical infarcts.

    PubMed

    Soon, Hock Wei; Qiu, Anqi

    2015-01-01

Whole brain mapping of stroke patients with large cortical infarcts is not trivial due to the complexity of the infarcts' anatomical location and appearance in magnetic resonance images. In this study, we proposed an individualized diffeomorphic mapping framework for solving this problem. This framework is based on our recent work on large deformation diffeomorphic metric mapping (LDDMM) in Du et al. (2011) and incorporates anatomical features, such as sulcal/gyral curves, cortical surfaces, brain intensity image, and masks of infarcted regions, in order to align a normal brain to the brain of stroke patients. We applied this framework to synthetic data and data of stroke patients and validated the mapping accuracy in terms of the alignment of gyral/sulcal curves, sulcal regions, and brain segmentation. Our results revealed that this framework provided comparable mapping results for stroke patients and healthy controls, suggesting the importance of incorporating individualized anatomical features in whole brain mapping of brains with large cortical infarcts.

  2. Segmentation and leaf sequencing for intensity modulated arc therapy

    SciTech Connect

    Gladwish, Adam; Oliver, Mike; Craig, Jeff; Chen, Jeff; Bauman, Glenn; Fisher, Barbara; Wong, Eugene

    2007-05-15

A common method of generating intensity modulated radiation therapy (IMRT) plans consists of a three step process: an optimized fluence intensity map (IM) for each beam is generated via inverse planning, this IM is then segmented into discrete levels, and finally, the segmented map is translated into a set of MLC apertures via a leaf sequencing algorithm. To date, limited work has been done on this approach as it pertains to intensity modulated arc therapy (IMAT), specifically in regard to the latter two steps. There are two determining factors that separate IMAT segmentation and leaf sequencing from their IMRT equivalents: (1) the intrinsic 3D nature of the intensity maps (standard 2D maps plus the angular component), and (2) that the dynamic multileaf collimator (MLC) constraints be met using a minimum number of arcs. In this work, we illustrate a technique to create an IMAT plan that replicates Tomotherapy deliveries by applying IMAT-specific segmentation and leaf-sequencing algorithms to Tomotherapy output sinograms. We propose and compare two alternative segmentation techniques, a clustering method, and a bottom-up segmentation method (BUS). We also introduce a novel IMAT leaf-sequencing algorithm that explicitly takes leaf movement constraints into consideration. These algorithms were tested with 51 angular projections of the output leaf-open sinograms generated on the Hi-ART II treatment planning system (Tomotherapy Inc.). We present two geometric phantoms and two clinical scenarios as sample test cases. In each case 12 IMAT plans were created, ranging from 2 to 7 intensity levels. Half were generated using the BUS segmentation and half with the clustering method. We report on the number of arcs produced as well as differences between Tomotherapy output sinograms and segmented IMAT intensity maps. For each case one plan for each segmentation method is chosen for full Monte Carlo dose calculation (NumeriX LLC) and dose volume histograms (DVH) are calculated
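The second step of the pipeline, segmenting a continuous fluence map into a small number of discrete intensity levels, can be illustrated with simple uniform quantization. This is a deliberately minimal sketch; the clustering and bottom-up (BUS) methods proposed in the paper are more sophisticated than this:

```python
import numpy as np

def segment_intensity_map(fluence, n_levels):
    """Segment a continuous fluence map into n_levels discrete intensities.

    Uniform quantization between the map's minimum and maximum: each
    pixel is snapped to the center of the level bin it falls into.
    """
    lo, hi = float(fluence.min()), float(fluence.max())
    edges = np.linspace(lo, hi, n_levels + 1)          # bin boundaries
    centers = 0.5 * (edges[:-1] + edges[1:])           # representative level values
    idx = np.clip(np.digitize(fluence, edges[1:-1]), 0, n_levels - 1)
    return centers[idx]

# A toy 2x3 fluence map segmented into two levels (0.25 and 0.75):
f = np.array([[0.0, 0.2, 0.9], [0.4, 0.6, 1.0]])
seg = segment_intensity_map(f, 2)
```

Increasing `n_levels` reduces the segmentation error relative to the optimized map but typically increases the number of arcs needed to deliver it, which is exactly the trade-off the paper's 2-to-7-level plans explore.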

  3. Intensive insulin therapy in the intensive cardiac care unit.

    PubMed

    Hasin, Tal; Eldor, Roy; Hammerman, Haim

    2006-01-01

Treatment in the intensive cardiac care unit (ICCU) enables rigorous control of vital parameters such as heart rate, blood pressure, body temperature, oxygen saturation, serum electrolyte levels, urine output and many others. The importance of controlling the metabolic status of the acute cardiac patient, and specifically the level of serum glucose, has recently come into focus but is still underappreciated. This review aims to explain the rationale for intensive control of serum glucose levels in the ICCU, especially using intensive insulin therapy, and summarizes the available clinical evidence suggesting its effectiveness.

  4. SMOS sea surface salinity maps of the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Gabarro, Carolina; Olmedo, Estrella; Turiel, Antonio; Ballabrera-Poy, Joaquim; Martinez, Justino; Portabella, Marcos

    2016-04-01

Salinity and temperature gradients drive the thermohaline circulation of the oceans, and play a key role in the ocean-atmosphere coupling. The strong and direct interactions between the ocean and the cryosphere (primarily through sea ice and ice shelves) are also a key ingredient of the thermohaline circulation. ESA's Soil Moisture and Ocean Salinity (SMOS) mission, launched in 2009, has the objective of measuring soil moisture over the continents and sea surface salinity over the oceans. Although the mission was originally conceived for hydrological and oceanographic studies [1], SMOS is also making inroads in cryospheric monitoring. SMOS carries an innovative L-band (1.4 GHz, or 21-cm wavelength), passive interferometric radiometer (the so-called MIRAS) that measures the electromagnetic radiation emitted by the Earth's surface, at about 50 km spatial resolution over a wide (1200 km) swath, with a 3-day revisit time at the equator and a more frequent one at the poles. Although the SMOS operating frequency offers almost the maximum sensitivity of the brightness temperature (TB) to sea surface salinity (SSS) variations, this sensitivity is still rather low: 90% of ocean SSS values span a range of brightness temperatures of only 5 K at L-band. The sensitivity is particularly low in cold waters, which implies that SSS retrieval requires high radiometric performance. Since the SMOS launch, SSS Level 3 maps have been distributed by several expert laboratories including the Barcelona Expert Centre (BEC). However, since the TB sensitivity to SSS decreases with decreasing sea surface temperature (SST), large retrieval errors had been reported when retrieving salinity values at latitudes above 50°N. Two new processing algorithms, recently developed at BEC, have led to a considerable improvement of the SMOS data, allowing for the first time to derive SSS maps in cold waters. The first one is to empirically characterize and correct the systematic biases with six

  5. Film Dosimetry for Intensity Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Benites-Rengifo, J.; Martínez-Dávalos, A.; Celis, M.; Lárraga, J.

    2004-09-01

Intensity Modulated Radiation Therapy (IMRT) is an oncology treatment technique that employs non-uniform beam intensities to deliver highly conformal radiation to the targets while minimizing doses to normal tissues and critical organs. A key element for a successful clinical implementation of IMRT is establishing a dosimetric verification process that can ensure that delivered doses are consistent with calculated ones for each patient. To this end we are developing a fast quality control procedure, based on film dosimetry techniques, to be applied to the 6 MV Novalis linear accelerator for IMRT at the Instituto Nacional de Neurología y Neurocirugía (INNN) in Mexico City. The procedure includes measurements of individual fluence maps for a limited number of fields and dose distributions in 3D using extended-dose-range radiographic film. However, the film response to radiation might depend on depth, energy and field size, and therefore compromise the accuracy of measurements. In this work we present a study of the dependence of Kodak EDR2 film's response on depth, field size and energy, compared with that of Kodak XV2 film. The first aim is to devise a fast and accurate method to determine the calibration curve of the film (optical density vs. dose), commonly called a sensitometric curve. This was accomplished by using three types of irradiation techniques: step-and-shoot, dynamic and static fields.
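Once calibration points are measured, the sensitometric curve can be inverted so that a measured optical density maps back to dose. A minimal sketch with entirely hypothetical calibration data (the dose and OD values below are illustrative, not measurements from the paper):

```python
import numpy as np

# Hypothetical calibration points: delivered dose (cGy) vs net optical density.
doses = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0])
ods = np.array([0.00, 0.35, 0.66, 1.20, 1.62, 1.95])

# A low-order polynomial is one common way to represent the sensitometric
# curve of extended-dose-range film; here we fit dose as a function of OD
# so measured densities can be converted straight to dose.
coeffs = np.polyfit(ods, doses, deg=2)

def dose_from_od(od):
    """Convert a measured net optical density to dose (cGy) via the fit."""
    return float(np.polyval(coeffs, od))
```

In practice the fit would be repeated per depth, energy and field size if the film response is found to depend on them, which is precisely what the study above sets out to quantify.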

  6. Fractional dissipative standard map.

    PubMed

    Tarasov, Vasily E; Edelman, M

    2010-06-01

    Using kicked differential equations of motion with derivatives of noninteger orders, we obtain generalizations of the dissipative standard map. The main property of these generalized maps, which are called fractional maps, is long-term memory. The memory effect in the fractional maps means that their present state of evolution depends on all past states with special forms of weights. Already a small deviation of the order of derivative from the integer value corresponding to the regular dissipative standard map (small memory effects) leads to the qualitatively new behavior of the corresponding attractors. The fractional dissipative standard maps are used to demonstrate a new type of fractional attractors in the wide range of the fractional orders of derivatives.
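For orientation, the ordinary (integer-order) dissipative standard map that these fractional maps generalize can be iterated in a few lines. One common form is shown below; the fractional version discussed in the abstract replaces this one-step recursion with sums over all past states carrying power-law memory weights, which this sketch deliberately does not implement:

```python
import math

def dissipative_standard_map(k, b, x0, p0, n_steps):
    """Iterate the ordinary dissipative standard map.

    One common form:
        p_{n+1} = b * p_n + k * sin(x_n)
        x_{n+1} = (x_n + p_{n+1}) mod 2*pi
    with 0 < b < 1 providing dissipation. Returns the trajectory as a
    list of (x, p) pairs, starting from the initial condition.
    """
    x, p = x0, p0
    traj = [(x, p)]
    for _ in range(n_steps):
        p = b * p + k * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
        traj.append((x, p))
    return traj
```

In the memoryless map above, the next state depends only on the current one; the qualitative point of the paper is that even a small fractional deviation of the derivative order, which makes every past state contribute with a weight, changes the attractors of this system qualitatively.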

  7. Intensity-Modulated Radiation Therapy (IMRT)

    MedlinePlus

Intensity-modulated radiotherapy (IMRT) uses linear ... and after this procedure? What is Intensity-Modulated Radiation Therapy and how is it used? Intensity-modulated ...

  8. Heat Capacity Mapping Mission

    NASA Technical Reports Server (NTRS)

    Nilsson, C. S.; Andrews, J. C.; Scully-Power, P.; Ball, S.; Speechley, G.; Latham, A. R. (Principal Investigator)

    1980-01-01

The Tasman Front was delineated by an airborne expendable bathythermograph survey, and a Heat Capacity Mapping Mission (HCMM) IR image from the same day shows the same principal features as determined from ground truth. It is clear that digital enhancement of HCMM images is necessary to map ocean surface temperatures; when this is done, the Tasman Front and other oceanographic features can be mapped by this method, even through considerable scattered cloud cover.

  9. Intense low energy positron beams

    SciTech Connect

    Lynn, K.G.; Jacobsen, F.M.

    1993-12-31

Intense positron beams are under development or being considered at several laboratories. Already today a few accelerator-based high-intensity, low-brightness e⁺ beams exist, producing of the order of 10⁸-10⁹ e⁺/sec. Several laboratories are aiming at high-intensity, high-brightness e⁺ beams with intensities greater than 10⁹ e⁺/sec and current densities of the order of 10¹³-10¹⁴ e⁺ sec⁻¹ cm⁻². Intense e⁺ beams can be realized in two ways (or in a combination thereof): either through the development of more efficient β⁺ moderators or by increasing the available activity of β⁺ particles. In this review we shall mainly concentrate on the latter approach. In atomic physics the main thrust for these developments is to be able to measure differential and high-energy cross-sections in e⁺ collisions with atoms and molecules. Within solid-state physics, high-intensity, high-brightness e⁺ beams are in demand in areas such as the re-emission e⁺ microscope, two-dimensional angular correlation of annihilation radiation, low-energy e⁺ diffraction and other fields. Intense e⁺ beams are also important for the development of positronium beams, as well as exotic experiments such as Bose condensation and Ps liquid studies.

  10. BOREAS Hardcopy Maps

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nelson, Elizabeth; Newcomer, Jeffrey A.

    2000-01-01

Boreal Ecosystem-Atmospheric Study (BOREAS) hardcopy maps are a collection of approximately 1,000 hardcopy maps representing the physical, climatological, and historical attributes of areas covering primarily the Manitoba and Saskatchewan provinces of Canada. These maps were collected by BOREAS Information System (BORIS) and Canada Centre for Remote Sensing (CCRS) staff to provide basic information about site positions, manmade features, topography, geology, hydrology, land cover types, fire history, climate, and soils of the BOREAS study region. These maps are not available for distribution through the BOREAS project but may be used as an on-site resource. Information is provided within this document for individuals who want to order copies of these maps from the original map source. Note that the maps are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of the maps that are available. This inventory listing is available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). For hardcopies of the individual maps, contact the sources provided.

  11. Denali image map

    USGS Publications Warehouse

    Binnie, Douglas R.; Colvocoresses, Alden P.

    1987-01-01

    The Denali National Park and Preserve 1:250,000-scale image map has been prepared and published as part of the US Geological Survey's (USGS) continuing research to improve image mapping techniques. Nine multispectral scanner (MSS) images were geometrically corrected, digitally mosaicked, and enhanced at the National Mapping Division's (NMD) EROS Data Center (EDC). This process involves ground control and digital resampling to the Universal Transverse Mercator (UTM) projection. This paper specifically discusses the preparation of the digital mosaic and the production peculiarities associated with the Denali National Park and Preserve image map.

  12. Seismic Hazard Assessment of Tehran Based on Arias Intensity

    SciTech Connect

    Amiri, G. Ghodrati; Mahmoodi, H.; Amrei, S. A. Razavian

    2008-07-08

    In this paper a probabilistic seismic hazard assessment of Tehran for the Arias intensity parameter is carried out. Tehran is the capital and most populated city of Iran. From economical, political and social points of view, Tehran is the most significant city of Iran. Since catastrophic earthquakes have occurred in Tehran and its vicinity in previous centuries, a probabilistic seismic hazard assessment of this city for the Arias intensity parameter is useful. Iso-intensity contour line maps of Tehran on the basis of different attenuation relationships for different earthquake periods are plotted. Maps of iso-intensity points in the Tehran region are presented using appropriate attenuation relationships for rock and soil beds for two hazard levels of 10% and 2% in 50 years. Seismicity parameters, on the basis of historical and instrumental earthquakes for a time period beginning in the 4th century BC and ending at the present time, are calculated using two methods. For the calculation of seismicity parameters, an earthquake catalogue with a radius of 200 km around Tehran has been used, and the SEISRISK III software has been employed. The effects of different parameters such as seismicity parameters, length of fault rupture relationships and attenuation relationships are considered using a logic tree.
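
    The two hazard levels quoted (10% and 2% probability of exceedance in 50 years) correspond to return periods under the usual Poisson occurrence assumption. A quick check of the standard relation (not taken from the paper itself):

```python
import math

def return_period(p_exceed: float, t_years: float) -> float:
    """Return period T for exceedance probability p_exceed within t_years,
    assuming a Poisson (memoryless) occurrence model: p = 1 - exp(-t/T)."""
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))  # 475 -> the familiar 475-year hazard level
print(round(return_period(0.02, 50)))  # 2475 -> the 2475-year hazard level
```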

  13. Seismicity map of the State of Illinois

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1979-01-01

    The earthquake data shown on this map and listed in table 1 are a list of earthquakes that were originally used in preparing the Seismic Risk Studies in the United States (Algermissen, 1969), recompiled and updated through 1977. The data have been reexamined and intensities assigned, on the basis of available data, where none had been assigned before. Other intensity values were updated from new and additional data sources that were not available at the time of the original compilation. Some epicenters were relocated on the basis of new information. The data shown in table 1 are estimates of the most accurate epicenter, magnitude, and intensity of each earthquake, on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed but are incomplete in many instances, especially for those that occurred before seismic instruments were in universal usage.

  14. Neutral particle beam intensity controller

    DOEpatents

    Dagenhart, W.K.

    1984-05-29

    The neutral beam intensity controller is based on selected magnetic defocusing of the ion beam prior to neutralization. The defocused portion of the beam is dumped onto a beam dump disposed perpendicular to the beam axis. Selective defocusing is accomplished by means of a magnetic field generator disposed about the neutralizer so that the field is transverse to the beam axis. The magnetic field intensity is varied to provide the selected partial beam defocusing of the ions prior to neutralization. The desired focused neutral beam portion passes along the beam path through a defining aperture in the beam dump, thereby controlling the desired fraction of neutral particles transmitted to a utilization device without altering the kinetic energy level of the desired neutral particle fraction. By proper selection of the magnetic field intensity, virtually zero through 100% intensity control of the neutral beam is achieved.

  15. Multipole expansions and intense fields

    NASA Astrophysics Data System (ADS)

    Reiss, Howard R.

    1984-02-01

    In the context of two-body bound-state systems subjected to a plane-wave electromagnetic field, it is shown that high field intensity introduces a distinction between long-wavelength approximation and electric dipole approximation. This distinction is gauge dependent, since it is absent in Coulomb gauge, whereas in "completed" gauges of Göppert-Mayer type the presence of high field intensity makes electric quadrupole and magnetic dipole terms of importance equal to electric dipole at long wavelengths. Another consequence of high field intensity is that multipole expansions lose their utility in view of the equivalent importance of a number of low-order multipole terms and the appearance of large-magnitude terms which defy multipole categorization. This loss of the multipole expansion is gauge independent. Also gauge independent is another related consequence of high field intensity, which is the intimate coupling of center-of-mass and relative coordinate motions in a two-body system.

  16. Gamma radiation field intensity meter

    DOEpatents

    Thacker, L.H.

    1994-08-16

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode. 4 figs.

  17. Gamma radiation field intensity meter

    DOEpatents

    Thacker, Louis H.

    1995-01-01

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode.

  18. Gamma radiation field intensity meter

    DOEpatents

    Thacker, Louis H.

    1994-01-01

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode.

  19. Intensive Care Information System Impacts

    PubMed Central

    Ehteshami, Asghar; Sadoughi, Farahnaz; Ahmadi, Maryam; Kashefi, Parviz

    2013-01-01

    Introduction: Today, intensive care needs are increasing with the prospect of an aging population and socioeconomic factors influencing health intervention, but there are problems in intensive care environments that it is essential to resolve. The intensive care information system (ICIS) has the potential to solve many ICU problems. The objective of this review was to establish the impact of intensive care information systems on practitioners' practice, patient outcomes and ICU performance. Methods: Scientific databases and electronic journal citations were searched to identify articles that discussed the impacts of intensive care information systems on practice, patient outcomes and ICU performance. A total of 22 articles discussing ICIS outcomes were included in this study, out of 609 articles initially obtained from the searches. Results: Pooling data across studies, we found that the median impact of ICIS on information management was 48.7%. The median impact of ICIS on users' outcomes was 36.4%; ICIS improved saving tips by 24%, clinical decision support by a mean of 22.7%, clinical outcomes by a mean of 18.6%, and research by 18%. Conclusion: The functionalities of ICIS are growing day by day and new functionalities become available with every major release. Better adoption of ICIS in intensive care environments underscores the opportunity for better intensive care services through patient-oriented intensive care clinical information systems. There is an immense need to develop guidelines for standardizing ICIS in order to maximize the power of ICISs and to integrate them with HISs. This will enable intensivists to use the systems in a more meaningful way for better patient care. This study provides a better understanding of, and greater insight into, the effectiveness of ICIS in improving patient care and reducing health care expenses. PMID:24167389

  20. High intensity protons in RHIC

    SciTech Connect

    Montag, C.; Ahrens, L.; Blaskiewicz, M.; Brennan, J. M.; Drees, K. A.; Fischer, W.; Huang, H.; Minty, M.; Robert-Demolaize, G.; Thieberger, P.; Yip, K.

    2012-01-05

    During the 2012 summer shutdown a pair of electron lenses will be installed in RHIC, allowing the beam-beam parameter to be increased by roughly 50 percent. To realize the corresponding luminosity increase, bunch intensities have to be increased by 50 percent, to 2.5·10¹¹ protons per bunch. We list the various RHIC subsystems that are most affected by this increase, and propose beam studies to ensure their readiness. The proton luminosity in RHIC is presently limited by the beam-beam effect. To overcome this limitation, electron lenses will be installed in IR10. With the help of these devices, the head-on beam-beam kick experienced during proton-proton collisions will be partially compensated, allowing for a larger beam-beam tuneshift at these collision points, and therefore increasing the luminosity. This will be accomplished by increasing the proton bunch intensity from the presently achieved 1.65·10¹¹ protons per bunch in 109 bunches per beam to 2.5·10¹¹, thus roughly doubling the luminosity. In a further upgrade we aim for bunch intensities up to 3·10¹¹ protons per bunch. With RHIC originally being designed for a bunch intensity of 1·10¹¹ protons per bunch in 56 bunches, this six-fold increase in the total beam intensity far exceeds the design parameters of the machine, and therefore potentially of its subsystems. In this note, we present a list of major subsystems that are of potential concern regarding this intensity upgrade, show their demonstrated performance at present intensities, and propose measures and beam experiments to study their readiness for the projected future intensities.
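
    The factors quoted above can be verified with simple arithmetic: per-bunch luminosity scales roughly as the square of the bunch intensity for equal round beams, and the "six-fold" figure compares total beam intensities. A quick check:

```python
# Verify the intensity and luminosity factors quoted in the text.
design_total  = 1.0e11 * 56    # original design: 1e11 protons/bunch, 56 bunches
upgrade_total = 3.0e11 * 109   # upgrade goal:    3e11 protons/bunch, 109 bunches
print(upgrade_total / design_total)    # ~5.8, the quoted "six-fold" increase

# Luminosity per bunch crossing scales as N^2 for equal round beams:
print((2.5e11 / 1.65e11) ** 2)         # ~2.3, i.e. "roughly doubling"
```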

  1. Gamma radiation field intensity meter

    SciTech Connect

    Thacker, L.H.

    1995-10-17

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode. 4 figs.

  2. Occupancy Grid Map Merging Using Feature Maps

    DTIC Science & Technology

    2010-11-01

    Gonzalez, “Toward a unified Bayesian approach to hybrid metric-topological SLAM,” IEEE Transactions on Robotics, 24(2), April 2008, 259-270. [14] G. Grisetti, C. Stachniss, and W. Burgard, “Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters,” IEEE Transactions on Robotics, 23

  3. Maps and Map Learning in Social Studies

    ERIC Educational Resources Information Center

    Bednarz, Sarah Witham; Acheson, Gillian; Bednarz, Robert S.

    2006-01-01

    Maps and other graphic representations have become more important to geography and geographers. This is due to the development and widespread diffusion of geographic (spatial) technologies. As computers and silicon chips have become more capable and less expensive, geographic information systems (GIS), global positioning satellite…

  4. Did you feel it? Community-made earthquake shaking maps

    USGS Publications Warehouse

    Wald, D.J.; Wald, L.A.; Dewey, J.W.; Quitoriano, Vince; Adams, Elisabeth

    2001-01-01

    Since the early 1990's, the magnitude and location of an earthquake have been available within minutes on the Internet. Now, as a result of work by the U.S. Geological Survey (USGS) and with the cooperation of various regional seismic networks, people who experience an earthquake can go online and share information about its effects to help create a map of shaking intensities and damage. Such 'Community Internet Intensity Maps' (CIIMs) contribute greatly to the rapid assessment of the scope of an earthquake emergency, even in areas lacking seismic instruments.

  5. High-intensity focused ultrasound with large scale spherical phased array for the ablation of deep tumors.

    PubMed

    Ji, Xiang; Bai, Jing-feng; Shen, Guo-feng; Chen, Ya-zhu

    2009-09-01

    Surgical resection is feasible for only a low percentage of deep tumors. However, high-intensity focused ultrasound (HIFU) is beginning to offer a potential noninvasive alternative to conventional therapies for the treatment of deep tumors. In our previous study, a large-scale spherical HIFU phased array was developed to ablate deep tumors. In the current study, taking into account the required focal depth and maximum acoustic power output, 90 identical circular PZT-8 elements (diameter = 1.4 cm, frequency = 1 MHz) were mounted on a spherical shell with a radius of curvature of 18 cm and a diameter of 21 cm. With the developed array, computer simulations and ex vivo experiments were carried out. The simulation results theoretically demonstrate the ability of the array to focus and steer in the specified volume (2 cm × 2 cm × 3 cm) at focal depths of 15 to 18 cm. Ex vivo experiment results also verify the capability of the developed array to ablate deep target tissue by either moving a single focal point or generating multiple foci simultaneously.

  6. Sao Paulo Map Collections.

    ERIC Educational Resources Information Center

    McLean, G. Robert

    1985-01-01

    Describes geographical, subject, and chronological aspects of 25 cartographic collections housed in university, public, special, state, and semi-state libraries in Sao Paulo, Brazil. Three size categories of map holdings (more than 10,000, 1,000-10,000, less than 1,000) are distinguished. A list of 27 Sao Paulo institutions housing map collections…

  7. Managing Vocabulary Mapping Services

    PubMed Central

    Che, Chengjian; Monson, Kent; Poon, Kasey B.; Shakib, Shaun C.; Lau, Lee Min

    2005-01-01

    The efficient management and maintenance of large-scale and high-quality vocabulary mapping is an operational challenge. The 3M Health Information Systems (HIS) Healthcare Data Dictionary (HDD) group developed an information management system to provide controlled mapping services, resulting in improved efficiency and quality maintenance. PMID:16779203

  8. Map of Nasca Geoglyphs

    NASA Astrophysics Data System (ADS)

    Hanzalová, K.; Pavelka, K.

    2013-07-01

    The Czech Technical University in Prague, in cooperation with the University of Applied Sciences in Dresden (Germany), works on the Nasca Project. The cooperation started in 2004 and much work has been done since then. All work is connected with the Nasca lines in southern Peru. The Nasca Project started in 1995 and its main target is the documentation and conservation of the Nasca lines. Most of the project results are presented as a WebGIS application via the Internet. In the face of the impending destruction of the soil drawings, it is possible to preserve this world cultural heritage for posterity, at least in digital form. Creating a map of the Nasca lines is very useful. The map exists in digital form and is also available as a paper map. The map contains the planimetric component, map lettering and altimetry. The thematic content of this map is a vector layer of the geoglyphs in Nasca, Peru. The basis for the planimetry is georeferenced satellite imagery; the altimetry is derived from a digital elevation model. This map was created in ArcGIS software.

  9. Chizu Task Mapping Tool

    SciTech Connect

    2014-07-01

    Chizu is a tool for mapping MPI processes or tasks to physical processors or nodes to optimize communication performance. It takes the communication graph of a High Performance Computing (HPC) application and the interconnection topology of a supercomputer as input. It outputs a new MPI rank-to-processor mapping, which can be used when launching the HPC application.
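
    The general idea of communication-aware task mapping can be sketched as follows. Note this is a hypothetical greedy stand-in, not Chizu's actual algorithm (which the record does not describe): rank pairs that exchange the most traffic are placed on the closest available processors of a 1-D chain topology.

```python
def greedy_map(n_ranks, comm, hops):
    """comm[(i, j)]: message volume between ranks i and j.
    hops[(p, q)]: network distance between processors p and q."""
    placement, free = {}, set(range(n_ranks))
    # Visit rank pairs in order of decreasing traffic.
    for (i, j), _ in sorted(comm.items(), key=lambda kv: -kv[1]):
        for r in (i, j):
            if r not in placement:
                # Pick the free processor minimizing total distance to the
                # ranks already placed (ties broken by lowest processor id).
                best = min(sorted(free), key=lambda p: sum(
                    hops[(p, placement[s])] for s in placement))
                placement[r] = best
                free.remove(best)
    return placement

hops = {(p, q): abs(p - q) for p in range(4) for q in range(4)}
comm = {(0, 1): 10, (2, 3): 8, (0, 2): 1}
print(greedy_map(4, comm, hops))  # heavy pairs end up on adjacent processors
```

    Real mappers solve this as a graph-embedding optimization; the greedy heuristic only illustrates the objective of placing heavy communication on short network paths.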

  10. Mapping the Llano Estacado

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Early maps of North America, prepared in the 18th and early 19th centuries, often depicted the Llano Estacado as a conspicuous blank spot - a terra incognita. A good example is a map of the southwest sketched by Alexander von Humboldt in 1804. In 1830, Stephen F. Austin added little detail to the ...

  11. Acoustic mapping velocimetry

    NASA Astrophysics Data System (ADS)

    Muste, M.; Baranya, S.; Tsubaki, R.; Kim, D.; Ho, H.; Tsai, H.; Law, D.

    2016-05-01

    Knowledge of sediment dynamics in rivers is of great importance for various practical purposes. Despite its high relevance in riverine environment processes, the monitoring of sediment rates remains a major and challenging task for both suspended and bed load estimation. While the measurement of suspended load is currently an active area of testing with nonintrusive technologies (optical and acoustic), bed load measurement has not seen similar progress. This paper describes an innovative combination of measurement techniques and analysis protocols that establishes the proof of concept for a promising technique, labeled herein Acoustic Mapping Velocimetry (AMV). The technique estimates bed load rates in rivers developing bed forms using a nonintrusive measurement approach. The raw information for AMV is collected with acoustic multibeam technology that in turn provides maps of the bathymetry over longitudinal swaths. As long as the acoustic maps can be acquired relatively quickly and the repetition rate for the mapping is commensurate with the movement of the bed forms, successive acoustic maps capture the progression of the bed form movement. Two-dimensional velocity maps associated with the bed form migration are obtained by applying algorithms typically used in particle image velocimetry to acoustic maps converted into gray-level images. Furthermore, use of the obtained acoustic and velocity maps in conjunction with analytical formulations (e.g., the Exner equation) enables estimation of multidirectional bed load rates over the whole imaged area. This paper presents a validation study of the AMV technique using a set of laboratory experiments.
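
    The PIV-style step described above, recovering the displacement of bed forms between two successive gray-level maps, reduces to locating a cross-correlation peak. A minimal sketch on synthetic data (real AMV works on multibeam bathymetry swaths):

```python
import numpy as np

rng = np.random.default_rng(0)
map1 = rng.random((64, 64))                        # first "acoustic map"
map2 = np.roll(map1, shift=(3, 5), axis=(0, 1))   # bed forms moved 3 rows, 5 cols

# Circular cross-correlation via FFT; the peak location gives the shift.
cross = np.fft.ifft2(np.fft.fft2(map1).conj() * np.fft.fft2(map2)).real
dy, dx = np.unravel_index(np.argmax(cross), cross.shape)
print(dy, dx)   # 3 5
```

    Dividing the recovered displacement by the time between acquisitions yields the bed form migration velocity for that interrogation window.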

  12. What do maps show?

    USGS Publications Warehouse

    ,

    1994-01-01

    The purpose of the teaching package is to help students understand and use maps. The U.S. Geological Survey (USGS) has provided the package as a service to educators so that more Americans will learn to understand the world of information on maps. Everything in the package teaches and reinforces geographic skills that are required in your curriculum.

  13. BenMAP Downloads

    EPA Pesticide Factsheets

    Download the current and legacy versions of the BenMAP program. Download configuration and aggregation/pooling/valuation files to estimate benefits. BenMAP-CE is free and open source software, and the source code is available upon request.

  14. The Map Corner.

    ERIC Educational Resources Information Center

    Cheyney, Arnold B.; Capone, Donald L.

    This teaching resource is aimed at helping students develop the skills necessary to locate places on the earth. Designed as a collection of map skill exercises rather than a sequential program of study, this program expects that students have access to and some knowledge of how to use globes, maps, atlases, and encyclopedias. The volume contains 6…

  15. Coupled trivial maps.

    PubMed

    Bunimovich, L. A.; Livi, R.; Martinez-Mekler, G.; Ruffo, S.

    1992-07-01

    The first nontrivial example of coupled map lattices that admits a rigorous analysis in the whole range of the strength of space interactions is considered. This class is generated by one-dimensional maps with a globally attracting superstable periodic trajectory that are coupled by a diffusive nearest-neighbor interaction.
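
    The diffusive nearest-neighbor coupling mentioned above has a standard form. A generic sketch follows; the logistic map is only an illustrative stand-in, since the paper's specific single-site map (with a globally attracting superstable periodic orbit) is not reproduced in this record:

```python
import numpy as np

def cml_step(x, eps, f=lambda u: 4.0 * u * (1.0 - u)):
    """One update of a diffusively coupled map lattice with periodic
    boundaries: x_i' = (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1}))."""
    fx = f(x)
    return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

x = np.linspace(0.1, 0.9, 16)   # 16 lattice sites
for _ in range(100):
    x = cml_step(x, eps=0.3)
print(x.shape)                  # state stays a length-16 lattice in [0, 1]
```

    The coupling strength eps interpolates between independent maps (eps = 0) and strong spatial interaction, which is exactly the range analyzed in the paper.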

  16. Temporal mapping and analysis

    NASA Technical Reports Server (NTRS)

    O'Hara, Charles G. (Inventor); Shrestha, Bijay (Inventor); Vijayaraj, Veeraraghavan (Inventor); Mali, Preeti (Inventor)

    2011-01-01

    A compositing process for selecting spatial data collected over a period of time, creating temporal data cubes from the spatial data, and processing and/or analyzing the data using temporal mapping algebra functions. In some embodiments, the processing includes creating a masked cube from the data cubes and computing a composite from the masked cube by using temporal mapping algebra.

  17. World Stress Map Published

    NASA Astrophysics Data System (ADS)

    Heidbach, Oliver; Müller, Birgit; Fuchs, Karl; Wenzel, Friedemann; Reinecker, John; Tingay, Mark; Sperner, Blanka; Cadet, Jean-Paul; Rossi, Philipp

    2007-11-01

    The World Stress Map (WSM), published in April 2007 by the Commission for the Geological Map of the World and the Heidelberg Academy of Sciences and Humanities, displays the tectonic regime and the orientation of the contemporary maximum horizontal compressional stress at more than 12,000 locations within the Earth's crust. The map is presented in a Mercator projection at a scale of 1:46,000,000.

  18. Seismicity map of the state of North Carolina

    USGS Publications Warehouse

    Reagor, B.G.; Stover, C.W.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  19. Factorized Diffusion Map Approximation.

    PubMed

    Amizadeh, Saeed; Valizadegan, Hamed; Hauskrecht, Milos

    2012-01-01

    Diffusion maps are among the most powerful Machine Learning tools to analyze and work with complex high-dimensional datasets. Unfortunately, the estimation of these maps from a finite sample is known to suffer from the curse of dimensionality. Motivated by other machine learning models for which the existence of structure in the underlying distribution of data can reduce the complexity of estimation, we study and show how the factorization of the underlying distribution into independent subspaces can help us to estimate diffusion maps more accurately. Building upon this result, we propose and develop an algorithm that can automatically factorize a high dimensional data space in order to minimize the error of estimation of its diffusion map, even in the case when the underlying distribution is not decomposable. Experiments on both the synthetic and real-world datasets demonstrate improved estimation performance of our method over the standard diffusion-map framework.
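
    The standard diffusion-map construction that the factorized estimator builds upon can be sketched in a few lines. This is the baseline framework only, not the paper's factorized algorithm: a Gaussian kernel is row-normalized into a Markov transition matrix, and the leading nontrivial eigenvectors give the embedding.

```python
import numpy as np

def diffusion_map(X, eps, dim=2, t=1):
    """Embed rows of X into `dim` diffusion coordinates at diffusion time t."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / eps)                                 # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    idx = order[1:dim + 1]          # skip the trivial constant eigenvector
    return (w.real[idx] ** t) * V.real[:, idx]

X = np.random.default_rng(1).normal(size=(30, 3))
Y = diffusion_map(X, eps=1.0)
print(Y.shape)   # (30, 2)
```

    The curse of dimensionality discussed in the abstract enters through the kernel estimate of P from finite samples; the paper's contribution is to factorize the underlying distribution so that this estimate is more accurate.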

  20. Factorized Diffusion Map Approximation

    PubMed Central

    Amizadeh, Saeed; Valizadegan, Hamed; Hauskrecht, Milos

    2013-01-01

    Diffusion maps are among the most powerful Machine Learning tools to analyze and work with complex high-dimensional datasets. Unfortunately, the estimation of these maps from a finite sample is known to suffer from the curse of dimensionality. Motivated by other machine learning models for which the existence of structure in the underlying distribution of data can reduce the complexity of estimation, we study and show how the factorization of the underlying distribution into independent subspaces can help us to estimate diffusion maps more accurately. Building upon this result, we propose and develop an algorithm that can automatically factorize a high dimensional data space in order to minimize the error of estimation of its diffusion map, even in the case when the underlying distribution is not decomposable. Experiments on both the synthetic and real-world datasets demonstrate improved estimation performance of our method over the standard diffusion-map framework. PMID:25309676

  1. Geologic map of Mars

    USGS Publications Warehouse

    Tanaka, Kenneth L.; Skinner, James A.; Dohm, James M.; Irwin, Rossman P.; Kolb, Eric J.; Fortezzo, Corey M.; Platz, Thomas; Michael, Gregory G.; Hare, Trent M.

    2014-01-01

    This global geologic map of Mars, which records the distribution of geologic units and landforms on the planet's surface through time, is based on unprecedented variety, quality, and quantity of remotely sensed data acquired since the Viking Orbiters. These data have provided morphologic, topographic, spectral, thermophysical, radar sounding, and other observations for integration, analysis, and interpretation in support of geologic mapping. In particular, the precise topographic mapping now available has enabled consistent morphologic portrayal of the surface for global mapping (whereas previously used visual-range image bases were less effective, because they combined morphologic and albedo information and, locally, atmospheric haze). Also, thermal infrared image bases used for this map tended to be less affected by atmospheric haze and thus are reliable for analysis of surface morphology and texture at even higher resolution than the topographic products.

  2. Bodily maps of emotions

    PubMed Central

    Nummenmaa, Lauri; Glerean, Enrico; Hari, Riitta; Hietanen, Jari K.

    2014-01-01

    Emotions are often felt in the body, and somatosensory feedback has been proposed to trigger conscious emotional experiences. Here we reveal maps of bodily sensations associated with different emotions using a unique topographical self-report method. In five experiments, participants (n = 701) were shown two silhouettes of bodies alongside emotional words, stories, movies, or facial expressions. They were asked to color the bodily regions whose activity they felt increasing or decreasing while viewing each stimulus. Different emotions were consistently associated with statistically separable bodily sensation maps across experiments. These maps were concordant across West European and East Asian samples. Statistical classifiers distinguished emotion-specific activation maps accurately, confirming independence of topographies across emotions. We propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps. Perception of these emotion-triggered bodily changes may play a key role in generating consciously felt emotions. PMID:24379370

  3. Iconicity as structure mapping

    PubMed Central

    Emmorey, Karen

    2014-01-01

    Linguistic and psycholinguistic evidence is presented to support the use of structure-mapping theory as a framework for understanding effects of iconicity on sign language grammar and processing. The existence of structured mappings between phonological form and semantic mental representations has been shown to explain the nature of metaphor and pronominal anaphora in sign languages. With respect to processing, it is argued that psycholinguistic effects of iconicity may only be observed when the task specifically taps into such structured mappings. In addition, language acquisition effects may only be observed when the relevant cognitive abilities are in place (e.g. the ability to make structural comparisons) and when the relevant conceptual knowledge has been acquired (i.e. information key to processing the iconic mapping). Finally, it is suggested that iconicity is better understood as a structured mapping between two mental representations than as a link between linguistic form and human experience. PMID:25092669

  4. Bodily maps of emotions.

    PubMed

    Nummenmaa, Lauri; Glerean, Enrico; Hari, Riitta; Hietanen, Jari K

    2014-01-14

    Emotions are often felt in the body, and somatosensory feedback has been proposed to trigger conscious emotional experiences. Here we reveal maps of bodily sensations associated with different emotions using a unique topographical self-report method. In five experiments, participants (n = 701) were shown two silhouettes of bodies alongside emotional words, stories, movies, or facial expressions. They were asked to color the bodily regions whose activity they felt increasing or decreasing while viewing each stimulus. Different emotions were consistently associated with statistically separable bodily sensation maps across experiments. These maps were concordant across West European and East Asian samples. Statistical classifiers distinguished emotion-specific activation maps accurately, confirming independence of topographies across emotions. We propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps. Perception of these emotion-triggered bodily changes may play a key role in generating consciously felt emotions.

  5. Local adaptive tone mapping for video enhancement

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Dai, Min

    2015-03-01

    As new technologies like High Dynamic Range cameras, AMOLED and high resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the tone mapping procedure is limited to a pixel-wise value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass-filtered (LPF) and high-pass-filtered (HPF) bands. A TM function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components by means of a saturation function. Localized content adaptation is further improved by dividing an image into a set of non-overlapped regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated in a camera, video or display pipeline with a minimal hardware budget.
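
    The LPF/HPF split described above can be sketched compactly. The 3×3 box blur and the gamma-style tone curve are illustrative choices, not the paper's actual filter or TM function:

```python
import numpy as np

def tone_map(y, gamma=0.6):
    """Tone-map a luminance image y in [0, 1]: map the low-pass band for
    global contrast, then add the high-pass band back for local detail."""
    p = np.pad(y, 1, mode="edge")
    # 3x3 box blur as a simple low-pass filter.
    lpf = sum(p[i:i + y.shape[0], j:j + y.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    hpf = y - lpf                      # local-detail (high-pass) band
    mapped = lpf ** gamma              # global tone curve on the LPF band only
    return np.clip(mapped + hpf, 0.0, 1.0)

y = np.random.default_rng(2).random((8, 8))
out = tone_map(y)
print(out.shape)   # (8, 8), values clipped to [0, 1]
```

    Because the tone curve never touches the HPF band, local edges keep their original amplitude, which is the key advantage over pixel-wise TM.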

  6. Harmonic generation at high intensities

    SciTech Connect

    Schafer, K.J.; Krause, J.L.; Kulander, K.C.

    1993-06-01

    Atomic electrons subject to intense laser fields can absorb many photons, leading either to multiphoton ionization or the emission of a single, energetic photon which can be a high multiple of the laser frequency. The latter process, high-order harmonic generation, has been observed experimentally using a range of laser wavelengths and intensities over the past several years. Harmonic generation spectra have a generic form: a steep decline for the low order harmonics, followed by a plateau extending to high harmonic order, and finally an abrupt cutoff beyond which no harmonics are discernible. During the plateau the harmonic production is a very weak function of the process order. Harmonic generation is a promising source of coherent, tunable radiation in the XUV to soft X-ray range which could have a variety of scientific and possibly technological applications. Its conversion from an interesting multiphoton phenomenon to a useful laboratory radiation source requires a complete understanding of both its microscopic and macroscopic aspects. We present some recent results on the response of single atoms at intensities relevant to the short pulse experiments. The calculations employ time-dependent methods, which we briefly review in the next section. Following that we discuss the behavior of the harmonics as a function of laser intensity. Two features are notable: the slow scaling of the harmonic intensities with laser intensity, and the rapid variation in the phase of the individual harmonics with respect to harmonic order. We then give a simple empirical formula that predicts the extent of the plateau for a given ionization potential, wavelength and intensity.
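
    The empirical plateau-extent formula mentioned at the end is widely quoted as the cutoff law E_max ≈ I_p + 3.17 U_p, where I_p is the ionization potential and U_p the ponderomotive energy; in practical units U_p [eV] ≈ 9.33×10⁻¹⁴ · I [W/cm²] · λ² [µm²]. A short sketch of the corresponding cutoff-harmonic estimate, using these standard unit conversions:

    ```python
    # Cutoff law for high-order harmonic generation: the highest harmonic
    # photon energy is approximately Ip + 3.17 * Up.

    def ponderomotive_energy_eV(intensity_w_cm2, wavelength_um):
        # Standard practical-units form of the ponderomotive energy.
        return 9.33e-14 * intensity_w_cm2 * wavelength_um ** 2

    def photon_energy_eV(wavelength_um):
        return 1.2398 / wavelength_um   # h*c ~= 1.2398 eV*um

    def cutoff_harmonic(ip_eV, intensity_w_cm2, wavelength_um):
        up = ponderomotive_energy_eV(intensity_w_cm2, wavelength_um)
        n = (ip_eV + 3.17 * up) / photon_energy_eV(wavelength_um)
        n = int(n)
        # Atomic harmonic spectra contain only odd orders.
        return n if n % 2 == 1 else n - 1

    # Example: argon (Ip ~= 15.76 eV) at 1e14 W/cm^2, 0.8 um drive.
    n_cut = cutoff_harmonic(15.76, 1e14, 0.8)
    ```

    With these numbers U_p ≈ 6 eV, so the plateau is predicted to extend to roughly the 21st harmonic; raising the intensity or wavelength pushes the cutoff to higher order, consistent with the slow intensity scaling discussed in the abstract.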

  7. Getting Results with Curriculum Mapping

    ERIC Educational Resources Information Center

    Jacobs, Heidi Hayes

    2004-01-01

    This helpful resource will speed the mapping effort along and apply curriculum mapping to special situations. In this book teachers and administrators offer concrete advice on how to get the most out of curriculum mapping in districts and schools: (1) Steps to implementing mapping procedures and leading the mapping process; (2) Tools and resources…

  8. Mapping surface soil moisture using an aircraft-based passive microwave instrument: algorithm and example

    NASA Astrophysics Data System (ADS)

    Jackson, T. J.; Le Vine, David E.

    1996-10-01

    Microwave remote sensing at L-band (21 cm wavelength) can provide a direct measurement of the surface soil moisture for a range of cover conditions and within reasonable error bounds. Surface soil moisture observations are rare and, therefore, the use of these data in hydrology and other disciplines has not been fully explored or developed. Without satellite-based observing systems, the only way to collect these data in large-scale studies is with an aircraft platform. Recently, aircraft systems such as the push broom microwave radiometer (PBMR) and the electronically scanned thinned array radiometer (ESTAR) have been developed to facilitate such investigations. In addition, field experiments have attempted to collect the passive microwave data as part of an integrated set of hydrologic data. One of the most ambitious of these investigations was the Washita'92 experiment. Preliminary analysis of these data has shown that the microwave observations are indicative of deterministic spatial and temporal variations in the surface soil moisture. Users of these data should be aware of a number of issues related to using aircraft-based systems and practical approaches to applying soil moisture estimation algorithms to large data sets. This paper outlines the process of mapping surface soil moisture from an aircraft-based passive microwave radiometer system for the Washita'92 experiment.
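
    The retrieval chain behind such mapping can be sketched in simplified form: brightness temperature → emissivity → reflectivity → dielectric constant → volumetric moisture. This is not the Washita'92 algorithm itself (which includes vegetation and roughness corrections); the nadir Fresnel inversion and the Topp et al. (1980) dielectric-to-moisture polynomial below are common stand-ins.

    ```python
    import math

    def emissivity(tb_K, t_surface_K):
        # Smooth-surface emissivity from brightness and physical temperature.
        return tb_K / t_surface_K

    def dielectric_from_reflectivity(r):
        # Invert the nadir Fresnel relation r = |(1 - sqrt(eps)) / (1 + sqrt(eps))|^2.
        s = math.sqrt(r)
        return ((1 + s) / (1 - s)) ** 2

    def moisture_from_dielectric(eps):
        # Topp et al. (1980) empirical polynomial (volumetric moisture, m^3/m^3).
        return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

    def retrieve_soil_moisture(tb_K, t_surface_K):
        e = emissivity(tb_K, t_surface_K)
        r = 1.0 - e                  # Kirchhoff: reflectivity = 1 - emissivity
        return moisture_from_dielectric(dielectric_from_reflectivity(r))

    # Wet soil emits less at L-band: a lower brightness temperature over the
    # same surface temperature maps to a higher retrieved moisture.
    theta_wet = retrieve_soil_moisture(220.0, 300.0)
    theta_dry = retrieve_soil_moisture(270.0, 300.0)
    ```

    Running each radiometer footprint through this chain and gridding the results is, in outline, how an aircraft swath becomes a surface soil moisture map.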

  9. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent the errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
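
    The error-matrix quantities defined in the abstract can be computed directly. The counts below are hypothetical; the row/column convention (rows = interpretation, columns = verification) and the binomial confidence limits follow the abstract.

    ```python
    import numpy as np

    # Classification error matrix: rows = interpretation (map),
    # columns = verification (reference); diagonal = correct classifications.
    # Hypothetical counts for three map categories.
    matrix = np.array([
        [50,  3,  2],
        [ 4, 45,  6],
        [ 1,  2, 47],
    ])

    total = matrix.sum()
    correct = np.trace(matrix)
    overall_accuracy = correct / total

    # Commission error: off-diagonal share of each row (interpretation).
    commission = 1.0 - np.diag(matrix) / matrix.sum(axis=1)
    # Omission error: off-diagonal share of each column (verification).
    omission = 1.0 - np.diag(matrix) / matrix.sum(axis=0)

    # Binomial standard error and 95% confidence limits for overall accuracy,
    # i.e. the variance and confidence-limit estimation the abstract calls for.
    se = np.sqrt(overall_accuracy * (1.0 - overall_accuracy) / total)
    ci = (overall_accuracy - 1.96 * se, overall_accuracy + 1.96 * se)
    ```

    The same confidence-limit formula, solved for n, gives the minimum sample size needed to validate a category's accuracy to a desired precision.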

  10. Mapping Surface Features Produced by an Active Landslide

    NASA Astrophysics Data System (ADS)

    Parise, Mario; Gueguen, Erwan; Vennari, Carmela

    2016-10-01

    A large landslide reactivated in December 2013 at Montescaglioso, southern Italy, after 56 hours of rainfall. The landslide disrupted over 500 m of a freeway, involved a few warehouses, a supermarket, and private homes. After the event, it has been p