Science.gov

Sample records for 21-cm intensity mapping

  1. Intensity Mapping During Reionization: 21 cm and Cross-correlations

    NASA Astrophysics Data System (ADS)

    Aguirre, James E.; HERA Collaboration

    2016-01-01

    The first generation of 21 cm epoch of reionization (EoR) experiments is now reaching the sensitivities necessary for a detection of the power spectrum of plausible reionization models, and with the advent of next-generation capabilities (e.g. the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometer Array Phase I Low) will move beyond the power spectrum to imaging of the EoR intergalactic medium. Such datasets provide context to galaxy evolution studies for the earliest galaxies on scales of tens of Mpc, but at present wide, deep galaxy surveys are lacking, and attaining the depth to survey the bulk of galaxies responsible for reionization will be challenging even for JWST. Thus we seek useful cross-correlations with other, more direct tracers of the galaxy population. I review near-term prospects for cross-correlation studies with 21 cm and CO and C II emission, as well as future far-infrared missions such as CALISTO.

  2. Prospects of probing quintessence with HI 21-cm intensity mapping survey

    NASA Astrophysics Data System (ADS)

    Hussain, Azam; Thakur, Shruti; Sarkar, Tapomoy Guha; Sen, Anjan A.

    2016-09-01

    We investigate the prospect of constraining scalar field dark energy models using HI 21-cm intensity mapping surveys. We consider a wide class of coupled scalar field dark energy models whose predictions for the background cosmological evolution differ from the ΛCDM predictions by a few percent. We find that these models can be statistically distinguished from ΛCDM through their imprint on the 21-cm angular power spectrum. At the fiducial z = 1.5, corresponding to a radio-interferometric observation of the post-reionization HI 21 cm signal at a frequency of 568 MHz, these models can in fact be distinguished from the ΛCDM model at the SNR > 3σ level using a 10,000 hr radio observation distributed over 40 pointings of an SKA1-Mid-like radio telescope. We also show that tracker models are more likely to be ruled out against ΛCDM than thawing models. Future radio observations can be instrumental in obtaining tighter constraints on the parameter space of dark energy models and can supplement the bounds obtained from background studies.
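    The pairing of z = 1.5 with 568 MHz is just the redshifted 21 cm rest frequency; a quick sanity check of the conversion (rest frequency 1420.405751768 MHz is the standard value for the HI hyperfine transition):

```python
# Observed frequency of the redshifted 21 cm line: nu_obs = nu_rest / (1 + z).
NU_REST_MHZ = 1420.405751768  # rest frequency of the HI hyperfine transition

def nu_obs_mhz(z):
    """Observed 21 cm frequency (MHz) for emission at redshift z."""
    return NU_REST_MHZ / (1.0 + z)

def z_of_nu(nu_mhz):
    """Redshift probed by a 21 cm observation at frequency nu_mhz."""
    return NU_REST_MHZ / nu_mhz - 1.0

print(round(nu_obs_mhz(1.5)))  # -> 568, matching the survey frequency quoted above
```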

  3. An intensity map of hydrogen 21-cm emission at redshift z ≈ 0.8.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Bandura, Kevin; Peterson, Jeffrey B

    2010-07-22

    Observations of 21-cm radio emission by neutral hydrogen at redshifts z ≈ 0.5 to ≈ 2.5 are expected to provide a sensitive probe of cosmic dark energy. This is particularly true around the onset of acceleration at z ≈ 1, where traditional optical cosmology becomes very difficult because of the infrared opacity of the atmosphere. Hitherto, 21-cm emission has been detected only to z = 0.24. More distant galaxies generally are too faint for individual detections, but it is possible to measure the aggregate emission from many unresolved galaxies in the 'cosmic web'. Here we report a three-dimensional 21-cm intensity field at z = 0.53 to 1.12. We then co-add the neutral-hydrogen (H i) emission from the volumes surrounding about 10,000 galaxies (from the DEEP2 optical galaxy redshift survey). We detect the aggregate 21-cm glow at a significance of approximately 4σ. PMID:20651685
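    The co-addition step works because noise that is uncorrelated between the ~10,000 sight lines averages down as 1/√N while the common HI signal does not. A toy version of such a stack (the signal and noise levels are illustrative, not the actual GBT/DEEP2 values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_gal = 10_000       # number of galaxy positions to stack
signal = 0.05        # per-object 21 cm signal, arbitrary temperature units (illustrative)
noise_rms = 1.0      # per-object map noise, much larger than the signal

# One noisy measurement of the 21 cm field at each galaxy position.
measurements = signal + noise_rms * rng.standard_normal(n_gal)

stack = measurements.mean()
stack_err = noise_rms / np.sqrt(n_gal)  # expected error on the mean: noise / sqrt(N)
print(f"stacked signal = {stack:.3f} +/- {stack_err:.3f}")
print(f"significance ~ {stack / stack_err:.1f} sigma")
```

    With these toy numbers the expected significance is signal/(noise/√N) = 0.05/0.01 = 5σ, even though each individual measurement is buried in noise twenty times larger than the signal.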

  4. Warm dark matter signatures on the 21cm power spectrum: intensity mapping forecasts for SKA

    SciTech Connect

    Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo; Lapi, Andrea E-mail: villaescusa@oats.inaf.it E-mail: lapi@sissa.it

    2015-07-01

    We investigate the impact of warm dark matter (WDM) on 21 cm intensity mapping in the post-reionization Universe at z = 3−5. We perform hydrodynamic simulations for 5 different models: cold dark matter and WDM with a 1, 2, 3, 4 keV (thermal relic) mass, and assign the neutral hydrogen a posteriori using two different methods that both reproduce the observed column density distribution function of neutral hydrogen systems. Contrary to naive expectations, the suppression of power present in the linear and non-linear matter power spectra results in an increase of power in the neutral hydrogen and 21 cm power spectra. This is due to the fact that WDM models lack small-mass halos relative to cold dark matter: in order to distribute the same total amount of neutral hydrogen within the two cosmological models, a larger quantity has to be placed in the most massive halos, which are more biased than in the cold dark matter cosmology. We quantify this effect and assess its significance for the telescope SKA1-LOW, including a realistic noise modeling. The results indicate that we will be able to rule out a 4 keV WDM model with 5000 hours of observations at z > 3, with a statistical significance of >3σ, while a smaller mass of 3 keV, comparable to present-day constraints, can be ruled out at more than 2σ confidence level with 1000 hours of observations at z > 5.
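    The renormalization argument above can be illustrated with a toy halo catalogue. Everything here is an illustrative assumption rather than the paper's actual prescription: a dn/dm ∝ m⁻² mass function, M_HI ∝ M^0.6, and a bias rising logarithmically with halo mass. Removing low-mass halos while holding the total HI budget fixed pushes HI into the more biased massive halos:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy halo catalogue with a steep mass function, dn/dm ~ m^-2 (illustrative).
m = 1e9 * (1.0 + rng.pareto(1.0, 200_000))
m = m[m < 1e13]  # masses between 1e9 and 1e13 Msun

def hi_bias(halo_masses):
    """Effective HI bias for M_HI ~ M^0.6, normalized to a fixed total HI budget."""
    m_hi = halo_masses ** 0.6
    m_hi = m_hi / m_hi.sum()  # same Omega_HI in every model
    b = 0.5 + 0.5 * np.log10(halo_masses / 1e9)  # toy bias, rising with mass
    return np.sum(m_hi * b)

b_cdm = hi_bias(m)            # CDM: halos all the way down to 1e9 Msun
b_wdm = hi_bias(m[m > 1e10])  # WDM: low-mass halos suppressed
print(f"effective HI bias: CDM {b_cdm:.2f}, WDM {b_wdm:.2f}")  # WDM comes out higher
```

    Because the HI weights are renormalized to the same total, excising the low-bias, low-mass halos necessarily raises the HI-weighted bias, which is the sense of the paper's result.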

  6. Cross-correlating 21cm intensity maps with Lyman Break Galaxies in the post-reionization era

    SciTech Connect

    Villaescusa-Navarro, Francisco; Viel, Matteo; Alonso, David; Datta, Kanan K.; Santos, Mário G. E-mail: viel@oats.inaf.it E-mail: kanan@ncra.tifr.res.in E-mail: mgrsantos@uwc.ac.za

    2015-03-01

    We investigate the cross-correlation between the spatial distribution of Lyman Break Galaxies (LBGs) and the 21cm intensity mapping signal at z ∼ 3–5. At these redshifts, galactic feedback is expected to only marginally affect the matter power spectrum, and the neutral hydrogen distribution is independently constrained by quasar spectra. Using a high-resolution N-body simulation, populated with neutral hydrogen a posteriori, we forecast the expected LBG-21cm cross-spectrum and its error for a 21cm field observed by the Square Kilometre Array (SKA1-LOW and SKA1-MID), combined with a spectroscopic LBG survey of the same volume. The cross power can be detected with a signal-to-noise ratio (SNR) up to ∼10 times higher (and down to ∼4 times smaller scales) than the 21cm auto-spectrum for this set-up, with the SNR depending only very weakly on redshift and the LBG population. We also show that while both the 21cm auto- and LBG-21cm cross-spectra can be reliably recovered after the cleaning of smooth-spectrum foreground contamination, only the cross-power is robust to problematic non-smooth foregrounds like polarized synchrotron emission.
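    The robustness of the cross-spectrum follows because noise and foreground residuals that are uncorrelated between the 21cm map and the LBG catalogue average out of the cross-power, while they add directly to the auto-power. A 1-D toy estimator, assuming Gaussian white signal and noise (illustrative amplitudes, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
signal = rng.standard_normal(n)                      # common underlying density modes
field_21cm = signal + 5.0 * rng.standard_normal(n)   # 21cm map: large noise/foreground residual
field_lbg = signal + 1.0 * rng.standard_normal(n)    # LBG overdensity: independent noise

def power(a, b):
    """Cross (or auto) power, averaged over Fourier modes."""
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    return np.mean((fa * np.conj(fb)).real) / len(a)

p_auto = power(field_21cm, field_21cm)   # biased high by the noise power (~26 here)
p_cross = power(field_21cm, field_lbg)   # converges to the signal power (~1)
print(f"auto  = {p_auto:.2f}  (signal + noise)")
print(f"cross = {p_cross:.2f} (signal only, on average)")
```

    The uncorrelated noise only adds scatter, not bias, to the cross-power, which is why contaminants that defeat the auto-spectrum can still leave the cross-spectrum usable.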

  7. Cross-correlation cosmography with intensity mapping of the neutral hydrogen 21 cm emission

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.

    2015-11-01

    The cross-correlation of a foreground density field with two different background convergence fields can be used to measure cosmographic distance ratios and constrain dark energy parameters. We investigate the possibility of performing such measurements using a combination of optical galaxy surveys and neutral hydrogen (HI) intensity mapping surveys, with emphasis on the performance of the planned Square Kilometre Array (SKA). Using HI intensity mapping to probe the foreground density tracer field and/or the background source fields has the advantage of excellent redshift resolution and a longer lever arm achieved by using the lensing signal from high redshift background sources. Our results show that, for our best SKA-optical configuration of surveys, a constant equation of state for dark energy can be constrained to ≃8% for a sky coverage f_sky = 0.5 and assuming a σ(Ω_DE) = 0.03 prior for the dark energy density parameter. We also show that using the cosmic microwave background as the second source plane is not competitive, even when considering a COrE-like satellite.

  8. Cosmology on ultralarge scales with intensity mapping of the neutral hydrogen 21 cm emission: limits on primordial non-Gaussianity.

    PubMed

    Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís

    2013-10-25

    The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ≃ 1–5 are the best hope of accessing the ultralarge-scale information, directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ∼ 1.

  9. Near-term measurements with 21 cm intensity mapping: Neutral hydrogen fraction and BAO at z<2

    SciTech Connect

    Masui, Kiyoshi Wesley; McDonald, Patrick; Pen, Ue-Li

    2010-05-15

    It is shown that 21 cm intensity mapping could be used in the near term to make cosmologically useful measurements. Large scale structure could be detected using existing radio telescopes, or using prototypes for dedicated redshift survey telescopes. This would provide a measure of the mean neutral hydrogen density, using redshift space distortions to break the degeneracy with the linear bias. We find that with only 200 hours of observing time on the Green Bank Telescope, the neutral hydrogen density could be measured to 25% precision at redshift 0.54.

  10. Mapping Cosmic Structure Using 21-cm Hydrogen Signal at Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; GBT 21-cm Intensity Mapping Group

    2011-05-01

    We are using the Green Bank Telescope to make 21-cm intensity maps of cosmic structure in a 0.15 Gpc^3 box at a redshift of z ≈ 1. The intensity mapping technique combines the flux from many galaxies in each pixel, allowing much greater mapping speed than a traditional redshift survey. The measurement is being made at z ≈ 1 to take advantage of a window in frequency around 700 MHz where terrestrial radio frequency interference (RFI) is currently at a minimum. This minimum is due to a reallocation of this frequency band from analog television to wide-area wireless internet and public service usage. We will report progress of our attempt to detect the autocorrelation of the 21-cm signal. The ultimate goal of this mapping is to use Baryon Acoustic Oscillations to provide more precise constraints on dark energy models.

  11. MEASURING BARYON ACOUSTIC OSCILLATIONS ON 21 cm INTENSITY FLUCTUATIONS AT MODERATE REDSHIFTS

    SciTech Connect

    Mao Xiaochun

    2012-06-20

    After reionization, emission in the 21 cm hyperfine transition provides a direct probe of neutral hydrogen distributed in galaxies. In contrast to galaxy redshift surveys, observation of baryon acoustic oscillations in the cumulative 21 cm emission may offer an attractive method for constraining dark energy properties at moderate redshifts. Key to this program are techniques to extract the faint cosmological signal from various contaminants, such as detector noise and continuum foregrounds. In this paper, we investigate the possible systematic and statistical errors in the acoustic scale estimates using ground-based radio interferometers. Based on the simulated 21 cm interferometric measurements, we analyze the performance of a Fourier-space, line-of-sight algorithm in subtracting foregrounds, and further study the observing strategy as a function of instrumental configurations. Measurement uncertainties are presented from a suite of simulations with a variety of parameters, in order to estimate what will be accessible to the future generation of hydrogen surveys. We find that 10 separate interferometers, each of which contains ∼300 dishes, observing an independent patch of the sky and producing an instantaneous field of view (FOV) of ∼100 deg², can be used to make a significant detection of acoustic features over a period of a few years. Compared to optical surveys, the broad bandwidth, wide FOV, and multi-beam observation are all unprecedented capabilities of low-frequency radio experiments.

  12. MAPPING THE DYNAMICS OF COLD GAS AROUND SGR A* THROUGH 21 cm ABSORPTION

    SciTech Connect

    Christian, Pierre; Loeb, Abraham

    2015-11-20

    The presence of a circumnuclear stellar disk around Sgr A* and megamaser systems near other black holes indicates that dense neutral disks can be found in galactic nuclei. We show that depending on their inclination angle, optical depth, and spin temperature, these disks could be observed spectroscopically through 21 cm absorption. Related spectroscopic observations of Sgr A* can determine its HI disk parameters and the possible presence of gaps in the disk. Clumps of dense gas similar to the G2 cloud could also be detected in 21 cm absorption against the Sgr A* radio emission.

  13. The impact of anisotropy from finite light travel time on detecting ionized bubbles in redshifted 21-cm maps

    NASA Astrophysics Data System (ADS)

    Majumdar, Suman; Bharadwaj, Somnath; Datta, Kanan K.; Choudhury, T. Roy

    2011-05-01

    The detection of ionized bubbles around quasars in redshifted 21-cm maps is possibly one of the most direct future probes of reionization. We consider two models for the growth of spherical ionized bubbles to study the apparent shapes of the bubbles in redshifted 21-cm maps, taking into account the finite light travel time (FLTT) across the bubble. In both models, the bubble has a period of rapid growth beyond which its radius either saturates or grows slowly. We find that the FLTT, whose effect is particularly pronounced for large bubbles, causes the bubble’s image to continue to grow well after its actual growth is over. There are two distinct FLTT distortions in the bubble’s image: (i) its apparent centre is shifted along the line of sight (LOS) towards the observer from the quasar and (ii) it is anisotropic along the LOS. The bubble initially appears elongated along the LOS. This is reversed in the later stages of growth, where the bubble appears compressed. The FLTT distortions are expected to have an impact on matched filter bubble detection, where it is most convenient to use a spherical template for the filter. We find that the best matched spherical filter gives a reasonably good estimate of the size and the shift in the centre of the anisotropic image. The mismatch between the spherical filter and the anisotropic image causes a degradation in the signal-to-noise ratio relative to that of a spherical bubble. The degradation is in the range 10-20 per cent during the period of rapid growth when the image appears elongated and is less than 10 per cent in the later stages when the image appears compressed. We conclude that a spherical filter is adequate for bubble detection. The FLTT distortions do not affect the lower limits for bubble detection with 1000 h of GMRT observations. The smallest spherical filter for which a detection is possible has comoving radii 24 and 33 Mpc for 3σ and 5σ detections, respectively, assuming a neutral fraction 0.6 at z ∼ 8.
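    The matched-filter idea can be sketched in one dimension: correlate a unit-norm template with the map, so that white noise contributes unit variance per template and the dot product is directly the S/N. Template mismatch (here a wrong size rather than the paper's shape anisotropy) then degrades the recovered S/N. The bubble amplitude, sizes, and noise level below are illustrative, not the GMRT values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2048
x = np.arange(n)

def tophat(center, radius):
    """Unit-norm top-hat 'bubble' template (1-D stand-in for a spherical filter)."""
    t = (np.abs(x - center) < radius).astype(float)
    return t / np.sqrt(t.sum())

bubble = -25.0 * tophat(n // 2, 40)     # the 'bubble' imprint in the map
data = bubble + rng.standard_normal(n)  # add unit-variance white noise

def filter_snr(radius):
    """Matched-filter S/N for a unit-norm template of the given size."""
    template = tophat(n // 2, radius)
    return abs(np.dot(template, data))  # noise through a unit-norm template has rms 1

print(f"S/N with matched radius 40: {filter_snr(40):.1f}")
print(f"S/N with wrong radius 10:   {filter_snr(10):.1f}")  # degraded by the mismatch
```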

  14. 21 cm Tomography with the ALFALFA Survey

    NASA Astrophysics Data System (ADS)

    Fry, Alexander B.; Boutan, C.; Carroll, P. A.; Hazelton, B.; Morales, M. F.

    2011-01-01

    Neutral hydrogen (HI) 21cm intensity mapping, or HI tomography, is a promising technique being utilized by several upcoming experiments (LOFAR, MWA, SKA). The measurement of volume-averaged neutral hydrogen mass density in synoptic sky surveys can be applied to the study of the HI mass function, the distribution of large scale structure, the reionization of the universe, and the expansion history of the universe through such standard rulers as baryonic acoustic oscillations. In order to prepare for future experiments, in particular the Murchison Widefield Array (MWA), we analyze data from the Arecibo Legacy Fast ALFA (ALFALFA) survey, conducted with the Arecibo L-Band Feed Array (ALFA), to probe the spatial density variations of HI in our local universe (z < 0.06), where data are currently available. We address challenges unique to data of this kind, such as identifying and subtracting out signal from RFI and local galactic sources, and characterizing the ALFA array beam pattern, which dictates sensitivity and resolution.

  15. Combining galaxy and 21-cm surveys

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.; White, Martin; Chang, Tzu-Ching; Holder, Gil; Padmanabhan, Nikhil; Doré, Olivier

    2016-04-01

    Acoustic waves travelling through the early Universe imprint a characteristic scale in the clustering of galaxies, QSOs and intergalactic gas. This scale can be used as a standard ruler to map the expansion history of the Universe, a technique known as baryon acoustic oscillations (BAO). BAO offer a high-precision, low-systematics means of constraining our cosmological model. The statistical power of BAO measurements can be improved if the `smearing' of the acoustic feature by non-linear structure formation is undone in a process known as reconstruction. In this paper, we use low-order Lagrangian perturbation theory to study the ability of 21-cm experiments to perform reconstruction and how augmenting these surveys with galaxy redshift surveys at relatively low number densities can improve performance. We find that the critical number density which must be achieved in order to benefit 21-cm surveys is set by the linear theory power spectrum near its peak, and corresponds to densities achievable by upcoming surveys of emission line galaxies such as eBOSS and DESI. As part of this work, we analyse reconstruction within the framework of Lagrangian perturbation theory with local Lagrangian bias, redshift-space distortions, k-dependent noise and anisotropic filtering schemes.

  16. Modelling the cosmic neutral hydrogen from DLAs and 21-cm observations

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Hamsa; Choudhury, T. Roy; Refregier, Alexandre

    2016-05-01

    We review the analytical prescriptions in the literature to model the 21-cm (emission line surveys/intensity mapping experiments) and Damped Lyman-Alpha (DLA) observations of neutral hydrogen (H I) in the post-reionization universe. While these two sets of prescriptions have typically been applied separately for the two probes, we attempt to connect these approaches to explore the consequences for the distribution and evolution of H I across redshifts. We find that a physically motivated, 21-cm-based prescription, extended to account for the DLA observables provides a good fit to the majority of the available data, but cannot accommodate the recent measurement of the clustering of DLAs at z ˜ 2.3. This highlights a tension between the DLA bias and the 21-cm measurements, unless there is a very significant change in the nature of H I-bearing systems across redshifts 0-3. We discuss the implications of our findings for the characteristic host halo masses of the DLAs and the power spectrum of 21-cm intensity fluctuations.

  17. Probing lepton asymmetry with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: oyamayo@post.kek.jp E-mail: tomot@cc.saga-u.ac.jp

    2014-09-01

    We investigate how accurately we can constrain the lepton number asymmetry ξ_ν = μ_ν/T_ν in the Universe by using future observations of 21 cm line fluctuations and the cosmic microwave background (CMB). We find that combinations of the 21 cm line and CMB observations can constrain the lepton asymmetry better than big-bang nucleosynthesis (BBN). Additionally, we discuss constraints on ξ_ν in the presence of some extra radiation, and show that the 21 cm line observations can substantially improve the constraints obtained by the CMB alone, and allow us to distinguish the effects of the lepton asymmetry from those of extra radiation.

  18. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, and (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first-generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation.
    Finally, (3) the independent constraints most likely to aid in the interpretation

  19. Constraining dark matter through 21-cm observations

    NASA Astrophysics Data System (ADS)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case of up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and by future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.

  20. Baryon Acoustic Oscillation Intensity Mapping of Dark Energy

    NASA Astrophysics Data System (ADS)

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B.; McDonald, Patrick

    2008-03-01

    The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called “dark energy.” To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10⁹ individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.

  1. Lensing of 21-cm fluctuations by primordial gravitational waves.

    PubMed

    Book, Laura; Kamionkowski, Marc; Schmidt, Fabian

    2012-05-25

    Weak-gravitational-lensing distortions to the intensity pattern of 21-cm radiation from the dark ages can be decomposed geometrically into curl and curl-free components. Lensing by primordial gravitational waves induces a curl component, while the contribution from lensing by density fluctuations is strongly suppressed. Angular fluctuations in the 21-cm background extend to very small angular scales, and measurements at different frequencies probe different shells in redshift space. There is thus a huge trove of information with which to reconstruct the curl component of the lensing field, allowing tensor-to-scalar ratios conceivably as small as r ∼ 10⁻⁹, far smaller than those currently accessible, to be probed. PMID:23003237

  2. Redundant Array Configurations for 21 cm Cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Parsons, Aaron R.

    2016-08-01

    Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.
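    The benefit of redundancy is easy to quantify: in a gridded layout the number of antenna pairs grows as N(N−1)/2, while the number of distinct baseline vectors grows far more slowly, so each baseline group is measured many times over and redundant calibration becomes over-constrained. A small counting sketch (a square grid for simplicity, not HERA's actual hexagonal layout):

```python
from collections import Counter
from itertools import combinations

def baseline_redundancy(n):
    """Count antenna pairs vs. unique baseline vectors for an n x n square grid."""
    antennas = [(i, j) for i in range(n) for j in range(n)]
    groups = Counter()
    for (x1, y1), (x2, y2) in combinations(antennas, 2):
        dx, dy = x2 - x1, y2 - y1
        if dx < 0 or (dx == 0 and dy < 0):
            dx, dy = -dx, -dy  # a baseline and its negation sample the same mode
        groups[(dx, dy)] += 1
    n_pairs = sum(groups.values())
    return n_pairs, len(groups)

pairs, unique = baseline_redundancy(8)  # 64-antenna grid
print(f"{pairs} antenna pairs but only {unique} unique baselines")  # 2016 vs 112
```

    For an n × n grid the unique-baseline count is 2n(n−1), so the ratio of measurements to unknowns grows roughly linearly with the number of antennas, which is what drives the calibration and sensitivity advantages described above.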

  3. Angular 21 cm power spectrum of a scaling distribution of cosmic string wakes

    SciTech Connect

    Hernández, Oscar F.; Wang, Yi; Brandenberger, Robert; Fong, José E-mail: wangyi@physics.mcgill.ca E-mail: jose.fong@ens-lyon.fr

    2011-08-01

    Cosmic string wakes lead to a large signal in 21 cm redshift maps at redshifts larger than that corresponding to reionization. Here, we compute the angular power spectrum of 21 cm radiation as predicted by a scaling distribution of cosmic strings whose wakes have undergone shock heating.

  4. Overcoming the Challenges of 21cm Cosmology

    NASA Astrophysics Data System (ADS)

    Pober, Jonathan

    The highly-redshifted 21cm line of neutral hydrogen is one of the most promising and unique probes of cosmology for the next decade and beyond. The past few years have seen a number of dedicated experiments targeting the 21cm signal from the Epoch of Reionization (EoR) begin operation, including the LOw-Frequency ARray (LOFAR), the Murchison Widefield Array (MWA), and the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). For these experiments to yield cosmological results, they require new calibration and analysis algorithms which will need to achieve unprecedented levels of separation between the 21cm signal and contaminating foreground emission. Although much work has been spent developing these algorithms over the past decade, their success or failure will ultimately depend on their ability to overcome the complications inherent in real-world systems. The work in this dissertation is closely tied to the late-stage commissioning and early observations with PAPER. The first two chapters focus on developing calibration algorithms to overcome unique problems arising in the PAPER system. To test these algorithms, I rely not only on simulations, but on commissioning observations, ultimately tying the success of the algorithm to its performance on actual, celestial data. The first algorithm works to correct gain drifts in the PAPER system caused by the heating and cooling of various components (the amplifiers and above-ground co-axial cables, in particular). It is shown that a simple measurement of the ambient temperature can remove ∼10% gain fluctuations in the observed brightness of calibrator sources. This result is highly encouraging for the ability of PAPER to remove a potentially dominant systematic in its power spectrum and cataloging measurements without resorting to a complicated system overhaul.
The second new algorithm developed in this dissertation solves a major calibration challenge not

  5. BRIGHT SOURCE SUBTRACTION REQUIREMENTS FOR REDSHIFTED 21 cm MEASUREMENTS

    SciTech Connect

    Datta, A.; Bowman, J. D.; Carilli, C. L.

    2010-11-20

    The H I 21 cm transition line is expected to be an important probe into the cosmic dark ages and epoch of reionization. Foreground source removal is one of the principal challenges for the detection of this signal. This paper investigates the extragalactic point source contamination and how accurately bright sources (≳1 Jy) must be removed in order to detect 21 cm emission with upcoming radio telescopes such as the Murchison Widefield Array. We consider the residual contamination in 21 cm maps and power spectra due to position errors in the sky model for bright sources, as well as frequency-independent calibration errors. We find that a source position accuracy of 0.1 arcsec will suffice for detection of the H I power spectrum. For calibration errors, 0.05% accuracy in antenna gain amplitude is required in order to detect the cosmic signal. Both sources of subtraction error produce residuals that are localized to small angular scales, k⊥ ≳ 0.05 Mpc⁻¹, in the two-dimensional power spectrum.

  6. The 21 cm signature of cosmic string wakes

    SciTech Connect

    Brandenberger, Robert H.; Danos, Rebecca J.; Hernández, Oscar F.; Holder, Gilbert P. E-mail: rjdanos@physics.mcgill.ca E-mail: holder@physics.mcgill.ca

    2010-12-01

    We discuss the signature of a cosmic string wake in 21 cm redshift surveys. Since 21 cm surveys probe higher redshifts than optical large-scale structure surveys, the signatures of cosmic strings are more manifest in 21 cm maps than they are in optical galaxy surveys. We find that, provided the tension of the cosmic string exceeds a critical value (which depends on both the redshift when the string wake is created and the redshift of observation), a cosmic string wake will generate an emission signal with a brightness temperature which approaches a limiting value which at a redshift of z+1 = 30 is close to 400 mK in the limit of large string tension. The signal will have a specific signature in position space: the excess 21 cm radiation will be confined to a wedge-shaped region whose tip corresponds to the position of the string, whose planar dimensions are set by the planar dimensions of the string wake, and whose thickness (in the redshift direction) depends on the string tension. For wakes created at z_i + 1 = 10³ and observed at a redshift of z+1 = 30, the critical value of the string tension is Gμ = 6 × 10⁻⁷, and it decreases linearly with redshift (for wakes created at the time of equal matter and radiation, the critical value is a factor of two lower at the same redshift). For smaller tensions, cosmic strings lead to an observable absorption signal with the same wedge geometry.

  7. The cross correlation between the 21-cm radiation and the CMB lensing field: a new cosmological signal

    SciTech Connect

    Vallinotto, Alberto

    2011-01-01

    The measurement of Baryon Acoustic Oscillations through the 21-cm intensity mapping technique at redshift z ≤ 4 has the potential to tightly constrain the evolution of dark energy. Crucial to this experimental effort is the determination of the biasing relation connecting fluctuations in the density of neutral hydrogen (HI) with those of the underlying dark matter field. In this work I show how the HI bias relevant to these 21-cm intensity mapping experiments can successfully be measured by cross-correlating their signal with the lensing signal obtained from CMB observations. In particular I show that, combining CMB lensing maps from Planck with 21-cm field measurements carried out with an instrument similar to the Cylindrical Radio Telescope, this cross-correlation signal can be detected with a signal-to-noise (S/N) ratio of more than 5. Breaking down the signal arising from different redshift bins of thickness Δz = 0.1, this signal constrains the large-scale neutral hydrogen bias and its evolution at the 4σ level.

  8. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)

    NASA Astrophysics Data System (ADS)

    Vanderlinde, Keith; Chime Collaboration

    2014-04-01

    Hydrogen Intensity (HI) mapping uses redshifted 21cm emission from neutral hydrogen as a 3D tracer of Large Scale Structure (LSS) in the Universe. Imprinted in the LSS is a remnant of the acoustic waves which propagated through the primordial plasma. This feature, the Baryon Acoustic Oscillation (BAO), has a characteristic scale of ~150 co-moving Mpc, which appears in the spatial correlation of LSS. By charting the evolution of this scale over cosmic time, we trace the expansion history of the Universe, constraining the Dark Energy equation of state as it becomes a significant component, particularly at redshifts poorly probed by current BAO surveys. In this talk I will introduce CHIME, a transit radio interferometer designed specifically for this purpose. CHIME is an ambitious new telescope, being built in British Columbia, Canada, and composed of five 20m x 100m parabolic reflectors which focus radiation in one direction (east-west) while interferometry is used to resolve beams in the other (north-south). Earth rotation sweeps them across the sky, resulting in complete daily coverage of the northern celestial hemisphere. Commissioning is underway on the 40 x 37m "Pathfinder" telescope, and the full sized 100m x 100m instrument is funded and under development.

  9. MEASUREMENT OF 21 cm BRIGHTNESS FLUCTUATIONS AT z ≈ 0.8 IN CROSS-CORRELATION

    SciTech Connect

    Masui, K. W.; Switzer, E. R.; Calin, L.-M.; Pen, U.-L.; Shaw, J. R.; Banavar, N.; Bandura, K.; Blake, C.; Chang, T.-C.; Liao, Y.-W.; Chen, X.; Li, Y.-C.; Natarajan, A.; Peterson, J. B.; Voytek, T. C.

    2013-01-20

    In this Letter, 21 cm intensity maps acquired at the Green Bank Telescope are cross-correlated with large-scale structure traced by galaxies in the WiggleZ Dark Energy Survey. The data span the redshift range 0.6 < z < 1 over two fields totaling ≈41 deg² and 190 hr of radio integration time. The cross-correlation constrains Ω_HI b_HI r = [0.43 ± 0.07(stat.) ± 0.04(sys.)] × 10⁻³, where Ω_HI is the neutral hydrogen (H I) fraction, r is the galaxy-hydrogen correlation coefficient, and b_HI is the H I bias parameter. This is the most precise constraint on neutral hydrogen density fluctuations in a challenging redshift range. Our measurement improves the previous 21 cm cross-correlation at z ≈ 0.8 both in its precision and in the range of scales probed.
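Many of the records in this listing rest on cross-correlating a 21 cm map with another tracer of the same structure. A minimal FFT-based sketch of a cross-power estimate on two synthetic 2D maps (all inputs made up; not any survey's actual pipeline) looks like:

```python
import numpy as np

# Illustrative sketch: estimate the cross-power spectrum of two 2D maps that
# share a common component, as in 21 cm x galaxy cross-correlations.
rng = np.random.default_rng(1)
n = 128
common = rng.standard_normal((n, n))                 # shared large-scale structure
map_a = common + 0.5 * rng.standard_normal((n, n))   # "21 cm" map with noise
map_b = common + 0.5 * rng.standard_normal((n, n))   # "galaxy" map with noise

fa = np.fft.fft2(map_a)
fb = np.fft.fft2(map_b)
cross = (fa * np.conj(fb)).real / n**2               # 2D cross-power

# Bin by radial wavenumber |k| to get a 1D cross-spectrum.
kx = np.fft.fftfreq(n)
k = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
bins = np.linspace(0.0, k.max(), 10)
idx = np.digitize(k.ravel(), bins)
p_cross = np.array([cross.ravel()[idx == i].mean() for i in range(1, len(bins))])

print(p_cross)
```

The uncorrelated noise terms average away over the many modes in each bin, so the binned cross-power tracks only the shared component. This is the property that makes cross-correlation a robust check on a putative 21 cm detection.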

  10. Prospects for clustering and lensing measurements with forthcoming intensity mapping and optical surveys

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.; Metcalf, R. B.

    2016-06-01

    We explore the potential of using intensity mapping surveys (MeerKAT, SKA) and optical galaxy surveys (DES, LSST) to detect H I clustering and weak gravitational lensing of 21 cm emission in auto- and cross-correlation. Our forecasts show that high-precision measurements of the clustering and lensing signals can be made in the near future using the intensity mapping technique. Such studies can be used to test the intensity mapping method, and constrain parameters such as the H I density Ω_HI, the H I bias b_HI and the galaxy-H I correlation coefficient r_HI-g.

  11. Cross-correlation of the cosmic 21-cm signal and Lyman α emitters during reionization

    NASA Astrophysics Data System (ADS)

    Sobacchi, Emanuele; Mesinger, Andrei; Greig, Bradley

    2016-07-01

    Interferometry of the cosmic 21-cm signal is set to revolutionize our understanding of the Epoch of Reionization (EoR), eventually providing 3D maps of the early Universe. Initial detections however will be low signal to noise, limited by systematics. To confirm a putative 21-cm detection, and check the accuracy of 21-cm data analysis pipelines, it would be very useful to cross-correlate against a genuine cosmological signal. The most promising cosmological signals are wide-field maps of Lyman α emitting galaxies (LAEs), expected from the Subaru Hyper-Suprime Cam ultradeep field (UDF). Here we present estimates of the correlation between LAE maps at z ˜ 7 and the 21-cm signal observed by both the Low Frequency Array (LOFAR) and the planned Square Kilometre Array Phase 1 (SKA1). We adopt a systematic approach, varying both: (i) the prescription of assigning LAEs to host haloes; and (ii) the large-scale structure of neutral and ionized regions (i.e. EoR morphology). We find that the LAE-21 cm cross-correlation is insensitive to (i), thus making it a robust probe of the EoR. A 1000 h observation with LOFAR would be sufficient to discriminate at ≳1σ a fully ionized Universe from one with a mean neutral fraction of x̄_HI ≈ 0.50, using the LAE-21 cm cross-correlation function on scales of R ≈ 3-10 Mpc. Unlike LOFAR, whose detection of the LAE-21 cm cross-correlation is limited by noise, SKA1 is mostly limited by ignorance of the EoR morphology. However, the planned 100 h wide-field SKA1-Low survey will be sufficient to discriminate an ionized Universe from one with x̄_HI = 0.25, even with maximally pessimistic assumptions.

  12. RESEARCH PAPER: Foreground removal of 21 cm fluctuation with multifrequency fitting

    NASA Astrophysics Data System (ADS)

    He, Li-Ping

    2009-06-01

    The 21 centimeter (21 cm) line emission from neutral hydrogen in the intergalactic medium (IGM) at high redshifts is strongly contaminated by foreground sources such as the diffuse Galactic synchrotron emission and free-free emission from the Galaxy, as well as emission from extragalactic radio sources, thus making its observation very complicated. However, the 21 cm signal can be recovered through its structure in frequency space, as the power spectrum of the foreground contamination is expected to be smooth over a wide band in frequency space while the 21 cm fluctuations vary significantly. We use a simple polynomial fitting to reconstruct the 21 cm signal around four frequencies 50, 100, 150 and 200 MHz with an especially small channel width of 20 kHz. Our calculations show that this multifrequency fitting approach can effectively recover the 21 cm signal in the frequency range 100-200 MHz. However, this method does not work well around 50 MHz because of the low intensity of the 21 cm signal at this frequency. We also show that the fluctuation of detector noise can be suppressed to a very low level by taking long integration times, which means that we can reach a sensitivity of ≈10 mK at 150 MHz with 40 antennas in 120 hours of observations.
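The multifrequency fitting idea above (a spectrally smooth foreground versus a rapidly fluctuating 21 cm term) can be sketched in a few lines. The power-law foreground and noise levels below are made-up illustrations, not the paper's actual simulation:

```python
import numpy as np

# Illustrative sketch: remove a spectrally smooth foreground along a single
# line of sight by fitting a low-order polynomial in log-frequency, leaving
# the rapidly fluctuating 21 cm term in the residual.
rng = np.random.default_rng(0)
freq = np.linspace(100e6, 200e6, 500)                # Hz, 100-200 MHz band
foreground = 300.0 * (freq / 150e6) ** -2.6          # smooth synchrotron-like power law (K)
signal_21cm = 0.01 * rng.standard_normal(freq.size)  # ~10 mK fluctuations (K)
observed = foreground + signal_21cm

# Fit a 3rd-order polynomial in log(freq) vs log(T) and subtract it.
coeffs = np.polyfit(np.log(freq), np.log(observed), deg=3)
smooth_model = np.exp(np.polyval(coeffs, np.log(freq)))
residual = observed - smooth_model

# The residual should track the injected mK-level signal.
rms_mK = 1e3 * np.std(residual - signal_21cm)
print(f"residual mismatch rms: {rms_mK:.2f} mK")
```

The fit absorbs the smooth foreground (hundreds of K) while barely touching the fluctuating signal, which is the separation the paper exploits.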

  13. Cross-correlation of 21 cm and soft X-ray backgrounds during the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Liang, Jun-Min; Mao, Xiao-Chun; Qin, Bo

    2016-08-01

    The cross-correlation between the high-redshift 21 cm background and the Soft X-ray Background (SXB) of the Universe may provide an additional probe of the Epoch of Reionization. Here we use semi-numerical simulations to create 21 cm and soft X-ray intensity maps and construct their cross power spectra. Our results indicate that the cross power spectra are sensitive to the thermal and ionizing states of the intergalactic medium (IGM). The 21 cm background correlates positively to the SXB on large scales during the early stages of the reionization. However as the reionization develops, these two backgrounds turn out to be anti-correlated with each other when more than ˜ 15% of the IGM is ionized in a warm reionization scenario. The anti-correlated power reaches its maximum when the neutral fraction declines to 0.2-0.5. Hence, the trough in the cross power spectrum might be a useful tool for tracing the growth of HII regions during the middle and late stages of the reionization. We estimate the detectability of the cross power spectrum based on the abilities of the Square Kilometre Array and the Wide Field X-ray Telescope (WFXT), and find that to detect the cross power spectrum, the pixel noise of X-ray images has to be at least 4 orders of magnitude lower than that of the WFXT deep survey.

  15. A comparison of neutral hydrogen 21 cm observations with UV and optical absorption-line measurements

    NASA Technical Reports Server (NTRS)

    Giovanelli, R.; York, D. G.; Shull, J. M.; Haynes, M. P.

    1978-01-01

    Several absorption components detected in visible or UV lines have been identified with emission features in new high-resolution, high signal-to-noise 21 cm observations. Stars for which direct overlap is obtained are HD 28497, lambda Ori, mu Col, HD 50896, rho Leo, HD 93521, and HD 219881. With the use of the inferred H I column densities from 21 cm profiles, rather than the integrated column densities obtained from L-alpha, more reliable densities can be derived from the existence of molecular hydrogen. Hence the cloud thicknesses are better determined; and 21 cm emission maps near these stars can be used to obtain dimensions on the plane of the sky. It is now feasible to derive detailed geometries for isolated clumps of gas which produce visual absorption features.

  16. Precise measurements of primordial power spectrum with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: oyamayo@post.kek.jp E-mail: tomot@cc.saga-u.ac.jp

    2013-10-01

    We discuss the issue of how precisely we can measure the primordial power spectrum by using future observations of 21 cm fluctuations and the cosmic microwave background (CMB). For this purpose, we investigate projected constraints on the quantities characterizing the primordial power spectrum: the spectral index n_s, its running α_s and even its higher-order running β_s. We show that future 21 cm observations in combination with the CMB would accurately measure the above-mentioned observables of the primordial power spectrum. We also discuss the implications for some explicit inflationary models.

  17. The 21 cm signature of a cosmic string loop

    SciTech Connect

    Pagano, Michael; Brandenberger, Robert E-mail: rhb@physics.mcgill.ca

    2012-05-01

    Cosmic string loops lead to nonlinear baryon overdensities at early times, even before the time which in the standard ΛCDM model corresponds to the time of reionization. These overdense structures lead to signals in 21 cm redshift surveys at large redshifts. In this paper, we calculate the amplitude and shape of the string loop-induced 21 cm brightness temperature. We find that a string loop leads to a roughly elliptical region in redshift space with extra 21 cm emission. The excess brightness temperature for strings with a tension close to the current upper bound can be as high as 1 deg K for string loops generated at early cosmological times (times comparable to the time of equal matter and radiation) and observed at a redshift of z+1 = 30. The angular extent of these predicted 'bright spots' is x′. These signals should be detectable in upcoming high-redshift 21 cm surveys. We also discuss the application of our results to global monopoles and primordial black holes.

  18. The rise of the first stars: Supersonic streaming, radiative feedback, and 21-cm cosmology

    NASA Astrophysics Data System (ADS)

    Barkana, Rennan

    2016-07-01

    The supersonic streaming velocity between the dark matter and gas enhanced large-scale clustering and, if early 21-cm fluctuations were dominated by small galactic halos, produced a prominent pattern on 100 Mpc scales. Work in this field, focused on understanding the whole era of reionization and cosmic dawn with analytical models and numerical simulations, is likely to grow in intensity and importance, as the theoretical predictions are finally expected to confront 21-cm observations in the coming years.

  19. Precision measurement of cosmic magnification from 21 cm emitting galaxies

    SciTech Connect

    Zhang, Pengjie; Pen, Ue-Li

    2005-04-01

    We show how precision lensing measurements can be obtained through the lensing magnification effect in high-redshift 21 cm emission from galaxies. Normally, cosmic magnification measurements are seriously complicated by galaxy clustering. With precise redshifts obtained from the 21 cm emission line wavelength, one can correlate galaxies at different source planes, or exclude close pairs, to eliminate such contamination. We provide forecasts for future surveys, specifically the SKA and CLAR. SKA can achieve percent precision on the dark matter power spectrum and the galaxy-dark matter cross-correlation power spectrum, while CLAR can measure an accurate cross-correlation power spectrum. The neutral hydrogen fraction was most likely significantly higher at high redshifts, which substantially increases the number of observed galaxies, so that CLAR too can measure the dark matter lensing power spectrum. SKA can also allow precise measurement of the lensing bispectrum.

  20. The future of primordial features with 21 cm tomography

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Meerburg, P. Daniel; Münchmeyer, Moritz

    2016-09-01

    Detecting a deviation from a featureless primordial power spectrum of fluctuations would give profound insight into the physics of the primordial Universe. Depending on their nature, primordial features can either provide direct evidence for the inflation scenario or pin down details of the inflation model. Thus far, using the cosmic microwave background (CMB) we have only been able to put stringent constraints on the amplitude of features, but no significant evidence has been found for such signals. Here we explore the limit of the experimental reach in constraining such features using 21 cm tomography at high redshift. A measurement of the 21 cm power spectrum from the Dark Ages is generally considered the ideal experiment for early Universe physics, with potential access to a large number of modes. We consider three different categories of theoretically motivated models: sharp feature models, resonance models, and standard clock models. We study the improvements on bounds on features as a function of the total number of observed modes and identify parameter degeneracies. The detectability depends critically on the amplitude, frequency and scale-location of the features, as well as the angular and redshift resolution of the experiment. We quantify these effects by considering different fiducial models. Our forecast shows that a cosmic variance limited 21 cm experiment measuring fluctuations in the redshift range 30 ≤ z ≤ 100 with a 0.01-MHz bandwidth and sub-arcminute angular resolution could potentially improve bounds by several orders of magnitude for most features compared to current Planck bounds. At the same time, 21 cm tomography also opens up a unique window into features that are located on very small scales.

  1. The wedge bias in reionization 21-cm power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Jensen, Hannes; Majumdar, Suman; Mellema, Garrelt; Lidz, Adam; Iliev, Ilian T.; Dixon, Keri L.

    2016-02-01

    A proposed method for dealing with foreground emission in upcoming 21-cm observations from the epoch of reionization is to limit observations to an uncontaminated window in Fourier space. Foreground emission can be avoided in this way, since it is limited to a wedge-shaped region in k∥, k⊥ space. However, the power spectrum is anisotropic owing to redshift-space distortions from peculiar velocities. Consequently, the 21-cm power spectrum measured in the foreground avoidance window - which samples only a limited range of angles close to the line-of-sight direction - differs from the full redshift-space spherically averaged power spectrum which requires an average over all angles. In this paper, we calculate the magnitude of this `wedge bias' for the first time. We find that the bias amplifies the difference between the real-space and redshift-space power spectra. The bias is strongest at high redshifts, where measurements using foreground avoidance will overestimate the redshift-space power spectrum by around 100 per cent, possibly obscuring the distinctive rise and fall signature that is anticipated for the spherically averaged 21-cm power spectrum. In the later stages of reionization, the bias becomes negative, and smaller in magnitude (≲20 per cent).
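The foreground-avoidance window described above amounts to a simple mask in (k⊥, k∥) space. The wedge slope and low-k∥ cut below are hypothetical placeholders (in a real analysis they are set by the horizon limit and bandpass at the observed redshift), so this is only a sketch of the geometry:

```python
import numpy as np

# Illustrative sketch: build a boolean "foreground avoidance window" mask in
# (k_perp, k_par) space. The slope and cut are made-up placeholders.
WEDGE_SLOPE = 3.0   # hypothetical k_par / k_perp wedge boundary
K_PAR_MIN = 0.1     # hypothetical cut on spectrally smooth low-k_par modes

k_perp = np.linspace(0.01, 0.3, 60)    # Mpc^-1
k_par = np.linspace(0.01, 1.0, 200)    # Mpc^-1
kx, ky = np.meshgrid(k_perp, k_par, indexing="ij")

# Keep only modes above both the wedge line and the low-k_par cut.
window = (ky > WEDGE_SLOPE * kx) & (ky > K_PAR_MIN)

frac_clean = window.mean()
print(f"fraction of sampled k-space in the clean window: {frac_clean:.2f}")
```

Because the surviving modes lie close to the line of sight, averaging them is what produces the angle-dependent "wedge bias" relative to the full spherically averaged power spectrum.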

  2. A Bayesian analysis of redshifted 21-cm H I signal and foregrounds: simulations for LOFAR

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhik; Koopmans, Léon V. E.; Chapman, E.; Jelić, V.

    2015-09-01

    Observations of the epoch of reionization (EoR) using the 21-cm hyperfine emission of neutral hydrogen (H I) promise to open an entirely new window on the formation of the first stars, galaxies and accreting black holes. In order to characterize the weak 21-cm signal, we need to develop imaging techniques that can reconstruct the extended emission very precisely. Here, we present an inversion technique for LOw Frequency ARray (LOFAR) baselines at the North Celestial Pole (NCP), based on a Bayesian formalism with optimal spatial regularization, which is used to reconstruct the diffuse foreground map directly from the simulated visibility data. We notice that the spatial regularization de-noises the images to a large extent, allowing one to recover the 21-cm power spectrum over a considerable k⊥-k∥ space in the range 0.03 Mpc-1 < k⊥ < 0.19 Mpc-1 and 0.14 Mpc-1 < k∥ < 0.35 Mpc-1 without subtracting the noise power spectrum. We find that, in combination with using generalized morphological component analysis (GMCA), a non-parametric foreground removal technique, we can mostly recover the spherical average power spectrum within 2σ statistical fluctuations for an input Gaussian random root-mean-square noise level of 60 mK in the maps after 600 h of integration over a 10-MHz bandwidth.

  3. Measuring the Cosmological 21 cm Monopole with an Interferometer

    NASA Astrophysics Data System (ADS)

    Presley, Morgan E.; Liu, Adrian; Parsons, Aaron R.

    2015-08-01

    A measurement of the cosmological 21 cm signal remains a promising but as-of-yet unattained ambition of radio astronomy. A positive detection would provide direct observations of key unexplored epochs of our cosmic history, including the cosmic dark ages and reionization. In this paper, we concentrate on measurements of the spatial monopole of the 21 cm brightness temperature as a function of redshift (the "global signal"). Most global experiments to date have been single-element experiments. In this paper, we show how an interferometer can be designed to be sensitive to the monopole mode of the sky, thus providing an alternate approach to accessing the global signature. We provide simple rules of thumb for designing a global signal interferometer and use numerical simulations to show that a modest array of tightly packed antenna elements with moderately sized primary beams (FWHM of ∼40°) can compete with typical single-element experiments in their ability to constrain phenomenological parameters pertaining to reionization and the pre-reionization era. We also provide a general data analysis framework for extracting the global signal from interferometric measurements (with analysis of single-element experiments arising as a special case) and discuss trade-offs with various data analysis choices. Given that interferometric measurements are able to avoid a number of systematics inherent in single-element experiments, our results suggest that interferometry ought to be explored as a complementary way to probe the global signal.

  4. Probing patchy reionization through τ-21 cm correlation statistics

    SciTech Connect

    Meerburg, P. Daniel; Spergel, David N.; Dvorkin, Cora E-mail: dns@astro.princeton.edu

    2013-12-20

    We consider the cross-correlation between free electrons and neutral hydrogen during the epoch of reionization (EoR). The free electrons are traced by the optical depth to reionization τ, while the neutral hydrogen can be observed through 21 cm photon emission. As expected, this correlation is sensitive to the detailed physics of reionization. Foremost, if reionization occurs through the merger of relatively large halos hosting an ionizing source, the free electrons and neutral hydrogen are anticorrelated for most of the reionization history. A positive contribution to the correlation can occur when the halos that can form an ionizing source are small. A measurement of this sign change in the cross-correlation could help disentangle the bias and the ionization history. We estimate the signal-to-noise ratio of the cross-correlation using the estimator for inhomogeneous reionization τ̂_ℓm proposed by Dvorkin and Smith. We find that with upcoming radio interferometers and cosmic microwave background (CMB) experiments, the cross-correlation is measurable going up to multipoles ℓ ∼ 1000. We also derive parameter constraints and conclude that, despite the foregrounds, the cross-correlation provides a complementary measurement of the EoR parameters to the 21 cm and CMB polarization autocorrelations expected to be observed in the coming decade.

  5. Gravitational-wave detection using redshifted 21-cm observations

    SciTech Connect

    Bharadwaj, Somnath; Guha Sarkar, Tapomoy

    2009-06-15

    A gravitational wave traversing the line of sight to a distant source produces a frequency shift which contributes to redshift space distortion. As a consequence, gravitational waves are imprinted as density fluctuations in redshift space. The gravitational-wave contribution to the redshift space power spectrum has a different μ dependence as compared to the dominant contribution from peculiar velocities. This, in principle, allows the two signals to be separated. The prospect of a detection is most favorable at the highest observable redshift z. Observations of redshifted 21-cm radiation from neutral hydrogen hold the possibility of probing very high redshifts. We consider the possibility of detecting primordial gravitational waves using the redshift space neutral hydrogen power spectrum. However, we find that the gravitational-wave signal, though present, will not be detectable on superhorizon scales because of cosmic variance, and on subhorizon scales the signal is highly suppressed.

  6. An H I 21-cm line survey of evolved stars

    NASA Astrophysics Data System (ADS)

    Gérard, E.; Le Bertre, T.; Libert, Y.

    2011-12-01

    The HI line at 21 cm is a tracer of circumstellar matter around AGB stars, and especially of the matter located at large distances (0.1-1 pc) from the central stars. It can give unique information on the kinematics and on the physical conditions in the outer parts of circumstellar shells and in the regions where stellar matter is injected into the interstellar medium. However this tracer has not been much used up to now, due to the difficulty of separating the genuine circumstellar emission from the interstellar one. With the Nançay Radiotelescope we are carrying out a survey of the HI emission in a large sample of evolved stars. We report on recent progresses of this long term programme, with emphasis on S-type stars.

  7. The Murchison Widefield Array 21 cm Power Spectrum Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Hazelton, B. J.; Trott, C. M.; Dillon, Joshua S.; Pindor, B.; Sullivan, I. S.; Pober, J. C.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Ewall-Wice, A.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Neben, A. R.; Thyagarajan, N.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, S.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Tingay, S. J.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-07-01

    We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.

  8. HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting

    NASA Astrophysics Data System (ADS)

    Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni

    2016-06-01

    HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include Gaussians (H I signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
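A minimal sketch of the model structure the record describes (a Gaussian 21 cm feature plus a polynomial foreground, scored by a Gaussian likelihood) is shown below. The parameter names and numbers are hypothetical, and this is not the actual HIBAYES code:

```python
import numpy as np

# Illustrative sketch of a global-signal model: Gaussian 21 cm feature plus
# polynomial foreground. All parameters below are made-up stand-ins.
def model_spectrum(freq_mhz, amp_mk, centre_mhz, width_mhz, fg_coeffs):
    """Sky-averaged spectrum in mK: Gaussian signal + polynomial foreground."""
    signal = amp_mk * np.exp(-0.5 * ((freq_mhz - centre_mhz) / width_mhz) ** 2)
    foreground = np.polyval(fg_coeffs, freq_mhz / 100.0)
    return signal + foreground

def log_likelihood(params, freq_mhz, data_mk, noise_mk):
    """Gaussian log-likelihood (up to a constant) for a parameter vector."""
    amp, centre, width, *fg = params
    resid = data_mk - model_spectrum(freq_mhz, amp, centre, width, fg)
    return -0.5 * np.sum((resid / noise_mk) ** 2)

# Toy check: the likelihood prefers the parameters used to generate the data.
freq = np.linspace(50.0, 100.0, 256)
data = model_spectrum(freq, -100.0, 78.0, 10.0, [2000.0, -500.0])
ll_true = log_likelihood((-100.0, 78.0, 10.0, 2000.0, -500.0), freq, data, 5.0)
ll_off = log_likelihood((0.0, 78.0, 10.0, 2000.0, -500.0), freq, data, 5.0)
print(ll_true, ll_off)
```

In a HIBAYES-style setup, a function of this shape would be handed to a nested sampler such as PyMultiNest together with priors, which then explores the joint signal-plus-foreground posterior.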

  9. Cosmic (Super)String Constraints from 21 cm Radiation

    SciTech Connect

    Khatri, Rishi; Wandelt, Benjamin D.

    2008-03-07

    We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of ≈1 km² will not provide any useful constraints, future experiments with a collecting area of 10⁴-10⁶ km² covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10⁻¹⁰-10⁻¹² (superstring/phase transition mass scale >10¹³ GeV).

  10. Cosmic (Super)String Constraints from 21 cm Radiation.

    PubMed

    Khatri, Rishi; Wandelt, Benjamin D

    2008-03-01

We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of ~1 km² will not provide any useful constraints, future experiments with a collecting area of 10^4-10^6 km² covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10^-10-10^-12 (superstring/phase transition mass scale > 10^13 GeV). PMID:18352691

  11. Cosmic 21 cm delensing of microwave background polarization and the minimum detectable energy scale of inflation.

    PubMed

    Sigurdson, Kris; Cooray, Asantha

    2005-11-18

    We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes.

  12. Searching for signatures of cosmic string wakes in 21cm redshift surveys using Minkowski Functionals

    SciTech Connect

    McDonough, Evan; Brandenberger, Robert H. E-mail: rhb@hep.physics.mcgill.ca

    2013-02-01

Minkowski Functionals are a powerful tool for analyzing large scale structure, in particular if the distribution of matter is highly non-Gaussian, as it is in models in which cosmic strings contribute to structure formation. Here we apply Minkowski Functionals to 21cm maps which arise if structure is seeded by a scaling distribution of cosmic strings embedded in background fluctuations, and then test for the statistical significance of the cosmic string signals using the Fisher combined probability test. We find that this method allows for detection of cosmic strings with Gμ > 5 × 10^-8, which would be an improvement over current limits by a factor of about 3.
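The Fisher combined probability test mentioned in this abstract has a simple closed form: the statistic X = -2 Σ ln p_i follows a chi-square distribution with 2n degrees of freedom under the null, whose survival function is elementary for even degrees of freedom. A minimal sketch (not the authors' code; function name is ours):

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's combined probability test.

    X = -2 * sum(ln p_i) ~ chi-square with 2n degrees of freedom under
    the null; for even dof the survival function has the closed form
    P(chi2_{2n} > x) = exp(-x/2) * sum_{i=0}^{n-1} (x/2)^i / i!.
    """
    n = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(n))
```

A single p-value passes through unchanged (`fisher_combined_p([0.5])` is 0.5), while several moderately small p-values combine into a stronger one.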

  13. Intensity Mapping of Molecular Gas During Cosmic Reionization

    NASA Astrophysics Data System (ADS)

    Carilli, C. L.

    2011-04-01

    I present a simple calculation of the expected mean CO brightness temperature from the large-scale distribution of galaxies during cosmic reionization. The calculation is based on the cosmic star formation rate density required to reionize, and keep ionized, the intergalactic medium, and uses standard relationships between star formation rate, IR luminosity, and CO luminosity derived for star-forming galaxies over a wide range in redshift. I find that the mean CO brightness temperature resulting from the galaxies that could reionize the universe at z = 8 is TB ~ 1.1(C/5)(f esc/0.1)-1μK, where f esc is the escape fraction of ionizing photons from the first galaxies and C is the IGM clumping factor. Intensity mapping of the CO emission from the large-scale structure of the star-forming galaxies during cosmic reionization on scales of order 102 to 103 deg2, in combination with H I 21 cm imaging of the neutral IGM, will provide a comprehensive study of the earliest epoch of galaxy formation.
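The quoted scaling can be evaluated directly. A trivial sketch, assuming only the relation given in the abstract (the function name and defaults are ours):

```python
def mean_co_brightness(clumping=5.0, f_esc=0.1):
    """Mean CO brightness temperature at z = 8 in microkelvin, per the
    abstract's scaling T_B ~ 1.1 (C/5) (f_esc/0.1)^-1 microK, where C is
    the IGM clumping factor and f_esc the ionizing escape fraction."""
    return 1.1 * (clumping / 5.0) * (f_esc / 0.1) ** -1
```

Doubling the clumping factor doubles the required star formation and hence the signal, while a higher escape fraction means fewer stars are needed and the signal drops.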

  14. Pilot observations at 74 MHz for global 21cm cosmology with the Parkes 64 m

    NASA Astrophysics Data System (ADS)

    Bannister, Keith; McConnell, David; Reynolds, John; Chippendale, Aaron; Landecker, Tom L.; Dunning, Alex

    2013-10-01

We propose a single pilot observing session using the existing 74 MHz feed at Parkes to evaluate tools and techniques for optimising low-frequency (44-88 MHz) observing. If observing in this band is possible, at least two scientific outputs relevant to global 21cm cosmology (among many others) are put within reach: 1. A continuum map of the diffuse emission in the Southern sky at 74 MHz. Such a map would be of great help to single-dipole 21cm cosmology experiments, whose diffuse Galactic foregrounds are currently poorly constrained (Pritchard & Loeb, 2010b; de Oliveira-Costa et al., 2008). 2. A wideband (44-88 MHz) map of the Southern sky, which can be used as a direct detection of the dark ages global signal. Recent theoretical work has shown that the Parkes aperture of 64 m is the optimal size for such a direct detection, which could be achieved at 25σ in as little as 100 hrs of observing (Liu et al., 2012). After receiving a 4.1 grade in the previous round, our observations were not scheduled due to limited receiver changes; we are therefore re-proposing as a formality. Since the proposal, we have obtained RFI measurements with the feed pointed at zenith, and we are confident the dominant source of RFI can be found and removed.

  15. HI Intensity Mapping with FAST

    NASA Astrophysics Data System (ADS)

    Bigot-Sazy, M.-A.; Ma, Y.-Z.; Battye, R. A.; Browne, I. W. A.; Chen, T.; Dickinson, C.; Harper, S.; Maffei, B.; Olivari, L. C.; Wilkinsondagger, P. N.

    2016-02-01

We discuss the detectability of large-scale HI intensity fluctuations using the FAST telescope. We present forecasts for the accuracy of measuring the Baryonic Acoustic Oscillations and constraining the properties of dark energy. The FAST 19-beam L-band receivers (1.05-1.45 GHz) can provide constraints on the matter power spectrum and dark energy equation of state parameters (w0, wa) that are comparable to the BINGO and CHIME experiments. For one year of integration time we find that the optimal survey area is 6000 deg2. However, observing with larger frequency coverage at higher redshift (0.95-1.35 GHz) improves the projected error bars on the HI power spectrum at more than the 2σ confidence level. The combined constraints from FAST, CHIME, BINGO and Planck CMB observations can provide reliable, stringent constraints on the dark energy equation of state.
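The frequency bands quoted above map directly to HI redshift ranges through the 21 cm rest frequency (1420.405751 MHz, a standard value; the helper name is ours):

```python
HI_REST_MHZ = 1420.405751  # rest frequency of the HI 21 cm hyperfine line

def hi_redshift(obs_freq_mhz):
    """Redshift of HI 21 cm emission observed at obs_freq_mhz."""
    return HI_REST_MHZ / obs_freq_mhz - 1.0

# FAST's 1.05-1.45 GHz band covers roughly 0 <= z <= 0.35; shifting the
# band down to 0.95-1.35 GHz pushes the upper edge to z ~ 0.5.
```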

  16. Intensity Mapping of Lyα Emission during the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Silva, Marta B.; Santos, Mario G.; Gong, Yan; Cooray, Asantha; Bock, James

    2013-02-01

We calculate the absolute intensity and anisotropies of the Lyα radiation field present during the epoch of reionization. We consider emission from both galaxies and the intergalactic medium (IGM) and take into account the main contributions to the production of Lyα photons: recombinations, collisions, continuum emission from galaxies, and scattering of Lyn photons in the IGM. We find that the emission from individual galaxies dominates over the IGM with a total Lyα intensity (times frequency) of about (1.43-3.57) × 10-8 erg s-1 cm-2 sr-1 at a redshift of 7. This intensity level is low, so it is unlikely that the Lyα background during reionization can be established by an experiment aiming at an absolute background light measurement. Instead, we consider Lyα intensity mapping with the aim of measuring the anisotropy power spectrum that has rms fluctuations at the level of 1 × 10-16 [erg s-1 cm-2 sr-1]2 at a few Mpc scales. These anisotropies could be measured with a spectrometer at near-IR wavelengths from 0.9 to 1.4 μm with fields of order 0.5 to 1 deg2. We recommend that existing ground-based programs using narrowband filters also pursue intensity fluctuations to study statistics on the spatial distribution of faint Lyα emitters. We also discuss the cross-correlation signal with 21 cm experiments that probe H I in the IGM during reionization. A dedicated sub-orbital or space-based Lyα intensity mapping experiment could provide a viable complementary approach to probe reionization, when compared to 21 cm experiments, and is likely within experimental reach.

  17. Distinctive rings in the 21 cm signal of the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Vonlanthen, P.; Semelin, B.; Baek, S.; Revaz, Y.

    2011-08-01

Context. It is predicted that sources emitting UV radiation in the Lyman band during the epoch of reionization show a series of discontinuities in their Lyα flux radial profile as a consequence of the thickness of the Lyman-series lines in the primeval intergalactic medium. Through unsaturated Wouthuysen-Field coupling, these spherical discontinuities are also present in the 21 cm emission of the neutral IGM. Aims: We study the effects that these discontinuities have on the differential brightness temperature of the 21 cm signal of neutral hydrogen in a realistic setting that includes all other sources of fluctuations. We focus on the early phases of the epoch of reionization, and we address the question of the detectability by the planned Square Kilometre Array (SKA). Such a detection would be of great interest because these structures could provide an unambiguous diagnostic tool for the cosmological origin of the signal that remains after the foreground cleaning procedure. These structures could also be used as a new type of standard ruler. Methods: We determine the differential brightness temperature of the 21 cm signal in the presence of inhomogeneous Wouthuysen-Field effect using simulations that include (hydro)dynamics as well as ionizing and Lyman-line 3D radiative transfer with the code LICORICE. We include radiative transfer for the higher-order Lyman-series lines and consider also the effect of backreaction from recoils and spin diffusivity on the Lyα resonance. Results: We find that the Lyman horizons are difficult to identify using the power spectrum of the 21 cm signal but are clearly visible in the maps and radial profiles around the first sources of our simulations, if only for a limited time interval, typically Δz ≈ 2 at z ~ 13. Stacking the profiles of the different sources of the simulation at a given redshift results in extending this interval to Δz ≈ 4. When we take into account the implementation and design planned for the SKA

  18. 21 cm Fluctuations of the Cosmic Dawn with the Owens Valley Long Wavelength Array

    NASA Astrophysics Data System (ADS)

    Eastwood, Michael; Hallinan, Gregg; Owens Valley LWA Collaboration

    2016-01-01

    The Owens Valley Long Wavelength Array (OVRO LWA) is a 288-antenna interferometer covering 30 to 80 MHz located at the Owens Valley Radio Observatory (OVRO) near Big Pine, California. I am leading the effort to detect spatial fluctuations of the 21 cm transition from the cosmic dawn (z~20) with the OVRO LWA. These spatial fluctuations are primarily sourced by inhomogeneous X-ray heating from early star formation. The spectral hardness of early X-ray sources, stellar feedback mechanisms, and baryon streaming therefore all play a role in shaping the power spectrum. I will present the application of m-mode analysis (Shaw et al. 2014, Shaw et al. 2015) to OVRO LWA data to: 1. compress the data set, 2. create maps of the northern sky that can be fed back into the calibration pipeline, and 3. filter foreground emission. Finally I will present the current status and future prospects of the OVRO LWA for detecting the 21 cm power spectrum at z~20.

  19. Tracing the Milky Way Nuclear Wind with 21cm Atomic Hydrogen Emission

    NASA Astrophysics Data System (ADS)

    Lockman, Felix J.; McClure-Griffiths, N. M.

    2016-08-01

There is evidence in 21 cm H i emission for voids several kiloparsecs in size centered approximately on the Galactic center, both above and below the Galactic plane. These appear to map the boundaries of the Galactic nuclear wind. An analysis of H i at the tangent points, where the distance to the gas can be estimated with reasonable accuracy, shows a sharp transition at Galactic radii R ≲ 2.4 kpc from the extended neutral gas layer characteristic of much of the Galactic disk, to a thin Gaussian layer with FWHM ~ 125 pc. An anti-correlation between H i and γ-ray emission at latitudes 10° ≤ |b| ≤ 20° suggests that the boundary of the extended H i layer marks the walls of the Fermi Bubbles. With H i, we are able to trace the edges of the voids from |z| > 2 kpc down to z ≈ 0, where they have a radius ~2 kpc. The extended H i layer likely results from star formation in the disk, which is limited largely to R ≳ 3 kpc, so the wind may be expanding into an area of relatively little H i. Because the H i kinematics can discriminate between gas in the Galactic center and foreground material, 21 cm H i emission may be the best probe of the extent of the nuclear wind near the Galactic plane.

  20. Dicke’s Superradiance in Astrophysics. I. The 21 cm Line

    NASA Astrophysics Data System (ADS)

    Rajabi, Fereshteh; Houde, Martin

    2016-08-01

    We have applied the concept of superradiance introduced by Dicke in 1954 to astrophysics by extending the corresponding analysis to the magnetic dipole interaction characterizing the atomic hydrogen 21 cm line. Although it is unlikely that superradiance could take place in thermally relaxed regions and that the lack of observational evidence of masers for this transition reduces the probability of detecting superradiance, in situations where the conditions necessary for superradiance are met (close atomic spacing, high velocity coherence, population inversion, and long dephasing timescales compared to those related to coherent behavior), our results suggest that relatively low levels of population inversion over short astronomical length-scales (e.g., as compared to those required for maser amplification) can lead to the cooperative behavior required for superradiance in the interstellar medium. Given the results of our analysis, we expect the observational properties of 21 cm superradiance to be characterized by the emission of high-intensity, spatially compact, burst-like features potentially taking place over short periods ranging from minutes to days.

  1. THE IMPACT OF THE SUPERSONIC BARYON-DARK MATTER VELOCITY DIFFERENCE ON THE z ≈ 20 21 cm BACKGROUND

    SciTech Connect

    McQuinn, Matthew; O'Leary, Ryan M.

    2012-11-20

    direct probe of the first stars and black holes. In addition, we show that structure formation shocks are unable to heat the universe sufficiently to erase a strong 21 cm absorption trough at z ≈ 20 that is found in most models of the sky-averaged 21 cm intensity.

  2. VizieR Online Data Catalog: Galactic 21-cm line Survey (Westerhout+ 1982)

    NASA Astrophysics Data System (ADS)

    Westerhout, G.; Wendlandt, H. U.

    1997-06-01

This catalog presents a completely sampled survey of 21-cm line profiles extending from Galactic longitude 11 to 235 degrees and nominally covering a range between latitude +2 and -2 degrees. It was observed in 1971-72 with the newly resurfaced 92-m (300-foot) telescope of the National Radio Astronomy Observatory (NRAO) in Green Bank, W.V. The spatial resolution is 0.22 degrees (13 arcmin), and the velocity resolution is 2 km/s. The line profiles have 260 values of Brightness Temperature at 1 km/s intervals, and at spacings of 0.1 degree in longitude and 0.1 degree in latitude. The r.m.s. per data point varies from 1.0 K at low declinations to 0.5 K at high declinations. The maps at constant Galactic longitude published in the printed version are at intervals of 0.2 degrees in longitude; the spectra in this catalog are therefore spatially twice as dense. The catalog is arranged in two forms: as FITS data cubes, and as FITS binary tables. The cubes contain three-dimensional arrays of spectra for various longitude ranges. The binary tables contain spectra at constant Galactic latitude, along with coordinate and velocity information necessary for the interpretation of individual spectra. A detailed description of the telescope beam characteristics and the derivation of the temperature scale is given by Westerhout, G., Mader, G.L., and Harten, R.H. (Astron. and Astrophys. Suppl. 49, 137-141, 1982). Temperature scales of this and several other 21-cm line surveys were compared by Harten, R.H., Westerhout, G., and Kerr, F.J. (Astron. J. 80, 307-310, 1975) and found to agree to within 5%. (1 data file).

  3. The Murchison Widefield Array 21cm Epoch of Reionization Experiment: Design, Construction, and First Season Results

    NASA Astrophysics Data System (ADS)

    Beardsley, Adam

The Cosmic Dark Ages and the Epoch of Reionization (EoR) remain largely unexplored chapters in the history and evolution of the Universe. These periods hold the potential to inform our picture of the cosmos similar to what the Cosmic Microwave Background has done over the past several decades. A promising method to probe the neutral hydrogen gas between early galaxies is known as 21cm tomography, which utilizes the ubiquitous hyper-fine transition of HI to create 3D maps of the intergalactic medium. The Murchison Widefield Array (MWA) is an instrument built with a primary science driver to detect and characterize the EoR through 21cm tomography. In this thesis we explore the challenges faced by the MWA from the layout of antennas, to a custom analysis pipeline, to bridging the gap with probes at other wavelengths. We discuss many lessons learned in the course of reducing MWA data with an extremely precise measurement in mind, and conclude with the first deep integration from the array. We present a 2σ upper limit on the EoR power spectrum of Δ^2(k) < 1.25×10^4 mK^2 at cosmic scale k = 0.236 h Mpc^{-1} and redshift z = 6.8. Our result is a marginal improvement over previous MWA results and consistent with the best published limits from other instruments. This result is the deepest imaging power spectrum to date, and is a major step forward for this type of analysis. While our limit is dominated by systematics, we offer strategies for improvement for future analysis.

  5. Modeling the neutral hydrogen distribution in the post-reionization Universe: intensity mapping

    SciTech Connect

    Villaescusa-Navarro, Francisco; Viel, Matteo; Datta, Kanan K.; Choudhury, T. Roy E-mail: viel@oats.inaf.it E-mail: tirth@ncra.tifr.res.in

    2014-09-01

We model the distribution of neutral hydrogen (HI) in the post-reionization era and investigate its detectability in 21 cm intensity mapping with future radio telescopes like the Square Kilometre Array (SKA). We rely on high resolution hydrodynamical N-body simulations that have a state-of-the-art treatment of the low density photoionized gas in the inter-galactic medium (IGM). The HI is assigned a posteriori to the gas particles following two different approaches: a halo-based method in which HI is assigned only to gas particles residing within dark matter halos; and a particle-based method that assigns HI to all gas particles using a prescription based on the physical properties of the particles. The HI statistical properties are then compared to the observational properties of Damped Lyman-α Absorbers (DLAs) and of lower column density systems, and reasonably good agreement is found in all cases. Within the halo-based method, we further consider two different schemes that aim at reproducing the observed properties of DLAs by distributing HI inside halos: one of these results in a much higher bias for DLAs, in agreement with recent observations, which boosts the 21 cm power spectrum by a factor ∼ 4 with respect to the other recipe. Furthermore, we quantify the contribution of HI in the diffuse IGM to both Ω_HI and the HI power spectrum, finding it to be subdominant in both cases. We compute the 21 cm power spectrum from the simulated HI distribution and calculate the expected signal for both SKA1-mid and SKA1-low configurations at 2.4 ≤ z ≤ 4. We find that SKA will be able to detect the 21 cm power spectrum, in the non-linear regime, up to k ∼ 1 h/Mpc for SKA1-mid and k ∼ 5 h/Mpc for SKA1-low with 100 hours of observations. We also investigate the perspective of imaging the HI distribution. Our findings indicate that SKA1-low could detect the most massive HI peaks with a signal to noise ratio (SNR) higher than 5 for an observation time of about 1000

  6. Optical mapping at increased illumination intensities

    NASA Astrophysics Data System (ADS)

    Kanaporis, Giedrius; Martišienė, Irma; Jurevičius, Jonas; Vosyliūtė, Rūta; Navalinskas, Antanas; Treinys, Rimantas; Matiukas, Arvydas; Pertsov, Arkady M.

    2012-09-01

    Voltage-sensitive fluorescent dyes have become a major tool in cardiac and neuro-electrophysiology. Achieving high signal-to-noise ratios requires increased illumination intensities, which may cause photobleaching and phototoxicity. The optimal range of illumination intensities varies for different dyes and must be evaluated individually. We evaluate two dyes: di-4-ANBDQBS (excitation 660 nm) and di-4-ANEPPS (excitation 532 nm) in the guinea pig heart. The light intensity varies from 0.1 to 5 mW/mm2, with the upper limit at 5 to 10 times above values reported in the literature. The duration of illumination was 60 s, which in guinea pigs corresponds to 300 beats at a normal heart rate. Within the identified duration and intensity range, neither dye shows significant photobleaching or detectable phototoxic effects. However, light absorption at higher intensities causes noticeable tissue heating, which affects the electrophysiological parameters. The most pronounced effect is a shortening of the action potential duration, which, in the case of 532-nm excitation, can reach ˜30%. At 660-nm excitation, the effect is ˜10%. These findings may have important implications for the design of optical mapping protocols in biomedical applications.

  7. High redshift signatures in the 21 cm forest due to cosmic string wakes

    SciTech Connect

    Tashiro, Hiroyuki; Sekiguchi, Toyokazu; Silk, Joseph E-mail: toyokazu.sekiguchi@nagoya-u.jp

    2014-01-01

Cosmic strings induce minihalo formation in the early universe. The resultant minihalos cluster in string wakes and create a "21 cm forest" against the cosmic microwave background (CMB) spectrum. Such a 21 cm forest can contribute to angular fluctuations of redshifted 21 cm signals integrated along the line of sight. We calculate the root-mean-square amplitude of the 21 cm fluctuations due to strings and show that these fluctuations can dominate signals from minihalos due to primordial density fluctuations at high redshift (z ≳ 10), even if the string tension is below the current upper bound, Gμ < 1.5 × 10^-7. Our results also predict that the Square Kilometre Array (SKA) can potentially detect the 21 cm fluctuations due to strings with Gμ ≈ 7.5 × 10^-8 for the single frequency band case and 4.0 × 10^-8 for the multi-frequency band case.

  8. High redshift signatures in the 21 cm forest due to cosmic string wakes

    NASA Astrophysics Data System (ADS)

    Tashiro, Hiroyuki; Sekiguchi, Toyokazu; Silk, Joseph

    2014-01-01

Cosmic strings induce minihalo formation in the early universe. The resultant minihalos cluster in string wakes and create a ``21 cm forest'' against the cosmic microwave background (CMB) spectrum. Such a 21 cm forest can contribute to angular fluctuations of redshifted 21 cm signals integrated along the line of sight. We calculate the root-mean-square amplitude of the 21 cm fluctuations due to strings and show that these fluctuations can dominate signals from minihalos due to primordial density fluctuations at high redshift (z ≳ 10), even if the string tension is below the current upper bound, Gμ < 1.5 × 10^-7. Our results also predict that the Square Kilometre Array (SKA) can potentially detect the 21 cm fluctuations due to strings with Gμ ≈ 7.5 × 10^-8 for the single frequency band case and 4.0 × 10^-8 for the multi-frequency band case.

  9. Probing reionization with the cross-power spectrum of 21 cm and near-infrared radiation backgrounds

    SciTech Connect

    Mao, Xiao-Chun

    2014-08-01

The cross-correlation between the 21 cm emission from the high-redshift intergalactic medium and the near-infrared (NIR) background light from high-redshift galaxies promises to be a powerful probe of cosmic reionization. In this paper, we investigate the cross-power spectrum during the epoch of reionization. We employ an improved halo approach to derive the distribution of the density field and consider two stellar populations in the star formation model: metal-free stars and metal-poor stars. The reionization history is further generated to be consistent with the electron-scattering optical depth from cosmic microwave background measurements. Then, the intensity of the NIR background is estimated by collecting emission from stars in first-light galaxies. On large scales, we find that the 21 cm and NIR radiation backgrounds are positively correlated during the very early stages of reionization. However, these two radiation backgrounds quickly become anti-correlated as reionization proceeds. The maximum absolute value of the cross-power spectrum is |Δ²_{21,NIR}| ∼ 10^-4 mK nW m^-2 sr^-1, reached at ℓ ∼ 1000 when the mean fraction of ionized hydrogen is x̄_i ∼ 0.9. We find that the Square Kilometer Array can measure the 21 cm-NIR cross-power spectrum in conjunction with mild extensions to the existing CIBER survey, provided that the integration times independently add up to 1000 hr and 1 hr for the 21 cm and NIR observations, respectively, and that the sky coverage fraction of the CIBER survey is extended from 4 × 10^-4 to 0.1. Measuring the cross-correlation signal as a function of redshift provides valuable information on reionization and helps confirm the origin of the 'missing' NIR background.
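A cross-power spectrum of the kind discussed here is estimated by correlating the Fourier modes of the two fields; its sign distinguishes correlation from anti-correlation. A generic 1-D sketch (not this paper's pipeline; names and normalization are ours):

```python
import numpy as np

def cross_power(field_a, field_b, boxsize):
    """Estimate the 1-D cross-power spectrum Re<A(k) B*(k)>.

    The result is positive at wavenumbers where the fields correlate and
    negative where they anti-correlate (as the 21 cm and NIR backgrounds
    do once reionization is well underway).
    """
    n = field_a.size
    ak = np.fft.rfft(field_a)
    bk = np.fft.rfft(field_b)
    pk = (ak * np.conj(bk)).real * boxsize / n**2   # simple FFT normalization
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    return k, pk
```

With `field_b = field_a` this reduces to the (non-negative) auto-power spectrum, which is a convenient sanity check.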

  10. Measuring the X-ray background in the reionization era with first generation 21 cm experiments

    SciTech Connect

    Christian, Pierre; Loeb, Abraham E-mail: aloeb@cfa.harvard.edu

    2013-09-01

    The X-ray background during the epoch of reionization is currently poorly constrained. We demonstrate that it is possible to use first generation 21 cm experiments to calibrate it. Using the semi-numerical simulation, 21cmFAST, we calculate the dependence of the 21 cm power spectrum on the X-ray background flux. Comparing the signal to the sensitivity of the Murchison Widefield Array (MWA) we find that in the redshift interval z =8-14 the 21 cm signal is detectable for certain values of the X-ray background. We show that there is no degeneracy between the X-ray production efficiency and the Lyα production efficiency and that the degeneracy with the ionization fraction of the intergalactic medium can be broken.

  11. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Astrophysics Data System (ADS)

    Switzer, Eric R.; Liu, Adrian

    2014-10-01

Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ~ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ~10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ~10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.
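The mode-discovery step described here can be read as an eigen-decomposition of the frequency-frequency covariance of the spatial fluctuations, with the dominant (foreground-contaminated) modes projected out of every line of sight. A toy sketch under that interpretation (not the authors' pipeline; names are ours):

```python
import numpy as np

def remove_foreground_modes(cube, n_modes):
    """Project the dominant spectral eigenmodes out of a (nfreq, npix) cube.

    Foregrounds vary across the sky while the monopole signal does not,
    so the highest-variance spectral modes of the pixel-to-pixel scatter
    are foreground-dominated and can be discarded.
    """
    nfreq, npix = cube.shape
    mean_spec = cube.mean(axis=1, keepdims=True)
    fluct = cube - mean_spec                    # spatial fluctuations only
    cov = fluct @ fluct.T / npix                # nfreq x nfreq covariance
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    bad = evecs[:, -n_modes:]                   # highest-variance spectral modes
    proj = np.eye(nfreq) - bad @ bad.T          # projector onto their complement
    return proj @ cube
```

For a cube whose spatial variation is carried by a single smooth spectral shape, projecting out one mode removes that contamination entirely; real data need more modes and a careful signal-loss budget.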

  12. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Technical Reports Server (NTRS)

    Switzer, Eric R.; Liu, Adrian

    2014-01-01

Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ~ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal, and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line-of-sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ~10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ~10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission. Subject headings: dark ages, reionization, first stars - methods: data analysis - methods: statistical

  13. Erasing the variable: empirical foreground discovery for global 21 cm spectrum experiments

    SciTech Connect

    Switzer, Eric R.; Liu, Adrian

    2014-10-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z ∼ 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least ∼10⁻⁴. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization to intensity leakage at the ∼10⁻² level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission.

  14. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    PubMed

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to X-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z ∼ 15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from X-ray sources. This makes possible a model-independent reconstruction of the X-ray spectrum of the earliest sources of cosmic heating.

  15. A correlation between the H I 21-cm absorption strength and impact parameter in external galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Reeves, S. N.; Allison, J. R.; Sadler, E. M.

    2016-07-01

    By combining the data from surveys for H I 21-cm absorption at various impact parameters in nearby galaxies, we report an anti-correlation between the 21-cm absorption strength (velocity integrated optical depth) and the impact parameter. Also, by combining the 21-cm absorption strength with that of the emission, giving the neutral hydrogen column density, N_HI, we find no evidence that the spin temperature of the gas (degenerate with the covering factor) varies significantly across the disc. This is consistent with the uniformity of spin temperature measured across the Galactic disc. Furthermore, comparison with the Galactic N_HI distribution suggests that intervening 21-cm absorption preferentially arises in discs of high inclinations (near face-on). We also investigate the hypothesis that 21-cm absorption is favourably detected towards compact radio sources. Although there are insufficient data to determine whether there is a higher detection rate towards quasar, rather than radio galaxy, sight-lines, the 21-cm detections lie towards objects with a mean turnover frequency of ⟨ν_TO⟩ ≈ 5 × 10⁸ Hz, compared with ⟨ν_TO⟩ ≈ 1 × 10⁸ Hz for the non-detections. Since the turnover frequency is anti-correlated with radio source size, this does indicate a preferential bias for detection towards compact background radio sources.

  16. Blind foreground subtraction for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Alonso, David; Bull, Philip; Ferreira, Pedro G.; Santos, Mário G.

    2015-02-01

    We make use of a large set of fast simulations of an intensity mapping experiment with characteristics similar to those expected of the Square Kilometre Array in order to study the viability and limits of blind foreground subtraction techniques. In particular, we consider three different approaches: polynomial fitting, principal component analysis (PCA) and independent component analysis (ICA). We review the motivations and algorithms for the three methods, and show that they can all be described, using the same mathematical framework, as different approaches to the blind source separation problem. We study the efficiency of foreground subtraction both in the angular and radial (frequency) directions, as well as the dependence of this efficiency on different instrumental and modelling parameters. For well-behaved foregrounds and instrumental effects, we find that foreground subtraction can be successful to a reasonable level on most scales of interest. We also quantify the effect that the cleaning has on the recovered signal and power spectra. Interestingly, we find that the three methods yield quantitatively similar results, with PCA and ICA being almost equivalent.
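Of the three blind approaches named above, the simplest to sketch is per-sightline polynomial fitting, which exploits the foregrounds' spectral smoothness. The toy below (not the authors' SKA simulations; shapes and amplitudes are invented) fits a low-order polynomial in log-frequency along each line of sight and keeps the residual as the signal estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_los = 128, 64
freqs = np.linspace(400.0, 800.0, n_freq)          # MHz, toy SKA-like band
logf = np.log(freqs / freqs.mean())

# Smooth power-law foreground per sightline plus a faint fluctuating signal.
fg = rng.uniform(50.0, 100.0, n_los) * (freqs[:, None] / 600.0) ** rng.uniform(-2.8, -2.2, n_los)
signal = 1e-3 * rng.standard_normal((n_freq, n_los))
cube = fg + signal

# Fit and subtract a low-order polynomial in log-frequency along each sightline.
order = 3
coeffs = np.polynomial.polynomial.polyfit(logf, np.log(cube), order)
model = np.exp(np.polynomial.polynomial.polyval(logf, coeffs).T)
residual = cube - model                             # the faint signal survives
```

PCA and ICA differ in how they build the smooth components (from the data covariance or from statistical independence) rather than in this subtract-the-smooth-part structure, which is why the paper can treat all three in one blind source separation framework.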

  17. Unveiling the nature of dark matter with high redshift 21 cm line experiments

    SciTech Connect

    Evoli, C.; Mesinger, A.; Ferrara, A. E-mail: andrei.mesinger@sns.it

    2014-11-01

    Observations of the redshifted 21 cm line from neutral hydrogen will open a new window on the early Universe. By influencing the thermal and ionization history of the intergalactic medium (IGM), annihilating dark matter (DM) can leave a detectable imprint in the 21 cm signal. Building on the publicly available 21cmFAST code, we compute the 21 cm signal for a 10 GeV WIMP DM candidate. The most pronounced role of DM annihilations is in heating the IGM earlier and more uniformly than astrophysical sources of X-rays. This leaves several unambiguous, qualitative signatures in the redshift evolution of the large-scale (k ≈ 0.1 Mpc⁻¹) 21 cm power amplitude: (i) the local maximum (peak) associated with IGM heating can be lower than the other maxima; (ii) the heating peak can occur while the IGM is in emission against the cosmic microwave background (CMB); (iii) there can be a dramatic drop in power (a global minimum) corresponding to the epoch when the IGM temperature is comparable to the CMB temperature. These signatures are robust to astrophysical uncertainties, and will be easily detectable with second generation interferometers. We also briefly show that decaying warm dark matter has a negligible role in heating the IGM.
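The absorption-versus-emission behaviour in signature (ii) follows from the standard approximation for the differential brightness temperature. The sketch below uses the commonly quoted 27 mK normalization (which assumes fiducial cosmological parameters and omits the overdensity and velocity-gradient factors), not the full 21cmFAST computation:

```python
import numpy as np

def delta_Tb(z, x_HI, T_spin):
    """Differential 21 cm brightness temperature in mK (standard approximation;
    the 27 mK coefficient assumes fiducial cosmology, overdensity omitted)."""
    T_cmb = 2.725 * (1.0 + z)          # CMB temperature at redshift z, in K
    return 27.0 * x_HI * (1.0 - T_cmb / T_spin) * np.sqrt((1.0 + z) / 10.0)

# A cold, neutral IGM absorbs against the CMB (negative delta_Tb); once X-rays
# or DM annihilations heat the gas above T_cmb, the signal turns to emission.
cold = delta_Tb(z=15.0, x_HI=1.0, T_spin=5.0)     # deep absorption
hot = delta_Tb(z=15.0, x_HI=1.0, T_spin=200.0)    # emission
```

Early, uniform heating by DM annihilations pushes `T_spin` through `T_cmb` sooner, which is why the heating peak can occur while the IGM is already in emission.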

  18. 21-cm Observations with the Morehead Radio Telescope: Involving Undergraduates in Observing Programs

    NASA Astrophysics Data System (ADS)

    Malphrus, B. K.; Combs, M. S.; Kruth, J.

    2000-12-01

    Herein we report astronomical observations made by undergraduate students with the Morehead Radio Telescope (MRT). The MRT, located at Morehead State University, Morehead, Kentucky, is a small-aperture (44-ft.) instrument designed by faculty, students, and industrial partners to provide a research instrument and active laboratory for undergraduate astronomy, physics, pre-engineering, and computer science students. Small aperture telescopes like the MRT have numerous advantages as active laboratories and as research instruments. The benefits to students are based upon a hands-on approach to learning concepts in astrophysics and engineering. Students are provided design and research challenges and are allowed to pursue their own solutions. Problem-solving abilities and research design skills are cultivated by this approach. Additionally, there are still contributions that small aperture centimeter-wave instruments can make. The MRT operates over a 6 MHz bandwidth centered at 1420 MHz (21-cm), which corresponds to the hyperfine transition of atomic hydrogen (HI). The HI spatial distribution and flux density associated with cosmic phenomena can be observed and mapped. The dynamics and kinematics of celestial objects can be investigated by observing over a range of frequencies (up to 2.5 MHz) with a 2048-channel back-end spectrometer, providing up to 1 kHz frequency resolution. The sensitivity and versatility of the telescope design facilitate investigation of a wide variety of cosmic phenomena, including supernova remnants, emission and planetary nebulae, extended HI emission from the Milky Way, quasars, radio galaxies, and the sun. Student observations of galactic sources herein reported include Taurus A, Cygnus X, and the Rosette Nebula. Additionally, we report observations of extragalactic phenomena, including Cygnus A, 3C 147, and 3C 146. These observations serve as a performance and capability test-bed of the MRT. In addition to the astronomical results of these

  19. 21-cm radiation: a new probe of variation in the fine-structure constant.

    PubMed

    Khatri, Rishi; Wandelt, Benjamin D

    2007-03-16

    We investigate the effect of variation in the value of the fine-structure constant (alpha) at high redshifts (recombination > z > 30) on the absorption of the cosmic microwave background (CMB) at the 21 cm hyperfine transition of neutral atomic hydrogen. We find that the 21 cm signal is very sensitive to variations in alpha, and it is so far the only probe of the fine-structure constant in this redshift range. A change in the value of alpha by 1% changes the mean brightness temperature decrement of the CMB due to 21 cm absorption by >5% over the redshift range z < 50. There is an effect of similar magnitude on the amplitude of the fluctuations in the brightness temperature. The redshift of maximum absorption also changes by approximately 5%.
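The sensitivity arises because the hyperfine splitting scales as α⁴ (at fixed electron and proton masses and g-factor), so the rest frequency of the line, and hence the redshift mapping, inherits that dependence. A short worked example of this scaling (only the frequency shift, not the full >5% brightness-temperature effect, which involves the thermal history):

```python
# The 21 cm hyperfine splitting scales as alpha^4, so a fractional shift
# dalpha in the fine-structure constant shifts the rest frequency by ~4*dalpha.
nu0 = 1420.405751                # MHz, measured rest frequency
dalpha = 0.01                    # a 1 per cent shift in alpha

nu_shifted = nu0 * (1.0 + dalpha) ** 4
frac = nu_shifted / nu0 - 1.0    # ~ 4 * dalpha to leading order

# At a fixed observing frequency, absorption then maps to a shifted redshift:
z = 50.0
z_shifted = (1.0 + z) * (1.0 + frac) - 1.0   # ~ 4 per cent change in (1 + z)
```

This leading-order ~4% shift in (1 + z) is consistent in magnitude with the ~5% change in the redshift of maximum absorption quoted in the abstract.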

  20. Constraining the redshifted 21-cm signal with the unresolved soft X-ray background

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Cohen, Aviad; Barkana, Rennan; Silk, Joseph

    2016-10-01

    We use the observed unresolved cosmic X-ray background (CXRB) in the 0.5 - 2 keV band and existing upper limits on the 21-cm power spectrum to constrain the high-redshift population of X-ray sources, focusing on their effect on the thermal history of the Universe and the cosmic 21-cm signal. Because the properties of these sources are poorly constrained, we consider hot gas, X-ray binaries and mini-quasars (i.e., sources with soft or hard X-ray spectra) as possible candidates. We find that (1) the soft-band CXRB sets an upper limit on the X-ray efficiency of sources that existed before the end of reionization, which is one-to-two orders of magnitude higher than typically assumed efficiencies, (2) hard sources are more effective in generating the CXRB than the soft ones, (3) the commonly-assumed limit of saturated heating is not valid during the first half of reionization in the case of hard sources, with any allowed value of X-ray efficiency, (4) the maximal allowed X-ray efficiency sets a lower limit on the depth of the absorption trough in the global 21-cm signal and an upper limit on the height of the emission peak, while in the 21-cm power spectrum it sets a minimum amplitude and frequency for the high-redshift peaks, and (5) the existing upper limit on the 21-cm power spectrum sets a lower limit on the X-ray efficiency for each model. When combined with the 21-cm global signal, the CXRB will be useful for breaking degeneracies and helping constrain the nature of high-redshift heating sources.

  1. From Darkness to Light: Signatures of the Universe's First Galaxies in the Cosmic 21-cm Background

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    Within the first billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this Epoch of Reionization -- the emergence of the first stars, black holes, and full-fledged galaxies -- are expected to manifest as spectral "turning points" in the sky-averaged ("global") 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) required to model the signal. In this thesis, I make the first attempt to build the final piece of a global 21-cm data analysis pipeline: an inference tool capable of extracting the properties of the IGM and the Universe's first galaxies from the recovered signal. Such a framework is valuable even prior to a detection of the global 21-cm signal as it enables end-to-end simulations of 21-cm observations that can be used to optimize the design of upcoming instruments, their observing strategies, and their signal extraction algorithms. En route to a complete pipeline, I found that (1) robust limits on the physical properties of the IGM, such as its temperature and ionization state, can be derived analytically from the 21-cm turning points within two-zone models for the IGM, (2) improved constraints on the IGM properties can be obtained through simultaneous fitting of the global 21-cm signal and foregrounds, though biases can emerge depending on the parameterized form of the signal one adopts, (3) a simple four-parameter galaxy formation model can be constrained in only 100 hours of integration provided a stable instrumental response over a broad frequency range (~80 MHz), and (4) frequency-dependent RT solutions in physical models for the global 21-cm signal will be required to properly interpret the 21-cm absorption minimum, as the IGM thermal history is highly sensitive to the

  2. Canadian Hydrogen Intensity Mapping Experiment (CHIME) pathfinder

    NASA Astrophysics Data System (ADS)

    Bandura, Kevin; Addison, Graeme E.; Amiri, Mandana; Bond, J. Richard; Campbell-Wilson, Duncan; Connor, Liam; Cliche, Jean-François; Davis, Greg; Deng, Meiling; Denman, Nolan; Dobbs, Matt; Fandino, Mateus; Gibbs, Kenneth; Gilbert, Adam; Halpern, Mark; Hanna, David; Hincks, Adam D.; Hinshaw, Gary; Höfer, Carolin; Klages, Peter; Landecker, Tom L.; Masui, Kiyoshi; Mena Parra, Juan; Newburgh, Laura B.; Pen, Ue-li; Peterson, Jeffrey B.; Recnik, Andre; Shaw, J. Richard; Sigurdson, Kris; Sitwell, Mike; Smecher, Graeme; Smegal, Rick; Vanderlinde, Keith; Wiebe, Don

    2014-07-01

    A pathfinder version of CHIME (the Canadian Hydrogen Intensity Mapping Experiment) is currently being commissioned at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, BC. The instrument is a hybrid cylindrical interferometer designed to measure the large scale neutral hydrogen power spectrum across the redshift range 0.8 to 2.5. The power spectrum will be used to measure the baryon acoustic oscillation (BAO) scale across this poorly probed redshift range where dark energy becomes a significant contributor to the evolution of the Universe. The instrument revives the cylinder design in radio astronomy with a wide field survey as a primary goal. Modern low-noise amplifiers and digital processing remove the necessity for the analog beam forming that characterized previous designs. The Pathfinder consists of two cylinders 37m long by 20m wide oriented north-south for a total collecting area of 1,500 square meters. The cylinders are stationary with no moving parts, and form a transit instrument with an instantaneous field of view of ~100 degrees by 1-2 degrees. Each CHIME Pathfinder cylinder has a feedline with 64 dual polarization feeds placed every ~30 cm which Nyquist sample the north-south sky over much of the frequency band. The signals from each dual-polarization feed are independently amplified, filtered to 400-800 MHz, and directly sampled at 800 MSps using 8 bits. The correlator is an FX design, where the Fourier transform channelization is performed in FPGAs, which are interfaced to a set of GPUs that compute the correlation matrix. The CHIME Pathfinder is a 1/10th scale prototype version of CHIME and is designed to detect the BAO feature and constrain the distance-redshift relation. The lessons learned from its implementation will be used to inform and improve the final CHIME design.

  3. The Importance of Wide-field Foreground Removal for 21 cm Cosmology: A Demonstration with Early MWA Epoch of Reionization Observations

    NASA Astrophysics Data System (ADS)

    Pober, J. C.; Hazelton, B. J.; Beardsley, A. P.; Barry, N. A.; Martinot, Z. E.; Sullivan, I. S.; Morales, M. F.; Bell, M. E.; Bernardi, G.; Bhat, N. D. R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Deshpande, A. A.; Dillon, Joshua. S.; Emrich, D.; Ewall-Wice, A. M.; Feng, L.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, Han-Seek; Kittiwisit, P.; Kratzenberg, E.; Kudryavtseva, N.; Lenc, E.; Line, J.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, Sourabh; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Sethi, Shiv K.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wyithe, J. S. B.

    2016-03-01

    In this paper we present observations, simulations, and analysis demonstrating the direct connection between the location of foreground emission on the sky and its location in cosmological power spectra from interferometric redshifted 21 cm experiments. We begin with a heuristic formalism for understanding the mapping of sky coordinates into the cylindrically averaged power spectra measurements used by 21 cm experiments, with a focus on the effects of the instrument beam response and the associated sidelobes. We then demonstrate this mapping by analyzing power spectra with both simulated and observed data from the Murchison Widefield Array. We find that removing a foreground model that includes sources in both the main field of view and the first sidelobes reduces the contamination in high k∥ modes by several per cent relative to a model that only includes sources in the main field of view, with the completeness of the foreground model setting the principal limitation on the amount of power removed. While small, a percent-level amount of foreground power is in itself more than enough to prevent recovery of any Epoch of Reionization signal from these modes. This result demonstrates that foreground subtraction for redshifted 21 cm experiments is truly a wide-field problem, and algorithms and simulations must extend beyond the instrument’s main field of view to potentially recover the full 21 cm power spectrum.
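The cylindrically averaged (k⊥, k∥) power spectra referred to above can be sketched for a toy data cube as follows. This is a minimal stand-in, not the MWA analysis: the cube is white noise, the box size is arbitrary, and no instrument beam is applied.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
L = 100.0                                  # comoving box side, Mpc (toy)
cube = rng.standard_normal((n, n, n))      # white-noise stand-in for the 21 cm field

# FFT with the usual continuum normalization: P(k) = |delta_k|^2 / V.
ft = np.fft.fftn(cube) * (L / n) ** 3
power = np.abs(ft) ** 2 / L**3

# Wavenumbers; treat the last axis as the line of sight.
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kperp = np.hypot(kx, ky)                   # transverse (sky-plane) modes
kpar = np.abs(kz)                          # line-of-sight modes

# Cylindrical binning into the (k_perp, k_par) plane used by 21 cm experiments.
dk = 2 * np.pi / L
bins = np.arange(0.0, np.abs(k).max() + dk, dk)
num, _, _ = np.histogram2d(kperp.ravel(), kpar.ravel(),
                           bins=[bins, bins], weights=power.ravel())
cnt, _, _ = np.histogram2d(kperp.ravel(), kpar.ravel(), bins=[bins, bins])
P_cyl = num / np.maximum(cnt, 1)           # mean power per (k_perp, k_par) cell
```

In this plane, smooth-spectrum foregrounds occupy low k∥, while chromatic sidelobe responses from sources far from the field centre scatter foreground power up to the higher k∥ modes discussed in the abstract.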

  4. Bayesian constraints on the global 21-cm signal from the Cosmic Dawn

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; Zwart, J. T. L.; Price, D.; Greenhill, L. J.; Mesinger, A.; Dowell, J.; Eftekhari, T.; Ellingson, S. W.; Kocz, J.; Schinzel, F.

    2016-09-01

    The birth of the first luminous sources and the ensuing epoch of reionization are best studied via the redshifted 21-cm emission line, the signature of the first two imprinting the last. In this work, we present a fully Bayesian method, HIBAYES, for extracting the faint, global (sky-averaged) 21-cm signal from the much brighter foreground emission. We show that a simplified (but plausible) Gaussian model of the 21-cm emission from the Cosmic Dawn epoch (15 ≲ z ≲ 30), parametrized by an amplitude A_HI, a frequency peak ν_HI and a width σ_HI, can be extracted even in the presence of a structured foreground frequency spectrum (parametrized as a seventh-order polynomial), provided sufficient signal-to-noise (400 h of observation with a single dipole). We apply our method to an early, 19-min-long observation from the Large aperture Experiment to detect the Dark Ages, constraining the 21-cm signal amplitude and width to be -890 < A_HI < 0 mK and σ_HI > 6.5 MHz (corresponding to Δz > 1.9 at redshift z ≃ 20) respectively at the 95 per cent confidence level in the range 13.2 < z < 27.4 (100 > ν > 50 MHz).
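The joint signal-plus-foreground model can be illustrated with a toy fit. HIBAYES samples a full posterior; the sketch below replaces that with a simple least-squares fit, uses a quadratic rather than seventh-order foreground polynomial, and invents all amplitudes and noise levels:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
nu = np.linspace(50.0, 100.0, 200)                   # MHz

def model(nu, A, nu0, sig, *p):
    # Gaussian 21-cm trough plus a smooth polynomial-in-log-frequency foreground.
    fg = np.exp(np.polyval(p, np.log(nu / 75.0)))
    return fg + A * np.exp(-0.5 * ((nu - nu0) / sig) ** 2)

A_true, nu0_true, sig_true = -100.0, 70.0, 8.0       # mK, MHz, MHz (toy values)
fg_true = 5e3 * (nu / 75.0) ** -2.5                  # bright smooth foreground, mK
data = fg_true + A_true * np.exp(-0.5 * ((nu - nu0_true) / sig_true) ** 2)
data += 5.0 * rng.standard_normal(nu.size)           # toy radiometer noise

# Joint fit of signal and foreground (quadratic in log-frequency for this toy).
p0 = [-50.0, 72.0, 5.0, 0.0, -2.5, np.log(5e3)]
popt, pcov = curve_fit(model, nu, data, p0=p0)
A_fit, nu0_fit, sig_fit = popt[:3]
```

Fitting signal and foreground simultaneously, rather than subtracting a foreground fit first, is what lets the Bayesian version propagate the foreground uncertainty into the quoted bounds on A_HI and σ_HI.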

  5. Reionization on large scales. IV. Predictions for the 21 cm signal incorporating the light cone effect

    SciTech Connect

    La Plante, P.; Battaglia, N.; Natarajan, A.; Peterson, J. B.; Trac, H.; Cen, R.; Loeb, A.

    2014-07-01

    We present predictions for the 21 cm brightness temperature power spectrum during the Epoch of Reionization (EoR). We discuss the implications of the 'light cone' effect, which incorporates evolution of the neutral hydrogen fraction and 21 cm brightness temperature along the line of sight. Using a novel method calibrated against radiation-hydrodynamic simulations, we model the neutral hydrogen density field and 21 cm signal in large volumes (L = 2 h⁻¹ Gpc). The inclusion of the light cone effect leads to a relative decrease of about 50% in the 21 cm power spectrum on all scales. We also find that the effect is more prominent at the midpoint of reionization and later. The light cone effect can also introduce an anisotropy along the line of sight. By decomposing the 3D power spectrum into components perpendicular to and along the line of sight, we find that in our fiducial reionization model, there is no significant anisotropy. However, parallel modes can contribute up to 40% more power for shorter reionization scenarios. The scales on which the light cone effect is relevant are comparable to scales where one measures the baryon acoustic oscillation. We argue that due to its large comoving scale and introduction of anisotropy, the light cone effect is important when considering redshift space distortions and future application to the Alcock-Paczyński test for the determination of cosmological parameters.
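The essence of light-cone assembly is that each slice along the line of sight samples the field at a different cosmic time. The toy below blends just two coeval snapshots linearly; real pipelines stitch many snapshots and map comoving distance to redshift properly, so everything here is a hypothetical simplification:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 16
# Two toy coeval snapshots of the 21 cm field at neighbouring redshifts.
cube_lo_z, cube_hi_z = rng.standard_normal((2, n, n, n))

# Light-cone assembly: each slice along the line of sight (last axis) is drawn
# from the field interpolated to that slice's redshift.
w = np.linspace(0.0, 1.0, n)               # interpolation weight per LOS slice
lightcone = (1.0 - w) * cube_lo_z + w * cube_hi_z
```

Because the ionized fraction evolves rapidly through reionization, the near and far ends of such a cube differ systematically, which is what suppresses the power spectrum and can break the perpendicular/parallel symmetry described above.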

  6. Galaxy-cluster masses via 21st-century measurements of lensing of 21-cm fluctuations

    NASA Astrophysics Data System (ADS)

    Kovetz, Ely D.; Kamionkowski, Marc

    2013-03-01

    We discuss the prospects to measure galaxy-cluster properties via weak lensing of 21-cm fluctuations from the dark ages and the epoch of reionization (EOR). We choose as a figure of merit the smallest cluster mass detectable through such measurements. We construct the minimum-variance quadratic estimator for the cluster mass based on lensing of 21-cm fluctuations at multiple redshifts. We discuss the tradeoff among frequency bandwidth, angular resolution, and the number of redshift shells available for a fixed noise level for the radio detectors. Observations of lensing of the 21-cm background from the dark ages will be capable of detecting M ≳ 10¹² h⁻¹ M⊙ mass halos, but will require futuristic experiments to overcome the contaminating sources. Next-generation radio measurements of 21-cm fluctuations from the EOR will, however, have the sensitivity to detect galaxy clusters with halo masses M ≳ 10¹³ h⁻¹ M⊙, given enough observation time (for the relevant sky patch) and collecting area to maximize their resolution capabilities.

  7. Chromatic effects in the 21 cm global signal from the cosmic dawn

    NASA Astrophysics Data System (ADS)

    Vedantham, H. K.; Koopmans, L. V. E.; de Bruyn, A. G.; Wijnholds, S. J.; Ciardi, B.; Brentjens, M. A.

    2014-01-01

    The redshifted 21 cm brightness distribution from neutral hydrogen is a promising probe into the cosmic dark ages, cosmic dawn and re-ionization. Low Frequency Array's (LOFAR) Low Band Antennas (LBA) may be used in the frequency range 45 to 85 MHz (30 > z > 16) to measure the sky-averaged redshifted 21 cm brightness temperature as a function of frequency, or equivalently, cosmic redshift. These low frequencies are affected by strong Galactic foreground emission that is observed through frequency-dependent ionospheric and antenna beam distortions which lead to chromatic mixing of spatial structure into spectral structure. Using simple models, we show that (i) the additional antenna temperature due to ionospheric refraction and absorption are at a ∼1 per cent level - two-to-three orders of magnitude higher than the expected 21 cm signal, and have an approximate ν⁻² dependence, (ii) ionospheric refraction leads to a knee-like modulation on the sky spectrum at ν ≈ 4 times the plasma frequency. Using more realistic simulations, we show that in the measured sky spectrum, more than 50 per cent of the 21 cm signal variance can be lost to confusion from foregrounds and chromatic effects. To mitigate this confusion, we recommend modelling of chromatic effects using additional priors and interferometric visibilities rather than subtracting them as generic functions of frequency as previously proposed.
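The scale of the problem in point (i) is easy to reproduce with a toy spectrum: a percent-level chromatic term multiplying a bright Galactic foreground dwarfs a 100 mK-scale trough. All amplitudes and the ν⁻² form below are illustrative, not the paper's simulations:

```python
import numpy as np

nu = np.linspace(45.0, 85.0, 200)                          # MHz, LOFAR LBA band
T_sky = 300e3 * (nu / 60.0) ** -2.55                       # mK, toy Galactic foreground
signal = -100.0 * np.exp(-0.5 * ((nu - 65.0) / 5.0) ** 2)  # mK, toy 21 cm trough

# A ~1 per cent ionospheric term with an approximate nu^-2 chromatic dependence
# multiplies the foreground and leaks spectral structure into the measurement.
eps = 0.01
T_meas = T_sky * (1.0 + eps * (nu / 60.0) ** -2.0) + signal
contamination = T_sky * eps * (nu / 60.0) ** -2.0          # the additive chromatic term
```

Even at the quiet end of the band the chromatic term exceeds the toy signal by well over an order of magnitude, which is why the paper argues for modelling these effects with priors and visibilities rather than fitting them away as generic smooth functions.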

  8. Simulating the 21 cm signal from reionization including non-linear ionizations and inhomogeneous recombinations

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2016-04-01

    We explore the impact of incorporating physically motivated ionization and recombination rates on the history and topology of cosmic reionization and the resulting 21 cm power spectrum, by incorporating inputs from small-volume hydrodynamic simulations into our semi-numerical code, SIMFAST21, that evolves reionization on large scales. We employ radiative hydrodynamic simulations to parametrize the ionization rate Rion and recombination rate Rrec as functions of halo mass, overdensity and redshift. We find that Rion scales superlinearly with halo mass (R_ion ∝ M_h^1.41), in contrast to previous assumptions. Implementing these scalings into SIMFAST21, we tune our one free parameter, the escape fraction fesc, to simultaneously reproduce recent observations of the Thomson optical depth, ionizing emissivity and volume-averaged neutral fraction by the end of reionization. This yields f_esc = 4(+7/−2) per cent averaged over our 0.375 h⁻¹ Mpc cells, independent of halo mass or redshift, increasing to 6 per cent if we also constrain to match the observed z = 7 star formation rate function. Introducing superlinear Rion increases the duration of reionization and boosts small-scale 21 cm power by two to three times at intermediate phases of reionization, while inhomogeneous recombinations reduce ionized bubble sizes and suppress large-scale 21 cm power by two to three times. Gas clumping on sub-cell scales has a minimal effect on the 21 cm power. Superlinear Rion also significantly increases the median halo mass scale for ionizing photon output to ∼10¹⁰ M⊙, making the majority of reionizing sources more accessible to next-generation facilities. These results highlight the importance of accurately treating ionizing sources and recombinations for modelling reionization and its 21 cm power spectrum.
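Why a superlinear R_ion raises the median halo mass for ionizing output can be seen with a toy photon budget. The mass function slope, mass range, and the resulting medians below are all invented for illustration; only the R_ion ∝ M^1.41 exponent comes from the abstract:

```python
import numpy as np

# Toy illustration: a superlinear R_ion ∝ M^1.41 shifts the ionizing-photon
# budget toward more massive haloes than a linear R_ion ∝ M would.
M = np.logspace(8, 12, 400)                   # halo masses in Msun (toy grid)
n_halo = M ** -2.0                            # toy power-law mass function

def median_output_mass(exponent):
    # Halo mass below which half the total ionizing output is produced.
    weight = n_halo * M ** exponent * M       # output per logarithmic mass bin
    c = np.cumsum(weight)
    return M[np.searchsorted(c, 0.5 * c[-1])]

m_super = median_output_mass(1.41)
m_linear = median_output_mass(1.0)
```

With the linear rate the output per logarithmic mass bin is flat in this toy, while the superlinear rate tilts it as M^0.41, pulling the median output mass up by more than an order of magnitude.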

  9. The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes

    NASA Astrophysics Data System (ADS)

    Lopez-Honorez, Laura; Mena, Olga; Moliné, Ángeles; Palomares-Ruiz, Sergio; Vincent, Aaron C.

    2016-08-01

    Future dedicated radio interferometers, including HERA and SKA, are very promising tools that aim to study the epoch of reionization and beyond via measurements of the 21 cm signal from neutral hydrogen. Dark matter (DM) annihilations into charged particles change the thermal history of the Universe and, as a consequence, affect the 21 cm signal. Accurately predicting the effect of DM strongly relies on the modeling of annihilations inside halos. In this work, we use up-to-date computations of the energy deposition rates by the products from DM annihilations, a proper treatment of the contribution from DM annihilations in halos, as well as values of the annihilation cross section allowed by the most recent cosmological measurements from the Planck satellite. Given current uncertainties on the description of the astrophysical processes driving the epochs of reionization, X-ray heating and Lyman-α pumping, we find that disentangling DM signatures from purely astrophysical effects, related to early-time star formation processes or late-time galaxy X-ray emissions, will be a challenging task. We conclude that only annihilations of DM particles with masses of ~100 MeV, could leave an unambiguous imprint on the 21 cm signal and, in particular, on the 21 cm power spectrum. This is in contrast to previous, more optimistic results in the literature, which have claimed that strong signatures might also be present even for much higher DM masses. Additional measurements of the 21 cm signal at different cosmic epochs will be crucial in order to break the strong parameter degeneracies between DM annihilations and astrophysical effects and undoubtedly single out a DM imprint for masses different from ~100 MeV.

  10. The TIME-Pilot intensity mapping experiment

    NASA Astrophysics Data System (ADS)

    Crites, A. T.; Bock, J. J.; Bradford, C. M.; Chang, T. C.; Cooray, A. R.; Duband, L.; Gong, Y.; Hailey-Dunsheath, S.; Hunacek, J.; Koch, P. M.; Li, C. T.; O'Brient, R. C.; Prouve, T.; Shirokoff, E.; Silva, M. B.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2014-08-01

    TIME-Pilot is designed to make measurements from the Epoch of Reionization (EoR), when the first stars and galaxies formed and ionized the intergalactic medium. This will be done via measurements of the redshifted 157.7 μm line of singly ionized carbon ([CII]). In particular, TIME-Pilot will produce the first detection of [CII] clustering fluctuations, a signal proportional to the integrated [CII] intensity, summed over all EoR galaxies. TIME-Pilot is thus sensitive to the emission from dwarf galaxies, thought to be responsible for the balance of ionizing UV photons, that will be difficult to detect individually with JWST and ALMA. A detection of [CII] clustering fluctuations would validate current theoretical estimates of the [CII] line as a new cosmological observable, opening the door for a new generation of instruments with advanced technology spectroscopic array focal planes that will map [CII] fluctuations to probe the EoR history of star formation, bubble size, and ionization state. Additionally, TIME-Pilot will produce high signal-to-noise measurements of CO clustering fluctuations, which trace the role of molecular gas in star-forming galaxies at redshifts 0 < z < 2. With its unique atmospheric noise mitigation, TIME-Pilot also significantly improves sensitivity for measuring the kinetic Sunyaev-Zel'dovich (kSZ) effect in galaxy clusters. TIME-Pilot will employ a linear array of spectrometers, each consisting of a parallel-plate diffraction grating. The spectrometer bandwidth covers 185-323 GHz to both probe the entire redshift range of interest and to include channels at the edges of the band for atmospheric noise mitigation. We illuminate the telescope with f/3 horns, which balances the desire to both couple to the sky with the best efficiency per beam, and to pack a large number of horns into the fixed field of view. Feedhorns couple radiation to the waveguide spectrometer gratings. 
Each spectrometer grating has 190 facets and provides resolving power

  11. High-accuracy linear and circular polarization measurements at 21 cm

    NASA Technical Reports Server (NTRS)

    De Pater, I.; Weiler, K. W.

    1982-01-01

    New high-accuracy linear and circular polarization measurements have been obtained for 27 small-diameter radio sources, using the Westerbork Synthesis Radio Telescope at 21 cm (1415 MHz). From these and other observed properties of the sources, estimates of the average internal magnetic field strengths in the sources are made by applying the uniform synchrotron emission model to the measured circular polarization and by using equipartition arguments. These two values are compared and found to be in agreement to within an order of magnitude, as was previously found by Weiler and de Pater (1980). Also, the magnetic fields estimated from circular polarization measurements at two different wavelengths (49 and 21 cm) are compared and found to be in rough agreement, but with indications of differences between variable and nonvariable sources. A comparison of the magnitudes of linear and circular polarization in sources shows no correlations.

  12. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform (CWT) based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method rests on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales differ significantly. We can therefore distinguish them easily in wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting-based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral-fitting-based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
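    The scale-separation idea the method relies on can be illustrated with a toy spectrum. This is only a sketch, not the authors' pipeline: a wide Gaussian smoothing kernel stands in for retaining the large-scale wavelet coefficients, and the band, amplitudes, and kernel width are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    nu = np.linspace(120e6, 180e6, 512)            # frequency channels [Hz]
    fg = 1.0e3 * (nu / 150e6) ** -2.5              # smooth power-law foreground [mK]
    # toy "21 cm" component: faint fluctuations with ~5-channel coherence
    sig = np.convolve(10.0 * rng.standard_normal(nu.size),
                      np.ones(5) / 5, mode="same")
    obs = fg + sig

    # large-scale component: wide Gaussian kernel, standing in for keeping
    # only the large-scale wavelet coefficients of a CWT
    width = 15                                     # channels, >> signal coherence
    x = np.arange(-3 * width, 3 * width + 1)
    kern = np.exp(-0.5 * (x / width) ** 2)
    kern /= kern.sum()
    pad = 3 * width                                # reflect-pad to tame edge bias
    fg_est = np.convolve(np.pad(obs, pad, mode="reflect"), kern,
                         mode="same")[pad:-pad]
    recovered = obs - fg_est                       # small-scale residual
    ```

    Away from the band edges, the residual tracks the injected fine-scale component while the foreground, several hundred times brighter, is suppressed; the same separation fails if the foreground itself has structure on the signal's scale, which is why uncorrected instrumental response errors matter.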

  13. THE EFFECTS OF POLARIZED FOREGROUNDS ON 21 cm EPOCH OF REIONIZATION POWER SPECTRUM MEASUREMENTS

    SciTech Connect

    Moore, David F.; Aguirre, James E.; Parsons, Aaron R.; Pober, Jonathan C.; Jacobs, Daniel C.

    2013-06-01

    Experiments aimed at detecting highly redshifted 21 cm emission from the epoch of reionization (EoR) are plagued by contamination from foreground emission. A potentially important source of contaminating foregrounds may be Faraday-rotated, polarized emission, which leaks into the estimate of the intrinsically unpolarized EoR signal. While these foregrounds' intrinsic polarization may not be problematic, the spectral structure introduced by the Faraday rotation could be. To better understand and characterize these effects, we present a simulation of the polarized sky between 120 and 180 MHz. We compute a single visibility, and estimate the three-dimensional power spectrum from that visibility using the delay spectrum approach presented in Parsons et al. Using the Donald C. Backer Precision Array to Probe the Epoch of Reionization as an example instrument, we show the expected leakage into the unpolarized power spectrum to be several orders of magnitude above the expected 21 cm EoR signal.

  14. Cosmologically probing ultra-light particle dark matter using 21 cm signals

    SciTech Connect

    Kadota, Kenji; Mao, Yi; Silk, Joseph; Ichiki, Kiyomoto

    2014-06-01

    Ubiquitous ultra-light scalar fields can arise in the Universe, such as the pseudo-Goldstone bosons from the spontaneous breaking of an approximate symmetry, which can make a partial contribution to the dark matter and affect the large-scale structure of the Universe. While the properties of such ultra-light dark matter are heavily model-dependent and can vary over a wide range, we develop a model-independent analysis to forecast the constraints on their mass and abundance using futuristic but realistic 21 cm observables as well as CMB fluctuations, including CMB lensing measurements. Avoiding the highly nonlinear regime, the 21 cm emission line spectra are most sensitive to ultra-light dark matter with mass m ∼ 10⁻²⁶ eV, for which the attainable precision on mass and abundance bounds can be of the order of a few percent.

  15. The imprint of the cosmic supermassive black hole growth history on the 21 cm background radiation

    NASA Astrophysics Data System (ADS)

    Tanaka, Takamitsu L.; O'Leary, Ryan M.; Perna, Rosalba

    2016-01-01

    The redshifted 21 cm transition line of hydrogen tracks the thermal evolution of the neutral intergalactic medium (IGM) at `cosmic dawn', during the emergence of the first luminous astrophysical objects (˜100 Myr after the big bang) but before these objects ionized the IGM (˜400-800 Myr after the big bang). Because X-rays, in particular, are likely to be the chief energy courier for heating the IGM, measurements of the 21 cm signature can be used to infer knowledge about the first astrophysical X-ray sources. Using analytic arguments and a numerical population synthesis algorithm, we argue that the progenitors of supermassive black holes (SMBHs) should be the dominant source of hard astrophysical X-rays - and thus the primary driver of IGM heating and the 21 cm signature - at redshifts z ≳ 20, if (i) they grow readily from the remnants of Population III stars and (ii) produce X-rays in quantities comparable to what is observed from active galactic nuclei and high-mass X-ray binaries. We show that models satisfying these assumptions dominate over contributions to IGM heating from stellar populations, and cause the 21 cm brightness temperature to rise at z ≳ 20. An absence of such a signature in the forthcoming observational data would imply that SMBH formation occurred later (e.g. via so-called direct collapse scenarios), that it was not a common occurrence in early galaxies and protogalaxies, or that it produced far fewer X-rays than empirical trends at lower redshifts, either due to intrinsic dimness (radiative inefficiency) or Compton-thick obscuration close to the source.

  16. 21-cm signature of the first sources in the Universe: prospects of detection with SKA

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.

    2016-07-01

    Currently several low-frequency experiments are being planned to study the nature of the first stars using the redshifted 21-cm signal from the cosmic dawn and Epoch of Reionization. Using a one-dimensional radiative transfer code, we model the 21-cm signal pattern around the early sources for different source models, i.e. the metal-free Population III (PopIII) stars, primordial galaxies consisting of Population II (PopII) stars, mini-QSOs and high-mass X-ray binaries (HMXBs). We investigate the detectability of these sources by comparing the 21-cm visibility signal with the system noise appropriate for a telescope like the SKA1-low. Upon integrating the visibility around a typical source over all baselines and over a frequency interval of 16 MHz, we find that it will be possible to make a ˜9σ detection of the isolated sources like PopII galaxies, mini-QSOs and HMXBs at z ˜ 15 with the SKA1-low in 1000 h. The exact value of the signal-to-noise ratio (SNR) will depend on the source properties, in particular on the mass and age of the source and the escape fraction of ionizing photons. The predicted SNR decreases with increasing redshift. We provide simple scaling laws to estimate the SNR for different values of the parameters which characterize the source and the surrounding medium. We also argue that it will be possible to achieve an SNR ˜9 even in the presence of the astrophysical foregrounds by subtracting out the frequency-independent component of the observed signal. These calculations will be useful in planning 21-cm observations to detect the first sources.

  17. Cosmic reionization on computers. Mean and fluctuating redshifted 21 CM signal

    DOE PAGES

    Kaurov, Alexander A.; Gnedin, Nickolay Y.

    2016-06-20

    We explore the mean and fluctuating redshifted 21 cm signal in numerical simulations from the Cosmic Reionization On Computers project. We find that the mean signal varies between about ±25 mK. Most significantly, we find that the negative pre-reionization dip at z ~ 10–15 only extends to ⟨ΔT_B⟩ ∼ −25 mK, requiring substantially higher sensitivity from global signal experiments that operate in this redshift range (EDGES-II, LEDA, SCI-HI, and DARE) than has often been assumed previously. We also explore the role of dense substructure (filaments and embedded galaxies) in the formation of the 21 cm power spectrum. We find that by neglecting the semi-neutral substructure inside ionized bubbles, the power spectrum can be misestimated by 25%–50% at scales k ~ 0.1–1 h Mpc⁻¹. Furthermore, this scale range is of particular interest, because the upcoming 21 cm experiments (Murchison Widefield Array, Precision Array for Probing the Epoch of Reionization, Hydrogen Epoch of Reionization Array) are expected to be most sensitive within it.

  19. OPENING THE 21 cm EPOCH OF REIONIZATION WINDOW: MEASUREMENTS OF FOREGROUND ISOLATION WITH PAPER

    SciTech Connect

    Pober, Jonathan C.; Parsons, Aaron R.; Ali, Zaki; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, Dave; Dexter, Matthew; MacMahon, Dave; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Patricia J.; Manley, Jason; Walbrugh, William P.; Stefan, Irina I.

    2013-05-10

    We present new observations with the Precision Array for Probing the Epoch of Reionization with the aim of measuring the properties of foreground emission for 21 cm epoch of reionization (EoR) experiments at 150 MHz. We focus on the footprint of the foregrounds in cosmological Fourier space to understand which modes of the 21 cm power spectrum will most likely be compromised by foreground emission. These observations confirm predictions that foregrounds can be isolated to a "wedge"-like region of two-dimensional (k⊥, k∥)-space, creating a window for cosmological studies at higher k∥ values. We also find that the emission extends past the nominal edge of this wedge due to spectral structure in the foregrounds, with this feature most prominent on the shortest baselines. Finally, we filter the data to retain only this "unsmooth" emission and image its specific k∥ modes. The resultant images show an excess of power at the lowest modes, but no emission can be clearly localized to any one region of the sky. This image is highly suggestive that the most problematic foregrounds for 21 cm EoR studies will not be easily identifiable bright sources, but rather an aggregate of fainter emission.

  20. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  1. Challenges and opportunities in mapping land use intensity globally

    PubMed Central

    Kuemmerle, Tobias; Erb, Karlheinz; Meyfroidt, Patrick; Müller, Daniel; Verburg, Peter H; Estel, Stephan; Haberl, Helmut; Hostert, Patrick; Jepsen, Martin R.; Kastner, Thomas; Levers, Christian; Lindner, Marcus; Plutzar, Christoph; Verkerk, Pieter Johannes; van der Zanden, Emma H; Reenberg, Anette

    2013-01-01

    Future increases in land-based production will need to focus more on sustainably intensifying existing production systems. Unfortunately, our understanding of the global patterns of land use intensity is weak, partly because land use intensity is a complex, multidimensional term, and partly because we lack appropriate datasets to assess land use intensity across broad geographic extents. Here, we review the state of the art regarding approaches for mapping land use intensity and provide a comprehensive overview of available global-scale datasets on land use intensity. We also outline major challenges and opportunities for mapping land use intensity for cropland, grazing, and forestry systems, and identify key issues for future research. PMID:24143157

  2. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix {C}ij. Our analytical model shows that {C}ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance {C}ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc⁻¹, and can be even ˜200 times larger at k ˜ 5 Mpc⁻¹. We also establish that the off-diagonal terms of {C}ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc⁻¹), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
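    The Gaussian piece of the error budget described above, with bin variance P²(k)/N_modes, can be checked numerically. A minimal sketch with 1-D white Gaussian random fields (flat true spectrum, our own toy setup rather than the paper's seminumerical ensembles):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nreal, n, m = 2000, 256, 8             # realizations, grid size, modes per bin
    field = rng.standard_normal((nreal, n))            # white noise: true P(k) = 1
    pk = np.abs(np.fft.rfft(field, axis=1)) ** 2 / n   # per-mode power estimates
    nb = (n // 2 - 1) // m                 # bins of m modes (skip DC and Nyquist)
    binned = pk[:, 1:1 + nb * m].reshape(nreal, nb, m).mean(axis=2)

    mean_p = binned.mean(axis=0)           # should approach the true P(k) = 1
    var_p = binned.var(axis=0)             # Gaussian prediction: P(k)**2 / m
    ```

    For a Gaussian field the per-mode estimate is exponentially distributed with variance P², so averaging m modes gives variance P²/m; the trispectrum term discussed in the abstract is exactly the extra, mode-coupled contribution this Gaussian toy lacks.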

  3. Extracting Physical Parameters for the First Galaxies from the Cosmic Dawn Global 21-cm Spectrum

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Mirocha, Jordan; Harker, Geraint; Tauscher, Keith; Datta, Abhirup

    2016-01-01

    The all-sky or global redshifted 21-cm HI signal is a potentially powerful probe of the first luminous objects and their environs during the transition from the Dark Ages to Cosmic Dawn (35 > z > 6). The first stars, black holes, and galaxies heat and ionize the surrounding intergalactic medium, composed mainly of neutral hydrogen, so the hyperfine 21-cm transition can be used to indirectly study these early radiation sources. The properties of these objects can be examined via the broad absorption and emission features that are expected in the spectrum. The Dark Ages Radio Explorer (DARE) is proposed to conduct these observations at low radio astronomy frequencies, 40-120 MHz, in a 125 km orbit about the Moon. The Moon occults both the Earth and the Sun as DARE makes observations above the lunar farside, thus eliminating the corrupting effects from Earth's ionosphere, radio frequency interference, and solar nanoflares. The signal is extracted from the galactic/extragalactic foreground employing Bayesian methods, including Markov Chain Monte Carlo (MCMC) techniques. Theory indicates that the 21-cm signal is well described by a model in which the evolution of various physical quantities follows a hyperbolic tangent (tanh) function of redshift. We show that this approach accurately captures degeneracies and covariances between parameters, including those related to the signal, foreground, and the instrument. Furthermore, we also demonstrate that MCMC fits will set meaningful constraints on the Ly-α, ionizing, and X-ray backgrounds along with the minimum virial temperature of the first star-forming halos.

  4. A comparative study of intervening and associated H I 21-cm absorption profiles in redshifted galaxies

    NASA Astrophysics Data System (ADS)

    Curran, S. J.; Duchesne, S. W.; Divoli, A.; Allison, J. R.

    2016-08-01

    The star-forming reservoir in the distant Universe can be detected through H I 21-cm absorption arising from either cool gas associated with a radio source or from within a galaxy intervening the sight-line to the continuum source. In order to test whether the nature of the absorber can be predicted from the profile shape, we have compiled and analysed all of the known redshifted (z ≥ 0.1) H I 21-cm absorption profiles. Although between individual spectra there is too much variation to assign a typical spectral profile, we confirm that associated absorption profiles are, on average, wider than their intervening counterparts. It is widely hypothesised that this is due to high velocity nuclear gas feeding the central engine, absent in the more quiescent intervening absorbers. Modelling the column density distribution of the mean associated and intervening spectra, we confirm that the additional low optical depth, wide dispersion component, typical of associated absorbers, arises from gas within the inner parsec. With regard to the potential of predicting the absorber type in the absence of optical spectroscopy, we have implemented machine learning techniques to the 55 associated and 43 intervening spectra, with each of the tested models giving a ≳80% accuracy in the prediction of the absorber type. Given the impracticability of follow-up optical spectroscopy of the large number of 21-cm detections expected from the next generation of large radio telescopes, this could provide a powerful new technique with which to determine the nature of the absorbing galaxy.
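    As an illustration of this kind of classification (not the authors' feature set or trained models), a one-feature logistic regression on synthetic velocity widths, assuming only the paper's finding that associated profiles are on average wider, already separates the two classes at roughly the quoted ≳80% level:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def draw(n):
        """Synthetic profile widths [km/s]; associated (label 1) are wider."""
        assoc = rng.normal(200.0, 80.0, n).clip(min=5.0)
        inter = rng.normal(60.0, 30.0, n).clip(min=5.0)
        x = np.concatenate([assoc, inter])
        y = np.concatenate([np.ones(n), np.zeros(n)])
        return x, y

    x_tr, y_tr = draw(300)
    x_te, y_te = draw(300)
    mu, sd = x_tr.mean(), x_tr.std()
    z_tr, z_te = (x_tr - mu) / sd, (x_te - mu) / sd   # standardize the feature

    # one-feature logistic regression fitted by gradient descent
    w, b = 0.0, 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * z_tr + b)))
        w -= 0.5 * np.mean((p - y_tr) * z_tr)
        b -= 0.5 * np.mean(p - y_tr)

    pred = (1.0 / (1.0 + np.exp(-(w * z_te + b)))) > 0.5
    accuracy = (pred == y_te).mean()
    ```

    The width distributions here are invented for the sketch; a real application would use the measured spectra's features, as the authors do with their 55 + 43 profiles.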

  6. On the Detection of Global 21-cm Signal from Reionization Using Interferometers

    NASA Astrophysics Data System (ADS)

    Singh, Saurabh; Subrahmanyan, Ravi; Udaya Shankar, N.; Raghunathan, A.

    2015-12-01

    Detection of the global redshifted 21-cm signal is an excellent means of deciphering the physical processes during the Dark Ages and subsequent Epoch of Reionization (EoR). However, detection of this faint monopole is challenging due to the high precision required in instrumental calibration and modeling of substantially brighter foregrounds and instrumental systematics. In particular, modeling of receiver noise with mK accuracy and its separation remains a formidable task in experiments aiming to detect the global signal using single-element spectral radiometers. Interferometers do not respond to receiver noise; therefore, here we explore the theory of the response of interferometers to global signals. In other words, we discuss the spatial coherence in the electric field arising from the monopole component of the 21-cm signal and methods for its detection using sensor arrays. We proceed by first deriving the response to uniform sky of two-element interferometers made of unit dipole and resonant loop antennas, then extend the analysis to interferometers made of one-dimensional arrays and also consider two-dimensional aperture antennas. Finally, we describe methods by which the coherence might be enhanced so that the interferometer measurements yield improved sensitivity to the monopole component. We conclude (a) that it is indeed possible to measure the global 21-cm signal from the EoR using interferometers, (b) that a practically useful configuration is with omnidirectional antennas as interferometer elements, and (c) that the spatial coherence may be enhanced using, for example, a space beam splitter between the interferometer elements.

  7. 21CMMC: an MCMC analysis tool enabling astrophysical parameter studies of the cosmic 21 cm signal

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei

    2015-06-01

    We introduce 21CMMC: a parallelized Markov Chain Monte Carlo analysis tool incorporating the epoch of reionization (EoR) seminumerical simulation 21CMFAST. 21CMMC estimates astrophysical parameter constraints from 21 cm EoR experiments, accommodating a variety of EoR models, as well as priors on model parameters and the reionization history. To illustrate its utility, we consider two different EoR scenarios, one with a single population of galaxies (with a mass-independent ionizing efficiency) and a second, more general model with two different, feedback-regulated populations (each with mass-dependent ionizing efficiencies). As an example, combining three observations (z = 8, 9 and 10) of the 21 cm power spectrum with a conservative noise estimate and uniform model priors, we find that interferometers with specifications like the Low Frequency Array/Hydrogen Epoch of Reionization Array (HERA)/Square Kilometre Array 1 (SKA1) can constrain common reionization parameters: the ionizing efficiency (or similarly the escape fraction), the mean free path of ionizing photons and the log of the minimum virial temperature of star-forming haloes to within 45.3/22.0/16.7, 33.5/18.4/17.8 and 6.3/3.3/2.4 per cent, ˜1σ fractional uncertainty, respectively. Instead, if we optimistically assume that we can perfectly characterize the EoR modelling uncertainties, we can improve on these constraints by up to a factor of a few. Similarly, the fractional uncertainty on the average neutral fraction can be constrained to within ≲ 10 per cent for HERA and SKA1. By studying the resulting impact on astrophysical constraints, 21CMMC can be used to optimize (i) interferometer designs; (ii) foreground cleaning algorithms; (iii) observing strategies; (iv) alternative statistics characterizing the 21 cm signal; and (v) synergies with other observational programs.
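    The core loop of such an analysis can be sketched with a one-parameter Metropolis-Hastings toy. The mock "power spectrum" model, noise level, and parameter name below are invented for illustration; the real tool evaluates 21CMFAST simulations rather than an analytic model:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    k = np.linspace(0.1, 1.0, 10)          # wavenumbers [h Mpc^-1], illustrative

    def model(zeta, k):
        # toy one-parameter "power spectrum", standing in for a simulation call
        return zeta * k ** -1.5

    zeta_true, noise = 30.0, 2.0
    data = model(zeta_true, k) + rng.normal(0.0, noise, k.size)

    def loglike(zeta):
        return -0.5 * np.sum((data - model(zeta, k)) ** 2) / noise ** 2

    # Metropolis-Hastings with a flat prior on zeta > 0
    chain, zeta, ll = [], 20.0, loglike(20.0)
    for _ in range(20000):
        prop = zeta + 0.2 * rng.standard_normal()
        if prop > 0:
            ll_prop = loglike(prop)
            if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
                zeta, ll = prop, ll_prop
        chain.append(zeta)

    post = np.array(chain[5000:])          # discard burn-in
    ```

    The posterior mean and spread of `post` then play the role of the fractional-uncertainty figures quoted above, with the expense dominated by the likelihood evaluations.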

  8. 21-cm lensing and the cold spot in the cosmic microwave background.

    PubMed

    Kovetz, Ely D; Kamionkowski, Marc

    2013-04-26

    An extremely large void and a cosmic texture are two possible explanations for the cold spot seen in the cosmic microwave background. We investigate how well these two hypotheses can be tested with weak lensing of 21-cm fluctuations from the epoch of reionization measured with the Square Kilometer Array. While the void explanation for the cold spot can be tested with the Square Kilometer Array, given enough observation time, the texture scenario requires significantly prolonged observations, at the highest frequencies that correspond to the epoch of reionization, over the field of view containing the cold spot. PMID:23679703

  9. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M⊙/h < M_halo < 10¹³ M⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
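    The kind of angular power spectrum estimator compared in this work can be sketched in the flat-sky limit: draw Gaussian maps with a known input spectrum, estimate C_ℓ from squared Fourier amplitudes binned in |ℓ|, and check the estimator against the input. Grid size, patch size, and the toy spectrum are our own assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 256
    patch = np.deg2rad(10.0)                       # 10 x 10 degree flat-sky patch
    lx = 2 * np.pi * np.fft.fftfreq(n, d=patch / n)
    ell = np.hypot(*np.meshgrid(lx, lx, indexing="ij"))
    cl_true = 1.0 / (1.0 + (ell / 300.0) ** 2)     # toy input angular spectrum

    nreal = 20
    cl_hat = np.zeros((n, n))
    for _ in range(nreal):
        # colour white noise in Fourier space to get a Gaussian map
        white = np.fft.fft2(rng.standard_normal((n, n)))
        hi_map = np.fft.ifft2(white * np.sqrt(cl_true)).real
        cl_hat += np.abs(np.fft.fft2(hi_map)) ** 2 / n ** 2
    cl_hat /= nreal

    # bin the estimator in annuli of |ell| and compare with the input
    edges = np.linspace(200.0, 0.7 * lx.max(), 13)
    idx = np.digitize(ell.ravel(), edges)
    est = np.array([cl_hat.ravel()[idx == b].mean() for b in range(1, len(edges))])
    ref = np.array([cl_true.ravel()[idx == b].mean() for b in range(1, len(edges))])
    ```

    Averaging over realizations beats down the per-bin sample variance, which is itself the Gaussian approximation to the covariance that the paper tests against simulations.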

  10. PAPER-64 Constraints on Reionization: The 21 cm Power Spectrum at z = 8.4

    NASA Astrophysics Data System (ADS)

    Ali, Zaki S.; Parsons, Aaron R.; Zheng, Haoxuan; Pober, Jonathan C.; Liu, Adrian; Aguirre, James E.; Bradley, Richard F.; Bernardi, Gianni; Carilli, Chris L.; Cheng, Carina; DeBoer, David R.; Dexter, Matthew R.; Grobbelaar, Jasper; Horrell, Jasper; Jacobs, Daniel C.; Klima, Pat; MacMahon, David H. E.; Maree, Matthys; Moore, David F.; Razavi, Nima; Stefan, Irina I.; Walbrugh, William P.; Walker, Andre

    2015-08-01

    In this paper, we report new limits on 21 cm emission from cosmic reionization based on a 135 day observing campaign with a 64-element deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. This work extends the work presented in Parsons et al. with more collecting area, a longer observing period, improved redundancy-based calibration, improved fringe-rate filtering, and updated power-spectral analysis using optimal quadratic estimators. The result is a new 2σ upper limit on Δ²(k) of (22.4 mK)² in the range 0.15 < k < 0.5 h Mpc⁻¹ at z = 8.4. This represents a three-fold improvement over the previous best upper limit. As we discuss in more depth in a forthcoming paper, this upper limit supports and extends previous evidence against extremely cold reionization scenarios. We conclude with a discussion of implications for future 21 cm reionization experiments, including the newly funded Hydrogen Epoch of Reionization Array.
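    The dimensionless limit quoted above relates to the power spectrum through Δ²(k) = k³P(k)/(2π²); a minimal conversion sketch using the abstract's numbers (the sample wavenumber inside the quoted band is our choice, not a value from the paper):

    ```python
    import numpy as np

    delta2_limit = 22.4 ** 2        # 2-sigma upper limit on the dimensionless
                                    # power Δ²(k) [mK²], from the abstract
    k = 0.3                         # sample wavenumber in the band [h Mpc⁻¹]
    # Δ²(k) = k³ P(k) / (2π²)  =>  P(k) = 2π² Δ²(k) / k³
    p_limit = 2 * np.pi ** 2 * delta2_limit / k ** 3   # [mK² (h⁻¹ Mpc)³]
    ```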

  11. The 21-cm BAO signature of enriched low-mass galaxies during cosmic reionization

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan

    2016-06-01

    Studies of the formation of the first stars have established that they formed in small haloes of ˜10⁵–10⁶ M⊙ via molecular hydrogen cooling. Since a low level of ultraviolet radiation from stars suffices to dissociate molecular hydrogen, under the usually assumed scenario this primordial mode of star formation ended by redshift z ˜ 15 and much more massive haloes came to dominate star formation. However, metal enrichment from the first stars may have allowed the smaller haloes to continue to form stars. In this Letter, we explore the possible effect of star formation in metal-rich low-mass haloes on the redshifted 21-cm signal of neutral hydrogen from z = 6 to 40. These haloes are significantly affected by the supersonic streaming velocity, with its characteristic baryon acoustic oscillation (BAO) signature. Thus, enrichment of low-mass galaxies can produce a strong signature in the 21-cm power spectrum over a wide range of redshifts, especially if star formation in the small haloes was more efficient than suggested by current simulations. We show that upcoming radio telescopes can easily distinguish among various possible scenarios.

  12. Violation of statistical isotropy and homogeneity in the 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Shiraishi, Maresuke; Muñoz, Julian B.; Kamionkowski, Marc; Raccanelli, Alvise

    2016-05-01

    Most inflationary models predict primordial perturbations to be statistically isotropic and homogeneous. Cosmic microwave background (CMB) observations, however, indicate a possible departure from statistical isotropy in the form of a dipolar power modulation at large angular scales. Alternative models of inflation, beyond the simplest single-field slow-roll models, can generate a small power asymmetry, consistent with these observations. Observations of clustering of quasars show, however, agreement with statistical isotropy at much smaller angular scales. Here, we propose to use off-diagonal components of the angular power spectrum of the 21-cm fluctuations during the dark ages to test this power asymmetry. We forecast results for the planned SKA radio array, a future radio array, and the cosmic-variance-limited case as a theoretical proof of principle. Our results show that the 21-cm line power spectrum will enable access to information at very small scales and at different redshift slices, thus improving upon the current CMB constraints by ˜2 orders of magnitude for a dipolar asymmetry and by ˜1–3 orders of magnitude for a quadrupolar asymmetry.

  13. The CMBR ISW and HI 21 cm cross-correlation angular power spectrum

    SciTech Connect

    Sarkar, Tapomoy Guha; Datta, Kanan K.; Bharadwaj, Somnath E-mail: kanan@cts.iitkgp.ernet.in

    2009-08-01

    The late-time growth of large scale structures is imprinted in the CMBR anisotropy through the Integrated Sachs Wolfe (ISW) effect. This is perceived to be a very important observational probe of dark energy. Future observations of redshifted 21-cm radiation from the cosmological neutral hydrogen (HI) distribution hold the potential of probing the large scale structure over a large redshift range. We have investigated the possibility of detecting the ISW through cross-correlations between the CMBR anisotropies and redshifted 21-cm observations. Assuming that the HI traces the dark matter, we find that the ISW-HI cross-correlation angular power spectrum at an angular multipole l is proportional to the dark matter power spectrum evaluated at the comoving wave number l/r, where r is the comoving distance to the redshift from which the HI signal originated. The amplitude of the cross-correlation signal depends on parameters related to the HI distribution and the growth of cosmological perturbations. However, the cross-correlation is extremely weak as compared to the CMBR anisotropies and the predicted HI signal. Even in an ideal situation, the cross-correlation signal is smaller than the cosmic variance and a statistically significant detection is not very likely.
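The abstract's statement that the cross-correlation power at multipole l probes the dark matter power spectrum at comoving wavenumber k = l/r can be sketched numerically. This is a minimal illustration, not the authors' pipeline; the flat ΛCDM parameters (Ω_m = 0.3, H0 = 70 km/s/Mpc) are assumed here for illustration.

```python
import math

# Sketch (assumed flat LCDM, Om = 0.3, H0 = 70 km/s/Mpc): map an angular
# multipole l to the comoving wavenumber k = l / r(z) probed by the HI signal.
C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance_mpc(z, om=0.3, h0=70.0, n=1000):
    """Comoving distance r(z) in Mpc, via Simpson integration of c / H(z)."""
    f = lambda zp: 1.0 / math.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    h = z / n  # n must be even for Simpson's rule
    s = f(0.0) + f(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return (C_KM_S / h0) * s * h / 3.0

def multipole_to_k(l, z):
    """Comoving wavenumber (Mpc^-1) probed at multipole l by HI at redshift z."""
    return l / comoving_distance_mpc(z)
```

For example, at z = 1.5 the assumed cosmology gives r(z) of roughly 4.4 Gpc, so l = 1000 corresponds to k of a few tenths of a Mpc⁻¹.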

  14. Signatures of clumpy dark matter in the global 21 cm background signal

    SciTech Connect

    Cumberbatch, Daniel T.; Lattanzi, Massimiliano; Silk, Joseph

    2010-11-15

    We examine the extent to which the self-annihilation of supersymmetric neutralino dark matter, as well as light dark matter, influences the rate of heating, ionization, and Lyman-α pumping of interstellar hydrogen and helium and the extent to which this is manifested in the 21 cm global background signal. We fully consider the enhancements to the annihilation rate from dark matter halos and substructures within them. We find that the influence of such structures can result in significant changes in the differential brightness temperature, δT_b. The changes at redshifts z < 25 are likely to be undetectable due to the presence of the astrophysical signal; however, in the most favorable cases, deviations in δT_b, relative to its value in the absence of self-annihilating dark matter, of up to ≈20 mK at z = 30 can occur. Thus we conclude that, in order to exclude these models, experiments measuring the global 21 cm signal, such as EDGES and CORE, will need to reduce the systematics at 50 MHz to below 20 mK.

  15. Effects of the sources of reionization on 21-cm redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Majumdar, Suman; Jensen, Hannes; Mellema, Garrelt; Chapman, Emma; Abdalla, Filipe B.; Lee, Kai-Yan; Iliev, Ilian T.; Dixon, Keri L.; Datta, Kanan K.; Ciardi, Benedetta; Fernandez, Elizabeth R.; Jelić, Vibor; Koopmans, Léon V. E.; Zaroubi, Saleem

    2016-02-01

    The observed 21 cm signal from the epoch of reionization will be distorted along the line of sight by the peculiar velocities of matter particles. These redshift-space distortions will affect the contrast in the signal and will also make it anisotropic. This anisotropy contains information about the cross-correlation between the matter density field and the neutral hydrogen field, and could thus potentially be used to extract information about the sources of reionization. In this paper, we study a collection of simulated reionization scenarios assuming different models for the sources of reionization. We show that the 21 cm anisotropy is best measured by the quadrupole moment of the power spectrum. We find that, unless the properties of the reionization sources are extreme in some way, the quadrupole moment evolves very predictably as a function of global neutral fraction. This predictability implies that redshift-space distortions are not a very sensitive tool for distinguishing between reionization sources. However, the quadrupole moment can be used as a model-independent probe for constraining the reionization history. We show that such measurements can be done to some extent by first-generation instruments such as LOFAR, while the SKA should be able to measure the reionization history using the quadrupole moment of the power spectrum to great accuracy.

  16. Limits on variations in fundamental constants from 21-cm and ultraviolet Quasar absorption lines.

    PubMed

    Tzanavaris, P; Webb, J K; Murphy, M T; Flambaum, V V; Curran, S J

    2005-07-22

    Quasar absorption spectra at 21-cm and UV rest wavelengths are used to estimate the time variation of x ≡ α²g_p μ, where α is the fine-structure constant, g_p the proton g factor, and μ ≡ m_e/m_p the electron-to-proton mass ratio. Over a redshift range 0.24 ≤ z_abs ≤ 2.04, the weighted total is Δx/x = (1.17 ± 1.01) × 10⁻⁵. A linear fit gives ẋ/x = (-1.43 ± 1.27) × 10⁻¹⁵ yr⁻¹. Combined with two previous results on varying α, this yields the strong limits Δμ/μ = (2.31 ± 1.03) × 10⁻⁵ and Δμ/μ = (1.29 ± 1.01) × 10⁻⁵. Our sample, eight times larger than any previous one, provides the first direct estimate of the intrinsic 21-cm and UV velocity differences, ~6 km s⁻¹.
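Because x ≡ α²g_p μ is a product of powers, its fractional variation separates linearly at first order: Δx/x ≈ 2 Δα/α + Δg_p/g_p + Δμ/μ, which is how a limit on x plus external α constraints translates into a limit on μ. A minimal sketch of that propagation (function names are my own, not from the paper):

```python
# First-order fractional-variation propagation for x = alpha^2 * g_p * mu.
# All arguments are dimensionless fractional variations (e.g. Delta-alpha/alpha).

def frac_variation_x(d_alpha, d_gp, d_mu):
    """Delta-x/x ~= 2*(Delta-alpha/alpha) + (Delta-g_p/g_p) + (Delta-mu/mu)."""
    return 2.0 * d_alpha + d_gp + d_mu

def infer_d_mu(d_x, d_alpha, d_gp=0.0):
    """Invert for Delta-mu/mu given a measured Delta-x/x and external limits
    on alpha and g_p variation (here taken as exact central values)."""
    return d_x - 2.0 * d_alpha - d_gp
```

With Δα/α and Δg_p/g_p taken as negligible, a measured Δx/x maps directly onto Δμ/μ, as in the abstract.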

  17. Lyman-alpha radiative transfer during the epoch of reionization: contribution to 21-cm signal fluctuations

    NASA Astrophysics Data System (ADS)

    Semelin, B.; Combes, F.; Baek, S.

    2007-11-01

    During the epoch of reionization, Ly-α photons emitted by the first stars can couple the neutral hydrogen spin temperature to the kinetic gas temperature, providing an opportunity to observe the gas in emission or absorption in the 21-cm line. Given the bright foregrounds, it is particularly important to determine the fluctuation signature of the signal precisely, so as to be able to extract it by its correlation power. LICORICE is a Monte Carlo radiative transfer code, coupled to the dynamics via an adaptive Tree-SPH code. We present here the Ly-α part of the implementation and validate it through three classical tests. Unlike previous works, we do not assume that P_α, the number of scatterings of Ly-α photons per atom per second, is proportional to the Ly-α background flux, but take scatterings in the Ly-α line wings into account. The latter have the effect of steepening the radial profile of P_α around each source, reinforcing the contrast of the fluctuations. In the particular geometry of cosmic filaments of baryonic matter, Ly-α photons are scattered out of the filament, and the large-scale structure of P_α is significantly anisotropic. This could have strong implications for the possible detection of the 21-cm signal.

  18. Cosmological signatures of tilted isocurvature perturbations: reionization and 21cm fluctuations

    SciTech Connect

    Sekiguchi, Toyokazu; Sugiyama, Naoshi; Tashiro, Hiroyuki; Silk, Joseph E-mail: hiroyuki.tashiro@asu.edu E-mail: naoshi@nagoya-u.jp

    2014-03-01

    We investigate cosmological signatures of uncorrelated isocurvature perturbations whose power spectrum is blue-tilted with spectral index ≳ 2, focusing on their effects on reionization and on 21-cm line fluctuations due to neutral hydrogen in minihalos. A combination of measurements of the reionization optical depth and 21-cm line fluctuations will provide complementary probes of a highly blue-tilted isocurvature power spectrum.

  19. Method for Direct Measurement of Cosmic Acceleration by 21-cm Absorption Systems

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li

    2014-07-01

    So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively.
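The velocity drift the authors propose to measure can be written, for a flat FRW model, as Δv = c H0 Δt [1 − E(z)/(1+z)] with E(z) = H(z)/H0. A hedged sketch of its size over a decade, assuming illustrative fiducial parameters (Ω_m = 0.3, H0 = 70 km/s/Mpc, not necessarily the paper's values):

```python
import math

# Sandage-Loeb velocity drift sketch for flat LCDM (assumed Om, H0).
C_CM_S = 2.99792458e10   # speed of light in cm/s
SEC_PER_YR = 3.1557e7    # seconds per Julian year
KM_PER_MPC = 3.0857e19   # km in one megaparsec

def velocity_drift_cm_s(z, years, om=0.3, h0=70.0):
    """Apparent drift dv = c * H0 * dt * [1 - E(z)/(1+z)], in cm/s.
    Positive values indicate accelerated expansion at that redshift."""
    ez = math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))  # E(z) = H(z)/H0
    h0_si = h0 / KM_PER_MPC                           # H0 in 1/s
    dt = years * SEC_PER_YR                           # baseline in seconds
    return C_CM_S * h0_si * dt * (1.0 - ez / (1.0 + z))
```

For these assumptions the drift at z ≈ 1 is at the level of a few cm/s per decade, and it changes sign at high redshift where the expansion was decelerating, which is why a detection is a direct, model-independent acceleration test.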

  20. A new Tolman test of a cosmic distance duality relation at 21 cm.

    PubMed

    Khedekar, Satej; Chakraborti, Sayan

    2011-06-01

    Under certain general conditions in an expanding universe, the luminosity distance (d_L) and angular diameter distance (d_A) are connected by the Etherington relation d_L = d_A(1+z)². The Tolman test suggests the use of objects of known surface brightness to test this relation. In this Letter, we propose the use of a redshifted 21 cm signal from disk galaxies, where neutral hydrogen (HI) masses are seen to be almost linearly correlated with surface area, to conduct a new Tolman test. We construct simulated catalogs of galaxies, with the observed size-luminosity relation and realistic redshift evolution of HI mass functions, likely to be detected with the planned Square Kilometer Array. We demonstrate that these observations may soon provide the best implementation of the Tolman test to detect any violation of the cosmic distance duality relation.
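A duality-violation test of the kind described amounts to checking whether the Etherington ratio η(z) = d_L / [d_A (1+z)²] is consistent with unity. A minimal sketch (the parametrization is illustrative, not the authors'):

```python
# Etherington distance-duality check: eta = 1 exactly if the relation holds.

def duality_ratio(d_l, d_a, z):
    """Ratio eta(z) = d_L / (d_A * (1+z)^2); departures from 1 signal a
    violation of the cosmic distance duality relation."""
    return d_l / (d_a * (1.0 + z) ** 2)
```

For instance, a galaxy at z = 0.5 with d_A = 100 Mpc should have d_L = 225 Mpc if duality holds, giving η = 1.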

  1. Strong RFI observed in protected 21 cm band at Zurich observatory, Switzerland

    NASA Astrophysics Data System (ADS)

    Monstein, C.

    2014-03-01

    While testing a new antenna control software tool, the telescope was moved to its most westerly azimuth position, pointing at our own building. While the telescope was decelerating, the spectrometer showed strong broadband radio frequency interference (RFI) and two single-frequency carriers around 1412 and 1425 MHz, both within the internationally protected band. After lengthy analysis it was found that the AXIS2000 webcam was the source of both the broadband and the single-frequency interference. Switching off the webcam solved the problem immediately. For future observations of 21 cm radiation, all nearby electronics therefore have to be switched off: not only the webcam but also all unused PCs, printers, network equipment, monitors, etc.

  2. The 21 cm signature of shock heated and diffuse cosmic string wakes

    SciTech Connect

    Hernández, Oscar F.; Brandenberger, Robert H. E-mail: rhb@physics.mcgill.ca

    2012-07-01

    The analysis of the 21 cm signature of cosmic string wakes is extended in several ways. First, we consider the constraints on Gμ from the absorption signal of shock-heated wakes laid down much later than matter-radiation equality. Secondly, we analyse the signal of diffuse wakes, that is, those wakes in which there is a baryon overdensity but which have not been shock heated. Finally, we compare the size of these signals to the expected thermal noise per pixel, which dominates over the background cosmic gas brightness temperature, and find that the cosmic string signal will exceed the thermal noise of an individual pixel in the Square Kilometre Array for string tensions Gμ > 2.5 × 10⁻⁸.

  3. Intensity Based Seismic Hazard Map of Republic of Macedonia

    NASA Astrophysics Data System (ADS)

    Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta

    2016-04-01

    The territory of the Republic of Macedonia and the bordering terrains are among the most seismically active parts of the Balkan Peninsula, belonging to the Mediterranean-Trans-Asian seismic belt. Seismological data on the Republic of Macedonia from the past 16 centuries point to the occurrence of very strong, catastrophic earthquakes. The hypocenters of these earthquakes are located above the Mohorovicic discontinuity, most frequently at a depth of 10-20 km. Accurate short-term prognosis of earthquake occurrence, i.e., simultaneous prediction of the time, place and intensity of an earthquake, is still not possible. However, present methods of seismic zoning have advanced to such an extent that, with high probability, they enable efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary to define the expected level of seismic hazard for certain time periods. These should be amended from time to time with new data and scientific knowledge. The elaboration of this map does not completely resolve all issues related to earthquakes, but it provides the basic empirical data needed to update the existing regulations for construction of engineering structures in seismically active areas, which are governed by legal regulations and technical norms of which the seismic hazard map is a constituent part. The map has been elaborated based on complex seismological and geophysical investigations of the considered area and a synthesis of the results of these investigations. The map was elaborated in two phases. In the first phase, a map of focal zones characterized by the maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides assessment of the

  4. First limits on the 21 cm power spectrum during the Epoch of X-ray heating

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, A.; Dillon, Joshua S.; Hewitt, J. N.; Loeb, A.; Mesinger, A.; Neben, A. R.; Offringa, A. R.; Tegmark, M.; Barry, N.; Beardsley, A. P.; Bernardi, G.; Bowman, Judd D.; Briggs, F.; Cappallo, R. J.; Carroll, P.; Corey, B. E.; de Oliveira-Costa, A.; Emrich, D.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Hurley-Walker, N.; Johnston-Hollitt, M.; Jacobs, Daniel C.; Kaplan, D. L.; Kasper, J. C.; Kim, HS; Kratzenberg, E.; Lenc, E.; Line, J.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Thyagarajan, Nithyanandan; Oberoi, D.; Ord, S. M.; Paul, S.; Pindor, B.; Pober, J. C.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Sethi, Shiv K.; Srivani, K. S.; Subrahmanyan, R.; Sullivan, I. S.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.

    2016-08-01

    We present first results from radio observations with the Murchison Widefield Array seeking to constrain the power spectrum of 21 cm brightness temperature fluctuations between the redshifts of 11.6 and 17.9 (113 and 75 MHz). Three hours of observations were conducted over two nights with significantly different levels of ionospheric activity. We use these data to assess the impact of systematic errors at low frequency, including the ionosphere and radio-frequency interference, on a power spectrum measurement. We find that after the 1-3 h of integration presented here, our measurements at the Murchison Radio Observatory are not limited by RFI, even within the FM band, and that the ionosphere does not appear to affect the level of power in the modes that we expect to be sensitive to cosmology. Power spectrum detections, inconsistent with noise, due to fine spectral structure imprinted on the foregrounds by reflections in the signal chain, occupy the spatial Fourier modes where we would otherwise be most sensitive to the cosmological signal. We are able to reduce this contamination using calibration solutions derived from autocorrelations, achieving a sensitivity of 10⁴ mK on comoving scales k ≲ 0.5 h Mpc⁻¹. This represents the first upper limit on the 21 cm power spectrum fluctuations at redshifts 12 ≲ z ≲ 18 but is still limited by calibration systematics. While calibration improvements may allow us to further remove this contamination, our results emphasize that future experiments should carefully consider the existence of spectral structure within the EoR window and their ability to calibrate it out.

  5. Using 21 cm absorption surveys to measure the average H I spin temperature in distant galaxies

    NASA Astrophysics Data System (ADS)

    Allison, J. R.; Zwaan, M. A.; Duchesne, S. W.; Curran, S. J.

    2016-10-01

    We present a statistical method for measuring the average H I spin temperature in distant galaxies using the expected detection yields from future wide-field 21 cm absorption surveys. As a demonstrative case study, we consider an all-southern-sky simulated survey of 2-h per pointing with the Australian Square Kilometre Array Pathfinder for intervening H I absorbers at intermediate cosmological redshifts between z = 0.4 and 1. For example, if such a survey yielded 1000 absorbers, we would infer a harmonic-mean spin temperature of T̄_spin ~ 100 K for the population of damped Lyman α absorbers (DLAs) at these redshifts, indicating that more than 50 per cent of the neutral gas in these systems is in a cold neutral medium (CNM). Conversely, a lower yield of only 100 detections would imply T̄_spin ~ 1000 K and a CNM fraction less than 10 per cent. We propose that this method can be used to provide independent verification of the spin temperature evolution reported in recent 21 cm surveys of known DLAs at high redshift and for measuring the spin temperature at intermediate redshifts below z ≈ 1.7, where the Lyman α line is inaccessible using ground-based observatories. Increasingly more sensitive and larger surveys with the Square Kilometre Array should provide stronger statistical constraints on the average spin temperature. However, these will ultimately be limited by the accuracy to which we can determine the H I column density frequency distribution, the covering factor and the redshift distribution of the background radio source population.
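The two quoted yield/temperature pairs (1000 absorbers → ~100 K, 100 absorbers → ~1000 K) follow an inverse scaling, reflecting the fact that 21-cm absorption optical depth scales as 1/T_spin. The toy function below encodes only that scaling, calibrated to the abstract's numbers; it is not the authors' full statistical method:

```python
# Toy inverse scaling between survey yield and inferred harmonic-mean spin
# temperature, anchored to the abstract's example (1000 absorbers -> ~100 K).

def inferred_spin_temp_k(n_detections, n_ref=1000, t_ref_k=100.0):
    """Harmonic-mean T_spin (K) implied by a detection yield, assuming the
    yield is inversely proportional to the average spin temperature."""
    return t_ref_k * n_ref / n_detections
```

Under this scaling a yield of 100 detections maps to ~1000 K, matching the abstract's second example.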

  6. Neutral hydrogen in galaxy clusters: impact of AGN feedback and implications for intensity mapping

    NASA Astrophysics Data System (ADS)

    Villaescusa-Navarro, Francisco; Planelles, Susana; Borgani, Stefano; Viel, Matteo; Rasia, Elena; Murante, Giuseppe; Dolag, Klaus; Steinborn, Lisa K.; Biffi, Veronica; Beck, Alexander M.; Ragone-Figueroa, Cinthia

    2016-03-01

    By means of zoom-in hydrodynamic simulations, we quantify the amount of neutral hydrogen (H I) hosted by groups and clusters of galaxies. Our simulations, which are based on an improved formulation of smoothed particle hydrodynamics, include radiative cooling, star formation, metal enrichment and supernova feedback, and can be split into two different groups, depending on whether feedback from active galactic nuclei (AGN) is turned on or off. The simulations are analysed to account for H I self-shielding and the presence of molecular hydrogen. We find that the neutral hydrogen mass of dark matter haloes increases monotonically with halo mass and is well described by a power law of the form M_HI(M, z) ∝ M^{3/4}. Our results indicate that AGN feedback reduces both the total halo mass and its H I mass, although it is more efficient in removing H I. We conclude that AGN feedback reduces the neutral hydrogen mass of a given halo by ~50 per cent, with a weak dependence on halo mass and redshift. The spatial distribution of neutral hydrogen within haloes is also affected by AGN feedback, whose effect is to decrease the fraction of H I residing in the inner halo regions. By extrapolating our results to haloes not resolved in our simulations, we derive astrophysical implications from measurements of Ω_HI(z): haloes with circular velocities larger than ~25 km s⁻¹ are needed to host H I in order to reproduce observations. We find that only the model with AGN feedback is capable of reproducing the value of Ω_HI b_HI derived from available 21 cm intensity mapping observations.
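The two quantitative findings, the M^{3/4} scaling and the ~50 per cent AGN suppression, can be combined in a toy relation. The normalization below is arbitrary (hypothetical), since the abstract quotes only the power-law index and the suppression factor:

```python
# Toy HI-mass / halo-mass relation per the abstract: M_HI ∝ M^(3/4), with AGN
# feedback halving M_HI. The amplitude `a` is a hypothetical normalization.

def m_hi(m_halo, a=1.0, agn=False):
    """Neutral hydrogen mass for a halo of mass m_halo (arbitrary units)."""
    mass = a * m_halo ** 0.75
    return 0.5 * mass if agn else mass
```

Note that because the index is below unity, the HI fraction M_HI/M declines in more massive haloes even before AGN feedback is applied.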

  7. AGB circumstellar environments probed through the 21 cm atomic hydrogen line emission. A programme for the SKA?

    NASA Astrophysics Data System (ADS)

    Gerard, E.; Le Bertre, T.

    2006-06-01

    Red giant stars are responsible for 70% of the recycling of stellar matter into the local interstellar medium (ISM) through mass loss, mainly along the AGB sequence. Most of the matter in circumstellar shells is hydrogen, in atomic or molecular form. However, up to now, atomic hydrogen has remained largely undetected due to the weakness of its emission, the merging of circumstellar matter with the ambient ISM, and confusion with foreground and background interstellar hydrogen along the same line of sight. With the upgraded Nançay Radio Telescope, we have started a new search for H I at 21 cm towards AGB and post-AGB stars, including planetary nebulae. We illustrate our results with one case, EP Aqr, which shows that contamination by interstellar emission must be treated with great care, and discuss the prospects with the SKA. In order to separate the genuine circumstellar H I emission from the interstellar emission, it is necessary to map large areas of the sky (at all angular scales from sub-arcsec to degrees) with high spectral resolution, high sensitivity and a large dynamic range.

  8. Tests of the Tully-Fisher relation. 1: Scatter in infrared magnitude versus 21 cm width

    NASA Technical Reports Server (NTRS)

    Bernstein, Gary M.; Guhathakurta, Puragra; Raychaudhury, Somak; Giovanelli, Riccardo; Haynes, Martha P.; Herter, Terry; Vogt, Nicole P.

    1994-01-01

    We examine the precision of the Tully-Fisher relation (TFR) using a sample of galaxies in the Coma region of the sky, and find that it is good to 5% or better in measuring relative distances. Total magnitudes and disk axis ratios are derived from H- and I-band surface photometry, and Arecibo 21 cm profiles define the rotation speeds of the galaxies. Using 25 galaxies for which the disk inclination and 21 cm width are well defined, we find an rms deviation of 0.10 mag from a linear TFR with dI/d(log W_c) = -5.6. Each galaxy is assumed to be at a distance proportional to its redshift, and an extinction correction of 1.4(1 - b/a) mag is applied to the total I magnitude. The measured scatter is less than 0.15 mag using milder extinction laws from the literature. The I-band TFR scatter is consistent with measurement error, and the 95% CL limits on the intrinsic scatter are 0-0.10 mag. The rms scatter using H-band magnitudes is 0.20 mag (N = 17). The low-width galaxies have scatter in H significantly in excess of known measurement error, but the higher-width half of the galaxies have scatter consistent with measurement error. The H-band TFR slope may be as steep as the I-band slope. As the first applications of this tight correlation, we note the following: (1) the data for the particular spirals commonly used to define the TFR distance to the Coma cluster are inconsistent with being at a common distance and are in fact in free Hubble expansion, with an upper limit of 300 km/s on the rms peculiar line-of-sight velocity of these gas-rich spirals; and (2) the gravitational potential in the disks of these galaxies has a typical ellipticity of less than 5%. The published data for three nearby spiral galaxies with Cepheid distance determinations are inconsistent with our Coma TFR, suggesting that these local calibrators are either ill-measured or peculiar relative to the Coma Supercluster spirals, or that the TFR varies in form between locales.
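The two numbers the abstract quotes, the extinction correction 1.4(1 − b/a) mag and the slope dI/d(log W_c) = −5.6, define a simple linear model; the sketch below encodes just those, with a hypothetical zero point (the abstract does not quote one):

```python
# TFR sketch using the abstract's quoted coefficients. `zero_point` is a
# hypothetical placeholder; only the slope and extinction law come from
# the abstract.

def corrected_i_mag(i_total, b_over_a):
    """Apply the internal-extinction correction of 1.4*(1 - b/a) mag to the
    total I magnitude (correction brightens, i.e. decreases, the magnitude)."""
    return i_total - 1.4 * (1.0 - b_over_a)

def tfr_predicted_mag(log_wc, zero_point=0.0):
    """Linear TFR: I = zero_point - 5.6 * log10(W_c)."""
    return zero_point - 5.6 * log_wc
```

A face-on disk (b/a = 1) receives no correction, while more inclined disks are corrected toward brighter magnitudes before being compared with the TFR.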

  9. Testing gravity at large scales with H I intensity mapping

    NASA Astrophysics Data System (ADS)

    Pourtsidou, Alkistis

    2016-09-01

    We investigate the possibility of testing Einstein's general theory of relativity (GR) and the standard cosmological model via the EG statistic using neutral hydrogen (H I) intensity mapping. We generalize the Fourier space estimator for EG to include H I as a biased tracer of matter and forecast statistical errors using H I clustering and lensing surveys that can be performed in the near future, in combination with ongoing and forthcoming optical galaxy and cosmic microwave background (CMB) surveys. We find that fractional errors <1 per cent in the EG measurement can be achieved in a number of cases and compare the ability of various survey combinations to differentiate between GR and specific modified gravity models. Measuring EG with intensity mapping and the Square Kilometre Array can provide exquisite tests of gravity at cosmological scales.

  10. Improved foreground removal in GMRT 610 MHz observations towards redshifted 21-cm tomography

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhik; Bharadwaj, Somnath; Ali, Sk. Saiyad; Chengalur, Jayaram N.

    2011-12-01

    Foreground removal is a challenge for 21-cm tomography of the high-redshift Universe. We use archival Giant Metrewave Radio Telescope (GMRT) data (obtained for completely different astronomical goals) to estimate the foregrounds at a redshift of ~1. The statistic we use is the cross power spectrum between two frequencies separated by Δν at the angular multipole ℓ, or equivalently the multi-frequency angular power spectrum Cℓ(Δν). An earlier measurement of Cℓ(Δν) using these data had revealed the presence of oscillatory patterns along Δν, which turned out to be a severe impediment for foreground removal. Using the same data, in this paper we show that it is possible to considerably reduce these oscillations by suppressing the sidelobe response of the primary antenna elements. The suppression works best at the angular multipoles ℓ for which there is a dense sampling of the u-v plane. For three angular multipoles, ℓ = 1405, 1602 and 1876, this sidelobe suppression along with low-order polynomial fitting results in residuals of ≤ 0.02 mK², consistent with the noise at the 3σ level. Since the polynomial fitting is done after estimation of the power spectrum, it can be ensured that the estimation of the H I signal is not biased. The corresponding 99 per cent upper limit on the H I signal is ?, where ? is the mean neutral fraction and b is the bias.

  11. Radio frequency interference at Jodrell Bank Observatory within the protected 21 cm band.

    PubMed

    Tarter, J

    1989-01-01

    Radio frequency interference (RFI) will provide one of the most difficult challenges to systematic Searches for Extraterrestrial Intelligence (SETI) at microwave frequencies. The SETI-specific equipment is being optimized for the detection of signals generated by a technology rather than those generated by natural processes in the universe. If this equipment performs as expected, then it will inevitably detect many signals originating from terrestrial technology. If these terrestrial signals are too numerous and/or strong, the equipment will effectively be blinded to the (presumably) weaker extraterrestrial signals being sought. It is very difficult to assess how much of a problem RFI will actually represent to future observations, without employing the equipment and beginning the search. In 1983 a very high resolution spectrometer was placed at the Nuffield Radio Astronomy Laboratories at Jodrell Bank, England. This equipment permitted an investigation of the interference environment at Jodrell Bank, at that epoch, and at frequencies within the 21 cm band. This band was chosen because it has long been "protected" by international agreement; no transmitters should have been operating at those frequencies. The data collected at Jodrell Bank were expected to serve as a "best case" interference scenario and provide the minimum design requirements for SETI equipment that must function in the real and noisy environment. This paper describes the data collection and analysis along with some preliminary conclusions concerning the nature of the interference environment at Jodrell Bank.

  12. Constraining high-redshift X-ray sources with next generation 21-cm power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Hewitt, Jacqueline; Mesinger, Andrei; Dillon, Joshua S.; Liu, Adrian; Pober, Jonathan

    2016-05-01

    We use the Fisher matrix formalism and seminumerical simulations to derive quantitative predictions of the constraints that power spectrum measurements on next-generation interferometers, such as the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array (SKA), will place on the characteristics of the X-ray sources that heated the high-redshift intergalactic medium. Incorporating observations between z = 5 and 25, we find that the proposed 331 element HERA and SKA phase 1 will be capable of placing ≲ 10 per cent constraints on the spectral properties of these first X-ray sources, even if one is unable to perform measurements within the foreground contaminated `wedge' or the FM band. When accounting for the enhancement in power spectrum amplitude from spin temperature fluctuations, we find that the observable signatures of reionization extend well beyond the peak in the power spectrum usually associated with it. We also find that lower redshift degeneracies between the signatures of heating and reionization physics lead to errors on reionization parameters that are significantly greater than previously predicted. Observations over the heating epoch are able to break these degeneracies and improve our constraints considerably. For these two reasons, 21-cm observations during the heating epoch significantly enhance our understanding of reionization as well.

  13. Development of a Toy Interferometer for Education and Observation of Sun at 21 cm

    NASA Astrophysics Data System (ADS)

    Park, Yong-Sun; Kim, Chang Hee; Choi, Sang In; Lee, Joo Young; Jang, Woo Min; Kim, Woo Yeon; Jeong, Dae Heon

    2008-06-01

    As a continuation of a previous work by Park et al. (2006), we have developed a two-element radio interferometer that can measure both the phase and amplitude of a visibility function. Two small radio telescopes with diameters of 2.3 m are used as before, but this time an external reference oscillator is shared by the two telescopes so that the local oscillator frequencies are identical. We do not use a hardware correlator; instead we record signals from the two telescopes onto a PC and then perform software correlation. Complex visibilities are obtained toward the Sun at λ = 21 cm, for 24 baselines with the use of the Earth's rotation and positional changes of one element, where the maximum baseline length projected onto the uv plane is ~90 λ. As expected, the visibility amplitude decreases with baseline length, while the phase is almost constant. The image obtained by Fourier transformation of the visibility function nicely delineates the Sun, which is barely resolved due to the limited baseline length. The experiment demonstrates that this system can be used as a "toy" interferometer, at least for the education of (under)graduate students.

  14. Scintillation noise power spectrum and its impact on high-redshift 21-cm observations

    NASA Astrophysics Data System (ADS)

    Vedantham, H. K.; Koopmans, L. V. E.

    2016-05-01

    Visibility scintillation resulting from wave propagation through the turbulent ionosphere can be an important source of noise at low radio frequencies (ν ≲ 200 MHz). Many low-frequency experiments are underway to detect the power spectrum of brightness temperature fluctuations of the neutral-hydrogen 21-cm signal from the Epoch of Reionization (EoR: 12 ≳ z ≳ 7, 100 ≲ ν ≲ 175 MHz). In this paper, we derive scintillation noise power spectra in such experiments while taking into account the effects of typical data processing operations such as self-calibration and Fourier synthesis. We find that for minimally redundant arrays such as LOFAR and MWA, scintillation noise is of the same order of magnitude as thermal noise, has a spectral coherence dictated by stretching of the snapshot uv-coverage with frequency, and thus is confined to the well-known wedge-like structure in the cylindrical (two-dimensional) power spectrum space. Compact, fully redundant (d_core ≲ r_F ≈ 300 m at 150 MHz) arrays such as HERA and SKA-LOW (core) will be scintillation noise dominated at all baselines, but the spatial and frequency coherence of this noise will allow it to be removed along with spectrally smooth foregrounds.
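The quoted ~300 m scale at 150 MHz is consistent with the ionospheric Fresnel scale r_F = sqrt(λh/2π) for a screen height of a few hundred km. A minimal sketch, assuming a 300 km screen height (a typical value, not quoted in the abstract):

```python
import math

# Fresnel-scale sketch for an ionospheric phase screen. The 300 km screen
# height is an assumed typical value, not taken from the paper.

def fresnel_scale_m(freq_mhz, screen_height_km=300.0):
    """r_F = sqrt(lambda * h / (2*pi)) in metres, for wavelength lambda at
    freq_mhz and a phase screen at distance h."""
    lam = 299.792458 / freq_mhz   # wavelength in metres
    h = screen_height_km * 1e3    # screen distance in metres
    return math.sqrt(lam * h / (2.0 * math.pi))
```

Arrays whose core baselines are shorter than this scale (HERA, the SKA-LOW core) see correlated scintillation across the array, which is what makes the noise coherent and therefore removable.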

  15. Empirical covariance modeling for 21 cm power spectrum estimation: A method demonstration and new limits from early Murchison Widefield Array 128-tile data

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Neben, Abraham R.; Hewitt, Jacqueline N.; Tegmark, Max; Barry, N.; Beardsley, A. P.; Bowman, J. D.; Briggs, F.; Carroll, P.; de Oliveira-Costa, A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hernquist, L.; Hurley-Walker, N.; Jacobs, D. C.; Kim, H. S.; Kittiwisit, P.; Lenc, E.; Line, J.; Loeb, A.; McKinley, B.; Mitchell, D. A.; Morales, M. F.; Offringa, A. R.; Paul, S.; Pindor, B.; Pober, J. C.; Procopio, P.; Riding, J.; Sethi, S.; Shankar, N. Udaya; Subrahmanyan, R.; Sullivan, I.; Thyagarajan, Nithyanandan; Tingay, S. J.; Trott, C.; Wayth, R. B.; Webster, R. L.; Wyithe, S.; Bernardi, G.; Cappallo, R. J.; Deshpande, A. A.; Johnston-Hollitt, M.; Kaplan, D. L.; Lonsdale, C. J.; McWhirter, S. R.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Srivani, K. S.; Williams, A.; Williams, C. L.

    2015-06-01

    The separation of the faint cosmological background signal from bright astrophysical foregrounds remains one of the most daunting challenges of mapping the high-redshift intergalactic medium with the redshifted 21 cm line of neutral hydrogen. Advances in mapping and modeling of diffuse and point source foregrounds have improved subtraction accuracy, but no subtraction scheme is perfect. Precisely quantifying the errors and error correlations due to missubtracted foregrounds allows for both the rigorous analysis of the 21 cm power spectrum and for the maximal isolation of the "EoR window" from foreground contamination. We present a method to infer the covariance of foreground residuals from the data itself in contrast to previous attempts at a priori modeling. We demonstrate our method by setting limits on the power spectrum using a 3 h integration from the 128-tile Murchison Widefield Array. Observing between 167 and 198 MHz, we find at 95% confidence a best limit of Δ²(k) < 3.7 × 10⁴ mK² at comoving scale k = 0.18 h Mpc⁻¹ and at z = 6.8, consistent with existing limits.

  16. Sensitive 21cm Observations of Neutral Hydrogen in the Local Group near M31

    NASA Astrophysics Data System (ADS)

    Wolfe, Spencer A.; Lockman, Felix J.; Pisano, D. J.

    2016-01-01

    Very sensitive 21 cm H I measurements have been made at several locations around the Local Group galaxy M31 using the Green Bank Telescope at an angular resolution of 9.1 arcmin, with a 5σ detection level of N_HI = 3.9 × 10¹⁷ cm⁻² for a 30 km s⁻¹ line. Most of the H I in a 12 square-degree area almost equidistant between M31 and M33 is contained in nine discrete clouds that have a typical size of a few kpc and an H I mass of 10⁵ M⊙. Their velocities in the Local Group Standard of Rest lie between -100 and +40 km s⁻¹, comparable to the systemic velocities of M31 and M33. The clouds appear to be isolated kinematically and spatially from each other. The total H I mass of all nine clouds is 1.4 × 10⁶ M⊙ for an adopted distance of 800 kpc, with perhaps another 0.2 × 10⁶ M⊙ in smaller clouds or more diffuse emission. The H I mass of each cloud is typically three orders of magnitude less than the dynamical (virial) mass needed to bind the cloud gravitationally. Although they have the size and H I mass of dwarf galaxies, the clouds are unlikely to be part of the satellite system of the Local Group, as they lack stars. To the north of M31, sensitive H I measurements on a coarse grid find emission that may be associated with an extension of the M31 high-velocity cloud (HVC) population to projected distances of ~100 kpc. An extension of the M31 HVC population at a similar distance to the southeast, toward M33, is not observed.
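The virial comparison in this abstract is a standard order-of-magnitude estimate, M_dyn ~ 5 σ² R / G. A hedged sketch using illustrative numbers taken from the abstract (a 30 km/s linewidth and a ~kpc cloud size), not the paper's actual fit:

```python
# Gravitational constant in convenient units: pc * (km/s)^2 / M_sun
G = 4.301e-3

def virial_mass(sigma_kms, radius_pc):
    """Dynamical (virial) mass in M_sun needed to bind a cloud of
    velocity dispersion sigma (km/s) and radius R (pc)."""
    return 5.0 * sigma_kms**2 * radius_pc / G

sigma = 30.0 / 2.355        # dispersion from a 30 km/s FWHM Gaussian line
m_dyn = virial_mass(sigma, 1000.0)   # ~1 kpc cloud radius (illustrative)
m_hi = 1e5                           # typical per-cloud HI mass, M_sun

print(f"M_dyn ~ {m_dyn:.1e} M_sun, ratio M_dyn/M_HI ~ {m_dyn / m_hi:.0f}")
```

With these inputs the ratio comes out near a few thousand, consistent with the abstract's statement that the H I mass is typically three orders of magnitude below the binding mass.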

  17. A Flux Scale for Southern Hemisphere 21 cm Epoch of Reionization Experiments

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki; Bowman, Judd; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Klima, Pat; MacMahon, Dave H. E.; Manley, Jason R.; Moore, David F.; Pober, Jonathan C.; Stefan, Irina I.; Walbrugh, William P.

    2013-10-01

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from -46° to -40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of -0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
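The single power-law model quoted for Pictor A, S(ν) = S₁₅₀ (ν/150 MHz)^α with S₁₅₀ = 382 Jy and α = -0.76, can be written and fit in a few lines. A sketch using noiseless synthetic points in place of the real catalog and PAPER measurements:

```python
import numpy as np

def flux_model(nu_mhz, s150=382.0, alpha=-0.76):
    """Power-law flux density (Jy), pivoted at 150 MHz."""
    return s150 * (nu_mhz / 150.0) ** alpha

# Spectral-index fitting is linear in log-log space: slope = alpha,
# intercept = log10(S_150) when frequencies are scaled by the pivot.
nu = np.array([100.0, 150.0, 200.0, 843.0, 1400.0])  # MHz (illustrative)
s = flux_model(nu)
alpha_fit, log_s150 = np.polyfit(np.log10(nu / 150.0), np.log10(s), 1)
print(alpha_fit, 10 ** log_s150)  # recovers -0.76 and 382 Jy
```

Real fits (as in the paper's Monte Carlo approach) would propagate both catalog and measurement uncertainties rather than fit exact points.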

  18. The impact of spin-temperature fluctuations on the 21-cm moments

    NASA Astrophysics Data System (ADS)

    Watkinson, C. A.; Pritchard, J. R.

    2015-12-01

    This paper considers the impact of Lyman α coupling and X-ray heating on the 21-cm brightness-temperature one-point statistics (as predicted by seminumerical simulations). The X-ray production efficiency is varied over four orders of magnitude and the hardness of the X-ray spectrum is varied from that predicted for high-mass X-ray binaries, to the softer spectrum expected from the hot interstellar medium. We find peaks in the redshift evolution of both the variance and skewness associated with the efficiency of X-ray production. The amplitude of the variance is also sensitive to the hardness of the X-ray spectral energy distribution. We find that the relative timing of the coupling and heating phases can be inferred from the redshift extent of a plateau that connects a peak in the variance's evolution associated with Lyman α coupling to the heating peak. Importantly, we find that late X-ray heating would seriously hamper our ability to constrain reionization with the variance. Late X-ray heating also qualitatively alters the evolution of the skewness, providing a clean way to constrain such models. If foregrounds can be removed, we find that LOFAR, MWA and PAPER could constrain reionization and late X-ray heating models with the variance. We find that HERA and SKA (phase 1) will be able to constrain both reionization and heating by measuring the variance using foreground-avoidance techniques. If foregrounds can be removed they will also be able to constrain the nature of Lyman α coupling.
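The one-point statistics discussed above (variance and skewness of the 21-cm brightness-temperature field) are straightforward to compute from a simulated box. A minimal sketch in which a Gaussian random field stands in for a real seminumerical simulation output:

```python
import numpy as np

def one_point_stats(delta_tb):
    """Variance (mK^2) and dimensionless skewness of a brightness
    temperature cube."""
    x = delta_tb.ravel()
    mean = x.mean()
    var = x.var()
    skew = np.mean((x - mean) ** 3) / var ** 1.5
    return var, skew

rng = np.random.default_rng(1)
field = rng.normal(loc=0.0, scale=5.0, size=(64, 64, 64))  # mK, toy field
var, skew = one_point_stats(field)
print(var, skew)  # variance near 25 mK^2; skewness near 0 for a Gaussian
```

In the paper's models it is the departure of these moments from Gaussian values, and their redshift evolution, that traces Lyman α coupling, X-ray heating, and reionization.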

  19. A Practical Theorem on Using Interferometry to Measure the Global 21-cm Signal

    NASA Astrophysics Data System (ADS)

    Venumadhav, Tejaswi; Chang, Tzu-Ching; Doré, Olivier; Hirata, Christopher M.

    2016-08-01

    The sky-averaged, or global, background of redshifted 21 cm radiation is expected to be a rich source of information on cosmological reheating and reionization. However, measuring the signal is technically challenging: one must extract a small, frequency-dependent signal from under much brighter spectrally smooth foregrounds. Traditional approaches to study the global signal have used single antennas, which require one to calibrate out the frequency-dependent structure in the overall system gain (due to internal reflections, for example) as well as remove the noise bias from auto-correlating a single amplifier output. This has motivated proposals to measure the signal using cross-correlations in interferometric setups, where additional calibration techniques are available. In this paper we focus on the general principles driving the sensitivity of the interferometric setups to the global signal. We prove that this sensitivity is directly related to two characteristics of the setup: the cross-talk between readout channels (i.e., the signal picked up at one antenna when the other one is driven) and the correlated noise due to thermal fluctuations of lossy elements (e.g., absorbers or the ground) radiating into both channels. Thus in an interferometric setup, one cannot suppress cross-talk and correlated thermal noise without reducing sensitivity to the global signal by the same factor—instead, the challenge is to characterize these effects and their frequency dependence. We illustrate our general theorem by explicit calculations within toy setups consisting of two short-dipole antennas in free space and above a perfectly reflecting ground surface, as well as two well-separated identical lossless antennas arranged to achieve zero cross-talk.

  20. Models of the Cosmological 21 cm Signal from the Epoch of Reionization Calibrated with Lyα and CMB Data

    NASA Astrophysics Data System (ADS)

    Kulkarni, Girish; Choudhury, Tirthankar Roy; Puchwein, Ewald; Haehnelt, Martin G.

    2016-08-01

    We present here 21 cm predictions from high-dynamic-range simulations for a range of reionization histories that have been tested against available Lyα and CMB data. We assess the observability of the predicted spatial 21 cm fluctuations by ongoing and upcoming experiments in the late stages of reionization in the limit in which the hydrogen spin temperature is significantly larger than the CMB temperature. Models consistent with the available Lyα data and the CMB measurement of the Thomson optical depth predict typical values of 10-20 mK² for the variance of the 21 cm brightness temperature at redshifts z = 7-10 at scales accessible to ongoing and upcoming experiments (k ≲ 1 h cMpc⁻¹). This is within a factor of a few of the sensitivity claimed to have been already reached by ongoing experiments in the signal rms value. Our different models for the reionization history make markedly different predictions for the redshift evolution and thus frequency dependence of the 21 cm power spectrum and should be easily discernible by LOFAR (and later HERA and SKA1) at their design sensitivity. Our simulations have sufficient resolution to assess the effect of high-density Lyman-limit systems that can self-shield against ionizing radiation and stay 21 cm bright even if the hydrogen in their surroundings is highly ionized. Our simulations predict that including the effect of the self-shielded gas in highly ionized regions reduces the large-scale 21 cm power by about 30%.

  1. The Evolution Of 21 cm Structure (EOS): public, large-scale simulations of Cosmic Dawn and reionization

    NASA Astrophysics Data System (ADS)

    Mesinger, Andrei; Greig, Bradley; Sobacchi, Emanuele

    2016-07-01

    We introduce the Evolution Of 21 cm Structure (EOS) project: providing periodic, public releases of the latest cosmological 21 cm simulations. 21 cm interferometry is set to revolutionize studies of the Cosmic Dawn (CD) and Epoch of Reionization (EoR). Progress will depend on sophisticated data analysis pipelines, initially tested on large-scale mock observations. Here we present the 2016 EOS release: 1024³, 1.6 Gpc 21 cm simulations of the CD and EoR, calibrated to the Planck 2015 measurements. We include calibrated, sub-grid prescriptions for inhomogeneous recombinations and photoheating suppression of star formation in small-mass galaxies. Leaving the efficiency of supernova feedback as a free parameter, we present two runs which bracket the contribution from faint unseen galaxies. From these two extremes, we predict that the duration of reionization (defined as a change in the mean neutral fraction from 0.9 to 0.1) should be 2.7 ≲ Δz_re ≲ 5.7. The large-scale 21 cm power during the advanced EoR stages can differ by up to a factor of ~10, depending on the model. This difference has comparable contributions from (i) the typical bias of sources and (ii) a more efficient negative feedback in models with an extended EoR driven by faint galaxies. We also present detectability forecasts. With a 1000 h integration, the Hydrogen Epoch of Reionization Array (HERA) and Square Kilometre Array phase 1 (SKA1) should achieve a signal-to-noise of ~ a few to hundreds throughout the EoR/CD. We caution that our ability to clean foregrounds determines the relative performance of narrow/deep versus wide/shallow surveys expected with SKA1. Our 21-cm power spectra, simulation outputs and visualizations are publicly available.
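The duration definition used above, the redshift interval over which the mean neutral fraction falls from 0.9 to 0.1, can be read off any reionization history by interpolation. A sketch with a toy tanh-shaped history (illustrative only, not an EOS simulation output):

```python
import numpy as np

def duration(z, xhi):
    """Delta z_re between the x_HI = 0.9 and x_HI = 0.1 crossings.
    Assumes xhi increases monotonically with z."""
    return np.interp(0.9, xhi, z) - np.interp(0.1, xhi, z)

z = np.linspace(5.0, 14.0, 901)
xhi = 0.5 * (1.0 + np.tanh((z - 8.0) / 1.2))  # toy reionization history
dz_re = duration(z, xhi)
print(dz_re)  # ~2.6 for this toy history
```

Applied to the two bracketing EOS runs, the same operation yields the quoted range 2.7 ≲ Δz_re ≲ 5.7.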

  2. NEW EVIDENCE FOR MASS LOSS FROM δ CEPHEI FROM H I 21 cm LINE OBSERVATIONS

    SciTech Connect

    Matthews, L. D.; Marengo, M.; Evans, N. R.; Bono, G.

    2012-01-01

    Recently published Spitzer Space Telescope observations of the classical Cepheid archetype δ Cephei revealed an extended dusty nebula surrounding this star and its hot companion HD 213307. At far-infrared wavelengths, the emission resembles a bow shock aligned with the direction of space motion of the star, indicating that δ Cephei is undergoing mass loss through a stellar wind. Here we report H I 21 cm line observations with the Very Large Array (VLA) to search for neutral atomic hydrogen associated with this wind. Our VLA data reveal a spatially extended H I nebula (~13' or 1 pc across) surrounding the position of δ Cephei. The nebula has a head-tail morphology, consistent with circumstellar ejecta shaped by the interaction between a stellar wind and the interstellar medium (ISM). We directly measure a mass of circumstellar atomic hydrogen M_HI ≈ 0.07 M⊙, although the total H I mass may be larger, depending on the fraction of circumstellar material that is hidden by Galactic contamination within our band or that is present on angular scales too large to be detected by the VLA. It appears that the bulk of the circumstellar gas has originated directly from the star, although it may be augmented by material swept from the surrounding ISM. The H I data are consistent with a stellar wind with an outflow velocity V_o = 35.6 ± 1.2 km s⁻¹ and a mass-loss rate of Ṁ ≈ (1.0 ± 0.8) × 10⁻⁶ M⊙ yr⁻¹. We have computed theoretical evolutionary tracks that include mass loss across the instability strip and show that a mass-loss rate of this magnitude, sustained over the preceding Cepheid lifetime of δ Cephei, could be sufficient to resolve a significant fraction of the discrepancy between the pulsation and evolutionary masses for this star.

  4. Light-cone anisotropy in the 21 cm signal from the epoch of reionization

    NASA Astrophysics Data System (ADS)

    Zawada, Karolina; Semelin, Benoît; Vonlanthen, Patrick; Baek, Sunghye; Revaz, Yves

    2014-04-01

    Using a suite of detailed numerical simulations, we estimate the level of anisotropy generated by the time evolution along the light cone of the 21 cm signal from the epoch of reionization. Our simulations include the physics necessary to model the signal during both the late emission regime and the early absorption regime, namely X-ray and Lyman band 3D radiative transfer in addition to the usual dynamics and ionizing UV transfer. The signal is analysed using correlation functions perpendicular and parallel to the line of sight. We reproduce general findings from previous theoretical studies: the overall amplitude of the correlations and the fact that the light-cone anisotropy is visible only on large scales (100 comoving Mpc). However, the detailed behaviour is different. We find that, at three different epochs, the amplitudes of the correlations along and perpendicular to the line of sight differ from each other, indicating anisotropy. We show that these three epochs are associated with three events of the global reionization history: the overlap of ionized bubbles, the onset of mild heating by X-rays in regions around the sources, and the onset of efficient Lyman α coupling in regions around the sources. We find that a 20 × 20 deg² survey area may be necessary to mitigate sample variance when we use the directional correlation functions. On a 100 Mpc (comoving) scale, we show that the light-cone anisotropy dominates over the anisotropy generated by peculiar velocity gradients computed in the linear regime. By modelling instrumental noise and limited resolution, we find that the anisotropy should be easily detectable by the Square Kilometre Array, assuming perfect foreground removal, the limiting factor being a large enough survey size. In the case of the Low-Frequency Array for radio astronomy, it is likely that only one anisotropy episode (ionized bubble overlap) will fall in the observing frequency range. This episode will be detectable only if sample

  5. Probing primordial non-Gaussianity: the 3D Bispectrum of Ly-α forest and the redshifted 21-cm signal from the post reionization epoch

    SciTech Connect

    Sarkar, Tapomoy Guha; Hazra, Dhiraj Kumar

    2013-04-01

    We explore the possibility of using the three-dimensional bispectra of the Ly-α forest and the redshifted 21-cm signal from the post-reionization epoch to constrain primordial non-Gaussianity. Both these fields map out the large-scale distribution of neutral hydrogen and may be treated as tracers of the underlying dark matter field. We first present the general formalism for the auto and cross bispectrum of two arbitrary three-dimensional biased tracers and then apply it to the specific case. We have modeled the 3D Ly-α transmitted flux field as a continuous tracer sampled along 1D skewers which correspond to quasar sight lines. For the post-reionization 21-cm signal we have used a linear bias model. We use a Fisher matrix analysis to present the first prediction for bounds on f_NL and the other bias parameters using the three-dimensional 21-cm bispectrum and other cross bispectra. The bounds on f_NL depend on the survey volume and the various observational noises. We have considered a BOSS-like Ly-α survey where the average number density of quasars is n̄ = 10⁻³ Mpc⁻² and the spectra are measured at a 2σ level. For the 21-cm signal we have considered a 4000 hr observation with a futuristic SKA-like radio array. We find that the bounds on f_NL obtained in our analysis (6 ≤ Δf_NL ≤ 65) are competitive with CMBR and galaxy surveys and may prove to be an important alternative approach towards constraining primordial physics using future data sets. Further, we have presented a hierarchy of power of the bispectrum estimators towards detecting f_NL. Given the quality of the data sets, one may use this method to optimally choose the right estimator and thereby provide better constraints on f_NL. We also find that by combining the various cross-bispectrum estimators it is possible to constrain f_NL at a level Δf_NL ∼ 4.7. For the equilateral and orthogonal templates we obtain Δf_NL^equ ∼ 17 and

  6. Empowering line intensity mapping to study early galaxies

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Ferrara, A.

    2016-09-01

    Line intensity mapping is a superb tool to study the collective radiation from early galaxies. However, the method is hampered by the presence of strong foregrounds, mostly produced by low-redshift interloping lines. We present here a general method to overcome this problem which is robust against foreground residual noise and based on the cross-correlation function ψ_αL(r) between diffuse line emission and Lyα emitters (LAE). We compute the diffuse line (Lyα is used as an example) emission from galaxies in an (800 Mpc)³ box at z = 5.7 and 6.6. We divide the box in slices and populate them with 14000 (5500) LAEs at z = 5.7 (6.6), considering duty cycles from 10⁻³ to 1. Both the LAE number density and slice volume are consistent with the expected outcome of the Subaru HSC survey. We add Gaussian random noise with variance σ_N up to 100 times the variance of the Lyα emission, σ_α, to simulate residual foregrounds and compute ψ_αL(r). We find that the signal-to-noise of the observed ψ_αL(r) does not change significantly if σ_N ≤ 10σ_α and show that in these conditions the mean line intensity, I_Lyα, can be precisely recovered independently of the LAE duty cycle. Even if σ_N = 100σ_α, I_α can be constrained within a factor of 2. The method works equally well for any other line (e.g. [CII], HeII) used for the intensity mapping experiment.

  7. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to point-clouds. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or the failure of GPS acquisition. In this paper, we propose a new method to correct RGB colors of point-clouds captured by an MMS. In our method, RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting RGB colors of point-clouds captured by an MMS.

  8. On Removing Interloper Contamination from Intensity Mapping Power Spectrum Measurements

    NASA Astrophysics Data System (ADS)

    Lidz, Adam; Taylor, Jessie

    2016-07-01

    Line intensity mapping experiments seek to trace large-scale structures by measuring the spatial fluctuations in the combined emission, in some convenient spectral line, from individually unresolved galaxies. An important systematic concern for these surveys is line confusion from foreground or background galaxies emitting in other lines that happen to lie at the same observed frequency as the “target” emission line of interest. We develop an approach to separate this “interloper” emission at the power spectrum level. If one adopts the redshift of the target emission line in mapping from observed frequency and angle on the sky to co-moving units, the interloper emission is mapped to the wrong co-moving coordinates. Because the mapping is different in the line of sight and transverse directions, the interloper contribution to the power spectrum becomes anisotropic, especially if the interloper and target emission are at widely separated redshifts. This distortion is analogous to the Alcock–Paczynski test, but here the warping arises from assuming the wrong redshift rather than an incorrect cosmological model. We apply this to the case of a hypothetical [C ii] emission survey at z ~ 7 and find that the distinctive interloper anisotropy can, in principle, be used to separate strong foreground CO emission fluctuations. In our models, however, a significantly more sensitive instrument than currently planned is required, although there are large uncertainties in forecasting the high-redshift [C ii] emission signal. With upcoming surveys, it may nevertheless be useful to apply this approach after first masking pixels suspected of containing strong interloper contamination.
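The coordinate warping described above can be quantified by the rescaling factors that interloper separations pick up, perpendicular and parallel to the line of sight, when the emission is mapped to comoving units at the wrong redshift. A hedged sketch under assumed flat ΛCDM parameters (H0 = 70, Ωm = 0.3, illustrative only); the parallel factor uses the standard relation dχ ∝ (1 + z)/H(z) at fixed observed frequency interval, not an expression taken from this paper:

```python
import numpy as np

C_KMS = 2.998e5           # speed of light, km/s
H0, OM = 70.0, 0.3        # assumed flat-LCDM parameters

def hubble(z):
    """Hubble rate H(z) in km/s/Mpc for flat LCDM."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + (1.0 - OM))

def comoving_distance(z, n=4096):
    """Line-of-sight comoving distance (Mpc) via the trapezoid rule."""
    zz = np.linspace(0.0, z, n)
    f = C_KMS / hubble(zz)
    return 0.5 * np.sum((f[:-1] + f[1:]) * np.diff(zz))

def warp_factors(z_target, z_interloper):
    """(perpendicular, parallel) rescalings applied to interloper
    separations when mapped assuming the target redshift."""
    perp = comoving_distance(z_target) / comoving_distance(z_interloper)
    par = ((1.0 + z_target) / hubble(z_target)) / (
          (1.0 + z_interloper) / hubble(z_interloper))
    return perp, par

# e.g. low-redshift CO emitters contaminating a [C ii] survey at z ~ 7
perp, par = warp_factors(7.0, 0.3)
print(perp, par)  # unequal factors -> anisotropic interloper power
```

Because the two factors differ strongly for widely separated redshifts, an isotropic interloper power spectrum appears anisotropic in the adopted coordinates, which is the handle the method exploits.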

  10. X-rays and hard ultraviolet radiation from the first galaxies: ionization bubbles and 21-cm observations

    NASA Astrophysics Data System (ADS)

    Venkatesan, Aparna; Benson, Andrew

    2011-11-01

    The first stars and quasars are known sources of hard ionizing radiation in the first billion years of the Universe. We examine the joint effects of X-rays and hard ultraviolet (UV) radiation from such first-light sources on the hydrogen and helium reionization of the intergalactic medium (IGM) at early times, and the associated heating. We study the growth and evolution of individual H II, He II and He III regions around early galaxies with first stars and/or quasi-stellar object populations. We find that in the presence of helium-ionizing radiation, X-rays may not dominate the ionization and thermal history of the IGM at z ~ 10-20, contributing relatively modest increases to IGM ionization and heating up to ~10³-10⁵ K in IGM temperatures. We also calculate the 21-cm signal expected from a number of scenarios with metal-free starbursts and quasars in varying combinations and masses at these redshifts. The peak values for the spin temperature reach ~10⁴-10⁵ K in such cases. The maximum values for the 21-cm brightness temperature are around 30-40 mK in emission, while the net values of the 21-cm absorption signal range from ~a few to 60 mK on scales of 0.01-1 Mpc. We find that the 21-cm signature of X-ray versus UV ionization could be distinct, with the emission signal expected from X-rays alone occurring at smaller scales than that from UV radiation, resulting from the inherently different spatial scales at which X-ray and UV ionization/heating manifests. This difference is time-dependent and becomes harder to distinguish with an increasing X-ray contribution to the total ionizing photon production. Such differing scale-dependent contributions from X-ray and UV photons may therefore 'blur' the 21-cm signature of the percolation of ionized bubbles around early haloes (depending on whether a cosmic X-ray or UV background is built up first) and affect the interpretation of 21-cm data constraints on reionization.

  11. Hydrogen and the First Stars: First Results from the SCI-HI 21-cm all-sky spectrum experiment

    NASA Astrophysics Data System (ADS)

    Voytek, Tabitha; Peterson, Jeffrey; Lopez-Cruz, Omar; Jauregui-Garcia, Jose-Miguel; SCI-HI Experiment Team

    2015-01-01

    The 'Sonda Cosmologica de las Islas para la Deteccion de Hidrogeno Neutro' (SCI-HI) experiment is an all-sky 21-cm brightness-temperature spectrum experiment studying the cosmic dawn (z ~ 15-35). The experiment is a collaboration between Carnegie Mellon University (CMU) and the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) in Mexico. Initial deployment of the SCI-HI experiment occurred in June 2013 on Guadalupe, a small island about 250 km off the Pacific coast of Baja California in Mexico. Preliminary measurements from this deployment have placed the first observational constraints on the 21-cm all-sky spectrum around 70 MHz (z ~ 20); see Voytek et al. (2014). Neutral hydrogen (HI) is found throughout the universe in the cold gas that makes up the intergalactic medium (IGM). HI can be observed through its spectral line at 21 cm (1.4 GHz), which arises from hyperfine structure. Expansion of the universe stretches the wavelength of this spectral line at a rate defined by the redshift z, leading to a signal that can be followed through time. The strength of the 21-cm signal from the IGM depends on only a small number of variables: the temperature and density of the IGM, the amount of HI in the IGM, the UV energy density in the IGM, and the redshift. This means that 21-cm measurements teach us about the history and structure of the IGM. The SCI-HI experiment focuses on the spatially averaged 21-cm spectrum, looking at the temporal evolution of the IGM during the cosmic dawn before reionization. Although the SCI-HI experiment placed first constraints with preliminary data, this data was limited to a narrow frequency regime around 60-85 MHz. This limitation was caused by instrumental difficulties and the presence of residual radio-frequency interference (RFI) in the FM radio band (~88-108 MHz). The SCI-HI experiment is currently undergoing improvements and we plan to have another deployment soon. This deployment would be to Socorro and Clarion, two
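The band choices above follow directly from the frequency-redshift relation for the 21 cm line, ν_obs = 1420.4 MHz / (1 + z). A short sketch (the band edges used below are the FM-band numbers quoted in the abstract):

```python
NU_REST_MHZ = 1420.406  # rest frequency of the 21 cm hyperfine line

def observed_freq(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return NU_REST_MHZ / (1.0 + z)

def redshift_of(nu_mhz):
    """Redshift probed by an observed frequency (MHz)."""
    return NU_REST_MHZ / nu_mhz - 1.0

print(observed_freq(20))                   # ~67.6 MHz: the band near 70 MHz
print(redshift_of(88), redshift_of(108))   # redshifts blocked by FM radio
```

This makes the abstract's constraint concrete: the 60-85 MHz window corresponds to roughly z ~ 16-23, while FM interference at 88-108 MHz blocks the z ~ 12-15 portion of the cosmic dawn.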

  12. Observational challenges in Lyα intensity mapping

    NASA Astrophysics Data System (ADS)

    Comaschi, P.; Yue, B.; Ferrara, A.

    2016-09-01

    Intensity mapping (IM) is sensitive to the cumulative line emission of galaxies. As such it represents a promising technique for statistical studies of galaxies fainter than the limiting magnitude of traditional galaxy surveys. The strong hydrogen Lyα line is the primary target for such an experiment, as its intensity is linked to star formation activity and the physical state of the interstellar medium (ISM) and intergalactic medium (IGM). However, to extract meaningful information one has to solve the confusion problem caused by interloping lines from foreground galaxies. We discuss here the challenges for a Lyα IM experiment targeting z > 4 sources. We find that the Lyα power spectrum can in principle be easily (marginally) obtained with a 40 cm space telescope in a few days of observing time up to z ≲ 8 (z ˜ 10) assuming that the interloping lines (e.g. Hα, [O II], [O III] lines) can be efficiently removed. We show that interlopers can be removed by using an ancillary photometric galaxy survey with limiting AB mag ˜26 in the NIR bands (Y, J, H, or K). This would enable detection of the Lyα signal from 5 < z < 9 faint sources. However, if a [C II] IM experiment is feasible, by cross-correlating the Lyα with the [C II] signal the required depth of the galaxy survey can be decreased to AB mag ˜24. This would bring the detection within reach of future facilities working in close synergy.

  13. LOFAR insights into the epoch of reionization from the cross-power spectrum of 21 cm emission and galaxies

    NASA Astrophysics Data System (ADS)

    Wiersma, R. P. C.; Ciardi, B.; Thomas, R. M.; Harker, G. J. A.; Zaroubi, S.; Bernardi, G.; Brentjens, M.; de Bruyn, A. G.; Daiboo, S.; Jelic, V.; Kazemi, S.; Koopmans, L. V. E.; Labropoulos, P.; Martinez, O.; Mellema, G.; Offringa, A.; Pandey, V. N.; Schaye, J.; Veligatla, V.; Vedantham, H.; Yatawatta, S.

    2013-07-01

    Using a combination of N-body simulations, semi-analytic models and radiative transfer calculations, we have estimated the theoretical cross-power spectrum between galaxies and the 21 cm emission from neutral hydrogen during the epoch of reionization. In accordance with previous studies, we find that the 21 cm emission is initially correlated with haloes on large scales (≳30 Mpc), anticorrelated on intermediate (˜5 Mpc) and uncorrelated on small (≲3 Mpc) scales. This picture quickly changes as reionization proceeds and the two fields become anticorrelated on large scales. The normalization of the cross-power spectrum can be used to set constraints on the average neutral fraction in the intergalactic medium and its shape can be a powerful tool to study the topology of reionization. When we apply a drop-out technique to select galaxies and add to the 21 cm signal the noise expected from the LOw Frequency ARray (LOFAR) telescope, we find that while the normalization of the cross-power spectrum remains a useful tool for probing reionization, its shape becomes too noisy to be informative. On the other hand, for a Lyα emitter (LAE) survey both the normalization and the shape of the cross-power spectrum are suitable probes of reionization. A closer look at a specific planned LAE observing program using Subaru Hyper-Suprime Cam reveals concerns about the strength of the 21 cm signal at the planned redshifts. If the ionized fraction at z ˜ 7 is lower than the one estimated here, then using the cross-power spectrum may be a useful exercise given that at higher redshifts and neutral fractions it is able to distinguish between two toy models with different topologies.

  14. GIANT METREWAVE RADIO TELESCOPE DETECTION OF TWO NEW H I 21 cm ABSORBERS AT z ≈ 2

    SciTech Connect

    Kanekar, N.

    2014-12-20

    I report the detection of H I 21 cm absorption in two high column density damped Lyα absorbers (DLAs) at z ≈ 2 using new wide-band 250-500 MHz receivers on board the Giant Metrewave Radio Telescope. The integrated H I 21 cm optical depths are 0.85 ± 0.16 km s⁻¹ (TXS1755+578) and 2.95 ± 0.15 km s⁻¹ (TXS1850+402). For the z = 1.9698 DLA toward TXS1755+578, the difference in H I 21 cm and C I profiles and the weakness of the radio core suggest that the H I 21 cm absorption arises toward radio components in the jet, and that the optical and radio sightlines are not the same. This precludes an estimate of the DLA spin temperature. For the z = 1.9888 DLA toward TXS1850+402, the absorber covering factor is likely to be close to unity, as the background source is extremely compact, with the entire 5 GHz emission arising from a region ≤1.4 mas in size. This yields a DLA spin temperature of T_s = (372 ± 18) × (f/1.0) K, lower than typical T_s values in high-z DLAs. This low spin temperature and the relatively high metallicity of the z = 1.9888 DLA ([Zn/H] = −0.68 ± 0.04) are consistent with the anti-correlation between metallicity and spin temperature that has been found earlier in damped Lyα systems.
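
    The spin temperature quoted above can be recovered from the standard 21-cm relation N_HI = 1.823 × 10¹⁸ (T_s/f) ∫τ dV. A minimal sketch inverting this for T_s; the column density used below is an assumed illustrative value chosen to reproduce the quoted temperature, not a figure taken from this record:

```python
C = 1.823e18  # cm^-2 K^-1 (km/s)^-1, standard 21-cm conversion constant

def spin_temperature(n_hi, tau_dv, f=1.0):
    """Spin temperature (K) from an H I column density n_hi (cm^-2),
    an integrated 21-cm optical depth tau_dv (km/s), and a covering
    factor f; inverts N_HI = C * (T_s / f) * integral(tau dV)."""
    return n_hi * f / (C * tau_dv)

# Illustrative: an assumed N_HI = 2e21 cm^-2 with the measured integrated
# optical depth of 2.95 km/s and f = 1 gives T_s ~ 372 K.
print(round(spin_temperature(2.0e21, 2.95)))
```

    Note that T_s scales linearly with the covering factor f, which is why the record quotes the result as (372 ± 18) × (f/1.0) K.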

  15. e-MERLIN 21 cm constraints on the mass-loss rates of OB stars in Cyg OB2

    NASA Astrophysics Data System (ADS)

    Morford, J. C.; Fenech, D. M.; Prinja, R. K.; Blomme, R.; Yates, J. A.

    2016-11-01

    We present e-MERLIN 21 cm (L-band) observations of single luminous OB stars in the Cygnus OB2 association, from the Cyg OB2 Radio Survey Legacy programme. The radio observations potentially offer the most straightforward, least model-dependent determinations of mass-loss rates, and can be used to help resolve current discrepancies in mass-loss rates via clumped and structured hot star winds. We report here that the 21 cm flux densities of O3 to O6 supergiant and giant stars are less than ˜70 μJy. These fluxes may be translated to `smooth' wind mass-loss upper limits of ˜4.4-4.8 × 10⁻⁶ M⊙ yr⁻¹ for O3 supergiants and ≲2.9 × 10⁻⁶ M⊙ yr⁻¹ for B0 to B1 supergiants. The first ever resolved 21 cm detections of the hypergiant (and luminous blue variable candidate) Cyg OB2 #12 are discussed; for multiple observations separated by 14 d, we detect a ˜69 per cent increase in its flux density. Our constraints on the upper limits for the mass-loss rates of evolved OB stars in Cyg OB2 support the model that the inner wind region close to the stellar surface (where Hα forms) is more clumped than the very extended geometric region sampled by our radio observations.

  16. Precise Measurement of the Reionization Optical Depth from the Global 21 cm Signal Accounting for Cosmic Heating

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Loeb, Abraham

    2016-04-01

    As a result of our limited data on reionization, the total optical depth for electron scattering, τ, limits precision measurements of cosmological parameters from the Cosmic Microwave Background (CMB). It was recently shown that the predicted 21 cm signal of neutral hydrogen contains enough information to reconstruct τ with sub-percent accuracy, assuming that the neutral gas was much hotter than the CMB throughout the entire epoch of reionization (EoR). Here we relax this assumption and use the global 21 cm signal alone to extract τ for realistic X-ray heating scenarios. We test our model-independent approach using mock data for a wide range of ionization and heating histories and show that an accurate measurement of the reionization optical depth at a sub-percent level is possible in most of the considered scenarios even when heating is not saturated during the EoR, assuming that the foregrounds are mitigated. However, we find that in cases where heating sources had hard X-ray spectra and their luminosity was close to or lower than what is predicted based on low-redshift observations, the global 21 cm signal alone is not a good tracer of the reionization history.

  17. e-MERLIN 21 cm constraints on the mass-loss rates of OB stars in Cyg OB2

    NASA Astrophysics Data System (ADS)

    Morford, J. C.; Fenech, D. M.; Prinja, R. K.; Blomme, R.; Yates, J. A.

    2016-08-01

    We present e-MERLIN 21 cm (L-band) observations of single luminous OB stars in the Cygnus OB2 association, from the COBRaS Legacy programme. The radio observations potentially offer the most straightforward, least model-dependent determinations of mass-loss rates, and can be used to help resolve current discrepancies in mass-loss rates via clumped and structured hot star winds. We report here that the 21 cm flux densities of O3 to O6 supergiant and giant stars are less than ˜70 μJy. These fluxes may be translated to `smooth' wind mass-loss upper limits of ˜4.4-4.8 × 10⁻⁶ M⊙ yr⁻¹ for O3 supergiants and ≲2.9 × 10⁻⁶ M⊙ yr⁻¹ for B0 to B1 supergiants. The first ever resolved 21 cm detections of the hypergiant (and LBV candidate) Cyg OB2 #12 are discussed; for multiple observations separated by 14 days, we detect a ˜69% increase in its flux density. Our constraints on the upper limits for the mass-loss rates of evolved OB stars in Cyg OB2 support the model that the inner wind region close to the stellar surface (where Hα forms) is more clumped than the very extended geometric region sampled by our radio observations.

  18. DEEP 21 cm H I OBSERVATIONS AT z ≈ 0.1: THE PRECURSOR TO THE ARECIBO ULTRA DEEP SURVEY

    SciTech Connect

    Freudling, Wolfram; Zwaan, Martin; Staveley-Smith, Lister; Meyer, Martin; Catinella, Barbara; Minchin, Robert; Calabretta, Mark; Momjian, Emmanuel; O'Neil, Karen

    2011-01-20

    The 'ALFA Ultra Deep Survey' (AUDS) is an ongoing 21 cm spectral survey with the Arecibo 305 m telescope. AUDS will be the most sensitive blind survey undertaken with Arecibo's 300 MHz Mock spectrometer. The survey searches for 21 cm H I line emission at redshifts between 0 and 0.16. The main goals of the survey are to investigate the H I content and probe the evolution of H I gas within that redshift region. In this paper, we report on a set of precursor observations with a total integration time of 53 hr. The survey detected a total of eighteen 21 cm emission lines at redshifts between 0.07 and 0.15 in a region centered around α(2000) ≈ 0ʰ, δ ≈ 15°42'. The rate of detection is consistent with the one expected from the local H I mass function. The derived relative H I density at the median redshift of the survey is ρ_HI[z = 0.125] = (1.0 ± 0.3)ρ_0, where ρ_0 is the H I density at zero redshift.

  19. From Darkness to Light: Observing the First Stars and Galaxies with the Redshifted 21-cm Line using the Dark Ages Radio Explorer

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Lazio, Joseph; Bowman, Judd D.; Bradley, Richard F.; Datta, Abhirup; Furlanetto, Steven; Jones, Dayton L.; Kasper, Justin; Loeb, Abraham; Harker, Geraint

    2015-01-01

    The Dark Ages Radio Explorer (DARE) will reveal when the first stars, black holes, and galaxies formed in the early Universe and will define their characteristics, from the Dark Ages (z ≈ 35) to the Cosmic Dawn (z ≈ 11). This epoch of the Universe has never been directly observed. The DARE science instrument is composed of electrically-short bi-conical dipole antennas, a correlation receiver, and a digital spectrometer that measures the sky-averaged, low frequency (40-120 MHz) spectral features from the highly redshifted 21-cm HI line that surrounds the first objects. These observations are possible because DARE will orbit the Moon at an altitude of 125 km and take data only when it is above the radio-quiet, ionosphere-free, solar-shielded lunar farside. DARE executes the small-scale mission described in the NASA Astrophysics Roadmap (p. 83): 'mapping the Universe's hydrogen clouds using 21-cm radio wavelengths via lunar orbiter from the farside of the Moon'. This mission will address four key science questions: (1) When did the first stars form and what were their characteristics? (2) When did the first accreting black holes form and what was their characteristic mass? (3) When did reionization begin? (4) What surprises emerged from the Dark Ages (e.g., dark matter decay)? DARE uniquely complements other major telescopes including Planck, JWST, and ALMA by bridging the gap between the smooth Universe seen via the CMB and the rich web of galaxy structures seen with optical/IR/mm telescopes. Support for the development of this mission concept was provided by the Office of the Director, NASA Ames Research Center and by JPL/Caltech.

  20. A SENSITIVITY AND ARRAY-CONFIGURATION STUDY FOR MEASURING THE POWER SPECTRUM OF 21 cm EMISSION FROM REIONIZATION

    SciTech Connect

    Parsons, Aaron; Pober, Jonathan; McQuinn, Matthew; Jacobs, Daniel; Aguirre, James

    2012-07-01

    Telescopes aiming to measure 21 cm emission from the Epoch of Reionization must toe a careful line, balancing the need for raw sensitivity against the stringent calibration requirements for removing bright foregrounds. It is unclear what the optimal design is for achieving both of these goals. Via a pedagogical derivation of an interferometer's response to the power spectrum of 21 cm reionization fluctuations, we show that even under optimistic scenarios first-generation arrays will yield low-signal-to-noise detections, and that different compact array configurations can substantially alter sensitivity. We explore the sensitivity gains of array configurations that yield high redundancy in the uv plane: configurations that have been largely ignored since the advent of self-calibration for high-dynamic-range imaging. We first introduce a mathematical framework to generate optimal minimum-redundancy configurations for imaging. We contrast the sensitivity of such configurations with high-redundancy configurations, finding that high-redundancy configurations can improve power-spectrum sensitivity by more than an order of magnitude. We explore how high-redundancy array configurations can be tuned to various angular scales, enabling array sensitivity to be directed away from regions of the uv plane (such as the origin) where foregrounds are brighter and instrumental systematics are more problematic. We demonstrate that a 132-antenna deployment of the Precision Array for Probing the Epoch of Reionization observing for 120 days in a high-redundancy configuration will, under ideal conditions, have the requisite sensitivity to detect the power spectrum of the 21 cm signal from reionization at a 3σ level at k < 0.25 h Mpc⁻¹ in a bin of Δln k = 1. We discuss the tradeoffs of low- versus high-redundancy configurations.

  1. INTERPRETING THE GLOBAL 21-cm SIGNAL FROM HIGH REDSHIFTS. II. PARAMETER ESTIMATION FOR MODELS OF GALAXY FORMATION

    SciTech Connect

    Mirocha, Jordan; Burns, Jack O.; Harker, Geraint J. A.

    2015-11-01

    Following our previous work, which related generic features in the sky-averaged (global) 21-cm signal to properties of the intergalactic medium, we now investigate the prospects for constraining a simple galaxy formation model with current and near-future experiments. Markov-Chain Monte Carlo fits to our synthetic data set, which includes a realistic galactic foreground, a plausible model for the signal, and noise consistent with 100 hr of integration by an ideal instrument, suggest that a simple four-parameter model that links the production rate of Lyα, Lyman-continuum, and X-ray photons to the growth rate of dark matter halos can be well-constrained (to ∼0.1 dex in each dimension) so long as all three spectral features expected to occur between 40 ≲ ν/MHz ≲ 120 are detected. Several important conclusions follow naturally from this basic numerical result, namely that measurements of the global 21-cm signal can in principle (i) identify the characteristic halo mass threshold for star formation at all redshifts z ≳ 15, (ii) extend z ≲ 4 upper limits on the normalization of the X-ray luminosity star formation rate (L_X–SFR) relation out to z ∼ 20, and (iii) provide joint constraints on stellar spectra and the escape fraction of ionizing radiation at z ∼ 12. Though our approach is general, the importance of a broadband measurement renders our findings most relevant to the proposed Dark Ages Radio Explorer, which will have a clean view of the global 21-cm signal from ∼40 to 120 MHz from its vantage point above the radio-quiet, ionosphere-free lunar far-side.

  2. Constraints on the temperature of the intergalactic medium at z = 8.4 with 21-cm observations

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei; Pober, Jonathan C.

    2016-02-01

    We compute robust lower limits on the spin temperature, TS, of the z = 8.4 intergalactic medium (IGM), implied by the upper limits on the 21-cm power spectrum recently measured by PAPER-64. Unlike previous studies which used a single epoch of reionization (EoR) model, our approach samples a large parameter space of EoR models: the dominant uncertainty when estimating constraints on TS. Allowing TS to be a free parameter and marginalizing over EoR parameters in our Markov Chain Monte Carlo code 21CMMC, we infer TS ≥ 3 K (corresponding approximately to 1σ) for a mean IGM neutral fraction of x̄_HI ≳ 0.1. We further improve on these limits by folding in additional EoR constraints based on: (i) the dark fraction in QSO spectra, which implies a strict upper limit of x̄_HI[z = 5.9] ≤ 0.06 + 0.05 (1σ); and (ii) the electron scattering optical depth, τe = 0.066 ± 0.016 (1σ), measured by the Planck satellite. By restricting the allowed EoR models, these additional observations tighten the approximate 1σ lower limits on the spin temperature to TS ≥ 6 K. Thus, even such preliminary 21-cm observations begin to rule out extreme scenarios such as `cold reionization', implying at least some prior heating of the IGM. The analysis framework developed here can be applied to upcoming 21-cm observations, thereby providing unique insights into the sources which heated and subsequently reionized the very early Universe.

  3. Accurate measurement of the H I column density from H I 21 cm absorption-emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Chengalur, Jayaram N.; Kanekar, Nissim; Roy, Nirupam

    2013-07-01

    We present a detailed study of an estimator of the H I column density, based on a combination of H I 21 cm absorption and H I 21 cm emission spectroscopy. This `isothermal' estimate is given by N_HI,ISO = 1.823 × 10¹⁸ ∫ [τ_tot × T_B / (1 − e^(−τ_tot))] dV, where τ_tot is the total H I 21 cm optical depth along the sightline and T_B is the measured brightness temperature. We have used a Monte Carlo simulation to quantify the accuracy of the isothermal estimate by comparing the derived N_HI,ISO with the true H I column density N_HI. The simulation was carried out for a wide range of sightlines, including gas in different temperature phases and random locations along the path. We find that the results are statistically insensitive to the assumed gas temperature distribution and the positions of different phases along the line of sight. The median value of the ratio of the true H I column density to the isothermal estimate, N_HI/N_HI,ISO, is within a factor of 2 of unity while the 68.2 per cent confidence intervals are within a factor of ≈3 of unity, out to high H I column densities, ≤5 × 10²³ cm⁻² per 1 km s⁻¹ channel, and high total optical depths, ≤1000. The isothermal estimator thus provides a significantly better measure of the H I column density than other methods, within a factor of a few of the true value even at the highest columns, and should allow us to directly probe the existence of high H I column density gas in the Milky Way.
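
    A numerical sketch of the isothermal estimator above, applied to a synthetic single-phase sightline (for which the estimate should recover the true column exactly); the toy Gaussian spectrum and spin temperature are illustrative assumptions:

```python
import math

def n_hi_iso(tau_tot, t_b, dv):
    """Isothermal H I column density estimate (cm^-2) from matched
    21 cm absorption (tau_tot) and emission (T_B, in K) spectra with
    channel width dv (km/s):
        N_HI,ISO = 1.823e18 * sum[ tau * T_B / (1 - e^-tau) ] * dv
    """
    total = 0.0
    for tau, tb in zip(tau_tot, t_b):
        if tau < 1e-8:
            total += tb  # optically thin limit: tau / (1 - e^-tau) -> 1
        else:
            total += tau * tb / -math.expm1(-tau)  # expm1 avoids cancellation
    return 1.823e18 * total * dv

# Toy single-phase sightline at T_s = 100 K: T_B = T_s * (1 - e^-tau),
# so the estimator should return the true column.
ts, dv = 100.0, 0.5                          # spin temp (K), channel (km/s)
vel = [dv * (i - 100) for i in range(201)]   # velocity grid
tau = [2.0 * math.exp(-0.5 * (v / 5.0) ** 2) for v in vel]
tb = [ts * -math.expm1(-t) for t in tau]
n = n_hi_iso(tau, tb, dv)
true_n = 1.823e18 * ts * sum(tau) * dv       # N_HI = 1.823e18 * T_s * int(tau dV)
print(n / true_n)
```

    For a single-temperature phase the τ·T_B/(1 − e^(−τ)) factor reduces to τ·T_s exactly; the factor-of-2 to factor-of-3 scatter quoted in the abstract arises only for multiphase sightlines.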

  4. Multi-redshift limits on the 21 cm power spectrum from PAPER 64: X-rays in the early universe

    NASA Astrophysics Data System (ADS)

    Kolopanis, Matthew; Jacobs, Danny; PAPER Collaboration

    2016-06-01

    Here we present new constraints on 21 cm emission from cosmic reionization from the 64-element deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). These results extend the single-redshift (z = 8.4) result presented in Ali et al. (2015) to include redshifts from 7.3 to 10.9. These new limits offer as much as a factor of 4 improvement in sensitivity compared to previous 32-element PAPER results by Jacobs et al. (2015). Using these limits we place constraints on a parameterized model of heating due to X-rays emitted by early collapsed objects.

  5. Invisible Active Galactic Nuclei. II. Radio Morphologies and Five New H i 21cm Absorption Line Detectors

    NASA Astrophysics Data System (ADS)

    Yan, Ting; Stocke, John T.; Darling, Jeremy; Momjian, Emmanuel; Sharma, Soniya; Kanekar, Nissim

    2016-03-01

    This is the second paper directed toward finding new highly redshifted atomic and molecular absorption lines at radio frequencies. To this end, we selected a sample of 80 candidates for obscured radio-loud active galactic nuclei (AGNs) and presented their basic optical/near-infrared (NIR) properties in Paper I. In this paper, we present both high-resolution radio continuum images for all of these sources and H i 21 cm absorption spectroscopy for a few selected sources in this sample. A-configuration 4.9 and 8.5 GHz Very Large Array continuum observations find that 52 sources are compact or have substantial compact components with size <0.″5 and flux densities >0.1 Jy at 4.9 GHz. The 36 most compact sources were then observed with the Very Long Baseline Array at 1.4 GHz. One definite and 10 candidate Compact Symmetric Objects (CSOs) are newly identified, a CSO detection rate ∼3 times higher than that previously found in purely flux-limited samples. Based on possessing compact components with high flux densities, 60 of these sources are good candidates for absorption-line searches. Twenty-seven sources were observed for H i 21 cm absorption at their photometric or spectroscopic redshifts with only six detections (five definite and one tentative). However, five of these were from a small subset of six CSOs with pure galaxy optical/NIR spectra (i.e., any AGN emission is obscured) and for which accurate spectroscopic redshifts place the redshifted 21 cm line in a radio frequency interference (RFI)-free spectral “window” (i.e., the percentage of H i 21 cm absorption-line detections could be as high as ∼90% in this sample). It is likely that the presence of ubiquitous RFI and the absence of accurate spectroscopic redshifts preclude H i detections in similar sources (only 1 detection out of the remaining 22 sources observed, 13 of which have only photometric redshifts); that is, H i absorption may well be present but is masked by

  6. Improved constraints on possible variation of physical constants from H i 21-cm and molecular QSO absorption lines

    NASA Astrophysics Data System (ADS)

    Murphy, M. T.; Webb, J. K.; Flambaum, V. V.; Drinkwater, M. J.; Combes, F.; Wiklind, T.

    2001-11-01

    Quasar (QSO) absorption spectra provide an extremely useful probe of possible cosmological variation in various physical constants. Comparison of H I 21-cm absorption with corresponding molecular (rotational) absorption spectra allows us to constrain variation in y ≡ α²g_p, where α is the fine-structure constant and g_p is the proton g-factor. We analyse spectra of two QSOs, PKS 1413+135 and TXS 0218+357, and derive values of Δy/y at absorption redshifts of z = 0.2467 and 0.6847 by simultaneous fitting of the H I 21-cm and molecular lines. We find Δy/y = (−0.20 ± 0.44) × 10⁻⁵ and Δy/y = (−0.16 ± 0.54) × 10⁻⁵ respectively, indicating an insignificantly smaller y in the past. We compare our results with other constraints from the same two QSOs given recently by Drinkwater et al. and Carilli et al., and with our recent optical constraints, which indicated a smaller α at higher redshifts.
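
    To first order, constraints of this kind come from the apparent redshift offset between the 21-cm and molecular absorption lines, since the 21-cm rest frequency scales with y ≡ α²g_p while the rotational lines act as the reference. A sketch of that conversion; the sign convention and the illustrative redshifts below are assumptions, not values from the paper:

```python
def delta_y_over_y(z_21, z_mol):
    """First-order fractional change in y = alpha^2 * g_p inferred from
    the apparent redshift offset between H I 21-cm and molecular lines
    arising in the same gas. A smaller y in the past lowers the 21-cm
    rest frequency, which raises the apparent 21-cm redshift."""
    return (z_mol - z_21) / (1.0 + z_21)

# Illustrative (hypothetical) redshifts differing in the sixth decimal:
# an offset of 5e-6 in z near z ~ 0.25 maps to |dy/y| of a few x 10^-6,
# the order of magnitude probed by the measurements above.
print(f"{delta_y_over_y(0.246710, 0.246705):.2e}")
```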

  7. Modified Mercalli Intensity Maps for the 1868 Hayward Earthquake Plotted in ShakeMap Format

    USGS Publications Warehouse

    Boatwright, John; Bundock, Howard

    2008-01-01

    To construct the Modified Mercalli Intensity (MMI) ShakeMap for the 1868 Hayward earthquake, we started with two sets of damage descriptions and felt reports. The first set of 100 sites was compiled by A.A. Bullock in the Lawson (1908) report on the 1906 San Francisco earthquake. The second set of 45 sites was compiled by Toppozada et al. (1981) from an extensive search of newspaper archives. We supplemented these two sets of reports with new observations from 30 sites using surveys of cemetery damage, reports of damage to historic adobe structures, pioneer narratives, and reports from newspapers that Toppozada et al. (1981) did not retrieve. The Lawson (1908) and Toppozada et al. (1981) compilations and our contributions are assembled in the Site List.

  8. Detailed 21 cm observations towards the TeV γ-ray SNR HESS J1731-347

    NASA Astrophysics Data System (ADS)

    Fukuda, Tatsuya; Yamamoto, Hiroaki; Fukui, Yasuo; Torii, Kazufumi; Hayakawa, Takahiro; Okuda, Takeshi; Sano, Hidetoshi; Yoshiike, Satoshi

    2012-10-01

    HESS J1731-347 is one of the few SNRs that show a shell-like morphology in TeV gamma-rays. Our new study succeeded in revealing distributions of ISM protons that coincide well with the TeV gamma-ray distribution, and argued that both hadronic and leptonic processes are possibly at work in this region. However, the present ISM resolution of a few arcmin in the available data sets is not sufficient to understand the detailed ISM distribution and the origin of the gamma-ray emission. We will use the ATCA to image the 21 cm radio continuum and HI emission in the region at a resolution of ~30" and study the interaction between the shock wave and the ISM in detail. Such results are a crucial component in testing for the origin of SNR gamma-rays.

  9. What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization

    SciTech Connect

    Pober, Jonathan C.; Morales, Miguel F.; Liu, Adrian; McQuinn, Matthew; Parsons, Aaron R.; Dillon, Joshua S.; Hewitt, Jacqueline N.; Tegmark, Max; Aguirre, James E.; Bowman, Judd D.; Jacobs, Daniel C.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Werthimer, Dan J.

    2014-02-20

    A number of experiments are currently working toward a measurement of the 21 cm signal from the epoch of reionization (EoR). Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by the next generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We focus primarily on an instrument modeled after the ∼0.1 km² collecting area Hydrogen Epoch of Reionization Array concept design and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described 'wedge' footprint in k space. Uncertainties in the reionization history are accounted for using a series of simulations that vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the intergalactic medium. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that 0.1 km² of collecting area is enough to ensure a very high significance (≳ 30σ) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented.

  10. Mapping tillage intensity by integrating multiple remote sensing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tillage practices play an important role in the sustainable agriculture system. Conservation tillage practices can help to reduce soil erosion, increase soil fertility and improve water quality. Tillage practices could be applied at different times with different intensity depending on the local weat...

  11. Data-Intensive Memory-Map simulator and runtime

    SciTech Connect

    Essen, B. B.

    2012-05-01

    DI-MMAP is a simulator for modeling the performance of next-generation non-volatile random access memory (NVRAM) technologies and a high-performance memory-map runtime for the Linux operating system. It is implemented as a device driver for Linux. It will be used by algorithm designers to understand the impact of future NVRAM on their algorithms and by application developers for high-performance access to NVRAM storage.

  12. Simulating the 21 cm forest detectable with LOFAR and SKA in the spectra of high-z GRBs

    NASA Astrophysics Data System (ADS)

    Ciardi, B.; Inoue, S.; Abdalla, F. B.; Asad, K.; Bernardi, G.; Bolton, J. S.; Brentjens, M.; de Bruyn, A. G.; Chapman, E.; Daiboo, S.; Fernandez, E. R.; Ghosh, A.; Graziani, L.; Harker, G. J. A.; Iliev, I. T.; Jelić, V.; Jensen, H.; Kazemi, S.; Koopmans, L. V. E.; Martinez, O.; Maselli, A.; Mellema, G.; Offringa, A. R.; Pandey, V. N.; Schaye, J.; Thomas, R.; Vedantham, H.; Yatawatta, S.; Zaroubi, S.

    2015-10-01

    We investigate the feasibility of detecting 21 cm absorption features in the afterglow spectra of high redshift long Gamma Ray Bursts (GRBs). This is done employing simulations of cosmic reionization, together with estimates of the GRB radio afterglow flux and the instrumental characteristics of the LOw Frequency ARray (LOFAR). We find that absorption features could be marginally (with a S/N larger than a few) detected by LOFAR at z ≳ 7 if the GRB is a highly energetic event originating from Pop III stars, while the detection would be easier if the noise were reduced by one order of magnitude, i.e. similar to what is expected for the first phase of the Square Kilometre Array (SKA1-low). On the other hand, more standard GRBs are too dim to be detected even with ten times the sensitivity of SKA1-low, and only in the most optimistic case can a S/N larger than a few be reached at z ≳ 9.

  13. Effects of Antenna Beam Chromaticity on Redshifted 21 cm Power Spectrum and Implications for Hydrogen Epoch of Reionization Array

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Parsons, Aaron R.; DeBoer, David R.; Bowman, Judd D.; Ewall-Wice, Aaron M.; Neben, Abraham R.; Patra, Nipanjana

    2016-07-01

    Unaccounted for systematics from foregrounds and instruments can severely limit the sensitivity of current experiments from detecting redshifted 21 cm signals from the Epoch of Reionization (EoR). Upcoming experiments are faced with a challenge to deliver more collecting area per antenna element without degrading the data with systematics. This paper and its companions show that dishes are viable for achieving this balance using the Hydrogen Epoch of Reionization Array (HERA) as an example. Here, we specifically identify spectral systematics associated with the antenna power pattern as a significant detriment to all EoR experiments which causes the already bright foreground power to leak well beyond ideal limits and contaminate the otherwise clean EoR signal modes. A primary source of this chromaticity is reflections in the antenna-feed assembly and between structures in neighboring antennas. Using precise foreground simulations taking wide-field effects into account, we provide a generic framework to set cosmologically motivated design specifications on these reflections to prevent further EoR signal degradation. We show that HERA will not be impeded by such spectral systematics and demonstrate that even in a conservative scenario that does not perform removal of foregrounds, HERA will detect the EoR signal in line-of-sight k-modes, k∥ ≳ 0.2 h Mpc⁻¹, with high significance. Under these conditions, all baselines in a 19-element HERA layout are capable of detecting EoR over a substantial observing window on the sky.

  14. 2MTF III. H I 21 cm observations of 1194 spiral galaxies with the Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Masters, Karen L.; Crook, Aidan; Hong, Tao; Jarrett, T. H.; Koribalski, Bärbel S.; Macri, Lucas; Springob, Christopher M.; Staveley-Smith, Lister

    2014-09-01

    We present H I 21 cm observations of 1194 galaxies out to a redshift of 10 000 km s-1 selected as inclined spirals (i ≳ 60°) from the 2MASS redshift survey. These observations were carried out at the National Radio Astronomy Observatory Robert C. Byrd Green Bank Telescope (GBT). This observing programme is part of the 2MASS Tully-Fisher (2MTF) survey. The project will combine H I widths from these GBT observations with those from further dedicated observing at the Parkes Telescope, from the Arecibo Legacy Fast Arecibo L-band Feed Array survey at Arecibo, and published widths (with S/N > 10 and spectral resolution vres < 10 km s-1) from a variety of telescopes. We will use these H I widths along with 2MASS photometry to estimate Tully-Fisher distances to nearby spirals and investigate the peculiar velocity field of the local Universe. In this paper, we report detections of neutral hydrogen in emission in 727 galaxies, and measure good signal-to-noise, symmetric H I global profiles suitable for use in the Tully-Fisher relation in 484 of them.

  15. Calibration requirements for detecting the 21 cm epoch of reionization power spectrum and implications for the SKA

    NASA Astrophysics Data System (ADS)

    Barry, N.; Hazelton, B.; Sullivan, I.; Morales, M. F.; Pober, J. C.

    2016-09-01

    21 cm epoch of reionization (EoR) observations promise to transform our understanding of galaxy formation, but these observations are impossible without unprecedented levels of instrument calibration. We present end-to-end simulations of a full EoR power spectrum (PS) analysis including all of the major components of a real data processing pipeline: models of astrophysical foregrounds and EoR signal, frequency-dependent instrument effects, sky-based antenna calibration, and the full PS analysis. This study reveals that traditional sky-based per-frequency antenna calibration can only be implemented in EoR measurement analyses if the calibration model is unrealistically accurate. For reasonable levels of catalogue completeness, the calibration introduces contamination in otherwise foreground-free PS modes, precluding a PS measurement. We explore the origin of this contamination and potential mitigation techniques. We show that there is a strong joint constraint on the precision of the calibration catalogue and the inherent spectral smoothness of antennas, and that this has significant implications for the instrumental design of the SKA (Square Kilometre Array) and other future EoR observatories.

  17. A Giant Metrewave Radio Telescope search for associated H I 21 cm absorption in high-redshift flat-spectrum sources

    NASA Astrophysics Data System (ADS)

    Aditya, J. N. H. S.; Kanekar, Nissim; Kurapati, Sushma

    2016-02-01

    We report results from a Giant Metrewave Radio Telescope search for `associated' redshifted H I 21 cm absorption from 24 active galactic nuclei (AGNs), at 1.1 < z < 3.6, selected from the Caltech-Jodrell Bank Flat-spectrum (CJF) sample. 22 out of 23 sources with usable data showed no evidence of absorption, with typical 3σ optical depth detection limits of ≈0.01 at a velocity resolution of ≈30 km s-1. A single tentative absorption detection was obtained at z ≈ 3.530 towards TXS 0604+728. If confirmed, this would be the highest redshift at which H I 21 cm absorption has ever been detected. Including 29 CJF sources with searches for redshifted H I 21 cm absorption in the literature, mostly at z < 1, we construct a sample of 52 uniformly selected flat-spectrum sources. A Peto-Prentice two-sample test for censored data finds (at ≈3σ significance) that the strength of H I 21 cm absorption is weaker in the high-z sample than in the low-z sample; this is the first statistically significant evidence for redshift evolution in the strength of H I 21 cm absorption in a uniformly selected AGN sample. However, the two-sample test also finds that the H I 21 cm absorption strength is higher in AGNs with low ultraviolet or radio luminosities, at ≈3.4σ significance. The fact that the higher luminosity AGNs of the sample typically lie at high redshifts implies that it is currently not possible to break the degeneracy between AGN luminosity and redshift evolution as the primary cause of the low H I 21 cm opacities in high-redshift, high-luminosity AGNs.

  18. A LANDSCAPE DEVELOPMENT INTENSITY MAP OF MARYLAND, USA - 4/07

    EPA Science Inventory

    We present a map of human development intensity for central and eastern Maryland using an index derived from energy systems principles. Brown and Vivas developed a measure of the intensity of human development based on the nonrenewable energy use per unit area as an index to exp...

  19. Mapping of laser diode radiation intensity by atomic-force microscopy

    NASA Astrophysics Data System (ADS)

    Alekseev, P. A.; Dunaevskii, M. S.; Slipchenko, S. O.; Podoskin, A. A.; Tarasov, I. S.

    2015-09-01

    The distribution of the intensity of laser diode radiation has been studied using an original method based on atomic-force microscopy (AFM). It is shown that the laser radiation intensity in both the near field and transition zone of a high-power semiconductor laser under room-temperature conditions can be mapped by AFM at a subwavelength resolution. The obtained patterns of radiation intensity distribution agree with the data of modeling and the results of near-field optical microscopy measurements.

  20. Mapping and analysing cropland use intensity from a NPP perspective

    NASA Astrophysics Data System (ADS)

    Niedertscheider, Maria; Kastner, Thomas; Fetzel, Tamara; Haberl, Helmut; Kroisleitner, Christine; Plutzar, Christoph; Erb, Karl-Heinz

    2016-01-01

    Meeting expected surges in global biomass demand while protecting pristine ecosystems likely requires intensification of current croplands. Yet many uncertainties relate to the potentials for cropland intensification, mainly because conceptualizing and measuring land use intensity is intricate, particularly at the global scale. We present a spatially explicit analysis of global cropland use intensity, following an ecological energy flow perspective. We analyze (a) changes of net primary production (NPP) from the potential system (i.e. assuming undisturbed vegetation) to croplands around 2000 and relate these changes to (b) inputs of (N) fertilizer and irrigation and (c) to biomass outputs, allowing for a three dimensional focus on intensification. Globally the actual NPP of croplands, expressed as per cent of their potential NPP (NPPact%), amounts to 77%. A mix of socio-economic and natural factors explains the high spatial variation which ranges from 22.6% to 416.0% within the inner 95 percentiles. NPPact% is well below NPPpot in many developing, (Sub-) Tropical regions, while it massively surpasses NPPpot on irrigated drylands and in many industrialized temperate regions. The interrelations of NPP losses (i.e. the difference between NPPact and NPPpot), agricultural inputs and biomass harvest differ substantially between biogeographical regions. Maintaining NPPpot was particularly N-intensive in forest biomes, as compared to cropland in natural grassland biomes. However, much higher levels of biomass harvest occur in forest biomes. We show that fertilization loads correlate with NPPact% linearly, but the relation gets increasingly blurred beyond a level of 125 kgN ha-1. Thus, large potentials exist to improve N-efficiency at the global scale, as only 10% of global croplands are above this level. Reallocating surplus N could substantially reduce NPP losses by up to 80% below current levels and at the same time increase biomass harvest by almost 30%. However, we

  1. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.

  2. The Effects of the Ionosphere on Ground-based Detection of the Global 21 cm Signal from the Cosmic Dawn and the Dark Ages

    NASA Astrophysics Data System (ADS)

    Datta, Abhirup; Bradley, Richard; Burns, Jack O.; Harker, Geraint; Komjathy, Attila; Lazio, T. Joseph W.

    2016-11-01

    Detection of the global H i 21 cm signal from the Cosmic Dawn and the Epoch of Reionization is the key science driver for several ongoing ground-based and future ground-/space-based experiments. The crucial spectral features in the global 21 cm signal (turning points) occur at low radio frequencies, ≲ 100 MHz. In addition to human-generated radio frequency interference, Earth's ionosphere drastically corrupts low-frequency radio observations from the ground. In this paper, we examine the effects of time-varying ionospheric refraction, absorption, and thermal emission at these low radio frequencies and their combined effect on any ground-based global 21 cm experiment. It should be noted that this is the first study of the effect of a dynamic ionosphere on global 21 cm experiments. The fluctuations in the ionosphere are influenced by solar activity with flicker-noise characteristics. The same characteristics are reflected in the ionospheric corruption of any radio signal passing through the ionosphere. As a result, any ground-based observations of the faint global 21 cm signal are corrupted by flicker noise (or 1/f noise, where f is the dynamical frequency), which scales as ν⁻² (where ν is the frequency of radio observation) in the presence of a bright galactic foreground (∝ ν⁻ˢ, where s is the radio spectral index). Hence, calibration of the ionosphere for any such experiment is critical. Any attempt to calibrate the ionospheric effects will be subject to the inaccuracies of current ionospheric measurements using Global Positioning System (GPS) ionospheric measurements, riometer measurements, ionospheric soundings, etc. Even assuming an optimistic improvement in the accuracy of GPS total electron content measurements, we conclude that Earth's ionosphere poses a significant challenge to absolute detection of the global 21 cm signal below 100 MHz.

  3. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    PubMed Central

    2013-01-01

    Background The myocardium exhibits heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using Texture and Intensity features to describe heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of pixel and the other on underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of pixel and the underlying texture information are presented. We hypothesize that the probability mapping of myocardium offers alternate visualization, possibly showing the details with physiological significance difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
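
    As a rough illustration of the Bayes-rule probability function described above, the sketch below computes a per-pixel scar probability from a single mean-intensity feature, assuming Gaussian class likelihoods. The class means, widths, and prior are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

def scar_probability_map(image, mu_scar, sigma_scar, mu_normal, sigma_normal,
                         prior_scar=0.5):
    """Per-pixel posterior probability of scar via Bayes' rule,
    using Gaussian likelihoods for a mean-intensity feature."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Likelihood x prior for each class, then normalize (Bayes' rule).
    p_scar = gauss(image, mu_scar, sigma_scar) * prior_scar
    p_norm = gauss(image, mu_normal, sigma_normal) * (1.0 - prior_scar)
    return p_scar / (p_scar + p_norm)

# Toy LG-enhanced image: bright pixels should map to high scar probability,
# intermediate intensities to intermediate probabilities.
img = np.array([[0.1, 0.9],
                [0.5, 0.8]])
pmap = scar_probability_map(img, mu_scar=0.9, sigma_scar=0.1,
                            mu_normal=0.2, sigma_normal=0.1)
```

    Any feature set (e.g. texture statistics instead of raw intensity) could be substituted into the same likelihood terms.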

  4. Updating Historical Maps of Malaria Transmission Intensity in East Africa Using Remote Sensing

    PubMed Central

    Omumbo, J.A.; Hay, S.I.; Goetz, S.J.; Snow, R.W.; Rogers, D.J.

    2013-01-01

    Remotely sensed imagery has been used to update and improve the spatial resolution of malaria transmission intensity maps in Tanzania, Uganda, and Kenya. Discriminant analysis achieved statistically robust agreements between historical maps of the intensity of malaria transmission and predictions based on multitemporal meteorological satellite sensor data processed using temporal Fourier analysis. The study identified land surface temperature as the best predictor of transmission intensity. Rainfall and moisture availability as inferred by cold cloud duration (ccd) and the normalized difference vegetation index (ndvi), respectively, were identified as secondary predictors of transmission intensity. Information on altitude derived from a digital elevation model significantly improved the predictions. “Malaria-free” areas were predicted with an accuracy of 96 percent while areas where transmission occurs only near water, moderate malaria areas, and intense malaria transmission areas were predicted with accuracies of 90 percent, 72 percent, and 87 percent, respectively. The importance of such maps for rationalizing malaria control is discussed, as is the potential contribution of the next generation of satellite sensors to these mapping efforts. PMID:23814324

  5. A FOURTH H I 21 cm ABSORPTION SYSTEM IN THE SIGHT LINE OF MG J0414+0534: A RECORD FOR INTERVENING ABSORBERS

    SciTech Connect

    Tanna, A.; Webb, J. K.; Curran, S. J.; Whiting, M. T.; Bignell, C.

    2013-08-01

    We report the detection of a strong H I 21 cm absorption system at z = 0.5344, as well as a candidate system at z = 0.3389, in the sight line toward the z = 2.64 quasar MG J0414+0534. This, in addition to the absorption at the host redshift and the other two intervening absorbers, takes the total to four (possibly five). The previous maximum number of 21 cm absorbers detected along a single sight line is two and so we suspect that this number of gas-rich absorbers is in some way related to the very red color of the background source. Despite this, no molecular gas (through OH absorption) has yet been detected at any of the 21 cm redshifts, although, from the population of 21 cm absorbers as a whole, there is evidence for a weak correlation between the atomic line strength and the optical-near-infrared color. In either case, the fact that so many gas-rich galaxies (likely to be damped Lyα absorption systems) have been found along a single sight line toward a highly obscured source may have far-reaching implications for the population of faint galaxies not detected in optical surveys, a possibility which could be addressed through future wide-field absorption line surveys with the Square Kilometer Array.

  6. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    SciTech Connect

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼ fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
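
    The polynomial-in-log-frequency foreground subtraction described above can be sketched as follows; the foreground amplitude, spectral index, and toy absorption trough are illustrative assumptions, not the paper's models.

```python
import numpy as np

# Frequencies (MHz) and a synthetic smooth foreground: a power law with an
# assumed spectral index of -2.5 and an illustrative 300 K amplitude.
nu = np.linspace(40, 120, 200)
foreground = 300.0 * (nu / 60.0) ** -2.5
# Toy ~100 mK absorption trough standing in for the 21 cm signal.
signal = -0.1 * np.exp(-0.5 * ((nu - 70.0) / 5.0) ** 2)
sky = foreground + signal

# Fit a fifth-order polynomial in log(nu) to log(sky) and subtract it,
# mimicking the foreground-removal step for an ideal antenna.
log_nu = np.log10(nu)
coeffs = np.polyfit(log_nu, np.log10(sky), deg=5)
model = 10.0 ** np.polyval(coeffs, log_nu)
residual = sky - model
```

    Because the smooth power law is captured by the polynomial while the narrow trough is not, most of the toy signal survives in the residual.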

  7. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  8. Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography.

    PubMed

    Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A; White, Thomas A; Spence, John C H; Chapman, Henry N; Barty, Anton

    2014-07-17

    Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure 'three-dimensional merging'. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies.

  9. Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography

    PubMed Central

    Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A.; White, Thomas A.; Spence, John C. H.; Chapman, Henry N.; Barty, Anton

    2014-01-01

    Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure ‘three-dimensional merging’. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies. PMID:24914160

  11. Infrared mapping of ultrasound fields generated by medical transducers: Feasibility of determining absolute intensity levels

    PubMed Central

    Khokhlova, Vera A.; Shmeleva, Svetlana M.; Gavrilov, Leonid R.; Martin, Eleanor; Sadhoo, Neelaksh; Shaw, Adam

    2013-01-01

    Considerable progress has been achieved in the use of infrared (IR) techniques for qualitative mapping of acoustic fields of high intensity focused ultrasound (HIFU) transducers. The authors have previously developed and demonstrated a method based on IR camera measurement of the temperature rise induced in an absorber less than 2 mm thick by ultrasonic bursts of less than 1 s duration. The goal of this paper was to make the method more quantitative and estimate the absolute intensity distributions by determining an overall calibration factor for the absorber and camera system. The implemented approach involved correlating the temperature rise measured in an absorber using an IR camera with the pressure distribution measured in water using a hydrophone. The measurements were conducted for two HIFU transducers and a flat physiotherapy transducer of 1 MHz frequency. Corresponding correction factors between the free field intensity and temperature were obtained and allowed the conversion of temperature images to intensity distributions. The system described here was able to map in good detail focused and unfocused ultrasound fields with sub-millimeter structure and with local time average intensity from below 0.1 W/cm2 to at least 50 W/cm2. Significantly higher intensities could be measured simply by reducing the duty cycle. PMID:23927199
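
    A minimal sketch of the conversion step described above: scaling an IR-camera temperature-rise map to free-field intensity with an empirically determined calibration factor, with higher intensities accessible by reducing the duty cycle. The function name, calibration value, and duty-cycle handling are hypothetical illustrations, not the authors' procedure.

```python
import numpy as np

def temperature_to_intensity(delta_t_map, cal_factor, duty_cycle=1.0):
    """Convert an IR temperature-rise map (K) to time-average intensity
    (W/cm^2) via a hydrophone-derived calibration factor; dividing by the
    duty cycle recovers the on-time intensity for pulsed excitation."""
    return cal_factor * np.asarray(delta_t_map, dtype=float) / duty_cycle

# Hypothetical numbers: 2.0 (W/cm^2)/K factor from hydrophone cross-calibration.
dt = np.array([[0.05, 1.2],
               [0.3, 25.0]])
intensity = temperature_to_intensity(dt, cal_factor=2.0)
```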

  13. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
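
    The multi-key idea described above, tagging intermediate records with a composite (algorithm, key) so that several algorithms share one job, can be sketched in plain Python without Hadoop; the mapper functions and grouping logic here are illustrative stand-ins for MRPack's actual implementation.

```python
from collections import defaultdict

# Two hypothetical "algorithms" to run in a single pass:
# word counting and summed line length.
def map_wordcount(line):
    for word in line.split():
        yield ("wc", word), 1

def map_linelen(line):
    yield ("len", "total"), len(line)

def run_multi_algorithm_job(lines, mappers):
    """One map/shuffle/reduce pass executing several mappers; intermediate
    records are grouped by a composite (algorithm, key) multi-key."""
    groups = defaultdict(list)
    for line in lines:            # map phase: every mapper sees every record
        for mapper in mappers:
            for key, value in mapper(line):
                groups[key].append(value)
    # Reduce phase: sum values per composite key.
    return {key: sum(vals) for key, vals in groups.items()}

result = run_multi_algorithm_job(["a b a", "c"], [map_wordcount, map_linelen])
```

    Each input record is read once but feeds both algorithms, which is the source of the I/O savings the abstract reports.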

  15. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally-derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near real-time intensity maps and compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The using of innovative streaming and pipelining computing paradigms through IBM InfoSphere Streams platform made it possible to read input data in real-time for low-latency computing of combined intensity level and production of combined intensity maps in near-real time. The results compare three types of intensity maps created based on physical, social and combined data sources. 
Here we correlate
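
A minimal sketch of the instrumental half of the workflow above: converting a recorded peak ground acceleration into an intensity-like value through an empirical log-linear regression. The coefficients below are illustrative placeholders, not the published relationship for any region.

```python
import numpy as np

# Hedged sketch: map peak ground acceleration (PGA, in %g) to an
# instrumental intensity via a generic relation I = c1 + c2 * log10(PGA).
# C1 and C2 are hypothetical coefficients for illustration only.
C1, C2 = 3.7, 1.5

def instrumental_intensity(pga_pct_g):
    """Map PGA (percent g) to an intensity-like value, clipped to [1, 10]."""
    pga = np.asarray(pga_pct_g, dtype=float)
    mmi = C1 + C2 * np.log10(pga)
    return np.clip(mmi, 1.0, 10.0)

print(instrumental_intensity([1.0, 10.0, 50.0]))
```

In a streaming setting, each incoming station record would pass through a function like this before being merged with the socially derived intensities.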

  16. Foreground contamination in Lyα intensity mapping during the epoch of reionization

    SciTech Connect

    Gong, Yan; Cooray, Asantha; Silva, Marta; Santos, Mario G.

    2014-04-10

    The intensity mapping of Lyα emission during the epoch of reionization will be contaminated by foreground emission lines from lower redshifts. We calculate the mean intensity and the power spectrum of Lyα emission at z ∼ 7 and estimate the uncertainties according to the relevant astrophysical processes. We find that the low-redshift emission lines from 6563 Å Hα, 5007 Å [O III], and 3727 Å [O II] will be strong contaminants of the observed Lyα power spectrum. We make use of both the star formation rate and luminosity functions to estimate the mean intensity and power spectra of the three foreground lines at z ∼ 0.5 for Hα, z ∼ 0.9 for [O III], and z ∼ 1.6 for [O II], as they will contaminate the Lyα emission at z ∼ 7. The [O II] line is found to be the strongest. We analyze the masking of bright survey pixels containing a foreground line above some intensity threshold as a way to reduce the contamination in an intensity mapping survey. We find that the foreground contamination can be neglected if we remove pixels with fluxes above 1.4 × 10{sup –20} W m{sup –2}.
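
A minimal sketch of the masking strategy described above: drop survey pixels whose foreground-line flux exceeds the quoted threshold and compare the mean intensity before and after. The mock fluxes are random numbers, not a physical model of [O II] emitters.

```python
import numpy as np

# Mock pixel fluxes (W/m^2) drawn from a lognormal chosen so a sizeable
# tail lies above the paper's 1.4e-20 W/m^2 masking threshold.
rng = np.random.default_rng(0)
flux = rng.lognormal(mean=-46.0, sigma=1.0, size=100_000)
threshold = 1.4e-20

keep = flux <= threshold
masked_fraction = 1.0 - keep.mean()
print(f"masked {masked_fraction:.1%} of pixels; "
      f"mean flux {flux.mean():.3e} -> {flux[keep].mean():.3e}")
```

The same thresholding would be applied per voxel in a real survey cube before estimating the power spectrum.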

  17. Recent Results from Broad-Band Intensity Mapping Measurements of Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael B.; CIBER, Herschel-SPIRE

    2016-01-01

    Intensity mapping integrates the total emission in a given spectral band over the universe's history. Tomographic measurements of cosmic structure can be performed using specific line tracers observed in narrow bands, but a wealth of information is also available from broad-band observations performed by instruments capable of capturing high-fidelity, wide-angle images of extragalactic emission. Sensitive to the continuum emission from faint and diffuse sources, these broad-band measurements provide a view on cosmic structure traced by components not readily detected in point source surveys. After accounting for measurement effects and astrophysical foregrounds, the angular power spectra of such data can be compared to predictions from models to yield powerful insights into the history of cosmic structure formation. This talk will highlight some recent measurements of large scale structure performed using broad-band intensity mapping methods that have given new insights on faint, distant, and diffuse components in the extragalactic background light.

  18. Managing Hardware Configurations and Data Products for the Canadian Hydrogen Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hincks, A. D.; Shaw, J. R.; Chime Collaboration

    2015-09-01

    The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is an ambitious new radio telescope project for measuring cosmic expansion and investigating dark energy. Keeping good records of both physical configuration of its 1280 antennas and their analogue signal chains as well as the ˜100 TB of data produced daily from its correlator will be essential to the success of CHIME. In these proceedings we describe the database-driven software we have developed to manage this complexity.

  19. Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly

    DOE PAGES

    Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; Church, Sarah E.

    2016-01-29

    Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at $$z\sim 2.4$$–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the $$L_{\mathrm{IR}}$$–$$L_{\mathrm{CO}}$$ relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.
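
A toy Metropolis sampler in the spirit of the MCMC inference described above: fitting one amplitude parameter of a mock power spectrum to simulated observations. The model form, data, and noise level are all invented for illustration.

```python
import numpy as np

# Mock "observed" power spectrum: amplitude * k^-1.5 plus Gaussian noise.
rng = np.random.default_rng(6)
k = np.linspace(0.1, 1.0, 10)
true_amp = 2.0
data = true_amp * k ** -1.5 + rng.normal(0, 0.2, k.size)

def log_like(amp):
    """Gaussian log-likelihood of the mock data given an amplitude."""
    return -0.5 * np.sum((data - amp * k ** -1.5) ** 2 / 0.2 ** 2)

# Plain Metropolis random walk over the single parameter.
amp, samples = 1.0, []
for _ in range(5000):
    prop = amp + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_like(prop) - log_like(amp):
        amp = prop
    samples.append(amp)

print(np.mean(samples[1000:]))  # posterior mean, close to true_amp
```

The published analysis samples several coupled parameters at once; the acceptance rule and burn-in handling carry over unchanged.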

  20. On using large scale correlation of the Ly-α forest and redshifted 21-cm signal to probe HI distribution during the post reionization era

    SciTech Connect

    Sarkar, Tapomoy Guha; Datta, Kanan K. E-mail: kanan.physics@presiuniv.ac.in

    2015-08-01

    We investigate the possibility of detecting the 3D cross-correlation power spectrum of the Ly-α forest and the HI 21 cm signal from the post-reionization epoch. The cross-correlation signal depends directly on the dark matter power spectrum and is sensitive to the 21-cm brightness temperature and Ly-α forest biases; these bias parameters dictate the strength of anisotropy in redshift space. We find that the cross-correlation power spectrum can be detected on large scales, using the linear bias model, at a peak SNR of 15 for a single-field experiment at redshift z = 2.5, given 400 hrs of observation with SKA-mid (phase 1) and a futuristic BOSS-like experiment with a quasar (QSO) density of 30 deg{sup −2}. We also study the possibility of constraining various bias parameters using the cross power spectrum. We find that with the same experiment the 1σ (conditional) errors on the 21-cm linear redshift space distortion parameter β{sub T} and the corresponding Ly-α forest parameter β{sub F} are ∼ 2.7% and ∼ 1.4% respectively for 10 independent pointings of SKA-mid (phase 1). This prediction indicates a significant improvement over existing measurements. We claim that the detection of the 3D cross-correlation power spectrum will not only ascertain the cosmological origin of the signal in the presence of astrophysical foregrounds but will also provide stringent constraints on large-scale HI biases. This provides an independent probe of cosmological structure formation.
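
A toy version of the cross-correlation idea above: two tracers built as biased copies of one underlying field plus independent noise, whose cross power spectrum retains the common cosmological signal. The bias values are illustrative, not the paper's.

```python
import numpy as np

# One underlying "dark matter" field traced by two noisy observables.
rng = np.random.default_rng(1)
n = 4096
delta = rng.standard_normal(n)
t21 = 2.0 * delta + rng.standard_normal(n)   # hypothetical 21-cm bias +2.0
lya = -0.2 * delta + rng.standard_normal(n)  # hypothetical forest bias -0.2

# Cross power spectrum: product of one FFT with the conjugate of the other.
f1, f2 = np.fft.rfft(t21), np.fft.rfft(lya)
cross_power = (f1 * np.conj(f2)).real / n

# Independent noise averages away; the correlated part survives with the
# sign of the bias product (negative here, as for Ly-alpha absorption).
print(cross_power[1:].mean())
```

Foregrounds uncorrelated between the two surveys would behave like the noise terms here, which is why the cross spectrum certifies a cosmological origin.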

  1. New limits on 21 cm epoch of reionization from PAPER-32 consistent with an X-ray heated intergalactic medium at z = 7.7

    SciTech Connect

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, David H. E.; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2014-06-20

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32 antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK{sup 2}). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK){sup 2} for k = 0.27 h Mpc{sup –1} at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.
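
The delay-spectrum approach above can be sketched with toy numbers: Fourier-transforming a visibility along frequency concentrates spectrally smooth foregrounds at low delays while leaving high delays for the cosmological signal. The power law, signal level, and window choice below are illustrative assumptions.

```python
import numpy as np

# A mock visibility spectrum: a smooth power-law foreground plus a small
# spectrally rough "signal" component.
nchan = 256
freq = np.linspace(100e6, 200e6, nchan)            # Hz
smooth_fg = 1e3 * (freq / 150e6) ** -2.5
signal = 0.1 * np.random.default_rng(2).standard_normal(nchan)
vis = smooth_fg + signal

# Delay transform: taper to suppress sidelobes, then FFT over frequency.
window = np.blackman(nchan)
delay_spec = np.abs(np.fft.fft(vis * window)) ** 2
delay_spec = np.fft.fftshift(delay_spec)

center = nchan // 2  # zero-delay bin after fftshift
print(delay_spec[center], delay_spec[center + 50])
```

The smooth foreground piles up near zero delay by orders of magnitude, which is the separation the delay-spectrum technique exploits.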

  2. H I SHELLS AND SUPERSHELLS IN THE I-GALFA H I 21 cm LINE SURVEY. I. FAST-EXPANDING H I SHELLS ASSOCIATED WITH SUPERNOVA REMNANTS

    SciTech Connect

    Park, G.; Koo, B.-C.; Gibson, S. J.; Newton, J. H.; Kang, J.-H.; Lane, D. C.; Douglas, K. A.; Peek, J. E. G.; Korpela, E. J.; Heiles, C.

    2013-11-01

    We search for fast-expanding H I shells associated with Galactic supernova remnants (SNRs) in the longitude range l ≈ 32° to 77° using 21 cm line data from the Inner-Galaxy Arecibo L-band Feed Array (I-GALFA) H I survey. Among the 39 known Galactic SNRs in this region, we find such H I shells in 4 SNRs: W44, G54.4-0.3, W51C, and CTB 80. All four were previously identified in low-resolution surveys, and three of those (excluding G54.4-0.3) were previously studied with the Arecibo telescope. A remarkable new result, however, is the detection of H I emission at both very high positive and negative velocities in W44 from the receding and approaching parts of the H I expanding shell, respectively. This is the first detection of both sides of an expanding shell associated with an SNR in H I 21 cm emission. The high-resolution I-GALFA survey data also reveal a prominent expanding H I shell with high circular symmetry associated with G54.4-0.3. We explore the physical characteristics of four SNRs and discuss what differentiates them from other SNRs in the survey area. We conclude that these four SNRs are likely the remnants of core-collapse supernovae interacting with a relatively dense (∼> 1 cm{sup –3}) ambient medium, and we discuss the visibility of SNRs in the H I 21 cm line.

  3. Mapping cropland-use intensity across Europe using MODIS NDVI time series

    NASA Astrophysics Data System (ADS)

    Estel, Stephan; Kuemmerle, Tobias; Levers, Christian; Baumann, Matthias; Hostert, Patrick

    2016-02-01

    Global agricultural production will likely need to increase in the future due to population growth, changing diets, and the rising importance of bioenergy. Intensifying already existing cropland is often considered more sustainable than converting more natural areas. Unfortunately, our understanding of cropping patterns and intensity is weak, especially at broad geographic scales. We characterized and mapped cropping systems in Europe, a region containing diverse cropping systems, using four indicators: (a) cropping frequency (number of cropped years), (b) multi-cropping (number of harvests per year), (c) fallow cycles, and (d) crop duration ratio (actual time under crops) based on the MODIS Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2012. We then used these cropping indicators and self-organizing maps to identify typical cropping systems. The resulting six clusters correspond well with other indicators of agricultural intensity (e.g., nitrogen input, yields) and reveal substantial differences in cropping intensity across Europe. Cropping intensity was highest in Germany, Poland, and the eastern European Black Earth regions, characterized by high cropping frequency, multi-cropping and a high crop duration ratio. Conversely, we found the lowest cropping intensity in eastern Europe outside the Black Earth region, characterized by longer fallow cycles. Our approach highlights how satellite image time series can help to characterize spatial patterns in cropping intensity—information that is rarely surveyed on the ground and commonly not included in agricultural statistics. Our clustering approach also shows a way forward to reduce complexity when measuring multiple indicators. The four cropping indicators we used could become part of continental-scale agricultural monitoring in order to identify target regions for sustainable intensification, where trade-offs between intensification and the environment should be explored.
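
An illustrative sketch of the first indicator above, cropping frequency: count the years whose peak NDVI exceeds a "cropped" threshold. The per-pixel series and the threshold are made up for the example.

```python
import numpy as np

# Four years of monthly NDVI composites for one pixel (toy values).
ndvi = np.array([
    [0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3, 0.2, 0.2, 0.2],  # cropped
    [0.2, 0.2, 0.2, 0.3, 0.3, 0.3, 0.3, 0.3, 0.2, 0.2, 0.2, 0.2],  # fallow
    [0.2, 0.3, 0.4, 0.6, 0.8, 0.8, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2],  # cropped
    [0.2, 0.2, 0.3, 0.5, 0.6, 0.7, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2],  # cropped
])
threshold = 0.5  # hypothetical peak NDVI needed to call a year "cropped"

# A year counts as cropped if its seasonal NDVI maximum clears the threshold.
cropped_years = int((ndvi.max(axis=1) >= threshold).sum())
cropping_frequency = cropped_years / ndvi.shape[0]
print(cropped_years, cropping_frequency)
```

Applied per MODIS pixel over 2000-2012, maps of this quantity feed the clustering step described in the abstract.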

  4. Intensity distribution and isoseismal maps for the Nisqually, Washington, earthquake of 28 February 2001

    USGS Publications Warehouse

    Dewey, James W.; Hopper, Margaret G.; Wald, David J.; Quitoriano, Vincent; Adams, Elizabeth R.

    2002-01-01

    We present isoseismal maps, macroseismic intensities, and community summaries of damage for the MW=6.8 Nisqually, Washington, earthquake of 28 February 2001. For many communities, two types of macroseismic intensity are assigned, the traditional U.S. Geological Survey Modified Mercalli Intensities (USGS MMI) and a type of intensity newly introduced with this paper, the USGS Reviewed Community Internet Intensity (RCII). For most communities, the RCII is a reviewed version of the Community Internet Intensity (CII) of Wald and others (1999). For some communities, RCII is assigned from such non-CII sources as press reports, engineering reports, and field reconnaissance observations. We summarize differences between procedures used to assign RCII and USGS MMI, and we show that the two types of intensity are nonetheless very similar for the Nisqually earthquake. We do not see evidence for systematic differences between RCII and USGS MMI that would approach one intensity unit, at any level of shaking, but we document a tendency for the RCII to be slightly lower than MMI in regions of low intensity and slightly higher than MMI in regions of high intensity. The highest RCII calculated for the Nisqually earthquake is 7.6, calculated for zip code 98134, which includes the 'south of downtown' (Sodo) area of Seattle and Harbor Island. By comparison, we assigned a traditional USGS MMI 8 to the Sodo area of Seattle. In all, RCII of 6.5 and higher were assigned to 58 zip-code regions. At the lowest intensities, the Nisqually earthquake was felt over an area of approximately 350,000 square km (approximately 135,000 square miles) in Washington, Oregon, Idaho, Montana, and southern British Columbia, Canada. On the basis of macroseismic effects, we infer that shaking in the southern Puget Sound region was somewhat less for the 2001 Nisqually earthquake than for the Puget Sound earthquake of April 13, 1949, which had nearly the same hypocenter and magnitude. 
Allowing for differences

  5. The high-redshift star formation history from carbon-monoxide intensity maps

    NASA Astrophysics Data System (ADS)

    Breysse, Patrick C.; Kovetz, Ely D.; Kamionkowski, Marc

    2016-03-01

    We demonstrate how cosmic star formation history can be measured with one-point statistics of carbon-monoxide intensity maps. Using a P(D) analysis, the luminosity function of CO-emitting sources can be inferred from the measured one-point intensity PDF. The star formation rate density (SFRD) can then be obtained, at several redshifts, from the CO luminosity density. We study the effects of instrumental noise, line foregrounds, and target redshift, and obtain constraints on the CO luminosity density of the order of 10 per cent. We show that the SFRD uncertainty is dominated by that of the model connecting CO luminosity and star formation. For pessimistic estimates of this model uncertainty, we obtain an error of the order of 50 per cent on SFRD for surveys targeting redshifts between two and seven with reasonable noise and foregrounds included. However, comparisons between intensity maps and galaxies could substantially reduce this model uncertainty. In this case, our constraints on SFRD at these redshifts improve to roughly 5 - 10 per cent, which is highly competitive with current measurements.
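
A toy version of the one-point (P(D)) idea above: populate map pixels with a Poisson number of emitters drawn from an assumed luminosity function, then histogram the summed pixel intensities to get the one-point PDF. The source counts and power-law luminosity shape are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
npix, mean_sources = 50_000, 3.0

# Poisson source counts per pixel, each source with a Pareto-distributed
# luminosity (a stand-in for a power-law luminosity function).
counts = rng.poisson(mean_sources, size=npix)
intensity = np.array([rng.pareto(2.5, kk).sum() + kk for kk in counts])

# The one-point intensity PDF from which the LF would be inferred.
pdf, edges = np.histogram(intensity, bins=50, density=True)
print(intensity.mean())
```

Inverting the measured PDF for the underlying luminosity function is the P(D) step; this sketch only shows the forward direction.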

  6. Dynamic T2-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    NASA Astrophysics Data System (ADS)

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-01

    Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1°C) and dynamic (<5s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T2, since T2 increases linearly in fat during heating. T2-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T2. Calibration of T2-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T2 and temperature with a thermocouple. A positive T2 temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T2-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
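
A minimal sketch of the calibration above: with the reported sensitivity of ~20 ms/°C for T2 in bone marrow, a temperature change follows directly from a change in T2. The baseline T2 and temperature below are invented for illustration.

```python
# Sensitivity from the abstract: T2 in bone marrow rises ~20 ms per degC.
T2_SENSITIVITY = 20.0  # ms per degC

def marrow_temperature(t2_ms, t2_baseline_ms=120.0, temp_baseline_c=37.0):
    """Estimate temperature from a T2 measurement (hypothetical baseline)."""
    return temp_baseline_c + (t2_ms - t2_baseline_ms) / T2_SENSITIVITY

# A 40 ms rise in T2 implies a 2 degC rise in temperature.
print(marrow_temperature(160.0))
```

In a dynamic acquisition this conversion would run per voxel per time frame, replacing PRFS thermometry where fat dominates.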

  7. Identification and mapping of the nursing diagnoses and actions in an Intensive Care Unit.

    PubMed

    Salgado, Patrícia de Oliveira; Chianca, Tânia Couto Machado

    2011-01-01

    This is a descriptive study with the aim of examining the nursing diagnoses labels and actions prescribed by nurses in the clinical records of patients hospitalized in an Adult Intensive Care Unit. A sample of 44 clinical records was obtained and a total of 1087 nursing diagnoses and 2260 nursing actions were identified. After exclusion of repetitions 28 different nursing diagnoses labels and 124 different nursing actions were found. Twenty-five nursing diagnoses labels were related to human psychobiological needs and three to psychosocial needs. All the nursing actions were mapped to the physiological needs and also to interventions of the Nursing Interventions Classification-NIC. Concordance of 100% was obtained between the experts in the validation process of the mapping performed, both for the nursing diagnoses labels and actions. Similar studies should be conducted for the identification and development of nursing diagnoses and actions.

  8. Coastal and estuarine habitat mapping, using LIDAR height and intensity and multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Chust, Guillem; Galparsoro, Ibon; Borja, Ángel; Franco, Javier; Uriarte, Adolfo

    2008-07-01

    The airborne laser scanning LIDAR (LIght Detection And Ranging) provides high-resolution Digital Terrain Models (DTM) that have been applied recently to the characterization, quantification and monitoring of coastal environments. This study assesses the contribution of LIDAR altimetry and intensity data, topographically-derived features (slope and aspect), and multi-spectral imagery (three visible and a near-infrared band), to map coastal habitats in the Bidasoa estuary and its adjacent coastal area (Basque Country, northern Spain). The performance of high-resolution data sources was individually and jointly tested, with the maximum likelihood algorithm classifier in a rocky shore and a wetland zone; thus, including some of the most extended Cantabrian Sea littoral habitats, within the Bay of Biscay. The results show that reliability of coastal habitat classification was enhanced more by the LIDAR-based DTM than by the other data sources: slope, aspect, intensity or near-infrared band. The addition of the DTM, to the three visible bands, produced gains of between 10% and 27% in the agreement measures, between the mapped and validation data (i.e. mean producer's and user's accuracy) for the two test sites. Raw LIDAR intensity images are only of limited value here, since they appeared heterogeneous and speckled. However, the enhanced Lee smoothing filter, applied to the LIDAR intensity, improved the overall accuracy measurements of the habitat classification, especially in the wetland zone; here, there were gains up to 7.9% in mean producer's and 11.6% in mean user's accuracy. This suggests that LIDAR can be useful for habitat mapping, when few data sources are available. The synergy between the LIDAR data, with multi-spectral bands, produced highly accurate classifications (mean producer's accuracy: 92% for the 16 rocky habitats and 88% for the 11 wetland habitats). Fusion of the data enabled discrimination of intertidal communities, such as Corallina elongata
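
A minimal Gaussian maximum-likelihood classifier of the kind used above, fusing two per-pixel "bands" (say, a visible band and a LIDAR DTM height). The class names, training means, and feature values are synthetic stand-ins, not the study's habitats.

```python
import numpy as np

# Synthetic training samples for two hypothetical habitat classes,
# each a 2-feature vector: (reflectance, DTM height in m).
rng = np.random.default_rng(5)
train = {
    "marsh": rng.normal([0.3, 1.0], 0.1, size=(200, 2)),
    "rock":  rng.normal([0.6, 5.0], 0.1, size=(200, 2)),
}

def classify(pixel):
    """Assign the class whose fitted Gaussian gives the highest log-likelihood."""
    best, best_ll = None, -np.inf
    for name, samples in train.items():
        mu = samples.mean(axis=0)
        cov = np.cov(samples.T)
        diff = pixel - mu
        ll = (-0.5 * diff @ np.linalg.solve(cov, diff)
              - 0.5 * np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

print(classify(np.array([0.58, 4.9])))
```

Adding the DTM as an extra feature dimension is exactly how the fusion gains reported above arise: classes overlapping in reflectance separate in height.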

  9. Expected constraints on models of the epoch of reionization with the variance and skewness in redshifted 21 cm-line fluctuations

    NASA Astrophysics Data System (ADS)

    Kubota, Kenji; Yoshiura, Shintaro; Shimabukuro, Hayato; Takahashi, Keitaro

    2016-08-01

    The redshifted 21 cm-line signal from neutral hydrogen in the intergalactic medium (IGM) gives a direct probe of the epoch of reionization (EoR). In this paper, we investigate the potential of the variance and skewness of the probability distribution function of the 21 cm brightness temperature for constraining EoR models. These statistical quantities are simple, easy to calculate from the observed visibility, and thus suitable for the early exploration of the EoR with current telescopes such as the Murchison Widefield Array (MWA) and LOw Frequency ARray (LOFAR). We show, by performing Fisher analysis, that the variance and skewness at z = 7-9 are complementary to each other to constrain the EoR model parameters such as the minimum virial temperature of halos which host luminous objects, ionizing efficiency, and mean free path of ionizing photons in the IGM. Quantitatively, the constraining power highly depends on the quality of the foreground subtraction and calibration. We give a best case estimate of the constraints on the parameters, neglecting the systematics other than the thermal noise.
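
The two statistics used above can be sketched directly: variance and skewness of a mock brightness-temperature cube. The lognormal field below is random numbers chosen to have visibly nonzero skewness, not a reionization simulation.

```python
import numpy as np

# Mock 21-cm brightness-temperature cube (arbitrary units).
rng = np.random.default_rng(4)
dTb = rng.lognormal(mean=0.0, sigma=0.5, size=(32, 32, 32))

# Variance and the standardized third moment (skewness) of the one-point
# distribution, the quantities the Fisher analysis constrains.
variance = dTb.var()
skewness = ((dTb - dTb.mean()) ** 3).mean() / dTb.std() ** 3
print(variance, skewness)
```

In practice both would be estimated per redshift bin from the imaged visibilities, with thermal noise bias subtracted.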

  10. A GREEN BANK TELESCOPE SURVEY FOR H I 21 cm ABSORPTION IN THE DISKS AND HALOS OF LOW-REDSHIFT GALAXIES

    SciTech Connect

    Borthakur, Sanchayeeta; Tripp, Todd M.; Yun, Min S.; Meiring, Joseph D.; Bowen, David V.; York, Donald G.; Momjian, Emmanuel

    2011-01-20

    We present an H I 21 cm absorption survey with the Green Bank Telescope (GBT) of galaxy-quasar pairs selected by combining galaxy data from the Sloan Digital Sky Survey (SDSS) and radio sources from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. Our sample consists of 23 sight lines through 15 low-redshift foreground galaxy-background quasar pairs with impact parameters ranging from 1.7 kpc up to 86.7 kpc. We detected one absorber in the GBT survey from the foreground dwarf galaxy, GQ1042+0747, at an impact parameter of 1.7 kpc and another possible absorber in our follow-up Very Large Array (VLA) imaging of the nearby foreground galaxy UGC 7408. The line widths of both absorbers are narrow (FWHM of 3.6 and 4.8 km s{sup -1}). The absorbers have sub-damped Ly{alpha} column densities, and most likely originate in the disk gas of the foreground galaxies. We also detected H I emission from three foreground galaxies including UGC 7408. Although our sample contains both blue and red galaxies, the two H I absorbers as well as the H I emissions are associated with blue galaxies. We discuss the physical conditions in the 21 cm absorbers and some drawbacks of the large GBT beam for this type of survey.

  11. Dual gradients of light intensity and nutrient concentration for full-factorial mapping of photosynthetic productivity.

    PubMed

    Nguyen, Brian; Graham, Percival J; Sinton, David

    2016-08-01

    Optimizing bioproduct generation from microalgae is complicated by the myriad of coupled parameters affecting photosynthetic productivity. Quantifying the effect of multiple coupled parameters in full-factorial fashion requires a prohibitively high number of experiments. We present a simple hydrogel-based platform for the rapid, full-factorial mapping of light and nutrient availability on the growth and lipid accumulation of microalgae. We accomplish this without microfabrication using thin sheets of cell-laden hydrogels. By immobilizing the algae in a hydrogel matrix we are able to take full advantage of the continuous spatial chemical gradient produced by a diffusion-based gradient generator while eliminating the need for chambers. We map the effect of light intensities between 0 μmol m(-2) s(-1) and 130 μmol m(-2) s(-1) (∼28 W m(-2)) coupled with ammonium concentrations between 0 mM and 7 mM on Chlamydomonas reinhardtii. Our data set, verified with bulk experiments, clarifies the role of ammonium availability on the photosynthetic productivity of Chlamydomonas reinhardtii, demonstrating the dependence of ammonium inhibition on light intensity. Specifically, a sharp optimal growth peak emerges at approximately 2 mM only for light intensities between 80 and 100 μmol m(-2) s(-1), suggesting that ammonium inhibition is insignificant at lower light intensities. We speculate that this phenomenon is due to the regulation of the high affinity ammonium transport system in Chlamydomonas reinhardtii as well as free ammonia toxicity. The complexity of this photosynthetic biological response highlights the importance of full-factorial data sets as enabled here. PMID:27364571
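
Reading an optimum off a full-factorial map like the one above reduces to an argmax over a light × ammonium grid. The synthetic growth response below is constructed to peak near 90 μmol m⁻² s⁻¹ and 2 mM; it is not the paper's data.

```python
import numpy as np

# Full-factorial grid over the two experimental axes.
light = np.linspace(0, 130, 27)    # umol m^-2 s^-1
ammonium = np.linspace(0, 7, 29)   # mM
L, N = np.meshgrid(light, ammonium, indexing="ij")

# Synthetic growth response with a single peak (illustrative only).
growth = np.exp(-((L - 90) / 25) ** 2 - ((N - 2) / 1.5) ** 2)

# Locate the optimal condition on the grid.
i, j = np.unravel_index(np.argmax(growth), growth.shape)
print(light[i], ammonium[j])
```

With the hydrogel platform, each grid cell is measured simultaneously, so the whole surface comes from one experiment rather than 27 × 29 separate cultures.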

  12. 3D leaf water content mapping using terrestrial laser scanner backscatter intensity with radiometric correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Wang, Tiejun; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Niemann, K. Olaf

    2015-12-01

    Leaf water content (LWC) plays an important role in agriculture and forestry management. It can be used to assess drought conditions and wildfire susceptibility. Terrestrial laser scanner (TLS) data have been widely used in forested environments for retrieving geometrically-based biophysical parameters. Recent studies have also shown the potential of using radiometric information (backscatter intensity) for estimating LWC. However, the usefulness of backscatter intensity data has been limited by leaf surface characteristics, and incidence angle effects. To explore the idea of using LiDAR intensity data to assess LWC we normalized (for both angular effects and leaf surface properties) shortwave infrared TLS data (1550 nm). A reflectance model describing both diffuse and specular reflectance was applied to remove strong specular backscatter intensity at a perpendicular angle. Leaves with different surface properties were collected from eight broadleaf plant species for modeling the relationship between LWC and backscatter intensity. Reference reflectors (Spectralon from Labsphere, Inc.) were used to build a look-up table to compensate for incidence angle effects. Results showed that before removing the specular influences, there was no significant correlation (R2 = 0.01, P > 0.05) between the backscatter intensity at a perpendicular angle and LWC. After the removal of the specular influences, a significant correlation emerged (R2 = 0.74, P < 0.05). The agreement between measured and TLS-derived LWC demonstrated a significant reduction of RMSE (root mean square error, from 0.008 to 0.003 g/cm2) after correcting for the incidence angle effect. We show that it is possible to use TLS to estimate LWC for selected broadleaved plants with an R2 of 0.76 (significance level α = 0.05) at leaf level. Further investigations of leaf surface and internal structure will likely result in improvements of 3D LWC mapping for studying physiology and ecology in vegetation.
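
A sketch of an incidence-angle correction for TLS backscatter intensity, assuming a Lambertian (cosine) diffuse response. This is a common first-order model, not the paper's Spectralon look-up table.

```python
import numpy as np

def correct_intensity(raw_intensity, incidence_deg):
    """Normalize diffuse backscatter to zero incidence: I0 = I / cos(theta)."""
    theta = np.radians(np.asarray(incidence_deg, dtype=float))
    return np.asarray(raw_intensity, dtype=float) / np.cos(theta)

# Under the cosine model, a leaf measured at 60 deg returns about half the
# intensity it would at normal incidence; the correction restores it.
print(correct_intensity(0.5, 60.0))
```

The specular-lobe removal described above must happen first; a cosine (or look-up table) correction only applies to the remaining diffuse component.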

  13. Squidpops: A Simple Tool to Crowdsource a Global Map of Marine Predation Intensity

    PubMed Central

    Duffy, J. Emmett; Ziegler, Shelby L.; Campbell, Justin E.; Bippus, Paige M.; Lefcheck, Jonathan S.

    2015-01-01

    We present a simple, standardized assay, the squidpop, for measuring the relative feeding intensity of generalist predators in aquatic systems. The assay consists of a 1.3-cm diameter disk of dried squid mantle tethered to a rod, which is either inserted in the sediment in soft-bottom habitats or secured to existing structure. Each replicate squidpop is scored as present or absent after 1 and 24 hours, and the data for analysis are proportions of replicate units consumed at each time. Tests in several habitats of the temperate southeastern USA (Virginia and North Carolina) and tropical Central America (Belize) confirmed the assay’s utility for measuring variation in predation intensity among habitats, among seasons, and along environmental gradients. In Belize, predation intensity varied strongly among habitats, with reef > seagrass = mangrove > unvegetated bare sand. Quantitative visual surveys confirmed that assayed feeding intensity increased with abundance and species richness of fishes across sites, with fish abundance and richness explaining up to 45% and 70% of the variation in bait loss respectively. In the southeastern USA, predation intensity varied seasonally, being highest during summer and declining in late autumn. Deployments in marsh habitats generally revealed a decline in mean predation intensity from fully marine to tidal freshwater sites. The simplicity, economy, and standardization of the squidpop assay should facilitate engagement of scientists and citizens alike, with the goal of constructing high-resolution maps of how top-down control varies through space and time in aquatic ecosystems, and addressing a broad array of long-standing hypotheses in macro- and community ecology. PMID:26599815
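
The squidpop scoring described above treats each deployment as a Bernoulli trial (bait gone or not), so a site's predation intensity is a proportion with a binomial confidence interval. The counts below are invented for illustration.

```python
import math

def predation_intensity(consumed, deployed, z=1.96):
    """Proportion consumed with a normal-approximation 95% CI, clipped to [0, 1]."""
    p = consumed / deployed
    half = z * math.sqrt(p * (1 - p) / deployed)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical site: 18 of 25 squidpops consumed within 24 hours.
p, lo, hi = predation_intensity(18, 25)
print(f"{p:.2f} ({lo:.2f}-{hi:.2f})")
```

Comparing such intervals across habitats or seasons is the analysis the crowdsourced map is designed to support.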

  15. Neural maps of interaural time and intensity differences in the optic tectum of the barn owl.

    PubMed

    Olsen, J F; Knudsen, E I; Esterly, S D

    1989-07-01

    This report describes the binaural basis of the auditory space map in the optic tectum of the barn owl (Tyto alba). Single units were recorded extracellularly in ketamine-anesthetized birds. Unit tuning for interaural differences in timing and intensity of wideband noise was measured using digitally synthesized sound presented through earphones. Spatial receptive fields of the same units were measured with a free field sound source. Auditory units in the optic tectum are sharply tuned for both the azimuth and the elevation of a free field sound source. To determine the binaural cues that could be responsible for this spatial tuning, we measured in the ear canals the amplitude and phase spectra produced by a free field noise source and calculated from these measurements the interaural differences in time and intensity associated with each of 178 locations throughout the frontal hemisphere. For all frequencies, interaural time differences (ITDs) varied systematically and most strongly with source azimuth. The pattern of variation of interaural intensity differences (IIDs) depended on frequency. For low frequencies (below 4 kHz) IID varied primarily with source azimuth, whereas for high frequencies (above 5 kHz) IID varied primarily with source elevation. Tectal units were tuned for interaural differences in both time and intensity of dichotic stimuli. Changing either parameter away from the best value for the unit decreased the unit's response. The tuning of units to either parameter was sharp: the width of ITD tuning curves, measured at 50% of the maximum response with IID held constant (50% tuning width), ranged from 18 to 82 microsecs. The 50% tuning widths of IID tuning curves, measured with ITD held constant, ranged from 8 to 37 dB. For most units, tuning for ITD was largely independent of IID, and vice versa. 
A few units exhibited systematic shifts of the best ITD with changes in IID (or shifts of the best IID with changes in ITD); for these units, a change in
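As an illustration of the underlying binaural cue (not the authors' method), an ITD can be estimated from two ear signals as the lag of the peak cross-correlation. A toy numpy sketch with an invented noise burst and delay:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Positive ITD means the right-ear signal lags the left."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

# toy wideband noise burst, right ear delayed by 5 samples (50 us at 100 kHz)
fs = 100_000
rng = np.random.default_rng(0)
sig = rng.standard_normal(2000)
d = 5
left = np.concatenate([sig, np.zeros(d)])
right = np.concatenate([np.zeros(d), sig])
itd = estimate_itd(left, right, fs)
```

A wideband signal gives an unambiguous correlation peak; for narrowband tones the peak repeats every period, which is why ITD is most informative at low frequencies.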

  16. Intensity mapping cross-correlations: connecting the largest scales to galaxy evolution

    NASA Astrophysics Data System (ADS)

    Wolz, L.; Tonini, C.; Blake, C.; Wyithe, J. S. B.

    2016-05-01

Intensity mapping of neutral hydrogen (H I) is a new observational tool to efficiently map the large-scale structure over wide redshift ranges. The cross-correlation of intensity maps with galaxy surveys is a robust measure of the cosmological power spectrum and of the H I content of galaxies, which diminishes systematics caused by instrumental effects and foreground removal. We examine the cross-correlation signature at redshift 0.9 using a semi-analytical galaxy formation model in order to model the H I gas of galaxies as well as their optical magnitudes. We determine the scale-dependent clustering of the cross-correlation power for different types of galaxies determined by their colours, which act as a proxy for their star formation activity. We find that the cross-correlation coefficient with H I density for red quiescent galaxies falls off more quickly on smaller scales k > 0.2 h Mpc-1 than for blue star-forming galaxies. Additionally, we create a mock catalogue of highly star-forming galaxies to mimic the WiggleZ Dark Energy Survey, and use this to predict existing and future measurements using data from the Green Bank and Parkes telescopes. We find that the cross-power of highly star-forming galaxies shows higher clustering on small scales than any other galaxy type and that this significantly alters the power spectrum shape on scales k > 0.2 h Mpc-1. We show that the cross-correlation coefficient is not negligible when interpreting the cosmological cross-power spectrum and additionally contains information about the H I content of the optically selected galaxies.
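The cross-correlation coefficient in question is, mode by mode, r(k) = P12(k) / sqrt(P11(k) P22(k)). A minimal numpy sketch on toy 1D fields (the fields and noise level are invented stand-ins for HI and galaxy maps):

```python
import numpy as np

def cross_corr_coeff(f1, f2):
    """Per-mode cross-correlation coefficient r(k) = P12 / sqrt(P11 * P22)
    of two 1D density fields (toy stand-ins for HI and galaxy maps)."""
    F1 = np.fft.rfft(f1)
    F2 = np.fft.rfft(f2)
    P11 = np.abs(F1) ** 2
    P22 = np.abs(F2) ** 2
    P12 = (F1 * np.conj(F2)).real
    return P12 / np.sqrt(P11 * P22)

# two tracers of the same underlying field plus independent small-scale noise
rng = np.random.default_rng(1)
base = rng.standard_normal(4096)
hi_map = base + 0.1 * rng.standard_normal(4096)
gal_map = base + 0.1 * rng.standard_normal(4096)
r = cross_corr_coeff(hi_map, gal_map)
```

By the Cauchy-Schwarz inequality r(k) never exceeds 1; independent noise in either tracer pulls it below 1, mimicking the small-scale decorrelation the abstract describes.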

  17. When intensions do not map onto extensions: Individual differences in conceptualization.

    PubMed

    Hampton, James A; Passanisi, Alessia

    2016-04-01

    Concepts are represented in the mind through knowledge of their extensions (the class of items to which the concept applies) and intensions (features that distinguish that class of items). A common assumption among theories of concepts is that the 2 aspects are intimately related. Hence if there is systematic individual variation in concept representation, the variation should correlate between extensional and intensional measures. A pair of individuals with similar extensional beliefs about a given concept should also share similar intensional beliefs. To test this notion, exemplars (extensions) and features (intensions) of common categories were rated for typicality and importance respectively across 2 occasions. Within-subject consistency was greater than between-subjects consensus on each task, providing evidence for systematic individual variation. Furthermore, the similarity structure between individuals for each task was stable across occasions. However, across 5 samples, similarity between individuals for extensional judgments did not map onto similarity between individuals for intensional judgments. The results challenge the assumption common to many theories of conceptual representation that intensions determine extensions and support a hybrid view of concepts where there is a disconnection between the conceptual resources that are used for the 2 tasks. PMID:26551627

  18. Measuring galaxy clustering and the evolution of [C II] mean intensity with far-IR line intensity mapping during 0.5 < z < 1.5

    SciTech Connect

    Uzgil, B. D.; Aguirre, J. E.; Lidz, A.; Bradford, C. M.

    2014-10-01

    Infrared fine-structure emission lines from trace metals are powerful diagnostics of the interstellar medium in galaxies. We explore the possibility of studying the redshifted far-IR fine-structure line emission using the three-dimensional (3D) power spectra obtained with an imaging spectrometer. The intensity mapping approach measures the spatio-spectral fluctuations due to line emission from all galaxies, including those below the individual detection threshold. The technique provides 3D measurements of galaxy clustering and moments of the galaxy luminosity function. Furthermore, the linear portion of the power spectrum can be used to measure the total line emission intensity including all sources through cosmic time with redshift information naturally encoded. Total line emission, when compared to the total star formation activity and/or other line intensities, reveals evolution of the interstellar conditions of galaxies in aggregate. As a case study, we consider measurement of [C II] autocorrelation in the 0.5 < z < 1.5 epoch, where interloper lines are minimized, using far-IR/submillimeter balloon-borne and future space-borne instruments with moderate and high sensitivity, respectively. In this context, we compare the intensity mapping approach to blind galaxy surveys based on individual detections. We find that intensity mapping is nearly always the best way to obtain the total line emission because blind, wide-field galaxy surveys lack sufficient depth and deep pencil beams do not observe enough galaxies in the requisite luminosity and redshift bins. Also, intensity mapping is often the most efficient way to measure the power spectrum shape, depending on the details of the luminosity function and the telescope aperture.
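The connection to "moments of the galaxy luminosity function" can be sketched: the clustering term of a line-intensity power spectrum scales with the square of the first luminosity moment, and the shot noise with the second. A hedged numerical sketch with a Schechter form and invented parameter values:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integral (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function dn/dL; all parameters hypothetical."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

L = np.logspace(7, 12, 20000)                  # luminosity grid (L_sun)
phi = schechter(L, phi_star=2e-3, L_star=2e9, alpha=-1.5)
first_moment = trapezoid(L * phi, L)           # sets the mean line intensity
second_moment = trapezoid(L ** 2 * phi, L)     # sets the shot-noise power
L_eff = second_moment / first_moment           # luminosity weighting the shot noise
```

Because the second moment weights bright galaxies more heavily, the effective shot-noise luminosity L_eff sits near L_star even when faint galaxies dominate the mean intensity, which is why intensity mapping captures sources below any individual detection threshold.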

  19. Dynamics of Hollow Atom Formation in Intense X-Ray Pulses Probed by Partial Covariance Mapping

    NASA Astrophysics Data System (ADS)

    Frasinski, L. J.; Zhaunerchyk, V.; Mucke, M.; Squibb, R. J.; Siano, M.; Eland, J. H. D.; Linusson, P.; v. d. Meulen, P.; Salén, P.; Thomas, R. D.; Larsson, M.; Foucar, L.; Ullrich, J.; Motomura, K.; Mondal, S.; Ueda, K.; Osipov, T.; Fang, L.; Murphy, B. F.; Berrah, N.; Bostedt, C.; Bozek, J. D.; Schorb, S.; Messerschmidt, M.; Glownia, J. M.; Cryan, J. P.; Coffee, R. N.; Takahashi, O.; Wada, S.; Piancastelli, M. N.; Richter, R.; Prince, K. C.; Feifel, R.

    2013-08-01

When exposed to ultraintense x-radiation sources such as free electron lasers (FELs), the innermost electronic shell can efficiently be emptied, creating a transient hollow atom or molecule. Understanding the femtosecond dynamics of such systems is fundamental to achieving atomic resolution in flash diffraction imaging of noncrystallized complex biological samples. We demonstrate the capacity of a correlation method called “partial covariance mapping” to probe the electron dynamics of neon atoms exposed to intense 8 fs pulses of 1062 eV photons. A complete picture of ionization processes competing in hollow atom formation and decay is visualized with unprecedented ease, and the map reveals hitherto unobserved nonlinear sequences of photoionization and Auger events. The technique is particularly well suited to the high counting rate inherent in FEL experiments.
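Partial covariance subtracts from the ordinary covariance map the correlations induced by a common fluctuating scalar parameter I (e.g. FEL pulse energy): pcov = cov(X, X) - cov(X, I) cov(I, X) / var(I). A toy numpy sketch with invented shot data:

```python
import numpy as np

def partial_covariance_map(X, I):
    """pcov = cov(X, X) - cov(X, I) cov(I, X) / var(I): covariance map of the
    channels X (shots x channels) with correlations induced by the fluctuating
    scalar parameter I projected out."""
    Xc = X - X.mean(axis=0)
    Ic = I - I.mean()
    n = X.shape[0]
    cov = Xc.T @ Xc / n
    cov_xi = Xc.T @ Ic / n          # covariance of each channel with I
    return cov - np.outer(cov_xi, cov_xi) / Ic.var()

# toy shots: two channels both driven by a common "pulse energy" I
rng = np.random.default_rng(2)
n = 20_000
I = 5.0 + rng.standard_normal(n)
X = np.column_stack([2.0 * I + rng.standard_normal(n),
                     -1.0 * I + rng.standard_normal(n)])
raw_cov = np.cov(X.T, bias=True)
pcov = partial_covariance_map(X, I)
```

In the toy data the raw covariance between the two channels is dominated by the shared pulse-energy fluctuation; the partial covariance removes it, leaving only genuinely correlated physics (here, none).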

  20. Comparing USGS national seismic hazard maps with internet-based macroseismic intensity observations

    NASA Astrophysics Data System (ADS)

    Mak, Sum; Schorlemmer, Danijel

    2016-04-01

Verifying a nationwide seismic hazard assessment using data collected after the assessment has been made (i.e., prospective data) is a direct consistency check of the assessment. We directly compared the rate of ground-motion exceedance predicted by the four available versions of the USGS national seismic hazard map (NSHMP; 1996, 2002, 2008, 2014) with the rate actually observed during 2000-2013. The data were prospective with respect to the two earlier versions of the NSHMP. We used two sets of somewhat independent data, namely 1) the USGS "Did You Feel It?" (DYFI) intensity reports, and 2) instrumental ground motion records extracted from ShakeMap stations. Although both are observed data, they differ in degree of accuracy. Our results indicated that for California, the predicted and observed hazards were very comparable. The two sets of data gave consistent results, implying robustness. The consistency also encourages the use of DYFI data for hazard verification in the Central and Eastern US (CEUS), where instrumental records are lacking. The result showed that the observed ground-motion exceedance was also consistent with the predicted rate in the CEUS. The primary value of this study is to demonstrate the usefulness of DYFI data, originally designed for community communication rather than scientific analysis, for the purpose of hazard verification.

  1. Intensity-modulated radiosurgery treatment planning by fluence mapping optimized multi-isocenter plans

    NASA Astrophysics Data System (ADS)

    St. John, Theodore Jeffrey

Stereotactic radiosurgery (SRS) is a non-invasive surgical technique of using a high-intensity beam of x rays to obliterate intracranial lesions. The multiple-isocenter, circular-collimator, arc technique has been used successfully at the University of Florida since the inception of their radiosurgery program in 1988. This technique has been shown to produce highly conformal radiation dose distributions with steep dose gradients, which are key factors in delivering high dose to the tumor and low dose to surrounding healthy tissue. However, the time required to deliver the treatment to a complex target requiring many isocenters may exceed several hours. In this investigation, a unique method of intensity modulation that approximates the fluence map produced by the multiple-isocenter arc technique is employed. An algorithm was created that reads the dosimetry file from the multiple-isocenter treatment plan, segments each arc into a set of static beams and combines all of the beams from a set of table and gantry angles so that they can be delivered using a miniature multi-leaf collimator (mMLC). The mMLC shapes each beam, in such a way as to closely approximate the original dose distribution, alleviating the need to reposition the patient or manually change the collimator for each isocenter. The purpose of this research is to determine how well a mMLC, which has a set number of leaves with finite leaf widths, can approximate the dose distribution produced by the standard circular collimator, arc technique. The investigation begins with a study of how the dose distribution is changed by using a set of static beams in place of arcs, followed by a study of the effect of MLC leaf width and the development and application of the experimental fluence-mapped MLC treatment technique. The development and testing of the fluence-mapping algorithm, a dosimetry program, and several graphical user-interface tools are described. 
These tools were used to calculate and compare the dose

  2. The use of multibeam backscatter intensity data as a tool for mapping glacial deposits in the Central North Sea, UK

    NASA Astrophysics Data System (ADS)

    Stewart, Heather; Bradwell, Tom

    2014-05-01

    Multibeam backscatter intensity data acquired offshore eastern Scotland and north-eastern England have been used to map drumlin fields, large arcuate moraine ridges, smaller scale moraine ridges, and incised channels on the sea floor. The study area includes the catchments of the previously proposed, but only partly mapped, Strathmore, Forth-Tay, and Tweed palaeo-ice streams. The ice sheet glacial landsystem is extremely well preserved on the sea bed and comprehensive mapping of the seafloor geomorphology has been undertaken. The authors demonstrate the value in utilising not only digital terrain models (both NEXTMap and multibeam bathymetry derived) in undertaking geomorphological mapping, but also examining the backscatter intensity data that is often overlooked. Backscatter intensity maps were generated using FM Geocoder by the British Geological Survey. FM Geocoder corrects the backscatter intensities registered by the multibeam echosounder system, and then geometrically corrects and positions each acoustic sample in a backscatter mosaic. The backscatter intensity data were gridded at the best resolution per dataset (between 2 and 5 m). The strength of the backscattering is dependent upon sediment type, grain size, survey conditions, sea-bed roughness, compaction and slope. A combination of manual interpretation and semi-automated classification of the backscatter intensity data (a predictive method for mapping variations in surficial sea-bed sediments) has been undertaken in the study area. The combination of the two methodologies has produced a robust glacial geomorphological map for the study area. Four separate drumlin fields have been mapped in the study area indicative of fast-flowing and persistent ice-sheet flow configurations. A number of individual drumlins are also identified located outside the fields. The drumlins show as areas of high backscatter intensity compared to the surrounding sea bed, indicating the drumlins comprise mixed sediments of

  3. Analytical source-target mapping method for the design of freeform mirrors generating prescribed 2D intensity distributions.

    PubMed

    Doskolovich, Leonid L; Bezus, Evgeni A; Moiseev, Mikhail A; Bykov, Dmitry A; Kazanskiy, Nikolay L

    2016-05-16

A new source-target mapping for the design of mirrors generating prescribed 2D intensity distributions is proposed. The surface of the mirror implementing the obtained mapping is expressed in an analytical form. The presented simulation results demonstrate the high performance of the proposed method. In the case of the generation of rectangular and elliptical intensity distributions with angular dimensions from 80° × 20° to 40° × 20°, the relative standard error does not exceed 8.5%. The method can be extended to the calculation of refractive optical elements.

  4. Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2016-08-01

We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ∼ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ∼ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ∼10⁻¹⁷ W/Hz¹ᐟ². This proceeding presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.

  5. Clustering of quintessence on horizon scales and its imprint on HI intensity mapping

    SciTech Connect

    Duniya, Didam G.A.; Bertacca, Daniele; Maartens, Roy E-mail: daniele.bertacca@gmail.com

    2013-10-01

    Quintessence can cluster only on horizon scales. What is the effect on the observed matter distribution? To answer this, we need a relativistic approach that goes beyond the standard Newtonian calculation and deals properly with large scales. Such an approach has recently been developed for the case when dark energy is vacuum energy, which does not cluster at all. We extend this relativistic analysis to deal with dynamical dark energy. Using three quintessence potentials as examples, we compute the angular power spectrum for the case of an HI intensity map survey. Compared to the concordance model with the same small-scale power at z = 0, quintessence boosts the angular power by up to ∼ 15% at high redshifts, while power in the two models converges at low redshifts. The difference is mainly due to the background evolution, driven mostly by the normalization of the power spectrum today. The dark energy perturbations make only a small contribution on the largest scales, and a negligible contribution on smaller scales. Ironically, the dark energy perturbations remove the false boost of large-scale power that arises if we impose the (unphysical) assumption that the dark energy is smooth.

  6. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, Alan M.; Edwards, William R.

    1983-01-01

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  7. Detection of magnetic field intensity gradient by homing pigeons (Columba livia) in a novel "virtual magnetic map" conditioning paradigm.

    PubMed

    Mora, Cordula V; Bingman, Verner P

    2013-01-01

    It has long been thought that birds may use the Earth's magnetic field not only as a compass for direction finding, but that it could also provide spatial information for position determination analogous to a map during navigation. Since magnetic field intensity varies systematically with latitude and theoretically could also provide longitudinal information during position determination, birds using a magnetic map should be able to discriminate magnetic field intensity cues in the laboratory. Here we demonstrate a novel behavioural paradigm requiring homing pigeons to identify the direction of a magnetic field intensity gradient in a "virtual magnetic map" during a spatial conditioning task. Not only were the pigeons able to detect the direction of the intensity gradient, but they were even able to discriminate upward versus downward movement on the gradient by differentiating between increasing and decreasing intensity values. Furthermore, the pigeons typically spent more than half of the 15 second sampling period in front of the feeder associated with the rewarded gradient direction indicating that they required only several seconds to make the correct choice. Our results therefore demonstrate for the first time that pigeons not only can detect the presence and absence of magnetic anomalies, as previous studies had shown, but are even able to detect and respond to changes in magnetic field intensity alone, including the directionality of such changes, in the context of spatial orientation within an experimental arena. This opens up the possibility for systematic and detailed studies of how pigeons could use magnetic intensity cues during position determination as well as how intensity is perceived and where it is processed in the brain. PMID:24039812

  8. Five Years of Citizen Science: Macroseismic Data Collection with the USGS Community Internet Intensity Maps (``Did You Feel It?'')

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Dewey, J. W.; Hopper, M.; Tarr, A.

    2003-12-01

    The U.S. Geological Survey Community Internet Intensity Map (CIIM) is an automatic Web-based system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. The data collection procedure is fundamentally Citizen Science. The vast majority of data are contributed by non-specialists, describing their own experiences of earthquakes. Internet data contributed by the public have profoundly changed the approach, coverage and usefulness of intensity observation in the U.S. We now typically receive thousands of individual questionnaire responses for widely felt earthquakes. After five years, these total over 350,000 individual entries nationwide, including entries from all 50 States, the District of Columbia, as well as territories of Guam, the Virgin Islands and Puerto Rico. The widespread access and use of online felt reports have added unanticipated but welcome capacities to USGS earthquake reporting. We can more easily validate earthquake occurrence in poorly instrumented regions, identify and locate sonic booms, and readily gauge societal importance of earthquakes by the nature of the response. In some parts of the U.S., CIIM provides constraints on earthquake magnitudes and focal depths beyond those provided by instrumental data, and the data are robust enough to test regionalized models of ground-motion attenuation. CIIM invokes an enthusiastic response from members of the public who contribute to it; it clearly provides an important opportunity for public education and outreach. In this paper we provide background on advantages and limitations of on-line data collection and explore recent developments and improvements to the CIIM system, including improved quality assurance using a relational database and greater data availability for scientific and sociological studies. 
We also describe a number of post-processing tools and applications that make use

  9. Mapping the spatial patterns of field traffic and traffic intensity to predict soil compaction risks at the field scale

    NASA Astrophysics Data System (ADS)

    Duttmann, Rainer; Kuhwald, Michael; Nolde, Michael

    2015-04-01

Soil compaction is one of the main threats to cropland soils at present. In contrast to easily visible phenomena of soil degradation, soil compaction, however, is obscured by other signals such as reduced crop yield, delayed crop growth, and the ponding of water, which makes it difficult to recognize and locate areas impacted by soil compaction directly. Although it is known that trafficking intensity is a key factor for soil compaction, until today only modest work has been concerned with the mapping of the spatially distributed patterns of field traffic and with the visual representation of the loads and pressures applied by farm traffic within single fields. A promising method for the spatial detection and mapping of soil compaction risks in individual fields is to process dGPS data collected from vehicle-mounted GPS receivers and to compare the soil stress induced by farm machinery to the load bearing capacity derived from given soil map data. The application of position-based machinery data enables the mapping of vehicle movements over time as well as the assessment of trafficking intensity. It also facilitates the calculation of the trafficked area and the modeling of the loads and pressures applied to soil by individual vehicles. This paper focuses on the modeling and mapping of the spatial patterns of traffic intensity in silage maize fields during harvest, considering the spatio-temporal changes in wheel load and ground contact pressure along the loading sections. In addition to scenarios calculated for varying mechanical soil strengths, an example of visualizing the three-dimensional stress propagation inside the soil is given, using the Visualization Toolkit (VTK) to construct 2D or 3D maps supporting decision making for sustainable field traffic management.
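A first-order sketch of the kind of quantity being mapped: mean ground contact pressure from wheel load and tyre footprint, with the load growing along a loading section. All numbers here are hypothetical, not from the study:

```python
def mean_contact_pressure(wheel_load_kg, tyre_width_m, contact_length_m):
    """Mean ground contact pressure (kPa) from a rectangular tyre footprint,
    a common first-order approximation."""
    g = 9.81
    area_m2 = tyre_width_m * contact_length_m    # footprint area
    return wheel_load_kg * g / area_m2 / 1000.0  # Pa -> kPa

# hypothetical harvester wheel, footprint 0.8 m x 0.7 m
p_start = mean_contact_pressure(5000.0, 0.8, 0.7)  # start of a loading section
p_end = mean_contact_pressure(8000.0, 0.8, 0.7)    # wheel load grown while loading
```

Comparing such per-pass pressures against a soil-strength map is one way to flag where trafficking exceeds the load bearing capacity.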

  10. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1982-03-23

A long-lifetime light source is discussed with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode.

  11. Long lifetime, low intensity light source for use in nighttime viewing of equipment maps and other writings

    DOEpatents

    Frank, A.M.; Edwards, W.R.

    1983-10-11

    A long-lifetime light source with sufficiently low intensity to be used for reading a map or other writing at nighttime, while not obscuring the user's normal night vision is disclosed. This light source includes a diode electrically connected in series with a small power source and a lens properly positioned to focus at least a portion of the light produced by the diode. 1 fig.

  12. SRS 2010 Vegetation Inventory GeoStatistical Mapping Results for Custom Reaction Intensity and Total Dead Fuels.

    SciTech Connect

Edwards, Lloyd A.; Parresol, Bernard

    2014-09-01

    This report of the geostatistical analysis results of the fire fuels response variables, custom reaction intensity and total dead fuels is but a part of an SRS 2010 vegetation inventory project. For detailed description of project, theory and background including sample design, methods, and results please refer to USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report”, (Edwards & Parresol 2013).

  13. PHASE RETRIEVAL, SYMMETRIZATION RULE AND TRANSPORT OF INTENSITY EQUATION IN APPLICATION TO INDUCTION MAPPING OF MAGNETIC MATERIALS.

    SciTech Connect

    VOLKOV,V.V.; ZHU,Y.

    2002-08-04

Recent progress in the field of noninterferometric phase retrieval brings ordinary Fresnel microscopy to a new quantitative level, suitable for recovering both the amplitude and phase of the object based on image intensity measurements. We show that this is sufficient for in-plane mapping of the magnetic induction of small magnetic elements with known geometry, ranging in size from micrometers down to a few nanometers. In the present paper we re-examine some conservation principles used for the transport-of-intensity equation (TIE) derived by Teague for application to phase retrieval in light and X-ray optics. In particular, we prove that the intensity conservation law should be replaced in the general case with the energy-flow conservation law. This law describes the amplitude-phase balance of the partially coherent beam on its propagation along the optical path, valid both for light and electron optics. This substitution has at least two important fundamental consequences.
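For the special case of uniform intensity I0, the TIE, div(I grad(phi)) = -k dI/dz, reduces to a Poisson equation for the phase that can be inverted in Fourier space. A self-contained toy sketch in natural units (not the authors' implementation, which treats the general energy-flow case):

```python
import numpy as np

def tie_phase_uniform(dIdz, I0, k, dx):
    """Recover the phase from the longitudinal intensity derivative via the
    TIE with uniform intensity I0: laplacian(phi) = -(k / I0) dI/dz,
    inverted in Fourier space (the undetermined DC mode is set to zero)."""
    n = dIdz.shape[0]
    q = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    q2 = qx ** 2 + qy ** 2
    q2[0, 0] = 1.0                       # avoid 0/0 at the DC mode
    rhs = -(k / I0) * dIdz
    phi_hat = np.fft.fft2(rhs) / (-q2)
    phi_hat[0, 0] = 0.0                  # phase is defined up to a constant
    return np.fft.ifft2(phi_hat).real

# toy test: build dI/dz from a known phase and recover it
n, dx = 64, 1.0
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
q0 = 2 * np.pi * 4 / (n * dx)
phi_true = np.cos(q0 * X) + 0.5 * np.sin(q0 * Y)
lap_phi = -q0 ** 2 * phi_true            # laplacian of phi_true
I0, k = 1.0, 1.0
dIdz = -(I0 / k) * lap_phi               # TIE: dI/dz = -(I0/k) laplacian(phi)
phi_rec = tie_phase_uniform(dIdz, I0, k, dx)
err = float(np.max(np.abs(phi_rec - phi_true)))
```

For band-limited periodic fields the Fourier inversion is exact up to the global constant, which is the sense in which intensity measurements alone suffice to recover the phase.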

  14. Mapping

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1978-01-01

    Geologic mapping in the United States increased by about one-quarter in the past year. Examinations of mapping trends were in the following categories: (1) Mapping at scales of 1:100, 000; (2) Metric-scale base maps; (3) International mapping, and (4) Planetary mapping. (MA)

  15. First dose-map measured with a polycrystalline diamond 2D dosimeter under an intensity modulated radiotherapy beam

    NASA Astrophysics Data System (ADS)

    Scaringella, M.; Zani, M.; Baldi, A.; Bucciolini, M.; Pace, E.; de Sio, A.; Talamonti, C.; Bruzzi, M.

    2015-10-01

A prototype bidimensional dosimeter made on a 2.5×2.5 cm2 active-area polycrystalline Chemical Vapour Deposited (pCVD) diamond film, equipped with a matrix of 12×12 contacts connected to the read-out electronics, has been used to evaluate a map of dose under Intensity Modulated Radiation Therapy (IMRT) fields for a possible application in pre-treatment verification of cancer treatments. Tests have been performed under 6-10 MV x-ray beams with IMRT fields for prostate and breast cancer. Measurements have been taken by reading the 144 pixels at different positions, obtained by shifting the device along the x/y axes to span a total map of 14.4×10 cm2. Results show that absorbed doses measured by our pCVD diamond device are consistent with those calculated by the Treatment Planning System (TPS).

  16. When Intensions Do Not Map onto Extensions: Individual Differences in Conceptualization

    ERIC Educational Resources Information Center

    Hampton, James A.; Passanisi, Alessia

    2016-01-01

    Concepts are represented in the mind through knowledge of their extensions (the class of items to which the concept applies) and intensions (features that distinguish that class of items). A common assumption among theories of concepts is that the 2 aspects are intimately related. Hence if there is systematic individual variation in concept…

  17. Imaging-intensive guidance with confirmatory physiological mapping for neurosurgery of movement disorders

    NASA Astrophysics Data System (ADS)

    Nauta, Haring J.; Bonnen, J. G.; Soukup, V. M.; Gonzalez, A.; Schiess, Mya C.

    1998-06-01

    Stereotactic surgery for movement disorders is typically performed using both imaging and physiologic guidance. However, different neurosurgical centers vary in the emphasis placed on either the imaging or the physiological mapping used to locate the target in the brain. The relative ease with which imaging data is acquired currently and the relative complexity and invasiveness associated with physiologic mapping prompted an evaluation of a method that seeks to maximize the imaging component of the guidance in order to minimize the need for the physiologic mapping. The evaluation was carried out in 37 consecutive stereotactic procedures for movement disorders in 28 patients. Imaging was performed with the patients in a stereotactic head frame. Imaging data from MRI in three planes, CT and positive contrast ventriculography was all referenced to this headframe and combined in a stereotactic planning computer. Physiologic definition of the target was performed by macroelectrode stimulation. Any discrepancy between the coordinates of the imaging predicted target and physiologically defined target was measured. The imaging- predicted target coordinates allowed the physiologically defined target to be reached on the first electrode penetration in 70% of procedures and within two penetrations in 92%. The mean error between imaging predicted and physiologically defined target position was 1.24 mm. Lesion location was confirmed by postoperative MRI. There were no permanent complications in this series. Functional outcomes were comparable to those achieved by centers mapping with multiple microelectrode penetrations. The findings suggest that while physiologic guidance remains necessary, the extent to which it is needed can be reduced by acquiring as much imaging information as possible in the initial stages of the procedure. These data can be combined and prioritized in a stereotactic planning computer such that the surgeon can take full advantage of the most reliable

  18. A special kind of local structure in the CMB intensity maps: dual peak structure

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Ti-Pei

    2009-03-01

    We study the local structure of Cosmic Microwave Background (CMB) temperature maps released by the Wilkinson Microwave Anisotropy Probe (WMAP) team and find a new kind of structure, which can be described as follows: a peak (or valley) of average temperature is often accompanied by a peak of temperature fluctuation 4° away. This structure is important for the following reasons: both the well-known cold spot detected by Cruz et al. and the hot spot detected by Vielva et al. with the same technique (the third spot in their article) show this structure; more spots similar to them can be found on CMB maps, and these also tend to be significant cold/hot spots; and if the 4° characteristic separation is replaced by an artificial one, such as 3° or 5°, fewer 'similar spots' are found and the temperature peaks or valleys are less significant. The 'similar spots' presented here have passed a strict consistency test which requires them to be significant on at least three different CMB temperature maps. We hope that this article will stimulate interest in the relationship between average temperature and temperature fluctuation in local areas; meanwhile, we are also seeking an explanation, which might be important for CMB observation and theory.
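
    Testing whether a fluctuation peak lies 4° from a temperature peak requires only great-circle separations on the sphere. A minimal sketch of that separation test (generic spherical geometry, not the authors' pipeline):

```python
import math

def angular_separation_deg(lon1, lat1, lon2, lat2):
    """Great-circle separation in degrees between two sky positions,
    given in degrees in a longitude/latitude convention."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Vincenty form of the separation: numerically stable at small angles.
    num = math.hypot(
        math.cos(p2) * math.sin(dlon),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon),
    )
    den = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(num, den))

# A fluctuation peak 4 degrees in longitude from a temperature valley on the equator:
sep = angular_separation_deg(0.0, 0.0, 4.0, 0.0)
```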

  19. Intensity and Range Image Based Features for Object Detection in Mobile Mapping Data

    NASA Astrophysics Data System (ADS)

    Palmer, R.; Borck, M.; West, G.; Tan, T.

    2012-07-01

    Mobile mapping is used for asset management, change detection, surveying and dimensional analysis. There is a great desire to automate these processes given the very large amounts of data, especially when 3-D point cloud data are combined with co-registered imagery - termed "3-D images". One approach requires low-level feature extraction from the images and point cloud data, followed by pattern recognition and machine learning techniques to recognise the various high-level features (or objects) in the images. This paper covers low-level feature analysis and investigates a number of different feature extraction methods for their usefulness. The features of interest include those based on the "bag of words" concept, in which many low-level features are used, e.g. histograms of gradients, as well as those describing saliency (how unusual a region of the image is). These mainly image-based features have been adapted to deal with 3-D images. The performance of the various features is discussed for typical mobile mapping scenarios and recommendations are made as to the best features to use.
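
    A histogram-of-gradients descriptor of the kind referenced above can be sketched in a few lines; this is a generic illustration, not the paper's implementation:

```python
import numpy as np

def gradient_histogram(patch, bins=8):
    """Minimal histogram-of-gradients descriptor for one image patch:
    bin unsigned gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist      # L2-normalised descriptor

# Horizontal intensity ramp -> all gradient energy falls in the first orientation bin.
patch = np.tile(np.arange(8.0), (8, 1))
descriptor = gradient_histogram(patch)
```

    In practice such descriptors are computed over a dense grid of patches and fed to a classifier; the adaptation to "3-D images" replaces or augments the image gradients with range-image gradients.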

  20. From Recollisions to the Knee: A Road Map for Double Ionization in Intense Laser Fields

    SciTech Connect

    Mauger, F.; Chandre, C.; Uzer, T.

    2010-01-29

    We examine the nature and statistical properties of electron-electron collisions in the recollision process in a strong laser field. The separation of the double ionization yield into sequential and nonsequential components leads to a bell-shaped curve for the nonsequential probability and a monotonically rising one for the sequential process. We identify key features of the nonsequential process and connect our findings in a simplified model which reproduces the knee shape for the probability of double ionization with laser intensity and associated trends.

  1. Dynamic T{sub 2}-mapping during magnetic resonance guided high intensity focused ultrasound ablation of bone marrow

    SciTech Connect

    Waspe, Adam C.; Looi, Thomas; Mougenot, Charles; Amaral, Joao; Temple, Michael; Sivaloganathan, Siv; Drake, James M.

    2012-11-28

    Focal bone tumor treatments include amputation, limb-sparing surgical excision with bone reconstruction, and high-dose external-beam radiation therapy. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is an effective non-invasive thermotherapy for palliative management of bone metastases pain. MR thermometry (MRT) measures the proton resonance frequency shift (PRFS) of water molecules and produces accurate (<1 °C) and dynamic (<5 s) thermal maps in soft tissues. PRFS-MRT is ineffective in fatty tissues such as yellow bone marrow and, since accurate temperature measurements are required in the bone to ensure adequate thermal dose, MR-HIFU is not indicated for primary bone tumor treatments. Magnetic relaxation times are sensitive to lipid temperature and we hypothesize that bone marrow temperature can be determined accurately by measuring changes in T{sub 2}, since T{sub 2} increases linearly in fat during heating. T{sub 2}-mapping using dual echo times during a dynamic turbo spin-echo pulse sequence enabled rapid measurement of T{sub 2}. Calibration of T{sub 2}-based thermal maps involved heating the marrow in a bovine femur and simultaneously measuring T{sub 2} and temperature with a thermocouple. A positive T{sub 2} temperature dependence in bone marrow of 20 ms/°C was observed. Dynamic T{sub 2}-mapping should enable accurate temperature monitoring during MR-HIFU treatment of bone marrow and shows promise for improving the safety and reducing the invasiveness of pediatric bone tumor treatments.
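
    The reported calibration (a linear T2 increase of about 20 ms/°C in fat) implies a simple conversion from a measured T2 change to a temperature estimate. A sketch under that linearity assumption; the reference values below are hypothetical:

```python
def marrow_temperature(t2_ms, t2_ref_ms, temp_ref_c, slope_ms_per_c=20.0):
    """Estimate bone-marrow temperature from a T2 measurement, assuming the
    linear T2-temperature dependence (~20 ms/degC) reported for fat heating."""
    return temp_ref_c + (t2_ms - t2_ref_ms) / slope_ms_per_c

# Hypothetical calibration point: T2 = 160 ms at body temperature (37 degC).
temp_c = marrow_temperature(t2_ms=260.0, t2_ref_ms=160.0, temp_ref_c=37.0)
```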

  2. Near-Surface Geophysical Mapping of the Hydrological Response to an Intense Rainfall Event at the Field Scale

    NASA Astrophysics Data System (ADS)

    Martínez, G.; Vanderlinden, K.; Giraldez, J. V.; Espejo, A. J.; Muriel, J. L.

    2009-12-01

    Soil moisture plays an important role in a wide variety of biogeochemical fluxes in the soil-plant-atmosphere system and governs the (eco)hydrological response of a catchment to an external forcing such as rainfall. Near-surface electromagnetic induction (EMI) sensors that measure the soil apparent electrical conductivity (ECa) provide a fast and non-invasive means for characterizing this response at the field or catchment scale through high-resolution time-lapse mapping. Here we show how ECa maps, obtained before and after an intense rainfall event of 125 mm h-1, elucidate differences in soil moisture patterns and in the hydrologic response of an experimental field as a consequence of differing soil management. The dryland field (Vertisol) was located in SW Spain and cropped with a typical wheat-sunflower-legume rotation. Both near-surface and subsurface ECa (ECas and ECad, respectively) were measured using the EM38-DD EMI sensor in a mobile configuration. Raw ECa measurements and Mean Relative Differences (MRD) provided information on soil moisture patterns, while time-lapse maps were used to evaluate the hydrologic response of the field. ECa maps measured before and after the rainfall event showed similar patterns. The field depressions where most of the water and sediments accumulated had the highest ECa and MRD values. The SE-oriented soil, which was deeper and more exposed to sun and wind, showed the lowest ECa and MRD. The largest differences arose in the central part of the field, where a high-ECa and high-MRD area appeared after the rainfall event as a consequence of the smaller soil depth and a possible subsurface flux concentration. Time-lapse maps of both ECa and MRD were also similar. The direct-drill plots showed larger increases in ECa and MRD as a result of their smaller runoff production. Time-lapse ECa increments showed a bimodal distribution, clearly differentiating the direct-drill plots from the conventional and minimum tillage plots. 
However this kind
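
    The mean relative difference used above is a standard temporal-stability statistic: each site's relative deviation from the field mean, averaged over survey dates. A minimal sketch; the survey values are invented for illustration:

```python
import numpy as np

def mean_relative_difference(eca):
    """Mean relative difference (MRD) of apparent electrical conductivity.
    eca is an (n_surveys, n_sites) array; for each site, average over
    survey dates of (site value - field mean) / field mean."""
    eca = np.asarray(eca, dtype=float)
    field_mean = eca.mean(axis=1, keepdims=True)   # field average per survey
    rel_diff = (eca - field_mean) / field_mean
    return rel_diff.mean(axis=0)                   # temporal mean per site

# Two surveys (before/after rainfall) at three hypothetical sites (mS/m):
eca_surveys = [[40.0, 50.0, 60.0],
               [44.0, 55.0, 66.0]]
mrd = mean_relative_difference(eca_surveys)
```

    Sites with persistently positive MRD are wetter (more conductive) than the field average across surveys, which is how the depressional, water-accumulating zones show up in the maps.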

  3. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  4. Maps showing petroleum exploration intensity and production in major Cambrian to Ordovician reservoir rocks in the Anadarko Basin

    USGS Publications Warehouse

    Henry, Mitch; Hester, Tim

    1996-01-01

    The Anadarko basin is a large, deep, two-stage Paleozoic basin (Feinstein, 1981) that is petroleum rich and generally well explored. The Anadarko basin province, a geographic area used here mostly for the convenience of mapping and data management, is defined by political boundaries that include the Anadarko basin proper. The boundaries of the province are identical to those used by the U.S. Geological Survey (USGS) in the 1995 National Assessment of United States Oil and Gas Resources. The data in this report, also identical to those used in the national assessment, are from several computerized databases including Nehring Research Group (NRG) Associates Inc., Significant Oil and Gas Fields of the United States (1992); Petroleum Information (PI), Inc., Well History Control System (1991); and Petroleum Information (PI), Inc., Petro-ROM: Production data on CD-ROM (1993). Although generated mostly in response to the national assessment, the data presented here are grouped differently and are displayed and described in greater detail. In addition, the stratigraphic sequences discussed may not necessarily correlate with the "plays" of the 1995 national assessment. This report uses computer-generated maps to show drilling intensity, producing wells, major fields, and other geologic information relevant to petroleum exploration and production in the lower Paleozoic part of the Anadarko basin province as defined for the U.S. Geological Survey's 1995 national petroleum assessment. Hydrocarbon accumulations must meet a minimum standard of 1 million barrels of oil (MMBO) or 6 billion cubic feet of gas (BCFG) estimated ultimate recovery to be included in this report as a major field or reservoir. Mapped strata in this report include the Upper Cambrian to Lower Ordovician Arbuckle and Lower Ordovician Ellenburger Groups, the Middle Ordovician Simpson Group, and the Middle to Upper Ordovician Viola Group.

  5. Fiber-bundle microendoscopy with sub-diffuse reflectance spectroscopy and intensity mapping for multimodal optical biopsy of stratified epithelium

    PubMed Central

    Greening, Gage J.; James, Haley M.; Powless, Amy J.; Hutcheson, Joshua A.; Dierks, Mary K.; Rajaram, Narasimhan; Muldoon, Timothy J.

    2015-01-01

    Early detection of structural or functional changes in dysplastic epithelia may be crucial for improving long-term patient care. Recent work has explored myriad non-invasive or minimally invasive “optical biopsy” techniques for diagnosing early dysplasia, such as high-resolution microendoscopy, a method to resolve sub-cellular features of apical epithelia, as well as broadband sub-diffuse reflectance spectroscopy, a method that evaluates bulk health of a small volume of tissue. We present a multimodal fiber-based microendoscopy technique that combines high-resolution microendoscopy, broadband (450-750 nm) sub-diffuse reflectance spectroscopy (sDRS) at two discrete source-detector separations (374 and 730 μm), and sub-diffuse reflectance intensity mapping (sDRIM) using a 635 nm laser. Spatial resolution, magnification, field-of-view, and sampling frequency were determined. Additionally, the ability of the sDRS modality to extract optical properties over a range of depths is reported. Following this, proof-of-concept experiments were performed on tissue-simulating phantoms made with poly(dimethylsiloxane) as a substrate material with cultured MDA-MB-468 cells. Then, all modalities were demonstrated on a human melanocytic nevus from a healthy volunteer and on resected colonic tissue from a murine model. Qualitative in vivo image data are correlated with reduced scattering and absorption coefficients. PMID:26713207

  6. Fiber-bundle microendoscopy with sub-diffuse reflectance spectroscopy and intensity mapping for multimodal optical biopsy of stratified epithelium.

    PubMed

    Greening, Gage J; James, Haley M; Powless, Amy J; Hutcheson, Joshua A; Dierks, Mary K; Rajaram, Narasimhan; Muldoon, Timothy J

    2015-12-01

    Early detection of structural or functional changes in dysplastic epithelia may be crucial for improving long-term patient care. Recent work has explored myriad non-invasive or minimally invasive "optical biopsy" techniques for diagnosing early dysplasia, such as high-resolution microendoscopy, a method to resolve sub-cellular features of apical epithelia, as well as broadband sub-diffuse reflectance spectroscopy, a method that evaluates bulk health of a small volume of tissue. We present a multimodal fiber-based microendoscopy technique that combines high-resolution microendoscopy, broadband (450-750 nm) sub-diffuse reflectance spectroscopy (sDRS) at two discrete source-detector separations (374 and 730 μm), and sub-diffuse reflectance intensity mapping (sDRIM) using a 635 nm laser. Spatial resolution, magnification, field-of-view, and sampling frequency were determined. Additionally, the ability of the sDRS modality to extract optical properties over a range of depths is reported. Following this, proof-of-concept experiments were performed on tissue-simulating phantoms made with poly(dimethylsiloxane) as a substrate material with cultured MDA-MB-468 cells. Then, all modalities were demonstrated on a human melanocytic nevus from a healthy volunteer and on resected colonic tissue from a murine model. Qualitative in vivo image data are correlated with reduced scattering and absorption coefficients.

  7. Practical procedure for retrieval of quantitative phase map for two-phase interface using the transport of intensity equation.

    PubMed

    Zhang, Xiaobin; Oshima, Yoshifumi

    2015-11-01

    A practical procedure is proposed for retrieving the quantitative phase distribution at the interface between a thin amorphous germanium (a-Ge) film and vacuum based on the transport of intensity equation. First, small regions were selected in transmission electron microscopy (TEM) images with three different focus settings in order to avoid phase modulation due to low-frequency noise. Second, the selected TEM image and its three reflected images were combined into a mirror-symmetric image to meet the boundary requirements. In this symmetrization, however, extra phase modulation arose from the discontinuity of the Fresnel fringes at the boundaries among the four parts of the combined image. Third, a corrected phase map was obtained by subtracting a linear fit to the extra phase modulation. The phase shift for the thin a-Ge film was determined to be approximately 0.5 rad, indicating that the average inner potential was 18.3 V. The validity of the present phase retrieval is discussed using simple simulations. PMID:26177522
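
    The symmetrize-then-solve step can be sketched with a standard FFT-based transport-of-intensity solver: mirror the defocus derivative in both axes so the FFT sees a periodic signal, then solve the resulting Poisson equation in Fourier space. This is a generic illustration under a uniform-intensity assumption, not the authors' code, and all parameters are illustrative:

```python
import numpy as np

def tie_phase(i_under, i_over, dz, wavelength, pixel):
    """Minimal FFT-based transport-of-intensity phase retrieval for an
    image pair defocused by +/- dz, using mirror symmetrisation to
    enforce periodic boundaries (uniform-intensity approximation)."""
    k = 2.0 * np.pi / wavelength
    i0 = 0.5 * (i_under + i_over)
    didz = (i_over - i_under) / (2.0 * dz)
    # Mirror the derivative in both axes: symmetric, hence periodic, signal.
    sym = np.block([[didz, didz[:, ::-1]],
                    [didz[::-1, :], didz[::-1, ::-1]]])
    ny, nx = sym.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    q2 = (2.0 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    q2[0, 0] = np.inf                    # discard the undetermined mean phase
    # Solve the Poisson equation  lap(phi) = -(k / I0) dI/dz  in Fourier space.
    rhs = -k * sym / i0.mean()
    phi = np.fft.ifft2(np.fft.fft2(rhs) / (-q2)).real
    return phi[: ny // 2, : nx // 2]     # crop back to the original frame

# Flat field pair (no defocus contrast) -> zero retrieved phase.
phi = tie_phase(np.ones((8, 8)), np.ones((8, 8)),
                dz=1e-6, wavelength=2.0e-12, pixel=5e-11)
```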

  8. Implementation and Evaluation of a Mobile Mapping System Based on Integrated Range and Intensity Images for Traffic Signs Localization

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.

    2012-07-01

    Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. At the same time, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, several problems are inherent in all recognition methods that rely entirely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from algorithms that detect, recognize and localize the traffic signs by fusing the shape, color and object information from both range and intensity images. For the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, improvements of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals for the PMD camera, over those achieved with basic calibration, are realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
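
    The GPS/INS fusion step can be illustrated with a toy one-dimensional filter; with a linear model the EKF reduces to the ordinary Kalman filter below, and all noise levels and measurements are invented for illustration:

```python
import numpy as np

def kalman_fuse(positions_gps, dt, accel_ins, q=0.1, r=4.0):
    """Toy 1-D position/velocity filter fusing INS acceleration (prediction)
    with GPS position fixes (update). Linear model, so the EKF reduces to
    this standard Kalman filter; all values are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])          # acceleration input mapping
    H = np.array([[1.0, 0.0]])               # GPS measures position only
    Q = q * np.eye(2)                        # process noise
    x = np.zeros(2)
    P = np.eye(2) * 100.0                    # very uncertain initial state
    estimates = []
    for z, a in zip(positions_gps, accel_ins):
        x = F @ x + B * a                    # predict with INS input
        P = F @ P @ F.T + Q
        y = z - H @ x                        # GPS innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return estimates

# Platform moving at 1 m/s, stationary INS acceleration, noiseless GPS fixes:
est = kalman_fuse(positions_gps=[0.0, 1.0, 2.0, 3.0], dt=1.0,
                  accel_ins=[0.0, 0.0, 0.0, 0.0])
```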

  10. Mapping seismic intensity using twitter data; A Case study: The February 26th, 2014 M5.9 Kefallinia (Greece) earthquake

    NASA Astrophysics Data System (ADS)

    Arapostathis, Stathis; Parcharidis, Isaak; Kalogeras, Ioannis; Drakatos, George

    2015-04-01

    In this paper we present an innovative approach for developing seismic intensity maps within a minimal time frame. As a case study, a recent earthquake that occurred in Western Greece (Kefallinia Island, on February 26, 2014) is used. The magnitude of the earthquake was M=5.9 (Institute of Geodynamics - National Observatory of Athens). The earthquake's effects comprised damage to property and changes to the physical environment in the area. The innovative part of this research is the use of crowdsourcing, in the form of Twitter content, as a source for assessing macroseismic intensity. Twitter, as a social media service with micro-blogging characteristics, a semantic structure that allows the storage of spatial content, and a high volume of user-generated content, is a suitable source from which to extract knowledge related to macroseismic intensity in different geographic areas over short time periods. Moreover, the speed at which Twitter content is generated allows accurate results to be obtained only a few hours after the occurrence of the earthquake. The method used to extract, evaluate and map the intensity-related information is described briefly in this paper. First, we pick out all tweets posted within the first 48 hours that contain intensity-related information and refer to a geographic location. These tweets were then geo-referenced and associated with an intensity grade on the European Macroseismic Scale (EMS98) based on the information contained in their text. Finally, we apply various spatial statistics and GIS methods and interpolate the values to cover the appropriate geographic areas. The final output contains macroseismic intensity maps for the Lixouri area (Kefallinia Island), produced from Twitter data posted in the first six, twelve, twenty-four and forty-eight hours after the earthquake occurrence. Results are compared with other intensity maps for the same event.
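
    The interpolation step can be sketched with a simple inverse-distance weighting of the geo-referenced intensity assignments; this is a generic illustration, not the authors' GIS workflow, and the points are hypothetical:

```python
def idw_intensity(points, query, power=2.0):
    """Inverse-distance-weighted interpolation of EMS-98 intensity values
    assigned to geo-referenced tweets; points is a list of (x, y, intensity)."""
    num = den = 0.0
    for x, y, ems in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return float(ems)            # query coincides with an observation
        w = 1.0 / d2 ** (power / 2.0)
        num += w * ems
        den += w
    return num / den

# Hypothetical geo-referenced intensity assignments around an epicentral area:
tweets = [(0.0, 0.0, 7.0), (2.0, 0.0, 5.0)]
i_mid = idw_intensity(tweets, query=(1.0, 0.0))
```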

  11. VizieR Online Data Catalog: CMB intensity map from WMAP and Planck PR2 data (Bobin+, 2016)

    NASA Astrophysics Data System (ADS)

    Bobin, J.; Sureau, F.; Starck, J.-L.

    2016-05-01

    This paper presents a novel estimate of the CMB map reconstructed from the Planck 2015 data (PR2) and the WMAP nine-year data (Bennett et al., 2013ApJS..208...20B), which updates the CMB map we published in Bobin et al. (2014A&A...563A.105B). This new map is based on the sparse component separation method L-GMCA (Bobin et al., 2013A&A...550A..73B). Additionally, the map benefits from the latest advances in this field (Bobin et al., 2015, IEEE Transactions on Signal Processing, 63, 1199), which allow us to accurately discriminate between correlated components. In this update to our previous work, we show that this new map presents significant improvements with respect to the available CMB map estimates. (3 data files).

  12. Intensive Training Course on Microplanning and School Mapping (Arusha, United Republic of Tanzania, March 8-26, 1982). Report.

    ERIC Educational Resources Information Center

    Caillods, F.; Heyman, S.

    This manual contains documentation of a 3-week course conducted jointly in March 1982 by the Tanzanian Ministry of Education and the International Institute for Educational Planning on the subject of the school map (or micro-plan). Prepared at the regional or subregional level, the school map aims at equalizing educational opportunities and…

  13. Sea floor maps showing topography, sun-illuminated topography, and backscatter intensity of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, P.C.; Middleton, T.J.; Fuller, S.J.

    2000-01-01

    This data set contains the sea floor topographic contours, sun-illuminated topographic imagery, and backscatter intensity generated from a multibeam sonar survey of the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts, an area of approximately 1100 square nautical miles. The Stellwagen Bank NMS Mapping Project is designed to provide detailed maps of the Stellwagen Bank region's environments and habitats and the first complete multibeam topographic and sea floor characterization maps of a significant region of the shallow EEZ. Data were collected on four cruises over a two year period from the fall of 1994 to the fall of 1996. The surveys were conducted aboard the Canadian Hydrographic Service vessel Frederick G. Creed, a SWATH (Small Waterplane Area Twin Hull) ship that surveys at speeds of 16 knots. The multibeam data were collected utilizing a Simrad Subsea EM 1000 Multibeam Echo Sounder (95 kHz) that is permanently installed in the hull of the Creed.

  14. Intensive Linkage Mapping in a Wasp (Bracon Hebetor) and a Mosquito (Aedes Aegypti) with Single-Strand Conformation Polymorphism Analysis of Random Amplified Polymorphic DNA Markers

    PubMed Central

    Antolin, M. F.; Bosio, C. F.; Cotton, J.; Sweeney, W.; Strand, M. R.; Black-IV, W. C.

    1996-01-01

    The use of random amplified polymorphic DNA from the polymerase chain reaction (RAPD-PCR) allows efficient construction of saturated linkage maps. However, when analyzed by agarose gel electrophoresis, most RAPD-PCR markers segregate as dominant alleles, reducing the amount of linkage information obtained. We describe the use of single strand conformation polymorphism (SSCP) analysis of RAPD markers to generate linkage maps in a haplodiploid parasitic wasp Bracon (Habrobracon) hebetor and a diploid mosquito, Aedes aegypti. RAPD-SSCP analysis revealed segregation of codominant alleles at markers that appeared to segregate as dominant (band presence/band absence) markers or appeared invariant on agarose gels. Our SSCP protocol uses silver staining to detect DNA fractionated on large thin polyacrylamide gels and reveals more polymorphic markers than agarose gel electrophoresis. In B. hebetor, 79 markers were mapped with 12 RAPD primers in six weeks; in A. aegypti, 94 markers were mapped with 10 RAPD primers in five weeks. Forty-five percent of markers segregated as codominant loci in B. hebetor, while 11% segregated as codominant loci in A. aegypti. SSCP analysis of RAPD-PCR markers offers a rapid and inexpensive means of constructing intensive linkage maps of many species. PMID:8844159

  15. Freeform lens design for light-emitting diode uniform illumination by using a method of source-target luminous intensity mapping.

    PubMed

    Chen, Jin-Jia; Huang, Ze-Yu; Liu, Te-Shu; Tsai, Ming-Da; Huang, Kuang-Lung

    2015-10-01

    We present a freeform lens for LED uniform illumination. This lens, designed with a method of simple source-target luminous intensity mapping, can produce irradiance uniformity greater than 0.8 and optical efficiency above 90% for an arbitrary half-beam angle greater than 45 deg. Compared with the conventional source-target energy mapping method, this design method typically achieves better optical performance for general LED lighting. When a non-Lambertian light source is employed, for example a chip-on-board LED, the method yields a compact LED lens without sacrificing the high irradiance uniformity and high optical efficiency achieved by lenses designed for Lambertian LED light sources. PMID:26479644
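
    The source-target mapping idea can be sketched numerically for the rotationally symmetric case: match cumulative source-flux fractions against cumulative target-flux fractions to obtain the angle-to-radius map that the freeform surface must realize. A generic illustration assuming a Lambertian source, not the paper's algorithm:

```python
import numpy as np

def source_to_target_map(theta_max_deg, r_max, n=1000):
    """Map LED emission angles to target-plane radii by matching cumulative
    fractions: Lambertian source flux inside angle theta (~ sin^2 theta)
    against the flux of a uniformly irradiated disc inside radius r (~ r^2).
    A sketch of the energy-matching step that intensity-mapping designs refine."""
    theta = np.linspace(0.0, np.radians(theta_max_deg), n)
    # Lambertian source: flux within theta ~ integral of cos(t) sin(t) dt.
    src_frac = np.sin(theta) ** 2 / np.sin(theta[-1]) ** 2
    # Uniform-irradiance disc: flux within r ~ r^2, so invert r^2/r_max^2 = frac.
    r = r_max * np.sqrt(src_frac)
    return theta, r

# Map a 60-degree half-beam onto a uniformly lit 100 mm radius disc:
theta, r = source_to_target_map(theta_max_deg=60.0, r_max=100.0)
```

    Each (theta, r) pair fixes one surface normal of the freeform lens via Snell's law; the monotone map guarantees rays do not cross on the target.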

  16. Mapping grape berry photosynthesis by chlorophyll fluorescence imaging: the effect of saturating pulse intensity in different tissues.

    PubMed

    Breia, Richard; Vieira, Sónia; da Silva, Jorge Marques; Gerós, Hernâni; Cunha, Ana

    2013-01-01

    Grape berry development and ripening depend mainly on photosynthates imported from leaves; however, fruit photosynthesis may also contribute to the carbon economy of the fruit. In this study, pulse-amplitude-modulated chlorophyll fluorescence imaging (imaging-PAM) was used to assess the photosynthetic properties of tissues of green grape berries. In particular, the effect of the saturation pulse (SP) intensity was investigated. A clear tissue-specific distribution pattern of photosynthetic competence was observed. The exocarp revealed the highest photosynthetic capacity and the lowest susceptibility to photoinhibition, whereas the mesocarp exhibited very low fluorescence signals and photochemical competence. Remarkably, the seed outer integument revealed a photosynthetic ability similar to that of the exocarp. At a SP intensity of 5000 μmol m(-2) s(-1) several photochemical parameters were decreased, including maximum fluorescence in dark-adapted (F(m)) and light-adapted (F'(m)) samples and the effective quantum yield of PSII (Φ(II)), but the inner tissues were susceptible to a SP intensity as low as 3200 μmol m(-2) s(-1) under light-adapted conditions, indicating a photoinhibitory interaction between SP and actinic light intensities and repetitive exposure to SP. These results open the way to further studies on the involvement of tissue-specific photosynthesis in the highly compartmentalized production and accumulation of organic compounds during grape berry development.
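
    The effective quantum yield mentioned above is the standard Genty parameter computed from light-adapted fluorescence readings; a minimal sketch with hypothetical values:

```python
def phi_ii(f_steady, fm_prime):
    """Effective PSII quantum yield from saturating-pulse fluorometry:
    Phi(II) = (F'm - F) / F'm, the standard Genty parameter, where F is the
    steady-state and F'm the pulse-saturated fluorescence in the light."""
    return (fm_prime - f_steady) / fm_prime

# Hypothetical light-adapted readings from an exocarp region of interest:
yield_exocarp = phi_ii(f_steady=300.0, fm_prime=500.0)
```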

  17. Update on the Mapping of Prevalence and Intensity of Infection for Soil-Transmitted Helminth Infections in Latin America and the Caribbean: A Call for Action

    PubMed Central

    Saboyá, Martha Idalí; Catalá, Laura; Nicholls, Rubén Santiago; Ault, Steven Kenyon

    2013-01-01

    It is estimated that in Latin America and the Caribbean (LAC) at least 13.9 million preschool age and 35.4 million school age children are at risk of infections by soil-transmitted helminths (STH): Ascaris lumbricoides, Trichuris trichiura and hookworms (Necator americanus and Ancylostoma duodenale). Although infections caused by this group of parasites are associated with chronic deleterious effects on nutrition and growth, iron and vitamin A status and cognitive development in children, few countries in the LAC Region have implemented nationwide surveys on prevalence and intensity of infection. The aim of this study was to identify gaps on the mapping of prevalence and intensity of STH infections based on data published between 2000 and 2010 in LAC, and to call for including mapping as part of action plans against these infections. A total of 335 published data points for STH prevalence were found for 18 countries (11.9% data points for preschool age children, 56.7% for school age children and 31.3% for children from 1 to 14 years of age). We found that 62.7% of data points showed prevalence levels above 20%. Data on the intensity of infection were found for seven countries. The analysis also highlights that there is still an important lack of data on prevalence and intensity of infection to determine the burden of disease based on epidemiological surveys, particularly among preschool age children. This situation is a challenge for LAC given that adequate planning of interventions such as deworming requires information on prevalence to determine the frequency of needed anthelmintic drug administration and to conduct monitoring and evaluation of progress in drug coverage. PMID:24069476

  18. Characterization of three-dimensional spatial aggregation and association patterns of brown rot symptoms within intensively mapped sour cherry trees

    PubMed Central

    Everhart, Sydney E.; Askew, Ashley; Seymour, Lynne; Holb, Imre J.; Scherm, Harald

    2011-01-01

    Background and Aims Characterization of spatial patterns of plant disease can provide insights into important epidemiological processes such as sources of inoculum, mechanisms of dissemination, and reproductive strategies of the pathogen population. Whilst two-dimensional patterns of disease (among plants within fields) have been studied extensively, there is limited information on three-dimensional patterns within individual plant canopies. Reported here are the detailed mapping of different symptom types of brown rot (caused by Monilinia laxa) in individual sour cherry tree (Prunus cerasus) canopies, and the application of spatial statistics to the resulting data points to determine patterns of symptom aggregation and association. Methods A magnetic digitizer was utilized to create detailed three-dimensional maps of three symptom types (blossom blight, shoot blight and twig canker) in eight sour cherry tree canopies during the green fruit stage of development. The resulting point patterns were analysed for aggregation (within a given symptom type) and pairwise association (between symptom types) using a three-dimensional extension of nearest-neighbour analysis. Key Results Symptoms of M. laxa infection were generally aggregated within the canopy volume, but there was no consistent pattern for one symptom type to be more or less aggregated than the other. Analysis of spatial association among symptom types indicated that previous year's twig cankers may play an important role in influencing the spatial pattern of current year's symptoms. This observation provides quantitative support for the epidemiological role of twig cankers as sources of primary inoculum within the tree. Conclusions Presented here is a new approach to quantify spatial patterns of plant disease in complex fruit tree canopies using point pattern analysis. This work provides a framework for quantitative analysis of three-dimensional spatial patterns within the finite tree canopy, applicable to
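
    The three-dimensional nearest-neighbour analysis can be sketched as follows: compute each symptom's nearest-neighbour distance, then compare the observed mean against the expectation for complete spatial randomness in 3-D at the same density (a Clark-Evans-style index; this is a generic illustration, not the authors' exact statistic):

```python
import math
from itertools import combinations

def nn_distances(points):
    """Nearest-neighbour distance for each mapped symptom in 3-D canopy space."""
    nearest = [math.inf] * len(points)
    for i, j in combinations(range(len(points)), 2):
        d = math.dist(points[i], points[j])
        nearest[i] = min(nearest[i], d)
        nearest[j] = min(nearest[j], d)
    return nearest

def clark_evans_3d(points, volume):
    """Aggregation index: observed mean nearest-neighbour distance over the
    expectation for a 3-D Poisson process at the same density.
    R < 1 suggests aggregation, R > 1 regularity."""
    n = len(points)
    mean_obs = sum(nn_distances(points)) / n
    density = n / volume
    # Expected mean NN distance for 3-D complete spatial randomness:
    # E[d] = Gamma(4/3) / (4/3 * pi * density)^(1/3)
    expected = math.gamma(4.0 / 3.0) / (4.0 / 3.0 * math.pi * density) ** (1.0 / 3.0)
    return mean_obs / expected

# Tight cluster of hypothetical symptom coordinates (m) in a 1 m^3 canopy volume:
cluster = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.0, 0.01, 0.0), (0.0, 0.0, 0.01)]
r_index = clark_evans_3d(cluster, volume=1.0)
```

    Edge corrections matter in a finite canopy; the paper's point-pattern analysis addresses exactly that complication, which this sketch omits.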

  19. Evaluation of Ground-Motion Modeling Techniques for Use in Global ShakeMap - A Critique of Instrumental Ground-Motion Prediction Equations, Peak Ground Motion to Macroseismic Intensity Conversions, and Macroseismic Intensity Predictions in Different Tectonic Settings

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.

    2009-01-01

    Regional differences in ground-motion attenuation have long been thought to add uncertainty in the prediction of ground motion. However, a growing body of evidence suggests that regional differences in ground-motion attenuation may not be as significant as previously thought and that the key differences between regions may be a consequence of limitations in ground-motion datasets over incomplete magnitude and distance ranges. Undoubtedly, regional differences in attenuation can exist owing to differences in crustal structure and tectonic setting, and these can contribute to differences in ground-motion attenuation at larger source-receiver distances. Herein, we examine the use of a variety of techniques for the prediction of several ground-motion metrics (peak ground acceleration and velocity, response spectral ordinates, and macroseismic intensity) and compare them against a global dataset of instrumental ground-motion recordings and intensity assignments. The primary goal of this study is to determine whether existing ground-motion prediction techniques are applicable for use in the U.S. Geological Survey's Global ShakeMap and Prompt Assessment of Global Earthquakes for Response (PAGER). We seek the most appropriate ground-motion predictive technique, or techniques, for each of the tectonic regimes considered: shallow active crust, subduction zone, and stable continental region.

  20. Spatial-temporal three-dimensional ultrasound plane-by-plane active cavitation mapping for high-intensity focused ultrasound in free field and pulsatile flow.

    PubMed

    Ding, Ting; Hu, Hong; Bai, Chen; Guo, Shifang; Yang, Miao; Wang, Supin; Wan, Mingxi

    2016-07-01

    Cavitation plays important roles in almost all high-intensity focused ultrasound (HIFU) applications. However, current two-dimensional (2D) cavitation mapping could only provide cavitation activity in one plane. This study proposed a three-dimensional (3D) ultrasound plane-by-plane active cavitation mapping (3D-UPACM) for HIFU in free field and pulsatile flow. The acquisition of channel-domain raw radio-frequency (RF) data in 3D space was performed by sequential plane-by-plane 2D ultrafast active cavitation mapping. Between two adjacent unit locations, a waiting time allowed the cavitation nuclei distribution of the liquid to return to its original state. A 3D cavitation map, equivalent to one detected at a single time over the entire volume, could then be reconstructed by the Marching Cubes algorithm. Minimum variance (MV) adaptive beamforming was combined with coherence factor (CF) weighting (MVCF) or a compressive sensing (CS) method (MVCS) to process the raw RF data for improved beamforming or more rapid data processing. The feasibility of 3D-UPACM was demonstrated in tap water and in a phantom vessel with pulsatile flow. The time interval between temporal evolutions of the cavitation bubble cloud could be several microseconds. Compared with B-mode active cavitation mapping, the MVCF beamformer improved the signal-to-noise ratio (SNR) by 14.17 dB and the lateral and axial resolution by factors of 2.88 and 1.88, respectively. The MVCS beamformer required only 14.94% of the processing time of the MVCF beamformer. This 3D-UPACM technique employs the linear array of a current ultrasound diagnosis system rather than a 2D array transducer to decrease the cost of the instrument. Moreover, although the application is limited by the requirement for a gassy fluid medium or a constant supply of new cavitation nuclei that allows replenishment of nuclei between HIFU exposures, this technique may prove a useful tool for 3D cavitation mapping for HIFU with high speed, precision and resolution.
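    One ingredient of the MVCF beamformer above, coherence factor (CF) weighting, can be sketched in a few lines. This is a generic illustration of the standard CF definition applied to a delay-and-sum output, not the authors' implementation:

```python
import numpy as np

def coherence_factor(aligned):
    """Coherence factor for one pixel.
    `aligned`: delay-compensated samples, one per channel (length N).
    CF = |sum|^2 / (N * sum of |x|^2); equals 1 for fully coherent
    signals and approaches 0 for incoherent noise."""
    num = np.abs(aligned.sum()) ** 2
    den = len(aligned) * (np.abs(aligned) ** 2).sum()
    return num / den if den > 0 else 0.0

def cf_weighted_das(aligned):
    """Delay-and-sum output weighted by the coherence factor,
    suppressing pixels dominated by incoherent energy."""
    return coherence_factor(aligned) * aligned.mean()
```

    In MVCF the delay-and-sum average is replaced by the minimum-variance weighted sum, but the CF weighting step is the same.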

  2. Advances In Cryogenic Monolithic Millimeter-wave Integrated Circuit (MMIC) Low Noise Amplifiers For CO Intensity Mapping and ALMA Band 2

    NASA Astrophysics Data System (ADS)

    Samoska, Lorene; Cleary, Kieran; Church, Sarah E.; Cuadrado-Calle, David; Fung, Andy; gaier, todd; gawande, rohit; Kangaslahti, Pekka; Lai, Richard; Lawrence, Charles R.; Readhead, Anthony C. S.; Sarkozy, Stephen; Seiffert, Michael D.; Sieth, Matthew

    2016-01-01

    We will present results of the latest InP HEMT MMIC low noise amplifiers in the 30-300 GHz range, with emphasis on LNAs and mixers developed for CO intensity mapping in the 40-80 GHz range, as well as MMIC LNAs suitable for ALMA Band 2 (67-90 GHz). The LNAs have been developed together with NGC in a 35 nm InP HEMT MMIC process. Recent results and a summary of best InP low noise amplifier data will be presented. This work describes technologies related to the detection and study of highly redshifted spectral lines from the CO molecule, a key tracer for molecular hydrogen. One of the most promising techniques for observing the Cosmic Dawn is intensity mapping of spectral-spatial fluctuations of line emission from neutral hydrogen (H I), CO, and [C II]. The essential idea is that instead of trying to detect line emission from individual galaxies, one measures the total line emission from a number of galaxies within the volume defined by a spectral-spatial pixel. Fluctuations from pixel to pixel trace large scale structure, and the evolution with redshift is revealed as a function of receiver frequency. A special feature of CO is the existence of multiple lines with a well-defined frequency relationship from the rotational ladder, which allows the possibility of cleanly separating the signal from other lines or foreground structure at other redshifts. Making use of this feature (not available to either HI or [C II] measurements) requires observing multiple frequencies, including the range 40-80 GHz, much of which is inaccessible from the ground or balloons. Specifically, the J=1->0 transition frequency is 115 GHz; J=2->1 is 230 GHz; J=3->2 is 345 GHz, etc. At redshift 7, these lines would appear at 14.4, 28.8, and 43.2 GHz, accessible from the ground. Over a wider range of redshifts, from 3 to 7, these lines would appear at frequencies from 14 to 86 GHz.
A ground-based CO Intensity mapping experiment, COMAP, will utilize InP-based HEMT MMIC amplifier front ends in the
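    The line frequencies quoted above follow from nu_obs = nu_rest / (1 + z), with the CO rotational ladder nearly harmonic in J (the J -> J-1 rest frequency is approximately J x 115.27 GHz). A quick numerical check of the quoted values:

```python
def co_observed_ghz(j, z, nu_co10_ghz=115.27):
    """Observed frequency (GHz) of the redshifted CO J -> J-1 line.
    Uses the nearly harmonic ladder: rest frequency ~ J * 115.27 GHz."""
    return j * nu_co10_ghz / (1.0 + z)

# At z = 7 the first three CO lines land at the frequencies quoted
# in the abstract.
for j in (1, 2, 3):
    print(round(co_observed_ghz(j, 7), 1))   # prints 14.4, 28.8, 43.2
```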

  3. dispel4py : An Open Source Python Framework for Encoding, Mapping and Reusing Seismic Continuous Data Streams: Intensive Analysis and Data Mining.

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Krause, A.; Atkinson, M.; Spinuso, A.; Klampanos, I.; Magnoni, F.; Casarotti, E.; Vilotte, J. P.

    2015-12-01

    Scientific workflows are needed by many scientific communities, such as seismology, as they enable easy composition and execution of applications, enabling scientists to focus on their research without being distracted by arranging computation and data management. However, there are challenges to be addressed. In many systems users have to adapt their codes and data movement as they change from one HPC-architecture to another. They still need to be aware of the computing architectures available for achieving the best application performance. We present dispel4py, an open-source framework presented as a Python library for encoding and automating data-intensive scientific methods as a graph of operations coupled together by data-streams. It enables scientists to develop and experiment with their own data-intensive applications using their familiar work environment. These are then automatically mapped to a variety of HPC-architectures, i.e., MPI, multiprocessing, Storm and Spark frameworks, increasing the chances to reuse their applications in different computing resources. dispel4py comes with data provenance and with an information registry that can be accessed transparently from within workflows. dispel4py has been enhanced with a new run-time adaptive compression strategy to reduce the data stream volume and a diagnostic tool which monitors workflow performance and computes the most efficient parallelisation to use. dispel4py has been used by seismologists in the project VERCE for seismic ambient noise cross-correlation applications and for orchestrated HPC wave simulation and data misfit analysis workflows; two data-intensive problems that are common in today's research practice. Both have been tested in several local computing resources and later submitted to a variety of European PRACE HPC-architectures (e.g. SuperMUC & CINECA) for longer runs without change. Results show that dispel4py is an easy tool for developing, sharing and
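    The stream-coupling idea behind dispel4py (operations composed into a graph and connected by data streams) can be illustrated with plain Python generators. The sketch below is not the dispel4py API; all names and processing steps are invented for illustration:

```python
def source(values):
    """Emit raw samples into the stream."""
    yield from values

def detrend(stream):
    """Subtract the running mean (toy preprocessing step)."""
    total, n = 0.0, 0
    for x in stream:
        total += x
        n += 1
        yield x - total / n

def threshold(stream, limit):
    """Keep only samples whose magnitude exceeds `limit` (toy trigger)."""
    for x in stream:
        if abs(x) > limit:
            yield x

# Compose the graph: source -> detrend -> threshold. Each stage pulls
# items lazily from its upstream neighbour, as in a data-stream graph.
pipeline = threshold(detrend(source([1.0, 1.0, 5.0, 1.0])), limit=1.0)
print(list(pipeline))
```

    In dispel4py the same graph would be declared once and mapped unchanged to MPI, multiprocessing, Storm or Spark back ends.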

  4. The USGS ``Did You Feel It?'' Internet-based Macroseismic Intensity Maps: Lessons Learned from a Decade of Online Data Collection (Invited)

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Quitoriano, V. R.; Hopper, M.; Mathias, S.; Dewey, J. W.

    2010-12-01

    Over the past decade, the U.S. Geological Survey’s “Did You Feel It?” (DYFI) system has automatically collected shaking and damage reports from Internet users immediately following earthquakes. This 10-yr stint of citizen-based science preceded the recently in vogue notion of "crowdsourcing" by nearly a decade. DYFI is a rapid and vast source of macroseismic data, providing quantitative and qualitative information about shaking intensities for earthquakes in the US and around the globe. Statistics attest to the abundance and rapid availability of these Internet-based macroseismic data: Over 1.8 million entries have been logged over the decade, and there are 30 events each with over 10,000 responses (230 events have over 1,000 entries). The greatest number of responses to date for an earthquake is over 78,000 for the April 2010, M7.2 Baja California, Mexico, event. Questionnaire response rates have reached 62,000 per hour (1,000 per min!) obviously requiring substantial web resource allocation and capacity. Outside the US, DYFI has gathered over 189,000 entries in 9,500 cities covering 140 countries since its global inception in late 2004. The rapid intensity data are automatically used in the Global ShakeMap (GSM) system, providing intensity constraints near population centers and in places without instrumental coverage (most of the world), and allowing for bias correction to the empirical prediction equations employed. ShakeMap has also been recently refined to automatically use macroseismic input data in their native form, and treat their uncertainties rigorously in concert with ground-motion data. Recent DYFI system improvements include a graphical user interface that allows seismic analysts to perform common functions, including map triggering and resizing, as well as sorting, searching, geocoding, and flagging entries. New web-based geolocation and geocoding services are being incorporated into DYFI for improving the accuracy of the users’ locations

  5. Volume transfer constant (K(trans)) maps from dynamic contrast enhanced MRI as potential guidance for MR-guided high intensity focused ultrasound treatment of hypervascular uterine fibroids.

    PubMed

    Liu, Jing; Keserci, Bilgin; Yang, Xuedong; Wei, Juan; Rong, Rong; Zhu, Ying; Wang, Xiaoying

    2014-11-01

    Higher perfusion of uterine fibroids at baseline is recognized as a cause of poor efficacy of MR-guided high intensity focused ultrasound (HIFU) ablation, and higher acoustic power has been suggested for the treatment of high-perfused areas inside uterine fibroids. However, considering the heterogeneous vascular distribution inside uterine fibroids, especially hypervascular ones, it is not easy to choose the correct therapy acoustic power for every part inside a fibroid. In our study, we presented two cases of fibroids with hypervascularity, to show the differences between them with different outcomes. Selecting higher therapy acoustic powers to ablate high-perfused areas efficiently inside fibroids might help achieve good ablation results. Volume transfer constant (K(trans)) maps from dynamic contrast-enhanced (DCE) imaging at baseline help visualize the perfusion state inside the fibroids and locate areas with higher perfusion. In addition, with the help of K(trans) maps, appropriate therapy acoustic power can be selected from the results of initial test and therapy sonications at areas with significantly different perfusion states inside fibroids.
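    K(trans) maps such as those above are typically obtained by fitting a pharmacokinetic model to the DCE-MRI time series; the standard Tofts model is the usual choice. A forward-model sketch under that assumption (the abstract does not specify the fitting pipeline; parameter names follow the standard convention):

```python
import numpy as np

def tofts_ct(t, cp, ktrans, kep):
    """Standard Tofts model: tissue concentration as the convolution of
    the arterial input function cp(t) with an exponential residue,
    C_t(t) = Ktrans * integral of Cp(tau) * exp(-kep (t - tau)) dtau.
    t: time points (min, uniformly sampled), cp: plasma concentration,
    ktrans (1/min), kep = ktrans / v_e (1/min)."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    # discrete convolution approximating the integral
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt
```

    Fitting ktrans and kep to each voxel's measured curve yields the K(trans) map used to grade perfusion.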

  6. A nearly complete longitude-velocity map of neutral hydrogen

    NASA Technical Reports Server (NTRS)

    Waldes, F.

    1978-01-01

    A longitude-velocity map based on two recent 21-cm neutral hydrogen surveys and covering all but 42 deg of galactic longitude is presented. Latitude information between -2 and +2 deg is included as an integrated quantity by averaging the observed brightness temperatures over latitude at constant longitude and velocity to produce intensity information corresponding to a surface density distribution of neutral hydrogen in the galactic plane. The northern and southern rotation curves of the Galaxy within the solar galactic orbit are derived from the maximum radial velocities by the usual tangent-point method. Five interesting features of the map are discussed: (1) the scale of density variations in the neutral hydrogen; (2) a region of very high brightness centered at 81 deg and 0 km/s which is probably due to the spiral arm with which the sun is associated; (3) a region of very low brightness centered at 242 deg and 39 km/s; (4) negative-velocity features visible in the anticenter direction; and (5) a strong absorption feature at 289 deg having a kinematic distance of about 4 kpc.
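    The tangent-point method mentioned above reduces to two relations: along a line of sight at galactic longitude l (|l| < 90 deg), the maximum radial velocity arises at the tangent point, at galactocentric radius R = R0 sin l, where the circular speed is V_rot = V_max + V0 sin l. A sketch with assumed solar constants (R0 = 8.5 kpc, V0 = 220 km/s; the paper's adopted values are not quoted in the abstract):

```python
import math

R0 = 8.5    # kpc, Sun's galactocentric distance (assumed value)
V0 = 220.0  # km/s, solar circular speed (assumed value)

def tangent_point(l_deg, v_max):
    """Rotation-curve point from the terminal (maximum) radial velocity
    v_max (km/s) observed at galactic longitude l_deg (0 < l < 90).
    Returns (R in kpc, V_rot in km/s)."""
    s = math.sin(math.radians(l_deg))
    return R0 * s, v_max + V0 * s
```

    Sampling l across the first and fourth quadrants gives the northern and southern rotation curves inside the solar orbit.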

  7. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue color for the dose difference >3% or ≤3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
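    The projection step at the core of the DD-MIP can be sketched with NumPy. This is a simplified illustration: the real algorithm derives the depth encoding from a distance transform of the planning CT and applies separate lookup tables, whereas here the depth map is assumed given:

```python
import numpy as np

def dd_mip(dose_diff, depth, threshold=0.03, axis=0):
    """Toy dose-difference MIP (simplified from the paper's method).
    dose_diff: 3-D fractional dose difference (treatment - plan) / plan.
    depth: 3-D map encoding distance from the skin surface.
    Returns two 2-D projections: overdose (> +threshold) and underdose
    (< -threshold), each encoded by depth so that deeper differences
    project more brightly than superficial ones."""
    over = np.where(dose_diff > threshold, depth, 0.0)
    under = np.where(dose_diff < -threshold, depth, 0.0)
    return over.max(axis=axis), under.max(axis=axis)
```

    Merging the two projections with distinct color maps reproduces the red/blue rendering described in the abstract.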

  8. Multiredshift Limits on the 21 cm Power Spectrum from PAPER

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Pober, Jonathan C.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki S.; Bowman, Judd; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; Gugliucci, Nicole E.; Klima, Pat; Liu, Adrian; MacMahon, David H. E.; Manley, Jason R.; Moore, David F.; Stefan, Irina I.; Walbrugh, William P.

    2015-03-01

    The epoch of reionization (EoR) power spectrum is expected to evolve strongly with redshift, and it is this variation with cosmic history that will allow us to begin to place constraints on the physics of reionization. The primary obstacle to the measurement of the EoR power spectrum is bright foreground emission. We present an analysis of observations from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) telescope, which place new limits on the H I power spectrum over the redshift range of 7.5 < z < 10.5, extending previously published single-redshift results to cover the full range accessible to the instrument. To suppress foregrounds, we use filtering techniques that take advantage of the large instrumental bandwidth to isolate and suppress foreground leakage into the interesting regions of k-space. Our 500 hr integration is the longest such integration yet recorded and demonstrates this method to a dynamic range of 10⁴. Power spectra at different points across the redshift range reveal the variable efficacy of the foreground isolation. Noise-limited measurements of Δ² at k = 0.2 h Mpc⁻¹ and z = 7.55 reach as low as (48 mK)² (1σ). We demonstrate that the size of the error bars in our power spectrum measurement as generated by a bootstrap method is consistent with the fluctuations due to thermal noise. Relative to this thermal noise, most spectra exhibit an excess of power at a few sigma. The likely sources of this excess include residual foreground leakage, particularly at the highest redshift, unflagged radio frequency interference, and calibration errors. We conclude by discussing data reduction improvements that promise to remove much of this excess.
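    The limits quoted above are expressed as the dimensionless power spectrum Δ²(k) = k³ P(k) / (2π²); the conversion between the two conventions is a one-liner:

```python
import numpy as np

def delta_squared(k, pk):
    """Dimensionless power spectrum Delta^2(k) = k^3 P(k) / (2 pi^2).
    With k in h/Mpc and P(k) in mK^2 (Mpc/h)^3, Delta^2 is in mK^2."""
    return k**3 * pk / (2 * np.pi**2)

# e.g. a (48 mK)^2 limit at k = 0.2 h/Mpc corresponds to
# P(k) = 2 pi^2 Delta^2 / k^3, and round-trips back to 48^2 = 2304 mK^2
k = 0.2
pk = 2 * np.pi**2 * 48.0**2 / k**3
print(round(delta_squared(k, pk)))   # prints 2304
```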

  9. Maps Showing Sea Floor Topography, Sun-Illuminated Sea Floor Topography, and Backscatter Intensity of Quadrangles 1 and 2 in the Great South Channel Region, Western Georges Bank

    USGS Publications Warehouse

    Valentine, Page C.; Middleton, Tammie J.; Malczyk, Jeremy T.; Fuller, Sarah J.

    2002-01-01

    The Great South Channel separates the western part of Georges Bank from Nantucket Shoals and is a major conduit for the exchange of water between the Gulf of Maine to the north and the Atlantic Ocean to the south. Water depths range mostly between 65 and 80 m in the region. A minimum depth of 45 m occurs in the east-central part of the mapped area, and a maximum depth of 100 m occurs in the northwest corner. The channel region is characterized by strong tidal and storm currents that flow dominantly north and south. Major topographic features of the seabed were formed by glacial and postglacial processes. Ice containing rock debris moved from north to south, sculpting the region into a broad shallow depression and depositing sediment to form the irregular depressions and low gravelly mounds and ridges that are visible in parts of the mapped area. Many other smaller glacial features probably have been eroded by waves and currents at work since the time when the region, formerly exposed by lowered sea level or occupied by ice, was invaded by the sea. The low, irregular and somewhat lumpy fabric formed by the glacial deposits is obscured in places by drifting sand and by the linear, sharp fabric formed by modern sand features. Today, sand transported by the strong north-south-flowing tidal and storm currents has formed large, east-west-trending dunes. These bedforms (ranging between 5 and 20 m in height) contrast strongly with, and partly mask, the subdued topography of the older glacial features.

  10. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
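    The level-synchronous structure of a Map/Reduce breadth-first search (one of the graph benchmarks above) can be sketched in plain Python. This illustrates the round structure only, not the Hadoop implementation:

```python
from collections import defaultdict

def bfs_mapreduce(adj, source):
    """Level-synchronous BFS written as repeated map/reduce rounds.
    Each round 'maps' frontier nodes to candidate (neighbour, distance)
    pairs and 'reduces' by keeping the minimum distance per node."""
    dist = {source: 0}
    frontier = [source]
    while frontier:
        # map: emit (neighbour, tentative distance) pairs
        emitted = defaultdict(list)
        for u in frontier:
            for v in adj.get(u, []):
                emitted[v].append(dist[u] + 1)
        # reduce: min distance per node; keep only newly improved nodes
        frontier = []
        for v, ds in emitted.items():
            d = min(ds)
            if v not in dist or d < dist[v]:
                dist[v] = d
                frontier.append(v)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_mapreduce(graph, "a"))   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

    In Hadoop, each round becomes one Map/Reduce job over the edge list, which is why BFS is a natural stress test for a data-intensive platform.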

  11. The response of the inductively coupled argon plasma to solvent plasma load: spatially resolved maps of electron density obtained from the intensity of one argon line

    NASA Astrophysics Data System (ADS)

    Weir, D. G. J.; Blades, M. W.

    1994-12-01

    A survey of spatially resolved electron number density (ne) in the tail cone of the inductively coupled argon plasma (ICAP) is presented: all of the results of the survey have been radially inverted by numerical, asymmetric Abel inversion. The survey extends over the entire volume of the plasma beyond the exit of the ICAP torch; it extends over distances of z = 5-25 mm downstream from the induction coil, and over radial distances of ± 8 mm from the discharge axis. The survey also explores a range of inner argon flow rates (QIN), solvent plasma load (Qspl) and r.f. power: moreover, it explores loading by water, methanol and chloroform. Throughout the survey, ne was determined from the intensity of one, optically thin argon line, by a method which assumes that the atomic state distribution function (ASDF) for argon lies close to local thermal equilibrium (LTE). The validity of this assumption is reviewed. Also examined are the discrepancies between ne from this method and ne from Stark broadening measurements. With the error taken into account, the results of the survey reveal how time averaged values of ne in the ICAP respond over an extensive, previously unexplored range of experimental parameters. Moreover, the spatial information lends insight into how the thermal conditions and the transport of energy respond. Overall, the response may be described in terms of energy consumption along the axial channel and thermal pinch within the induction region. The predominating effect depends on the solvent plasma load, the solvent composition, the robustness of the discharge, and the distribution of solvent material over the argon stream.

  12. Handmade Multitextured Maps.

    ERIC Educational Resources Information Center

    Trevelyan, Simon

    1984-01-01

    Tactile maps for visually impaired persons can be made by drawing lines with an aqueous adhesive solution, dusting with thermoengraving powder, and exposing the card to a source of intense heat (such as a heat gun or microwave oven). A raised line map results. (CL)

  13. Planetary maps

    USGS Publications Warehouse

    ,

    1992-01-01

    An important goal of the USGS planetary mapping program is to systematically map the geology of the Moon, Mars, Venus, and Mercury, and the satellites of the outer planets. These geologic maps are published in the USGS Miscellaneous Investigations (I) Series. Planetary maps on sale at the USGS include shaded-relief maps, topographic maps, geologic maps, and controlled photomosaics. Controlled photomosaics are assembled from two or more photographs or images using a network of points of known latitude and longitude. The images used for most of these planetary maps are electronic images, obtained from orbiting television cameras or various optical-mechanical systems. Photographic film was used only to map Earth's Moon.

  14. Map accuracy

    USGS Publications Warehouse

    ,

    1981-01-01

    An inaccurate map is not a reliable map. "X" may mark the spot where the treasure is buried, but unless the seeker can locate "X" in relation to known landmarks or positions, the map is not very useful.

  15. Seabed maps showing topography, ruggedness, backscatter intensity, sediment mobility, and the distribution of geologic substrates in Quadrangle 6 of the Stellwagen Bank National Marine Sanctuary Region offshore of Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, Page C.; Gallea, Leslie B.

    2015-11-10

    The U.S. Geological Survey (USGS), in cooperation with the National Oceanic and Atmospheric Administration's National Marine Sanctuary Program, has conducted seabed mapping and related research in the Stellwagen Bank National Marine Sanctuary (SBNMS) region since 1993. The area is approximately 3,700 square kilometers (km2) and is subdivided into 18 quadrangles. Seven maps, at a scale of 1:25,000, of quadrangle 6 (211 km2) depict seabed topography, backscatter, ruggedness, geology, substrate mobility, mud content, and areas dominated by fine-grained or coarse-grained sand. Interpretations of bathymetric and seabed backscatter imagery, photographs, video, and grain-size analyses were used to create the geology-based maps. In all, data from 420 stations were analyzed, including sediment samples from 325 locations. The seabed geology map shows the distribution of 10 substrate types ranging from boulder ridges to immobile, muddy sand to mobile, rippled sand. Mapped substrate types are defined on the basis of sediment grain-size composition, surface morphology, sediment layering, the mobility or immobility of substrate surfaces, and water depth range. This map series is intended to portray the major geological elements (substrates, topographic features, processes) of environments within quadrangle 6. Additionally, these maps will be the basis for the study of the ecological requirements of invertebrate and vertebrate species that utilize these substrates and guide seabed management in the region.

  17. Aeromagnetic map of Nevada: Las Vegas sheet, Map 95

    SciTech Connect

    Saltus, R.W.; Ponce, D.A.

    1988-01-01

    A 1:250,000-scale map showing total intensity of the Earth's magnetic field at intervals of 10 to 500 nanoteslas, and a 1:1,000,000-scale merged aeromagnetic map. The topographic base with drainage pattern and cultural information is from the Las Vegas 1/degree/ by 2/degree/ Quadrangle. (6 refs.)

  18. Exploring maps

    USGS Publications Warehouse

    ,

    1993-01-01

    Exploring Maps is an interdisciplinary set of materials on mapping for grades 7-12. Students will learn basic mapmaking and map reading skills and will see how maps can answer fundamental geographic questions: "Where am I?" "What else is here?" "Where am I going?"

  19. Contour Mapping

    NASA Technical Reports Server (NTRS)

    1995-01-01

    In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, the users can map an area from the sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.

  20. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is directly generated according to the intensity image. After that, a sequence of gamma map processing steps is performed to generate a channel-wise gamma map. By mapping pixel values through the estimated gamma maps, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
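    A minimal spatially varying gamma correction in the spirit of the GMP approach. The base-map formula below is an invented illustration (the paper's map generation and channel-wise processing are more elaborate):

```python
import numpy as np

def gamma_map_enhance(img):
    """Toy spatially varying gamma correction.
    img: float array in [0, 1], grayscale (H, W) or color (H, W, C).
    Dark pixels receive gamma < 1 (brightening), bright pixels
    gamma > 1 (preserving highlights); the linear base map below
    is an assumption for illustration."""
    intensity = img if img.ndim == 2 else img.mean(axis=-1, keepdims=True)
    gamma = 0.5 + intensity          # base gamma map in [0.5, 1.5]
    return np.clip(img, 0.0, 1.0) ** gamma
```

    Per-pixel gamma maps like this one generalize global gamma correction: the exponent itself becomes an image derived from local intensity.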

  1. GIS-mapping of environmental assessment of the territories in the region of intense activity for the oil and gas complex for achievement the goals of the Sustainable Development (on the example of Russia)

    NASA Astrophysics Data System (ADS)

    Yermolaev, Oleg

    2014-05-01

    A uniform system of complex scientific-reference ecological-geographical mapping should serve as the base for implementing the Sustainable Development (SD) concept in the territories of the subjects of the Russian Federation or in individual regions. The assessment of the ecological situation in the regions can then be approached by conjoining two interrelated systems: a mapping system and a geoinformational system. The report discusses the methodological aspects of Atlas mapping for the purposes of SD in the regions of Russia. The Republic of Tatarstan is taken as a model territory, where the large oil-gas complex "Tatneft" PLC has operated for more than 60 years: its oil fields occupy an area of more than 38,000 km2, about 40,000 oil wells and more than 55,000 km of pipelines are placed in its territory, and more than 3 billion tons of oil have been extracted. The methods for, and requirements on, the structure and content of the Atlas are outlined, and the approaches to mapping "an ecological dominant" of SD are conceptually substantiated following the pattern of a large region of Russia. Several thematic mapping directions are distinguished in the Atlas's structure:
    • The background history of oil-field mine workings;
    • Nature preservation technologies in oil extraction;
    • The assessment of natural conditions for human vital activity;
    • Unfavorable and dangerous natural processes and phenomena;
    • Anthropogenic effects and changes in the natural environment;
    • Social-economical processes and phenomena;
    • Medical-ecological and geochemical processes and phenomena.
    Within these groups, further numerous subgroups can be distinguished. The maps of unfavorable and dangerous processes and phenomena are subdivided according to the type of process, endogenous or exogenous in origin. Among the maps of anthropogenic effects on the natural surroundings, one can differentiate maps of the influence on the different spheres of nature.

  2. RICH MAPS

    EPA Science Inventory

    Michael Goodchild recently gave eight reasons why traditional maps are limited as communication devices, and how interactive internet mapping can overcome these limitations. In the past, many authorities in cartography, from Jenks to Bertin, have emphasized the importance of sim...

  3. Map adventures

    USGS Publications Warehouse

    1994-01-01

    Map Adventures, with seven accompanying lessons, is appropriate for grades K-3. Students will learn basic concepts for visualizing objects from different perspectives and how to understand and use maps.

  4. Historical Mapping

    USGS Publications Warehouse

    ,

    1999-01-01

    Maps become out of date over time. Maps that are out of date, however, can be useful to historians, attorneys, environmentalists, genealogists, and others interested in researching the background of a particular area. Local historians can compare a series of maps of the same area compiled over a long period of time to learn how the area developed. A succession of such maps can provide a vivid picture of how a place changed over time.

  5. Topographic mapping

    USGS Publications Warehouse

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  6. Jupiter Atmospheric Map

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Huge cyclonic storms, the Great Red Spot and the Little Red Spot, and wispy cloud patterns are seen in fascinating detail in this map of Jupiter's atmosphere obtained January 14-15, 2007, by the New Horizons Long Range Reconnaissance Imager (LORRI).

    The map combines information from 11 different LORRI images that were taken every hour over a 10-hour period -- a full Jovian day -- from 17:42 UTC on January 14 to 03:42 UTC on January 15. The New Horizons spacecraft was approximately 72 million kilometers (45 million miles) from Jupiter at the time.

    The LORRI pixels on the 'globe' of Jupiter were projected onto a rectilinear grid, similar to the way flat maps of Earth are created. The LORRI pixel intensities were corrected so that every point on the map appears as if the sun were directly overhead; some image sharpening was also applied to enhance detail. The polar regions of Jupiter are not shown on the map because the LORRI images do not sample those latitudes very well and artifacts are produced during the map-projection process.

  7. UK-5 Van Allen belt radiation exposure: A special study to determine the trapped particle intensities on the UK-5 satellite with spatial mapping of the ambient flux environment

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1972-01-01

    Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  8. Low noise parametric amplifiers for radio astronomy observations at 18-21 cm wavelength

    NASA Technical Reports Server (NTRS)

    Kanevskiy, B. Z.; Veselov, V. M.; Strukov, I. A.; Etkin, V. S.

    1974-01-01

    The principal characteristics and use of SHF parametric amplifiers for radiometer input devices are explored. Balanced parametric amplifiers (BPA) are considered as the SHF signal amplifiers allowing production of the amplifier circuit without a special filter to achieve decoupling. Formulas to calculate the basic parameters of a BPA are given. A modulator based on coaxial lines is discussed as the input element of the SHF. Results of laboratory tests of the receiver section and long-term stability studies of the SHF sector are presented.

  9. VizieR Online Data Catalog: Millennium Arecibo 21-cm Survey. III. (Heiles+, 2004)

    NASA Astrophysics Data System (ADS)

    Heiles, C.; Troland, T. H.

    2004-05-01

    We outline the theory and practice of measuring the four Stokes parameters of spectral lines in emission/absorption observations. We apply these concepts to our Arecibo H I absorption line data and present the results. We include a detailed discussion of instrumental effects arising from polarized beam structure and its interaction with the spatially extended emission line structure. At Arecibo, linear polarization [Stokes (Q,U)] has much larger instrumental effects than circular (Stokes V). We show how to reduce the instrumental contributions to V and to evaluate upper limits to its remaining instrumental errors by using the (Q,U) results. These efforts work well for opacity spectra but not for emission spectra. Arecibo's large central blockage exacerbates these effects, particularly for emission profiles, and other telescopes with weaker sidelobes are not as susceptible. We present graphical results for 41 sources; we analyze these absorption spectra in terms of Gaussian components, which number 136, and present physical parameters including magnetic field for each. (1 data file).

  10. Adding Context to James Webb Space Telescope Surveys with Current and Future 21 cm Radio Observations

    NASA Astrophysics Data System (ADS)

    Beardsley, A. P.; Morales, M. F.; Lidz, A.; Malloy, M.; Sutter, P. M.

    2015-02-01

    Infrared and radio observations of the Epoch of Reionization promise to revolutionize our understanding of the cosmic dawn, and major efforts with the JWST, MWA, and HERA are underway. While measurements of the ionizing sources with infrared telescopes and the effect of these sources on the intergalactic medium with radio telescopes should be complementary, to date the wildly disparate angular resolutions and survey speeds have made connecting proposed observations difficult. In this paper we develop a method to bridge the gap between radio and infrared studies. While the radio images may not have the sensitivity and resolution to identify individual bubbles with high fidelity, by leveraging knowledge of the measured power spectrum we are able to separate regions that are likely ionized from largely neutral, providing context for the JWST observations of galaxy counts and properties in each. By providing the ionization context for infrared galaxy observations, this method can significantly enhance the science returns of JWST and other infrared observations.

  11. How Ewen and Purcell discovered the 21-cm interstellar hydrogen line.

    NASA Astrophysics Data System (ADS)

    Stephan, K. D.

    1999-02-01

    The story of how Harold Irving Ewen and Edward Mills Purcell detected the first spectral line ever observed in radio astronomy, in 1951, has been told for general audiences by Robert Buderi (1996). The present article has a different purpose. The technical roots of Ewen and Purcell's achievement reveal much about the way science often depends upon "borrowed" technologies, which were not developed with the needs of science in mind. The design and construction of the equipment is described in detail. As Ewen's photographs, records, and recollections show, he and Purcell had access to an unusual combination of scientific knowledge, engineering know-how, critical hardware, and technical assistance at Harvard, in 1950 and 1951. This combination gave them a competitive edge over similar research groups in Holland and Australia, who were also striving to detect the hydrogen line, and who succeeded only weeks after the Harvard researchers did. The story also shows that Ewen and Purcell did their groundbreaking scientific work in the "small-science" style that prevailed before World War II, while receiving substantial indirect help from one of the first big-science projects at Harvard.

  12. Will nonlinear peculiar velocity and inhomogeneous reionization spoil 21 cm cosmology from the epoch of reionization?

    PubMed

    Shapiro, Paul R; Mao, Yi; Iliev, Ilian T; Mellema, Garrelt; Datta, Kanan K; Ahn, Kyungjin; Koda, Jun

    2013-04-12

    The 21 cm background from the epoch of reionization is a promising cosmological probe: line-of-sight velocity fluctuations distort redshift, so brightness fluctuations in Fourier space depend upon angle, which linear theory shows can separate cosmological from astrophysical information. Nonlinear fluctuations in ionization, density, and velocity change this, however. The validity and accuracy of the separation scheme are tested here for the first time, by detailed reionization simulations. The scheme works reasonably well early in reionization (≲40% ionized), but not late (≳80% ionized).
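
    The angular separation mentioned here rests on a standard linear-theory result (notation assumed here, not quoted from the paper): peculiar velocities make the redshift-space 21 cm brightness power spectrum a polynomial in the cosine of the angle to the line of sight,

```latex
P_{\Delta T}(\mathbf{k}) \;=\; P_{\mu^0}(k) \;+\; \mu^2\, P_{\mu^2}(k) \;+\; \mu^4\, P_{\mu^4}(k),
\qquad \mu \equiv k_\parallel / k ,
```

    where only the $\mu^4$ coefficient, $P_{\mu^4} \propto P_{\delta\delta}$, is sourced by the matter density alone; the lower-order terms mix in astrophysical (ionization) fluctuations. The paper tests how nonlinear evolution degrades this separation.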

  13. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
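
    The "simple" covariance map that the review builds on can be sketched directly: correlate every pair of spectral bins across many laser shots (a minimal sketch; the bin layout is an assumption):

```python
import numpy as np

def covariance_map(shots):
    """Simple covariance map from shot-by-shot spectra.

    `shots` is an (n_shots, n_bins) array, one mass/energy spectrum per
    laser shot.  C[i, j] = <S_i S_j> - <S_i><S_j> highlights pairs of
    bins whose signals fluctuate together, i.e. products of the same
    ionisation or Coulomb-explosion event.
    """
    shots = np.asarray(shots, dtype=float)
    mean = shots.mean(axis=0)
    return shots.T @ shots / shots.shape[0] - np.outer(mean, mean)
```

    Partial covariance, mentioned in the abstract, additionally subtracts the correlation of each bin with a fluctuating parameter such as pulse energy; that extension is not shown here.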

  14. Mapping Van

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A NASA Center for the Commercial Development of Space (CCDS)-developed system for satellite mapping has been commercialized for the first time. Global Visions, Inc. maps an area while driving along a road in a sophisticated mapping van equipped with satellite signal receivers, video cameras, and computer systems for collecting and storing mapping data. Data is fed into a computerized geographic information system (GIS). The resulting maps can be used for tax assessment, emergency vehicle dispatch, and fleet delivery, as well as for other applications.

  15. An intensity scale for riverine flooding

    USGS Publications Warehouse

    Fulford, J.M.

    2004-01-01

    Recent advances in the availability and accuracy of multi-dimensional flow models, the advent of precise elevation data for floodplains (LIDAR), and geographic information systems (GIS) allow the creation of hazard maps that more correctly reflect the varying levels of flood-damage risk across a floodplain when it is inundated by floodwaters. Following the example of intensity scales for wind damage, an equivalent water-damage flow intensity scale has been developed that ranges from 1 (minimal effects) to 10 (major damage to most structures). This flow intensity scale (FIS) is portrayed on a map as color-coded areas of increasing flow intensity. It should prove to be a valuable tool for assessing relative risk to people and property in known flood-hazard areas.
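
    The abstract does not define how a 1-10 FIS value is computed; the sketch below is a purely hypothetical classifier that bins the depth-velocity product, a common proxy for flood damage potential (the bin edges are invented for illustration):

```python
def flow_intensity(depth_m, velocity_ms):
    """Hypothetical flow-intensity classifier (illustration only).

    The paper's FIS formula is not given in the abstract; here we bin
    the depth-velocity product onto a 1 (minimal effects) .. 10 (major
    damage to most structures) scale using assumed thresholds.
    """
    dv = depth_m * velocity_ms                         # m^2/s, damage proxy
    thresholds = [0.25, 0.5, 1, 2, 3, 4.5, 6, 8, 10]   # assumed bin edges
    return 1 + sum(dv > t for t in thresholds)
```

    Mapping each floodplain cell of a 2-D model run through such a classifier and color-coding the result yields the kind of hazard map the abstract describes.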

  16. Question Mapping

    ERIC Educational Resources Information Center

    Martin, Josh

    2012-01-01

    After accepting the principal position at Farmersville (TX) Junior High, the author decided to increase instructional rigor through question mapping because of the success he saw using this instructional practice at his prior campus. Teachers are the number one influence on student achievement (Marzano, 2003), so question mapping provides a…

  17. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  18. Map Adventures.

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet about maps, with seven accompanying lessons, is appropriate for students in grades K-3. Students learn basic concepts for visualizing objects from different perspectives and how to understand and use maps. Lessons in the packet center on a story about a little girl, Nikki, who rides in a hot-air balloon that gives her, and…

  19. A symbiotic approach to SETI observations: use of maps from the Westerbork Synthesis Radio Telescope

    NASA Technical Reports Server (NTRS)

    Tarter, J. C.; Israel, F. P.

    1982-01-01

    High spatial resolution continuum radio maps produced by the Westerbork Synthesis Radio Telescope (WSRT) of The Netherlands at frequencies near the 21 cm HI line have been examined for anomalous sources of emission coincident with the locations of nearby bright stars. From a total of 542 stellar positions investigated, no candidates for radio stars or ETI signals were discovered, to formal limits on the minimum detectable signal ranging from 7.7 x 10(-22) W/m2 to 6.4 x 10(-24) W/m2. This preliminary study has verified that data collected by radio astronomers at large synthesis arrays can profitably be analysed for SETI signals (in a non-interfering manner) provided only that the data are available in a more or less standard two-dimensional map format.

  1. Mapping racism.

    PubMed

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  2. Mapping Biodiversity.

    ERIC Educational Resources Information Center

    World Wildlife Fund, Washington, DC.

    This document features a lesson plan that examines how maps help scientists protect biodiversity and how plants and animals are adapted to specific ecoregions by comparing biome, ecoregion, and habitat. Samples of instruction and assessment are included. (KHR)

  3. Planetary Mapping

    NASA Astrophysics Data System (ADS)

    Greeley, Ronald; Batson, Raymond M.

    2007-02-01

    Preface; List of contributors; 1. Introduction R. Greeley and R. M. Batson; 2. History of planetary cartography R. M. Batson, E. A. Whitaker and D. E. Wilhelms; 3. Cartography R. M. Batson; 4. Planetary nomenclature M. E. Strobell and H. Masursky; 5. Geodetic control M. E. Davies; 6. Topographic mapping S. S. C. Wu and F. J. Doyle; 7. Geologic mapping D. E. Wilhelms; Appendices R. M. Batson and J. L. Inge; Index.

  4. Map Separates

    USGS Publications Warehouse

    ,

    2001-01-01

    U.S. Geological Survey (USGS) topographic maps are printed using up to six colors (black, blue, green, red, brown, and purple). To prepare your own maps or artwork based on maps, you can order separate black-and-white film positives or negatives for any color printed on a USGS topographic map, or for one or more of the groups of related features printed in the same color on the map (such as drainage and drainage names from the blue plate.) In this document, examples are shown with appropriate ink color to illustrate the various separates. When purchased, separates are black-and-white film negatives or positives. After you receive a film separate or composite from the USGS, you can crop, enlarge or reduce, and edit to add or remove details to suit your special needs. For example, you can adapt the separates for making regional and local planning maps or for doing many kinds of studies or promotions by using the features you select and then printing them in colors of your choice.

  5. Venus mapping

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Morgan, H. F.; Sucharski, Robert

    1991-01-01

    Semicontrolled image mosaics of Venus, based on Magellan data, are being compiled at 1:50,000,000, 1:10,000,000, 1:5,000,000, and 1:1,000,000 scales to support the Magellan Radar Investigator (RADIG) team. The mosaics are semicontrolled in the sense that data gaps were not filled and significant cosmetic inconsistencies exist. Contours are based on preliminary radar altimetry data that is subject to revision and improvement. Final maps to support geologic mapping and other scientific investigations, to be compiled as the dataset becomes complete, will be sponsored by the Planetary Geology and Geophysics Program and/or the Venus Data Analysis Program. All maps, both semicontrolled and final, will be published as I-maps by the United States Geological Survey. All of the mapping is based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control on planets where framing cameras were used, is not feasible with the radar images of Venus, although an eventual shift of the coordinate system to a revised spin-axis location is anticipated. This shift is expected to be small enough that it will affect only large-scale maps.

  6. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps.
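
    A common operational definition of UHI intensity, consistent with the abstract's setup (the paper's exact buffer construction is not given here and is assumed), is the mean urban LST minus the mean LST of a surrounding rural buffer:

```python
import numpy as np

def uhi_intensity(lst, urban_mask, buffer_mask):
    """UHI intensity as mean urban LST minus mean LST of a rural buffer.

    `lst` is a land-surface-temperature grid; the two boolean masks must
    come from an urban-extent map concurrent with the LST data.  An
    outdated mask leaves newly urbanized (warm) pixels in the 'rural'
    buffer, which is exactly the bias the paper quantifies.
    """
    return lst[urban_mask].mean() - lst[buffer_mask].mean()
```

    The concurrency requirement is thus a statement about the masks, not the temperature data: the same LST grid yields different intensities depending on which year's urban boundary defines the masks.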

  8. Mole Mapping.

    ERIC Educational Resources Information Center

    Crippen, Kent J.; Curtright, Robert D.; Brooks, David W.

    2000-01-01

    The abstract nature of the mole and its applications to problem solving make learning the concept difficult for students, and teaching the concept challenging for teachers. Presents activities that use concept maps and graphing calculators as tools for solving mole problems. (ASK)

  9. Memphis Maps.

    ERIC Educational Resources Information Center

    Hyland, Stanley; Cox, David; Martin, Cindy

    1998-01-01

    The Memphis Maps program, a collaborative effort of Memphis (Tennessee) educational institutions, public agencies, a bank, and community programs, trains local students in Geographic Information Systems technology and provides the community with valuable demographic and assessment information. The program is described, and factors contributing to…

  10. Intensive Versus Non-Intensive Arabic.

    ERIC Educational Resources Information Center

    Hanna, Sami A.

    This paper investigates the difference in achievement among 20 University of Utah students of modern standard Arabic. One group of 11 students followed an intensive eight-week summer course, and a second group of nine students studied the same course during a regular academic year. Also reported on is the correlation between achievement and…

  11. Map projections

    USGS Publications Warehouse

    ,

    1993-01-01

    A map projection is used to portray all or part of the round Earth on a flat surface. This cannot be done without some distortion. Every projection has its own set of advantages and disadvantages. There is no "best" projection. The mapmaker must select the one best suited to the needs, reducing distortion of the most important features. Mapmakers and mathematicians have devised almost limitless ways to project the image of the globe onto paper. Scientists at the U. S. Geological Survey have designed projections for their specific needs—such as the Space Oblique Mercator, which allows mapping from satellites with little or no distortion. This document gives the key properties, characteristics, and preferred uses of many historically important projections and of those frequently used by mapmakers today.
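
    As a concrete instance of the trade-offs described above, the forward spherical Mercator projection (a standard textbook formula, not specific to this document) preserves angles while stretching areas toward the poles:

```python
import math

def mercator(lon_deg, lat_deg):
    """Forward spherical Mercator projection on a unit-radius sphere.

    Every projection distorts something: Mercator is conformal
    (angle-preserving), at the cost of exaggerating areas away from the
    equator -- y grows without bound as latitude approaches the poles.
    """
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = lam
    y = math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y
```

    The accelerating growth of y with latitude is the area distortion the mapmaker trades away when choosing a conformal projection.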

  12. Intensive Care, Intense Conflict: A Balanced Approach.

    PubMed

    Paquette, Erin Talati; Kolaitis, Irini N

    2015-01-01

    Caring for a child in a pediatric intensive care unit is emotionally and physically challenging and often leads to conflict. Skilled mediators may not always be available to aid in conflict resolution. Careproviders at all levels of training are responsible for managing difficult conversations with families and can often prevent escalation of conflict. Bioethics mediators have acknowledged the important contribution of mediation training in improving clinicians' skills in conflict management. Familiarizing careproviders with basic mediation techniques is an important step towards preventing escalation of conflict. While training in effective communication is crucial, a sense of fairness and justice that may only come with the introduction of a skilled, neutral third party is equally important. For intense conflict, we advocate for early recognition, comfort, and preparedness through training of clinicians in de-escalation and optimal communication, along with the use of more formally trained third-party mediators, as required.

  13. Light intensity compressor

    DOEpatents

    Rushford, Michael C.

    1990-01-01

    In a system for recording images having vastly differing light intensities over the face of the image, a light intensity compressor is provided that utilizes the properties of twisted nematic liquid crystals to compress the image intensity. A photoconductor or photodiode material that is responsive to the wavelength of radiation being recorded is placed adjacent a layer of twisted nematic liquid crystal material. An electric potential is applied to a pair of electrodes that are disposed outside of the liquid crystal/photoconductor arrangement to provide an electric field in the vicinity of the liquid crystal material. The electrodes are substantially transparent to the form of radiation being recorded. A pair of crossed polarizers are provided on opposite sides of the liquid crystal. The front polarizer linearly polarizes the light, while the back polarizer cooperates with the front polarizer and the liquid crystal material to compress the intensity of a viewed scene. Light incident upon the intensity compressor activates the photoconductor in proportion to the intensity of the light, thereby varying the field applied to the liquid crystal. The increased field causes the liquid crystal to have less of a twisting effect on the incident linearly polarized light, which causes an increased percentage of the light to be absorbed by the back polarizer. The intensity of an image may be compressed by forming an image on the light intensity compressor.

  15. Generalized phase diffraction gratings with tailored intensity.

    PubMed

    Albero, Jorge; Moreno, Ignacio; Davis, Jeffrey A; Cottrell, Don M; Sand, David

    2012-10-15

    We report the generation of continuous phase masks designed to produce a set of target diffraction orders with defined relative intensity weights. We apply a previously reported analytic calculation that requires solving a single equation with a set of parameters defining the target diffraction orders. The same phase map is then extended to other phase patterns such as vortex generating/sensing gratings. Results are demonstrated experimentally with a parallel-aligned spatial light modulator.
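
    The authors' analytic design method is not reproduced in the abstract, but the underlying physics is easy to sketch: for a thin phase grating, the far-field order amplitudes are the Fourier coefficients of exp(i*phi(x)) over one period (an illustration of the physics, not the paper's method):

```python
import numpy as np

def order_intensities(phase, n_orders=3):
    """Relative intensities of diffraction orders for a 1-D phase grating.

    `phase` samples one period of the phase profile phi(x); the order-m
    amplitude is the m-th Fourier coefficient of the complex
    transmittance exp(i*phi), so |c_m|^2 is the order's intensity.
    """
    field = np.exp(1j * np.asarray(phase, dtype=float))
    c = np.fft.fft(field) / field.size
    # Negative orders wrap to the end of the FFT output.
    return {m: abs(c[m]) ** 2 for m in range(-n_orders, n_orders + 1)}
```

    For example, a linear blazed ramp from 0 to 2*pi over the period steers essentially all of the power into the +1 order; the design problem the paper solves is the inverse one, shaping phi(x) so the orders carry prescribed weights.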

  16. Intensity Biased PSP Measurement

    NASA Technical Reports Server (NTRS)

    Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.

    2000-01-01

    The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer equation) between intensity ratio (I(sub o)/I) and pressure ratio (P/P(sub o)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia, when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which are otherwise assumed to be constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction of the output intensity for light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in situ intensity correction method was developed. A non-oxygen-quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
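
    The linear Stern-Volmer relation and its inversion to recover pressure can be sketched as follows; the calibration numbers are synthetic and purely illustrative:

```python
import numpy as np

# Hypothetical calibration: intensity ratios measured at known pressure ratios.
p_ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # P / P_ref
i_ratio = 0.15 + 0.85 * p_ratio                 # I_ref / I (synthetic, linear)

# Fit the Stern-Volmer coefficients A and B in  I_ref/I = A + B * (P/P_ref).
B, A = np.polyfit(p_ratio, i_ratio, 1)          # polyfit returns [slope, intercept]

def pressure_from_intensity(i_ref_over_i):
    """Invert the fitted Stern-Volmer relation to recover P/P_ref."""
    return (i_ref_over_i - A) / B
```

    A real calibration would depart from this line at low pressures, which is the non-linearity the abstract describes.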

  18. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293
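
    The reconstruction principle (recovering an under-sampled sparse scene from incoherent linear projections) can be sketched with a generic iterative soft-thresholding solver; this is a toy 1-D model with made-up sizes, not the authors' lidar pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an n-pixel scene that is k-sparse, observed through m
# incoherent linear projections (the single-pixel measurement model y = A x).
n, m, k = 128, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Recover x by iterative soft-thresholding (ISTA) on the l1-regularised problem.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.01                             # sparsity weight (illustrative)
x = np.zeros(n)
for _ in range(500):
    x = x + (A.T @ (y - A @ x)) / L    # gradient step on the data-fit term
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

residual = np.linalg.norm(y - A @ x)
```

    The same idea scales to 2-D reconstructions of intensity and depth maps, with the projections implemented optically by a spatial light modulator.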

  19. Seismicity map of the state of Georgia

    USGS Publications Warehouse

    Reagor, B. Glen; Stover, C.W.; Algermissen, S.T.; Long, L.T.

    1991-01-01

    This map is one of a series of seismicity maps produced by the U.S. Geological Survey that show earthquake data of individual states or groups of states at the scale of 1:1,000,000. This map shows only those earthquakes with epicenters located within the boundaries of Georgia, even though earthquakes in nearby states or countries may have been felt or may have caused damage in Georgia. The data in table 1 were used to compile the seismicity map; these data are a corrected, expanded, and updated (through 1987) version of the data used by Algermissen (1969) for a study of seismic risk in the United States. The locations and intensities of some earthquakes were revised, and intensities were assigned where none had been before. Many earthquakes were added to the original list from new data sources as well as from some old data sources that had not been previously used. The data in table 1 represent best estimates of the location of the epicenter, magnitude, and intensity of each earthquake on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed, but not all, especially for earthquakes that occurred before seismic instruments were universally used. The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the Arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  20. The National Map: from geography to mapping and back again

    USGS Publications Warehouse

    Kelmelis, John A.; DeMulder, Mark L.; Ogrosky, Charles E.; Van Driel, J. Nicholas; Ryan, Barbara J.

    2003-01-01

    When the means of production for national base mapping were capital intensive, required large production facilities, and had ill-defined markets, Federal Government mapping agencies were the primary providers of the spatial data needed for economic development, environmental management, and national defense. With desktop geographic information systems now ubiquitous, source data available as a commodity from private industry, and the realization that many complex problems faced by society need far more and different kinds of spatial data for their solutions, national mapping organizations must realign their business strategies to meet growing demand and anticipate the needs of a rapidly changing geographic information environment. The National Map of the United States builds on a sound historic foundation of describing and monitoring the land surface and adds a focused effort to produce improved understanding, modeling, and prediction of land-surface change. These added dimensions bring to bear a broader spectrum of geographic science to address extant and emerging issues. Within the overarching construct of The National Map, the U.S. Geological Survey (USGS) is making a transition from data collector to guarantor of national data completeness; from producing paper maps to supporting an online, seamless, integrated database; and from simply describing the Nation’s landscape to linking these descriptions with increased scientific understanding. Implementing the full spectrum of geographic science addresses a myriad of public policy issues, including land and natural resource management, recreation, urban growth, human health, and emergency planning, response, and recovery. Neither these issues nor the science and technologies needed to deal with them are static. A robust research agenda is needed to understand these changes and realize The National Map vision. Initial successes have been achieved. These accomplishments demonstrate the utility of

  1. Intensity encoding in unsupervised neural nets.

    PubMed

    Parkinson, Alan M.; Parpia, Dawood Y.

    1998-06-01

    The requirement of input vector normalisation in unsupervised neural nets results in a loss of information about the intensity of the signal contained in the input datastream. We show through a simple algebraic analysis that the introduction of an additional input channel encoding the root-mean-square intensity in the signals cannot restore this information if the input vectors have to be, nevertheless, all of the same length. We suggest an alternative method of encoding the input vectors where each of the input channels is split into two components in such a way that the resultant input vector is then of fixed length and retains information of the intensity in the signals. We further demonstrate, by using synthetic data, that a Kohonen Net is capable of forming topological maps of signals of different intensity, where an adjacency relationship is maintained both among the signals of the same frequency composition at different intensities and between signals of different frequency compositions at the same intensity. A second experiment reported here shows the same behaviour for less artificial inputs (based on a cochlear model) and additionally demonstrates that the trained network can respond appropriately to signals not previously encountered.
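
    One classic realization of this idea is complement coding, where each channel value v in [0, 1] is split into the pair (v, 1 - v); the resulting vector has a fixed L1 norm, so normalisation no longer erases overall intensity. Whether this matches the paper's exact split is an assumption; the sketch illustrates the principle:

```python
import numpy as np

def complement_code(x):
    """Split each channel v into (v, 1 - v) -- 'complement coding'.

    The output has constant L1 norm (= number of input channels), so a net
    that normalises its inputs still receives intensity information.
    Assumes inputs are scaled to [0, 1].
    """
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

quiet = complement_code([0.1, 0.2])   # low-intensity signal
loud  = complement_code([0.4, 0.8])   # same spectral shape, higher intensity
```

    Both coded vectors sum to the same value, yet remain distinguishable, so intensity differences survive normalisation.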

  2. Approach to standardizing MR image intensity scale

    NASA Astrophysics Data System (ADS)

    Nyul, Laszlo G.; Udupa, Jayaram K.

    1999-05-01

    Despite the many advantages of MR images, they lack a standard image intensity scale. MR image intensity ranges and the meaning of intensity values vary even for the same protocol (P) and the same body region (D). This causes many difficulties in image display and analysis. We propose a two-step method for standardizing the intensity scale in such a way that for the same P and D, similar intensities will have similar meanings. In the first step, the parameters of the standardizing transformation are 'learned' from an image set. In the second step, for each MR study, these parameters are used to map its histogram onto the standardized histogram. The method was tested quantitatively on 90 whole-brain FSE T2, PD, and T1 studies of MS patients and qualitatively on several other SE PD, T2, and SPGR studies of the brain and foot. Measurements using mean squared difference showed that the standardized image intensities have a statistically significantly more consistent range and meaning than the originals. Fixed windows can be established for standardized images and used for display without the need for per-case adjustment. Preliminary results also indicate that the method facilitates improving the degree of automation of image segmentation.
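
    A minimal sketch of such a two-step histogram standardization, using quartile landmarks and a piecewise-linear map (the specific percentiles and target scale here are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

STANDARD_SCALE = (0.0, 100.0)     # target intensity range (illustrative)
PCTS = [0, 25, 50, 75, 100]       # landmark percentiles (illustrative)

def learn_standard_landmarks(train_images):
    """Step 1: average each training image's percentile landmarks after
    mapping its min/max onto the standard scale."""
    lo, hi = STANDARD_SCALE
    mapped = []
    for img in train_images:
        p = np.percentile(img, PCTS)
        mapped.append(lo + (p - p[0]) * (hi - lo) / (p[-1] - p[0]))
    return np.mean(mapped, axis=0)

def standardize(img, standard_landmarks):
    """Step 2: piecewise-linearly map the image's own landmarks onto the
    learned standard ones."""
    return np.interp(img, np.percentile(img, PCTS), standard_landmarks)

# Synthetic 'studies' standing in for MR intensity data.
train = [np.random.default_rng(s).gamma(2.0, 50.0, 10000) for s in range(3)]
marks = learn_standard_landmarks(train)
out = standardize(train[0] * 3.0 + 7.0, marks)   # affinely distorted copy
```

    Because the map is built from each study's own landmarks, an affine rescaling of the input intensities leaves the standardized output unchanged, which is the property that makes fixed display windows possible.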

  3. High intensity neutrino beams

    SciTech Connect

    Ichikawa, A. K.

    2015-07-15

    High-intensity proton accelerator complexes have enabled long-baseline neutrino oscillation experiments with precisely controlled neutrino beams. The beam power achieved so far is a few hundred kW, the result of enormous efforts by accelerator physicists and engineers. However, to fully understand the lepton mixing structure, MW-class accelerators are desired. We describe the current intensity-frontier high-energy proton accelerators, their plans to go beyond this level, and the technical challenges in the neutrino beamline facilities.

  4. Rainfall intensity-duration conditions for mass movements in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Chi-Wen; Saito, Hitoshi; Oguchi, Takashi

    2015-12-01

    Mass movements caused by rainfall events in Taiwan are analyzed during a 7-year period from 2006 to 2012. Data from the Taiwan Soil and Water Conservation Bureau reports were compiled for 263 mass movement events, including 156 landslides, 91 debris flows, and 16 events with both landslides and debris flows. Rainfall totals for each site location were obtained from interpolated rain gauge data. The rainfall intensity-duration (I-D) relationship was examined to establish a rainfall threshold for mass movements using random sampling: I = 18.10(±2.67) D^-0.17(±0.04), where I is mean rainfall intensity (mm/h) and D is the time (h) between the beginning of a rainfall event and the resulting mass movement. Significant differences were found between rainfall intensities and thresholds for landslides and debris flows. For short-duration rainfall events, higher mean rainfall intensities were required to trigger debris flows. In contrast, for long-duration rainfall events, similar mean rainfall intensities triggered both landslides and debris flows. Mean rainfall intensity was rescaled by mean annual precipitation (MAP) to define a new threshold: I_MAP = 0.0060(±0.0009) D^-0.17(±0.04), where I_MAP is the rescaled rainfall intensity and MAP is the minimum for mountainous areas in Taiwan (3000 mm). Although the I-D threshold for Taiwan is high, the I_MAP-D threshold for Taiwan tends to be low relative to other areas around the world. Our results indicate that Taiwan is highly prone to rainfall-induced mass movements. This study also shows that most mass movements occur in high rainfall-intensity periods, but some events occur before or after the rainfall peak. Both antecedent and peak rainfall play important roles in triggering landslides, whereas debris flow occurrence is more related to peak rainfall than antecedent rainfall.
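
    The two fitted thresholds quoted in the abstract can be written directly as code (central values only, with the quoted fit uncertainties omitted):

```python
# I is mean rainfall intensity in mm/h, D is rainfall duration in hours.

def id_threshold(duration_h):
    """I-D threshold from the abstract: I = 18.10 * D**-0.17 (mm/h)."""
    return 18.10 * duration_h ** -0.17

def id_threshold_rescaled(duration_h):
    """MAP-rescaled threshold: I_MAP = 0.0060 * D**-0.17 (dimensionless),
    i.e. intensity divided by a mean annual precipitation of 3000 mm."""
    return 0.0060 * duration_h ** -0.17

# Example: a 24-hour rainfall event crosses the threshold at roughly
# id_threshold(24.0) mm/h of mean intensity.
```

    The negative exponent encodes the familiar trade-off: short bursts must be intense to trigger mass movements, while long events can do so at lower mean intensities.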

  5. Defect mapping system

    DOEpatents

    Sopori, B.L.

    1995-04-11

    Apparatus for detecting and mapping defects in the surfaces of polycrystalline materials in a manner that distinguishes dislocation pits from grain boundaries includes a laser for illuminating a wide spot on the surface of the material, a light integrating sphere with apertures for capturing light scattered by etched dislocation pits in an intermediate range away from specular reflection while allowing light scattered by etched grain boundaries in a near range from specular reflection to pass through, and optical detection devices for detecting and measuring intensities of the respective intermediate scattered light and near specular scattered light. A center blocking aperture or filter can be used to screen out specular reflected light, which would be reflected by nondefect portions of the polycrystalline material surface. An X-Y translation stage for mounting the polycrystalline material and signal processing and computer equipment accommodate raster mapping, recording, and displaying of respective dislocation and grain boundary defect densities. A special etch procedure is included, which prepares the polycrystalline material surface to produce distinguishable intermediate and near specular light scattering in patterns that have statistical relevance to the dislocation and grain boundary defect densities. 20 figures.

  6. Mapping contigs using CONTIGuator.

    PubMed

    Galardini, Marco; Mengoni, Alessio; Bazzicalupo, Marco

    2015-01-01

    Obtaining bacterial genomic sequences has become a routine task in today's biology. The emergence of the comparative genomics approach has led to an increasing number of bacterial species having more than one strain sequenced, thus facilitating the annotation process. On the other hand, many genomic sequences are now left in the "draft" status, as a series of contigs, mainly because of the labor-intensive finishing task. As a result, many genomic analyses are incomplete (e.g., in their annotation) or impossible to perform (e.g., structural genomics analysis). Many approaches have recently been developed to facilitate the finishing process, or at least to produce higher-quality scaffolds: taking advantage of the comparative genomics paradigm, closely related genomes are used to align the contigs and determine their relative order and orientation. In this chapter we present the use of the CONTIGuator algorithm, which aligns the contigs from a draft genome to a closely related closed genome and resolves their relative orientation based on this alignment, producing a scaffold and a series of PCR primer pairs for the finishing process. The CONTIGuator algorithm is also capable of handling multipartite genomes (i.e., genomes having chromosomes and other plasmids), unambiguously mapping contigs to the most similar replicon. The program also produces a series of contig maps that allow structural genomics analyses to be performed on the draft genome. The functionalities of the web interface, as well as the command line version, are presented.
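
    The core idea -- order and orient draft contigs along a closed reference from their best alignments -- can be sketched as follows. This is a toy illustration with hypothetical alignment tuples, not the actual CONTIGuator code, which parses BLAST results and handles multiple replicons:

```python
# Hypothetical alignment records: (contig_id, ref_start, strand, aln_length).

def scaffold(alignments):
    """Keep each contig's longest alignment, then sort by reference position.

    Returns [(contig_id, orientation)] in reference order; '+' means the
    contig aligns on the forward strand, '-' means it must be
    reverse-complemented in the scaffold.
    """
    best = {}
    for contig, start, strand, length in alignments:
        if contig not in best or length > best[contig][2]:
            best[contig] = (start, strand, length)
    ordered = sorted(best.items(), key=lambda kv: kv[1][0])
    return [(contig, strand) for contig, (start, strand, _) in ordered]

layout = scaffold([
    ("c1", 5000, "+", 900),
    ("c2", 100, "-", 1200),
    ("c1", 9000, "+", 300),   # shorter secondary hit, ignored
    ("c3", 2500, "+", 700),
])
```

    The gaps between consecutive placed contigs are then the natural targets for the PCR primer pairs used in finishing.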

  8. Diffuse gamma radiation. [intensity, energy spectrum and spatial distribution from SAS 2 observations

    NASA Technical Reports Server (NTRS)

    Fichtel, C. E.; Simpson, G. A.; Thompson, D. J.

    1978-01-01

    Results are reported for an investigation of the intensity, energy spectrum, and spatial distribution of the diffuse gamma radiation detected by SAS 2 away from the galactic plane in the energy range above 35 MeV. The gamma-ray data are compared with relevant data obtained at other wavelengths, including 21-cm emission, radio continuum radiation, and the limited UV and radio information on local molecular hydrogen. It is found that there are two quite distinct components to the diffuse radiation, one of which shows a good correlation with the galactic matter distribution and continuum radiation, while the other has a much steeper energy spectrum and appears to be isotropic at least on a coarse scale. The galactic component is interpreted in terms of its implications for both local and more distant regions of the Galaxy. The apparently isotropic radiation is discussed partly with regard to the constraints placed on possible models by the steep energy spectrum, the observed intensity, and an upper limit on the anisotropy.

  9. Strongly intensive quantities

    SciTech Connect

    Gorenstein, M. I.; Gazdzicki, M.

    2011-07-15

    Analysis of fluctuations of hadron production properties in collisions of relativistic particles profits from use of measurable intensive quantities which are independent of system size variations. The first family of such quantities was proposed in 1992; another is introduced in this paper. Furthermore we present a proof of independence of volume fluctuations for quantities from both families within the framework of the grand canonical ensemble. These quantities are referred to as strongly intensive ones. Influence of conservation laws and resonance decays is also discussed.

  10. Seismicity map of the state of Indiana

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  11. Seismicity map of the state of Ohio

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  12. Seismicity map of the state of Vermont

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Highland, L.M.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  13. Seismicity map of the state of Arizona

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1986-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. The Roman numeral to the left of the triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes with epicenters at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  14. Seismicity map of the State of Montana

    USGS Publications Warehouse

    Reagor, B.G.; Stover, C.W.; Algermissen, S.T.

    1985-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  15. Seismicity map of the state of Idaho

    USGS Publications Warehouse

    Stover, Carl W.; Reagor, B.G.; Algermissen, S.T.

    1991-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the Arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.

  16. Human Mind Maps

    ERIC Educational Resources Information Center

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  17. INTERPRETING THE UNRESOLVED INTENSITY OF COSMOLOGICALLY REDSHIFTED LINE RADIATION

    SciTech Connect

    Switzer, E. R.; Chang, T.-C.; Pen, U.-L.; Voytek, T. C.

    2015-12-10

    Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically ~10^2-10^3 times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.
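
    The principle of determining contaminated spectral modes from the data themselves is often illustrated with an eigenmode (SVD/PCA) projection of the frequency-frequency covariance. The toy cube below uses made-up sizes and a single bright synthetic foreground mode; it illustrates the mode-removal idea, not the paper's quadratic-estimator formalism:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy map cube: n_freq spectral channels by n_pix lines of sight.
n_freq, n_pix = 32, 2000
signal = 0.01 * rng.standard_normal((n_freq, n_pix))    # faint, unstructured
shape = 1.0 + 0.5 * np.linspace(0.0, 1.0, n_freq)[:, None]  # one spectral shape
foreground = 100.0 * shape * rng.standard_normal((1, n_pix))
data = signal + foreground   # foreground ~10^4 times brighter than the signal

# Estimate the bright spectral modes from the data's own frequency-frequency
# covariance and project them out: the modes come from the data, and need not
# be assumed smooth in advance.
cov = data @ data.T / n_pix
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
bright = eigvecs[:, -2:]                 # the few dominant spectral modes
cleaned = data - bright @ (bright.T @ data)
```

    The projection also removes the part of the signal that lies in the discarded modes, which is the signal attenuation the abstract says must be estimated and corrected.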

  18. Interpreting The Unresolved Intensity Of Cosmologically Redshifted Line Radiation

    NASA Technical Reports Server (NTRS)

    Switzer, E. R.; Chang, T.-C.; Masui, K. W.; Pen, U.-L.; Voytek, T. C.

    2016-01-01

    Intensity mapping experiments survey the spectrum of diffuse line radiation rather than detect individual objects at high signal-to-noise ratio. Spectral maps of unresolved atomic and molecular line radiation contain three-dimensional information about the density and environments of emitting gas and efficiently probe cosmological volumes out to high redshift. Intensity mapping survey volumes also contain all other sources of radiation at the frequencies of interest. Continuum foregrounds are typically approximately 10^2-10^3 times brighter than the cosmological signal. The instrumental response to bright foregrounds will produce new spectral degrees of freedom that are not known in advance, nor necessarily spectrally smooth. The intrinsic spectra of foregrounds may also not be well known in advance. We describe a general class of quadratic estimators to analyze data from single-dish intensity mapping experiments and determine contaminated spectral modes from the data themselves. The key attribute of foregrounds is not that they are spectrally smooth, but instead that they have fewer bright spectral degrees of freedom than the cosmological signal. Spurious correlations between the signal and foregrounds produce additional bias. Compensation for signal attenuation must estimate and correct this bias. A successful intensity mapping experiment will control instrumental systematics that spread variance into new modes, and it must observe a large enough volume that contaminant modes can be determined independently from the signal on scales of interest.

  19. Binge Drinking Intensity

    PubMed Central

    Esser, Marissa B.; Kanny, Dafna; Brewer, Robert D.; Naimi, Timothy S.

    2015-01-01

    Background Binge drinking (≥4 drinks for women; ≥5 drinks for men, per occasion) is responsible for more than half of the estimated 80,000 U.S. deaths attributable to excessive drinking annually and three-quarters of the $223.5 billion in economic costs of excessive drinking in 2006. Binge drinking prevalence is assessed more commonly than binge drinking intensity (i.e., the number of drinks consumed per binge episode). Risk of binge drinking-related harm increases with intensity, which is therefore important to monitor. The largest number of drinks consumed is assessed in health surveys, but its usefulness for assessing binge intensity is unknown. Purpose To assess the agreement between two potential measures of binge drinking intensity: the largest number of drinks consumed by binge drinkers (maximum-drinks) and the total number of drinks consumed during their most recent binge episode (drinks-per-binge). Methods Data were analyzed from 7909 adult binge drinkers from 14 states responding to the 2008 Behavioral Risk Factor Surveillance System (BRFSS) binge drinking module. Mean and median drinks-per-binge from that module were compared to mean and median maximum-drinks. Analyses were conducted in 2010-2011. Results Mean (8.2) and median (5.9) maximum-drinks were strongly correlated with mean (7.4) and median (5.4) drinks-per-binge (r=0.57). These measures were also strongly correlated across most sociodemographic and drinking categories overall and within states. Conclusions The maximum-drinks consumed by binge drinkers is a practical method for assessing binge drinking intensity and thus can be used to plan and evaluate Community Guide-recommended strategies for preventing binge drinking (e.g., increasing the price of alcoholic beverages and regulating alcohol outlet density). PMID:22608381

  20. Concept Mapping

    PubMed Central

    Brennan, Laura K.; Brownson, Ross C.; Kelly, Cheryl; Ivey, Melissa K.; Leviton, Laura C.

    2016-01-01

    Background From 2003 to 2008, 25 cross-sector, multidisciplinary community partnerships funded through the Active Living by Design (ALbD) national program designed, planned, and implemented policy and environmental changes, with complementary programs and promotions. This paper describes the use of concept-mapping methods to gain insights into promising active living intervention strategies based on the collective experience of community representatives implementing ALbD initiatives. Methods Using Concept Systems software, community representatives (n=43) anonymously generated actions and changes in their communities to support active living (183 original statements, 79 condensed statements). Next, respondents (n=26, from 23 partnerships) sorted the 79 statements into self-created categories, or active living intervention approaches. Respondents then rated statements based on their perceptions of the most important strategies for creating community changes (n=25, from 22 partnerships) and increasing community rates of physical activity (n=23, from 20 partnerships). Cluster analysis and multidimensional scaling were used to describe data patterns. Results ALbD community partnerships identified three active living intervention approaches with the greatest perceived importance to create community change and increase population levels of physical activity: changes to the built and natural environment, partnership and collaboration efforts, and land-use and transportation policies. The relative importance of intervention approaches varied according to subgroups of partnerships working with different populations. Conclusions Decision makers, practitioners, and community residents can incorporate what has been learned from the 25 community partnerships to prioritize active living policy, physical project, promotional, and programmatic strategies for work in different populations and settings. PMID:23079266

  1. Maps & minds : mapping through the ages

    USGS Publications Warehouse

    ,

    1984-01-01

    Throughout time, maps have expressed our understanding of our world. Human affairs have been influenced strongly by the quality of maps available to us at the major turning points in our history. "Maps & Minds" traces the ebb and flow of a few central ideas in the mainstream of mapping. Our expanding knowledge of our cosmic neighborhood stems largely from a small number of simple but grand ideas, vigorously pursued.

  2. Variable Sampling Mapping

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey, S.; Aronstein, David L.; Dean, Bruce H.; Lyon, Richard G.

    2012-01-01

    The performance of an optical system (for example, a telescope) is limited by the misalignments and manufacturing imperfections of the optical elements in the system. The impact of these misalignments and imperfections can be quantified by the phase variations imparted on light traveling through the system. Phase retrieval is a methodology for determining these variations. Phase retrieval uses images taken with the optical system, using a light source of known shape and characteristics. Unlike interferometric methods, which require an optical reference for comparison, and unlike Shack-Hartmann wavefront sensors, which require special optical hardware at the optical system's exit pupil, phase retrieval is an in situ, image-based method for determining the phase variations of light at the system's exit pupil. Phase retrieval can be used both as an optical metrology tool (during fabrication of optical surfaces and assembly of optical systems) and as a sensor used in active, closed-loop control of an optical system, to optimize performance. One class of phase-retrieval algorithms is the iterative transform algorithm (ITA). ITAs estimate the phase variations by iteratively enforcing known constraints in the exit pupil and at the detector, determined from modeled or measured data. The Variable Sampling Mapping (VSM) technique is a new method for enforcing these constraints in ITAs. VSM is an open framework for addressing a wide range of issues that have previously been considered detrimental to high-accuracy phase retrieval, including undersampled images, broadband illumination, images taken at or near best focus, chromatic aberrations, jitter or vibration of the optical system or detector, and dead or noisy detector pixels. The VSM is a model-to-data mapping procedure. In VSM, fully sampled electric fields at multiple wavelengths are modeled inside the phase-retrieval algorithm, and then these fields are mapped to intensities on the light detector, using the properties
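    For concreteness, the following is a generic textbook iterative transform algorithm (Gerchberg-Saxton style), the class of phase-retrieval methods that VSM extends. It is a hedged sketch, not NASA's VSM implementation: each iteration enforces the measured amplitude at the detector and the known amplitude at the pupil, keeping only the phase.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # Start from a random phase guess over the known pupil amplitude.
    field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # detector constraint
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))  # pupil constraint
    return np.angle(field)  # estimated phase at the exit pupil

# Toy problem: a smooth phase ramp across a uniformly illuminated pupil.
n = 32
pupil_amp = np.ones((n, n))
true_phase = np.outer(np.linspace(0, 1, n), np.linspace(0, 1, n))
focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))
est = gerchberg_saxton(pupil_amp, focal_amp)
```

    The detector-plane amplitude error is non-increasing under these alternating projections, which is why iterating drives the model toward consistency with the measured image.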

  3. Mapping: A Course.

    ERIC Educational Resources Information Center

    Whitmore, Paul M.

    1988-01-01

    Reviews the history of cartography. Describes the contributions of Strabo and Ptolemy in early maps. Identifies the work of Gerhard Mercator as the most important advancement in mapping. Discusses present mapping standards from history. (CW)

  4. Saliency Mapping Enhanced by Structure Tensor

    PubMed Central

    He, Zhiyong; Chen, Xin; Sun, Lining

    2015-01-01

    We propose a novel efficient algorithm for computing visual saliency, based on the computational architecture of the Itti model. As one of the well-known bottom-up visual saliency models, the Itti method evaluates three low-level features, color, intensity, and orientation, and then generates multiscale activation maps. Finally, a saliency map is aggregated by multiscale fusion. In our method, the orientation feature is replaced by edge and corner features extracted by a linear structure tensor. These features are then used to generate a contour activation map, and all activation maps are directly combined into a saliency map. Compared to the Itti method, our method is more computationally efficient, because the structure tensor is cheaper to compute than the Gabor filters used for the orientation feature, and our aggregation is a direct combination rather than a multiscale operator. Experiments on Bruce's dataset show that our method is a strong contender for the state of the art. PMID:26788050
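    The structure-tensor features the abstract refers to can be sketched as follows. This is a generic formulation (the eigenvalues of the per-pixel gradient outer product give an edge measure and a corner measure), not the authors' exact pipeline; smoothing of the tensor components is omitted for brevity.

```python
import numpy as np

def structure_tensor_features(img):
    """Per-pixel edge and corner measures from the (unsmoothed) structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy
    half_tr = (jxx + jyy) / 2.0
    root = np.sqrt(((jxx - jyy) / 2.0) ** 2 + jxy ** 2)
    lam1, lam2 = half_tr + root, half_tr - root  # eigenvalues, lam1 >= lam2
    return lam1 - lam2, lam2                     # edge measure, corner measure

# Vertical step edge: strong edge response at the boundary, no corner response.
img = np.zeros((9, 9))
img[:, 5:] = 1.0
edge, corner = structure_tensor_features(img)
```

    One dominant eigenvalue (lam1 >> lam2) marks an edge; two large eigenvalues mark a corner, which is why lam2 alone serves as the corner measure.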

  5. Mapping with the Masses: Google Map Maker

    NASA Astrophysics Data System (ADS)

    Pfund, J.

    2008-12-01

    After some 15,000 years of map making, which saw the innovations of cardinal directions, map projections for a spherical earth, and GIS analysis, many parts of the world still appear as the "Dark Continent" on modern maps. Google Map Maker intends to shine a light on these areas by tapping into the power of the GeoWeb. Google Map Maker is a website which allows you to collaborate with others on one unified map to add, edit, locate, describe, and moderate map features, such as roads, cities, businesses, parks, schools and more, for certain regions of the world using Google Maps imagery. In this session, we will show some examples of how people are mapping with this powerful tool as well as what they are doing with the data. With Google Map Maker, you can become a citizen cartographer and join the global network of users helping to improve the quality of maps and local information in your region of interest. You are invited to map the world with us!

  6. Intense fusion neutron sources

    NASA Astrophysics Data System (ADS)

    Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.

    2010-04-01

    The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10¹⁵-10²¹ neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10²⁰ neutrons/s may radically change the structure of the power industry and considerably influence fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.

  7. NEUTRON FLUX INTENSITY DETECTION

    DOEpatents

    Russell, J.T.

    1964-04-21

    A method of measuring the instantaneous intensity of neutron flux in the core of a nuclear reactor is described. A target gas capable of being transmuted by neutron bombardment to a product having a resonance absorption line at a particular microwave frequency is passed through the core of the reactor. Frequency-modulated microwave energy is passed through the target gas and the attenuation of the energy due to the formation of the transmuted product is measured. (AEC)

  8. Intense ion beam generator

    DOEpatents

    Humphries, Jr., Stanley; Sudan, Ravindra N.

    1977-08-30

    Methods and apparatus for producing intense megavolt ion beams are disclosed. In one embodiment, a reflex triode-type pulsed ion accelerator is described which produces ion pulses of more than 5 kiloamperes current with a peak energy of 3 MeV. In other embodiments, the device is constructed so as to focus the beam of ions for high concentration and ease of extraction, and magnetic insulation is provided to increase the efficiency of operation.

  9. Water intensity of transportation.

    PubMed

    King, Carey W; Webber, Michael E

    2008-11-01

    As the need for alternative transportation fuels increases, it is important to understand the many effects of introducing fuels based upon feedstocks other than petroleum. Water intensity in "gallons of water per mile traveled" is one method to measure these effects on the consumer level. In this paper we investigate the water intensity for light duty vehicle (LDV) travel using selected fuels based upon petroleum, natural gas, unconventional fossil fuels, hydrogen, electricity, and two biofuels (ethanol from corn and biodiesel from soy). Fuels more directly derived from fossil fuels are less water intensive than those derived either indirectly from fossil fuels (e.g., through electricity generation) or directly from biomass. The lowest water consumption (<0.15 gal H2O/mile) and withdrawal (<1 gal H2O/mile) rates are for LDVs using conventional petroleum-based gasoline and diesel, nonirrigated biofuels, hydrogen derived from methane or electrolysis via nonthermal renewable electricity, and electricity derived from nonthermal renewable sources. LDVs running on electricity and hydrogen derived from the aggregate U.S. grid (heavily based upon fossil fuel and nuclear steam-electric power generation) withdraw 5-20 times and consume nearly 2-5 times more water than by using petroleum gasoline. The water intensities (gal H2O/mile) of LDVs operating on biofuels derived from crops irrigated in the United States at average rates are 28 and 36 for corn ethanol (E85), for consumption and withdrawal respectively. For soy-derived biodiesel the average consumption and withdrawal rates are 8 and 10 gal H2O/mile. PMID:19031873
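    The "gallons of water per mile" metric is a simple ratio of the water embodied in a gallon of fuel to the miles that gallon delivers. A minimal sketch with hypothetical placeholder figures (not values from the paper):

```python
def water_intensity(gal_h2o_per_gal_fuel, miles_per_gal_fuel):
    """Water embodied in fuel production, per vehicle mile traveled."""
    return gal_h2o_per_gal_fuel / miles_per_gal_fuel

# e.g. a fuel whose production consumes 3 gal H2O per gallon, burned at 25 mpg:
print(water_intensity(3.0, 25.0))  # -> 0.12 gal H2O/mile
```

    The same arithmetic applies separately to consumption and withdrawal, which is why the paper reports the two rates as distinct intensities.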

  10. Measurement of Itch Intensity.

    PubMed

    Reich, Adam; Szepietowski, Jacek C

    2016-01-01

    Measurement of itch intensity is essential to properly evaluate pruritic disease severity, to understand the patients' needs and burden, and especially to assess treatment efficacy, particularly in clinical trials. However, measurement of itch remains a challenge, as, per definition, it is a subjective sensation and assessment of this symptom represents significant difficulty. Intensity of itch must be considered in relation to its duration, localization, course of symptoms, presence and type of scratch lesions, response to antipruritic treatment, and quality of life impairment. Importantly, perception of itch may also be confounded by different cofactors including but not limited to patient general condition and other coexisting ailments. In the current chapter we characterize the major methods of itch assessments that are used in daily clinical life and as research tools. Different methods of itch assessment have been developed; however, so far none is without limitations and any data on itch intensity should always be interpreted with caution. Despite these limitations, it is strongly recommended to implement itch measurement tools in routine daily practice, as it would help in proper assessment of patient clinical status. In order to improve evaluation of itch in research studies, it is recommended to use at least two independent methods, as such an approach should increase the validity of achieved results. PMID:27578068

  11. Intense near-infrared emission of 1.23 μm in erbium-doped low-phonon-energy fluorotellurite glass.

    PubMed

    Zhou, Bo; Tao, Lili; Chan, Clarence Yat-Yin; Tsang, Yuen Hong; Jin, Wei; Pun, Edwin Yue-Bun

    2013-07-01

    Intense near-infrared emission located at 1.23 μm wavelength, originating from the erbium (Er³⁺) ⁴S₃/₂ → ⁴I₁₁/₂ transition, is observed in Er³⁺-doped fluorotellurite glasses. This emission is mainly attributable to the relatively low phonon energy of the fluorotellurite glass host (~776 cm⁻¹). Judd-Ofelt analysis indicates a strong asymmetry and covalent environment between Er³⁺ ions and ligands in the host matrix. The emission cross-section was calculated to be 2.85×10⁻²¹ cm² by the Füchtbauer-Ladenburg equation, and population inversion is realized according to a simplified evaluation. The results suggest that the fluorotellurite glass system could be a promising candidate for the development of optical amplifiers and lasers operating in the relatively unexplored 1.2 μm wavelength region.
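    For reference, a common textbook form of the Füchtbauer-Ladenburg relation used for such cross-section estimates is given below; this is the generic expression (with measured emission intensity I(λ), radiative lifetime τ_rad, and refractive index n), and the authors' exact variant may differ.

```latex
\sigma_{\mathrm{em}}(\lambda) =
  \frac{\lambda^{5}\, I(\lambda)}
       {8\pi\, n^{2}\, c\, \tau_{\mathrm{rad}} \int \lambda\, I(\lambda)\, \mathrm{d}\lambda}
```

    The integral in the denominator normalizes the measured emission spectrum, so only the spectral shape, the lifetime, and the refractive index are needed to put the cross-section on an absolute scale.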

  12. Airborne infrared mineral mapping survey of Marysvale, Utah

    NASA Technical Reports Server (NTRS)

    Collins, W.; Chang, S. H.

    1982-01-01

    Infrared spectroradiometer survey results from flights over the Marysvale, Utah district show that hydrothermal alteration mineralogy can be mapped using very rapid and effective airborne techniques. The system detects alteration mineral absorption band intensities in the infrared spectral region with high sensitivity. The higher resolution spectral features and high spectral differences characteristic of the various clay and carbonate minerals are also readily identified by the instrument allowing the mineralogy to be mapped as well as the mineralization intensity.

  13. Mapping of wildlife habitat in Farmington Bay, Utah

    NASA Technical Reports Server (NTRS)

    Jaynes, R. A.; Willie, R. D. (Principal Investigator)

    1982-01-01

    Mapping was accomplished through the interpretation of high-altitude color infrared photography. The feasibility of utilizing LANDSAT digital data to augment the analysis was explored; complex patterns of wildlife habitat and confusion of spectral classes resulted in the decision to make limited use of LANDSAT data in the analysis. The final product is a map which delineates wildlife habitat at a scale of 1:24,000. The map is registered to and printed on a screened U.S.G.S. quadrangle base map. Screened delineations of shoreline contours, mapped from a previous study, are also shown on the map. Intensive field checking of the map was accomplished for the Farmington Bay Waterfowl Management Area in August 1981; other areas on the map received only spot field checking.

  14. Human Prostate Cancer Hallmarks Map.

    PubMed

    Datta, Dipamoy; Aftabuddin, Md; Gupta, Dinesh Kumar; Raha, Sanghamitra; Sen, Prosenjit

    2016-08-01

    Human prostate cancer is a complex heterogeneous disease that mainly affects elder male population of the western world with a high rate of mortality. Acquisitions of diverse sets of hallmark capabilities along with an aberrant functioning of androgen receptor signaling are the central driving forces behind prostatic tumorigenesis and its transition into metastatic castration resistant disease. These hallmark capabilities arise due to an intense orchestration of several crucial factors, including deregulation of vital cell physiological processes, inactivation of tumor suppressive activity and disruption of prostate gland specific cellular homeostasis. The molecular complexity and redundancy of oncoproteins signaling in prostate cancer demands for concurrent inhibition of multiple hallmark associated pathways. By an extensive manual curation of the published biomedical literature, we have developed Human Prostate Cancer Hallmarks Map (HPCHM), an onco-functional atlas of human prostate cancer associated signaling and events. It explores molecular architecture of prostate cancer signaling at various levels, namely key protein components, molecular connectivity map, oncogenic signaling pathway map, pathway based functional connectivity map etc. Here, we briefly represent the systems level understanding of the molecular mechanisms associated with prostate tumorigenesis by considering each and individual molecular and cell biological events of this disease process.

  15. Human Prostate Cancer Hallmarks Map

    PubMed Central

    Datta, Dipamoy; Aftabuddin, Md.; Gupta, Dinesh Kumar; Raha, Sanghamitra; Sen, Prosenjit

    2016-01-01

    Human prostate cancer is a complex heterogeneous disease that mainly affects elder male population of the western world with a high rate of mortality. Acquisitions of diverse sets of hallmark capabilities along with an aberrant functioning of androgen receptor signaling are the central driving forces behind prostatic tumorigenesis and its transition into metastatic castration resistant disease. These hallmark capabilities arise due to an intense orchestration of several crucial factors, including deregulation of vital cell physiological processes, inactivation of tumor suppressive activity and disruption of prostate gland specific cellular homeostasis. The molecular complexity and redundancy of oncoproteins signaling in prostate cancer demands for concurrent inhibition of multiple hallmark associated pathways. By an extensive manual curation of the published biomedical literature, we have developed Human Prostate Cancer Hallmarks Map (HPCHM), an onco-functional atlas of human prostate cancer associated signaling and events. It explores molecular architecture of prostate cancer signaling at various levels, namely key protein components, molecular connectivity map, oncogenic signaling pathway map, pathway based functional connectivity map etc. Here, we briefly represent the systems level understanding of the molecular mechanisms associated with prostate tumorigenesis by considering each and individual molecular and cell biological events of this disease process. PMID:27476486

  16. THE 21 cm 'OUTER ARM' AND THE OUTER-GALAXY HIGH-VELOCITY CLOUDS: CONNECTED BY KINEMATICS, METALLICITY, AND DISTANCE

    SciTech Connect

    Tripp, Todd M.; Song Limin

    2012-02-20

    Using high-resolution ultraviolet spectra obtained with the Hubble Space Telescope Space Telescope Imaging Spectrograph and the Far Ultraviolet Spectroscopic Explorer, we study the metallicity, kinematics, and distance of the gaseous 'outer arm' (OA) and the high-velocity clouds (HVCs) in the outer Galaxy. We detect the OA in a variety of absorption lines toward two QSOs, H1821+643 and HS0624+6907. We search for OA absorption toward eight Galactic stars and detect it in one case, which constrains the OA Galactocentric radius to 9 kpc

  17. Intensity modulated proton therapy.

    PubMed

    Kooy, H M; Grassberger, C

    2015-07-01

    Intensity modulated proton therapy (IMPT) implies the electromagnetic spatial control of well-circumscribed "pencil beams" of protons of variable energy and intensity. Proton pencil beams take advantage of the charged-particle Bragg peak-the characteristic peak of dose at the end of range-combined with the modulation of pencil beam variables to create target-local modulations in dose that achieves the dose objectives. IMPT improves on X-ray intensity modulated beams (intensity modulated radiotherapy or volumetric modulated arc therapy) with dose modulation along the beam axis as well as lateral, in-field, dose modulation. The clinical practice of IMPT further improves the healthy tissue vs target dose differential in comparison with X-rays and thus allows increased target dose with dose reduction elsewhere. In addition, heavy-charged-particle beams allow for the modulation of biological effects, which is of active interest in combination with dose "painting" within a target. The clinical utilization of IMPT is actively pursued but technical, physical and clinical questions remain. Technical questions pertain to control processes for manipulating pencil beams from the creation of the proton beam to delivery within the patient within the accuracy requirement. Physical questions pertain to the interplay between the proton penetration and variations between planned and actual patient anatomical representation and the intrinsic uncertainty in tissue stopping powers (the measure of energy loss per unit distance). Clinical questions remain concerning the impact and management of the technical and physical questions within the context of the daily treatment delivery, the clinical benefit of IMPT and the biological response differential compared with X-rays against which clinical benefit will be judged. It is expected that IMPT will replace other modes of proton field delivery. 
Proton radiotherapy, since its first practice 50 years ago, always required the highest level of

  18. High intensity ultrasound.

    PubMed

    ter Haar, G

    2001-03-01

    High-intensity focused ultrasound (HIFU) is a technique that was first investigated in the 1940s as a method of destroying selected regions within the brain in neurosurgical practice. An ultrasound beam can be brought to a tight focus at a distance from its source, and if sufficient energy is concentrated within the focus, the cells lying within this focal volume are killed, whereas those lying elsewhere are spared. This is a noninvasive method of producing selective and trackless tissue destruction in deep-seated targets in the body, without damage to overlying tissues. This field, known both as HIFU and focused ultrasound surgery (FUS), is reviewed in this article.

  19. Intensive Care Unit Psychosis

    PubMed Central

    Monks, Richard C.

    1984-01-01

    Patients who become psychotic in intensive care units are usually suffering from delirium. Underlying causes of delirium such as anxiety, sleep deprivation, sensory deprivation and overload, immobilization, an unfamiliar environment and pain, are often preventable or correctable. Early detection, investigation and treatment may prevent significant mortality and morbidity. The patient/physician relationship is one of the keystones of therapy. More severe cases may require psychopharmacological measures. The psychotic episode is quite distressing to the patient and family; an educative and supportive approach by the family physician may be quite helpful in patient rehabilitation. PMID:21279016

  20. Intensity modulated proton therapy

    PubMed Central

    Grassberger, C

    2015-01-01

    Intensity modulated proton therapy (IMPT) implies the electromagnetic spatial control of well-circumscribed “pencil beams” of protons of variable energy and intensity. Proton pencil beams take advantage of the charged-particle Bragg peak—the characteristic peak of dose at the end of range—combined with the modulation of pencil beam variables to create target-local modulations in dose that achieves the dose objectives. IMPT improves on X-ray intensity modulated beams (intensity modulated radiotherapy or volumetric modulated arc therapy) with dose modulation along the beam axis as well as lateral, in-field, dose modulation. The clinical practice of IMPT further improves the healthy tissue vs target dose differential in comparison with X-rays and thus allows increased target dose with dose reduction elsewhere. In addition, heavy-charged-particle beams allow for the modulation of biological effects, which is of active interest in combination with dose “painting” within a target. The clinical utilization of IMPT is actively pursued but technical, physical and clinical questions remain. Technical questions pertain to control processes for manipulating pencil beams from the creation of the proton beam to delivery within the patient within the accuracy requirement. Physical questions pertain to the interplay between the proton penetration and variations between planned and actual patient anatomical representation and the intrinsic uncertainty in tissue stopping powers (the measure of energy loss per unit distance). Clinical questions remain concerning the impact and management of the technical and physical questions within the context of the daily treatment delivery, the clinical benefit of IMPT and the biological response differential compared with X-rays against which clinical benefit will be judged. It is expected that IMPT will replace other modes of proton field delivery. Proton radiotherapy, since its first practice 50 years ago, always required the

  1. Mapping the Heart

    ERIC Educational Resources Information Center

    Hulse, Grace

    2012-01-01

    In this article, the author describes how her fourth graders made ceramic heart maps. The impetus for this project came from reading "My Map Book" by Sara Fanelli. This book is a collection of quirky, hand-drawn and collaged maps that diagram a child's world. There are maps of her stomach, her day, her family, and her heart, among others. The…

  2. Ground-Based Sensing System for Weed Mapping in Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A ground-based weed mapping system was developed to measure weed intensity and distribution in a cotton field. The weed mapping system includes WeedSeeker® PhD600 sensor modules to indicate the presence of weeds between rows, a GPS receiver to provide spatial information, and a data acquisition and ...
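    The fusion step such a system performs, combining presence/absence detections with GPS fixes into a field intensity grid, can be sketched as follows. The coordinate format, cell size, and readings below are hypothetical; the abstract does not specify the system's data format.

```python
from collections import defaultdict

def weed_map(readings, cell_m=5.0):
    """readings: (easting_m, northing_m, detected) tuples -> {cell: hit fraction}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for x, y, detected in readings:
        cell = (int(x // cell_m), int(y // cell_m))
        totals[cell] += 1
        hits[cell] += bool(detected)
    return {c: hits[c] / totals[c] for c in totals}

# Four hypothetical sensor readings spanning two 5 m grid cells:
readings = [(1.0, 1.0, True), (2.0, 1.5, False), (7.0, 1.0, True), (8.0, 2.0, True)]
print(weed_map(readings))  # -> {(0, 0): 0.5, (1, 0): 1.0}
```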

  3. Angola Seismicity MAP

    NASA Astrophysics Data System (ADS)

    Neto, F. A. P.; Franca, G.

    2014-12-01

    The purpose of this work was to study and document Angola's natural seismicity and to establish the first seismic database for the country, facilitating consultation of and searches for information on its seismic activity. The study was based on reports produced by the National Institute of Meteorology and Geophysics (INAMET) from 1968 to 2014, with emphasis on the work of Moreira (1968), who defined six seismogenic zones from macroseismic data. The most important is the Sá da Bandeira (Lubango)-Chibemba-Oncócua-Iona zone, covering the epicentral Quihita and Iona regions, geologically characterized by a transcontinental tectono-magmatic structure activated in the Mesozoic, with the emplacement of a wide variety of intrusive rocks of ultrabasic-alkaline, basic-alkaline, kimberlitic, and carbonatitic composition, strongly marked by intense tectonism, with several faults and fractures (locally called the corredor de Lucapa). The earthquake of May 9, 1948 reached intensity VI on the Mercalli-Sieberg scale (MCS) at Quihita, and in the Iona seismic activity of January 15, 1964 the main shock reached intensity VI-VII. The other five zones, whose seismicity rates are lower but cannot be neglected, are: Cassongue-Ganda-Massano de Amorim; Lola-Quilengues-Caluquembe; the Gago Coutinho zone; Cuima-Cachingues-Cambândua; and the Upper Zambezi zone. We also analyzed technical reports on the seismicity of the middle Kwanza region produced by Hidroproekt (GAMEK), as well as international seismic bulletins of the International Seismological Centre (ISC) and the United States Geological Survey (USGS); these data served for instrumental location of the epicenters. The compiled information made possible the creation of the first database of seismic data for Angola and the preparation of a seismicity map, reconfirming the main seismic zones defined by Moreira (1968) and the identification of a new seismic

  4. National Atlas maps

    USGS Publications Warehouse

    ,

    1991-01-01

    The National Atlas of the United States of America was published by the U.S. Geological Survey in 1970. Its 765 maps and charts are on 335 14- by 19-inch pages. Many of the maps span facing pages. It's worth a quick trip to the library just to leaf through all 335 pages of this book. Rapid scanning of its thematic maps yields rich insights to the geography of issues of continuing national interest. On most maps, the geographic patterns are still valid, though the data are not current. The atlas is out of print, but many of its maps can be purchased separately. Maps that span facing pages in the atlas are printed on one sheet. The maps dated after 1970 are either revisions of original atlas maps, or new maps published in atlas format. The titles of the separate maps are listed here.

  5. [Safety of intensive sweeteners].

    PubMed

    Lugasi, Andrea

    2016-04-01

    Nowadays low-calorie or intensive sweeteners are becoming more and more popular. These sweeteners can be placed on the market and used as food additives according to current EU legislation. Meanwhile, reports keep appearing that claim many of these artificial intensive sweeteners can cause cancer, with the highest risk attributed to aspartame. Low-calorie sweeteners, like all other additives, can be authorized only after a strict risk assessment procedure under current food law. Only after an additive has gone through this procedure can it be placed on the list of food additives, which specifies not only the range of foods in which the additive may be used but also the recommended maximum daily intake. The European Food Safety Authority, considering the latest scientific results, regularly re-evaluates the safety of sweeteners authorized earlier. To date, no evidence has been found to question the safety of the authorized intensive sweeteners. Orv. Hetil., 2016, 157(Suppl. 1), 14-28. PMID:27088715

  6. French intensive truck garden

    SciTech Connect

    Edwards, T D

    1983-01-01

    The French Intensive approach to truck gardening has the potential to provide substantially higher yields and lower per-acre costs than conventional farming techniques. It was the intent of this grant to demonstrate the gains that the French Intensive method has to offer. Locally grown food can greatly reduce transportation energy costs, and the method's higher efficiencies also reduce energy costs through lower fertilizer and pesticide usage. As with any farming technique, there is a substantial time interval before the soil fully recovers from major modifications; even so, there were major crop improvements despite the short time since the soil had been greatly disturbed. The grant had two other major objectives: first, the garden was managed under organic techniques, meaning no chemical fertilizers or synthetic pesticides were used; second, the garden was constructed so that a handicapped person in a wheelchair could manage it and attain a higher degree of self-sufficiency. As an overall result, I would say that the garden has taken the first step toward success, and each year should become better.

  7. High intensity proton synchrotrons

    NASA Astrophysics Data System (ADS)

    Craddock, M. K.

    1986-10-01

    Strong initiatives are being pursued in a number of countries for the construction of ``kaon factory'' synchrotrons capable of producing 100 times more intense proton beams than those available now from machines such as the Brookhaven AGS and CERN PS. Such machines would yield equivalent increases in the fluxes of secondary particles (kaons, pions, muons, antiprotons, hyperons and neutrinos of all varieties)—or cleaner beams for a smaller increase in flux—opening new avenues to various fundamental questions in both particle and nuclear physics. Major areas of investigation would be rare decay modes, CP violation, meson and hadron spectroscopy, antinucleon interactions, neutrino scattering and oscillations, and hypernuclear properties. Experience with the pion factories has already shown how high beam intensities make it possible to explore the ``precision frontier'' with results complementary to those achievable at the ``energy frontier''. This paper will describe proposals for upgrading the AGS and for building kaon factories in Canada, Europe, Japan and the United States, emphasizing the novel aspects of accelerator design required to achieve the desired performance (typically 100 μA at 30 GeV).

  8. Google Maps: You Are Here

    ERIC Educational Resources Information Center

    Jacobsen, Mikael

    2008-01-01

    Librarians use online mapping services such as Google Maps, MapQuest, Yahoo Maps, and others to check traffic conditions, find local businesses, and provide directions. However, few libraries are using one of Google Maps' most outstanding applications, My Maps, for the creation of enhanced and interactive multimedia maps. My Maps is a simple and…

  9. Map reading tools for map libraries.

    USGS Publications Warehouse

    Greenberg, G.L.

    1982-01-01

    Engineers, navigators and military strategists employ a broad array of mechanical devices to facilitate map use. A larger number of map users such as educators, students, tourists, journalists, historians, politicians, economists and librarians are unaware of the available variety of tools which can be used with maps to increase the speed and efficiency of their application and interpretation. This paper identifies map reading tools such as coordinate readers, protractors, dividers, planimeters, and symbol-templets according to a functional classification. Particularly, arrays of tools are suggested for use in determining position, direction, distance, area and form (perimeter-shape-pattern-relief). -from Author

  10. Portable intensity interferometry

    NASA Astrophysics Data System (ADS)

    Horch, Elliott P.; Camarata, Matthew A.

    2012-07-01

    A limitation of the current generation of long baseline optical interferometers is the need to make the light interfere prior to detection. This is unlike the radio regime where signals can be recorded fast enough to use electronics to accomplish the same result. This paper describes a modern optical intensity interferometer based on electronics with picosecond timing resolution. The instrument will allow for portable optical interferometry with much larger baselines than currently possible by using existing large telescopes. With modern electronics, the limiting magnitude of the technique at a 4-m aperture size becomes competitive with some amplitude-based interferometers. The instrumentation will permit a wireless mode of operation with GPS clocking technology, extending the work to extremely large baselines. We discuss the basic observing strategy, a planned observational program at the Lowell Observatory 1.8-m and 1.0-m telescopes, and the science that can realistically be done with this instrumentation.

  11. Intense Magnetism in Supernovae

    NASA Astrophysics Data System (ADS)

    Thompson, C.

    2002-05-01

    Observations of the Soft Gamma Repeaters and Anomalous X-ray Pulsars have provided strong evidence for a class of neutron stars with magnetic fields exceeding 10^15 G. This talk will overview the excellent prospects for generating such intense fields in a core-collapse supernova, with a focus on the violent convective motions believed to occur both inside and outside the neutrinosphere of the forming neutron star. I will also examine the effects of late fallback, and the role of (electron-type) neutrinos in aiding buoyant motions of the magnetic field. The case will be made that the SGRs and AXPs are distinguished from classical radio pulsars by a very rapid initial rotation of the neutron star.

  12. Humidification in intensive care.

    PubMed

    Joynt, G M; Lipman, J

    1994-03-01

    The normal physiological function of the upper respiratory tract is to filter and humidify inspired air. In intensive care units the upper respiratory tract is frequently bypassed. The importance of humidifying and warming the dry, cold, piped gas is well documented. The results of lack of adequate humidification include endotracheal tube obstruction, impairment of the mucociliary elevator and altered pulmonary function. Optimal levels of humidification are as yet undefined and useful clinical markers of adequate humidification are not available. As a result there is a bewildering array of humidification devices available at present, the most recent of which are heat and moisture exchangers with or without specific filtration properties. This article reviews available data on these humidification devices, and recommends an approach to their appropriate use, based on the probable physiological needs of individual patients.

  13. Ordered Restriction Maps of Saccharomyces cerevisiae Chromosomes Constructed by Optical Mapping

    NASA Astrophysics Data System (ADS)

    Schwartz, David C.; Li, Xiaojun; Hernandez, Luis I.; Ramnarain, Satyadarshan P.; Huff, Edward J.; Wang, Yu-Ker

    1993-10-01

    A light microscope-based technique for rapidly constructing ordered physical maps of chromosomes has been developed. Restriction enzyme digestion of elongated individual DNA molecules (about 0.2 to 1.0 megabases in size) was imaged by fluorescence microscopy after fixation in agarose gel. The size of the resulting individual restriction fragments was determined by relative fluorescence intensity and apparent molecular contour length. Ordered restriction maps were then created from genomic DNA without reliance on cloned or amplified sequences for hybridization or analytical gel electrophoresis. Initial application of optical mapping is described for Saccharomyces cerevisiae chromosomes.
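    Sizing fragments by relative fluorescence intensity, as described above, reduces to proportional allocation of the known total molecule size. A minimal sketch of the idea (the function name and the intensity values are illustrative, not data from the paper):

    ```python
    def fragment_sizes(intensities, total_kb):
        """Estimate restriction-fragment sizes from relative fluorescence
        intensity: each fragment's size is its share of the total measured
        intensity times the known total molecule size."""
        total_i = sum(intensities)
        return [total_kb * i / total_i for i in intensities]

    # A hypothetical 300 kb molecule cut into three fragments whose
    # integrated fluorescence intensities are in the ratio 1:2:3:
    sizes = fragment_sizes([50, 100, 150], total_kb=300.0)
    # → [50.0, 100.0, 150.0]
    ```

    In practice the estimate would be cross-checked against apparent contour length, as the abstract notes.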

  14. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is a high demand for the road information that forms a framework in this database. Road inventory mapping therefore has to be accurate and must eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that the method is designed to handle every type of traffic marking. In this paper, we discuss a highly accurate method, independent of individual human operators, that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher intensity points; (2) Generating a Triangulated Irregular Network (TIN) from higher intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points from the buffers using the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines using the extracted points; (7) Extracting lines from intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic marking mapping data.
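    The core of steps (7)–(8), recovering straight marking edges from intensity-change points, can be illustrated with a standard Hough transform. This is a minimal sketch of the general technique, not the authors' implementation; the function, parameters, and test points are invented:

    ```python
    import numpy as np

    def hough_lines(points, n_theta=180, rho_res=0.5, threshold=20):
        """Vote for lines rho = x*cos(theta) + y*sin(theta) through the
        given 2-D points; return (rho, theta) pairs whose accumulator
        cells reach the vote threshold."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        max_rho = np.hypot(*np.abs(points).max(axis=0)) + rho_res
        n_rho = int(2 * max_rho / rho_res) + 1
        acc = np.zeros((n_rho, n_theta), dtype=int)
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        for x, y in points:
            rhos = x * cos_t + y * sin_t
            idx = ((rhos + max_rho) / rho_res).astype(int)
            acc[idx, np.arange(n_theta)] += 1
        peaks = np.argwhere(acc >= threshold)
        return [(i * rho_res - max_rho, thetas[j]) for i, j in peaks]

    # Intensity-change points along the edge of a lane marking at x ≈ 5:
    pts = np.array([[5.0, float(y)] for y in range(30)])
    lines = hough_lines(pts, threshold=25)
    ```

    Each detected `(rho, theta)` pair describes one straight marking edge; the connection step (8) would then join collinear segments.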

  15. Mapping the Future of Map Librarianship.

    ERIC Educational Resources Information Center

    Lang, Laura

    1992-01-01

    Discussion of electronic versions of maps focuses on TIGER files (i.e., electronic maps distributed by the U.S. Bureau of the Census) and their manipulation using geographic information system (GIS) technology. Topics addressed include applications of GIS software, projects to improve access to TIGER files, and the role of GIS in libraries. (MES)

  16. An Atlas of ShakeMaps for Selected Global Earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. 
The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  17. Mapping Human Epigenomes

    PubMed Central

    Rivera, Chloe M.; Ren, Bing

    2013-01-01

    As the second dimension of the genome, the epigenome contains key information specific to every cell type. Thousands of human epigenome maps have been produced in recent years thanks to the rapid development of high-throughput epigenome mapping technologies. In this review, we discuss the current epigenome mapping toolkit and the utilities of epigenome maps. We focus particularly on mapping of DNA methylation, chromatin modification state and chromatin structures, and emphasize the use of epigenome maps to delineate human gene regulatory sequences and developmental programs. We also provide a perspective on the progress of the epigenomics field and challenges ahead. PMID:24074860

  18. Density Equalizing Map Projections

    1995-07-01

    A geographic map is mathematically transformed so that the subareas of the map are proportional to a given quantity such as population. In other words, population density is equalized over the entire map. The transformed map can be used as a display tool, or it can be statistically analyzed. For example, cases of disease plotted on the transformed map should be uniformly distributed at random if disease rates are everywhere equal. Geographic clusters of disease can be readily identified, and their statistical significance determined, on a density-equalized map.
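    The underlying idea, rescaling regions until density is constant, can be shown in one dimension. This is a toy sketch under invented values, not the paper's two-dimensional projection algorithm:

    ```python
    def equalize_1d(widths, populations):
        """Rescale region widths so each region's share of the total width
        equals its share of the total population (a 1-D cartogram):
        population density becomes constant across regions."""
        total_w = sum(widths)
        total_p = sum(populations)
        return [total_w * p / total_p for p in populations]

    # Three regions of equal geographic width but unequal population:
    new_widths = equalize_1d([1.0, 1.0, 1.0], [100, 200, 700])
    # Each region's population / new width is now the same.
    ```

    The two-dimensional case requires a continuous, area-preserving-in-aggregate deformation, but the proportionality target is the same.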

  19. Burn Severities, Fire Intensities, and Impacts to Major Vegetation Types from the Cerro Grande Fire

    SciTech Connect

    Balice, Randy G.; Bennett, Kathryn D.; Wright, Marjorie A.

    2004-12-15

    The Cerro Grande Fire resulted in major impacts and changes to the ecosystems that were burned. To partially document these effects, we estimated the acreage of major vegetation types that were burned at selected burn severity levels and fire intensity levels. To accomplish this, we adopted independently developed burn severity and fire intensity maps, in combination with a land cover map developed for habitat management purposes, as a basis for the analysis. To provide a measure of confidence in the acreage estimates, the accuracies of these maps were also assessed. In addition, two other maps of comparable quality were assessed for accuracy: one that was developed for mapping fuel risk and a second map that resulted from a preliminary application of an evolutionary computation software system, called GENIE.

  20. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
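    One of the simpler measures listed, correlating pairwise physical distances with pairwise feature distances, can be sketched as follows. This is an illustrative reconstruction of the general idea, not the authors' code:

    ```python
    import numpy as np

    def distance_correlation(positions, features):
        """Pearson correlation between pairwise map-position distances and
        pairwise feature-value distances; values near 1 indicate strong
        topographic order."""
        positions = np.asarray(positions, dtype=float)
        features = np.asarray(features, dtype=float)
        i, j = np.triu_indices(len(positions), k=1)
        d_pos = np.linalg.norm(positions[i] - positions[j], axis=1)
        d_feat = np.abs(features[i] - features[j])
        return np.corrcoef(d_pos, d_feat)[0, 1]

    # A perfectly linear map: selectivity increases along one cortical axis.
    pos = np.array([[x, 0.0] for x in range(10)])
    feat = np.arange(10.0)
    r = distance_correlation(pos, feat)  # → 1.0
    ```

    Statistical significance, as in the paper, would come from comparing `r` against a null distribution obtained by shuffling the feature values.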

  1. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
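    The uncertainty-weighted combination described above can be illustrated with an inverse-variance weighted mean. This is a sketch of the general statistical idea, not the ShakeMap code; the numbers are invented:

    ```python
    import numpy as np

    def combine(values, sigmas):
        """Inverse-variance weighted mean of estimates with 1-sigma
        uncertainties; also returns the variance of the combined value."""
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        v = np.asarray(values, dtype=float)
        mean = np.sum(w * v) / np.sum(w)
        var = 1.0 / np.sum(w)
        return mean, var

    # A direct ground-motion observation (small sigma), an estimate
    # converted from reported intensity, and a prediction-equation
    # estimate (large sigma), all in the same (e.g. ln PGA) units:
    est, var = combine([0.30, 0.45, 0.60], [0.10, 0.30, 0.60])
    ```

    The combined estimate is pulled toward the best-constrained datum, and the combined variance is smaller than any individual one, which is the "natural by-product" uncertainty grid the abstract mentions.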

  2. Intensity Frontier Instrumentation

    SciTech Connect

    Kettell S.; Rameika, R.; Tshirhart, B.

    2013-09-24

    The fundamental origin of flavor in the Standard Model (SM) remains a mystery. Despite the roughly eighty years since Rabi asked “Who ordered that?” upon learning of the discovery of the muon, we have not understood the reason that there are three generations or, more recently, why the quark and neutrino mixing matrices and masses are so different. The solution to the flavor problem would give profound insights into physics beyond the Standard Model (BSM) and tell us about the couplings and the mass scale at which the next level of insight can be found. The SM fails to explain all observed phenomena: new interactions and yet unseen particles must exist. They may manifest themselves by causing SM reactions to differ from often very precise predictions. The Intensity Frontier (1) explores these fundamental questions by searching for new physics in extremely rare processes or those forbidden in the SM. This often requires massive and/or extremely finely tuned detectors.

  3. Emotionally Intense Science Activities

    NASA Astrophysics Data System (ADS)

    King, Donna; Ritchie, Stephen; Sandhu, Maryam; Henderson, Senka

    2015-08-01

    Science activities that evoke positive emotional responses make a difference to students' emotional experience of science. In this study, we explored 8th Grade students' discrete emotions expressed during science activities in a unit on Energy. Multiple data sources including classroom videos, interviews and emotion diaries completed at the end of each lesson were analysed to identify individual students' emotions. Results from two representative students are presented as case studies. Using a theoretical perspective drawn from theories of emotions founded in sociology, two assertions emerged. First, during the demonstration activity, students experienced the emotions of wonder and surprise; second, during a laboratory activity, students experienced the intense positive emotions of happiness/joy. Characteristics of these activities that contributed to students' positive experiences are highlighted. The study found that activities evoking strong positive emotional experiences focused students' attention on the phenomenon they were learning and were recalled positively. Furthermore, such positive experiences may contribute to students' interest and engagement in science and longer-term memorability. Finally, implications for science teachers and pre-service teacher education are suggested.

  4. Seismicity map of the State of Maine

    USGS Publications Warehouse

    Stover, C.W.; Barnhard, L.M.; Reagor, B.G.; Algermissen, S.T.

    1981-01-01

    The earthquake data shown on this map and listed in table 1 are a list of earthquakes originally used in preparing the Seismic Risk Studies in the United States (Algermissen, 1969), recompiled and updated through 1977. These data have been reexamined, resulting in some revisions of epicenters and intensities as well as assignment of intensities to earthquakes that previously had none assigned. Intensity values were updated from new and additional data sources that were not available at the time of the original compilation. Some epicenters were relocated on the basis of new information. The data shown in table 1 are estimates of the most accurate epicenter, magnitude, and intensity of each earthquake, on the basis of historical and current information. Some aftershocks of large earthquakes are listed but are incomplete in many instances, especially for events that occurred before seismic instruments were in universal usage. Only earthquakes located within the borders of the State of Maine are listed. This map supersedes Miscellaneous Field Studies Map MF-845.

  5. New map data catalog

    NASA Astrophysics Data System (ADS)

    Map byproducts, including aerial photographs, color separations, map data in computer form, and other materials used in or produced during mapmaking, are described in a new catalog published by the U.S. Geological Survey.The 48-page hardcover catalog is the first listing of the unpublished USGS civilian cartographic holdings. It covers such items as mapping photographs, computer-enhanced LANDSAT pictures of Earth, cartographic data in computer form, microfilm and microfiche records, and a variety of features, including color separations, made in compiling and printing maps. The catalog also describes out-of-print maps available from USGS, along with land-use and land-cover maps, and other unusual items, such as slope maps and orthophotoquads. The catalog explains how to order advance copies of maps before they are published.

  6. Riparian Wetlands: Mapping

    EPA Science Inventory

    Riparian wetlands are critical systems that perform functions and provide services disproportionate to their extent in the landscape. Mapping wetlands allows for better planning, management, and modeling, but riparian wetlands present several challenges to effective mapping due t...

  7. Active Fire Mapping Program

    MedlinePlus


  8. Creative Concept Mapping.

    ERIC Educational Resources Information Center

    Brown, David S.

    2002-01-01

    Recommends the use of concept mapping in science teaching and proposes that it be presented as a creative activity. Includes a sample lesson plan of a potato stamp concept mapping activity for astronomy. (DDR)

  9. Using maps in genealogy

    USGS Publications Warehouse

    ,

    1994-01-01

    In genealogy, maps are most often used as clues to where public or other records about an ancestor are likely to be found. Searching for maps seldom begins until a newcomer to genealogy has mastered basic genealogical routines.

  10. Linkage map integration

    SciTech Connect

    Collins, A.; Teague, J.; Morton, N.E.; Keats, B.J.

    1996-08-15

    The algorithms that drive the map+ program for locus-oriented linkage mapping are presented. They depend on the enhanced location database program ldb+ to specify an initial comprehensive map that includes all loci in the summary lod file. Subsequently the map may be edited or order constrained and is automatically improved by estimating the location of each locus conditional on the remainder, beginning with the most discrepant loci. Operating characteristics permit rapid and accurate construction of linkage maps with several hundred loci. The map+ program also performs nondisjunction mapping with tests of nonstandard recombination. We have released map+ on Internet as a source program in the C language together with the location database that now includes the LODSOURCE database. 28 refs., 5 tabs.

  11. Spatial variability of "Did You Feel It?" intensity data: insights into sampling biases in historical earthquake intensity distributions

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    Recent parallel development of improved quantitative methods to analyze intensity distributions for historical earthquakes and of web‐based systems for collecting intensity data for modern earthquakes provides an opportunity to reconsider not only important individual historical earthquakes but also the overall characterization of intensity distributions for historical events. The focus of this study is a comparison between intensity distributions of historical earthquakes with those from modern earthquakes for which intensities have been determined by the U.S. Geological Survey “Did You Feel It?” (DYFI) website (see Data and Resources). As an example of a historical earthquake, I focus initially on the 1843 Marked Tree, Arkansas, event. Its magnitude has been previously estimated as 6.0–6.2. I first reevaluate the macroseismic effects of this earthquake, assigning intensities using a traditional approach, and estimate a preferred magnitude of 5.4. Modified Mercalli intensity (MMI) values for the Marked Tree earthquake are higher, on average, than those from the 2011 Mw 5.8 Mineral, Virginia, earthquake for distances ≤500 km but comparable or lower on average at larger distances, with a smaller overall felt extent. Intensity distributions for other moderate historical earthquakes reveal similar discrepancies; the discrepancy is also even more pronounced using earlier published intensities for the 1843 earthquake. I discuss several hypotheses to explain the discrepancies, including the possibility that intensity values associated with historical earthquakes are commonly inflated due to reporting/sampling biases. A detailed consideration of the DYFI intensity distribution for the Mineral earthquake illustrates how reporting and sampling biases can account for historical earthquake intensity biases as high as two intensity units and for the qualitative difference in intensity distance decays for modern versus historical events. Thus, intensity maps for

  12. Building Better Volcanic Hazard Maps Through Scientific and Stakeholder Collaboration

    NASA Astrophysics Data System (ADS)

    Thompson, M. A.; Lindsay, J. M.; Calder, E.

    2015-12-01

    All across the world information about natural hazards such as volcanic eruptions, earthquakes and tsunami is shared and communicated using maps that show which locations are potentially exposed to hazards of varying intensities. Unlike earthquakes and tsunami, which typically produce one dominant hazardous phenomenon (ground shaking and inundation, respectively), volcanic eruptions can produce a wide variety of phenomena that range from near-vent (e.g. pyroclastic flows, ground shaking) to distal (e.g. volcanic ash, inundation via tsunami), and that vary in intensity depending on the type and location of the volcano. This complexity poses challenges in depicting volcanic hazard on a map, and to date there has been no consistent approach, with a wide range of hazard maps produced and little evaluation of their relative efficacy. Moreover, in traditional hazard mapping practice, scientists analyse data about a hazard and then display the results on a map that is presented to stakeholders. This one-way, top-down approach to hazard communication does not necessarily translate into effective hazard education or, as tragically demonstrated by Nevado del Ruiz, Colombia, in 1985, its use in risk mitigation by civil authorities. Furthermore, messages taken away from a hazard map can be strongly influenced by its visual design. Thus, hazard maps are more likely to be useful, usable and used if relevant stakeholders are engaged during the hazard map process to ensure a) the map is designed in a relevant way and b) the map takes into account how users interpret and read different map features and designs. The IAVCEI Commission on Volcanic Hazards and Risk has recently launched a Hazard Mapping Working Group to collate some of these experiences in graphically depicting volcanic hazard from around the world, including Latin America and the Caribbean, with the aim of preparing some Considerations for Producing Volcanic Hazard Maps that may help map makers in the future.

  13. Wetland inundation mapping and change monitoring using landsat and airborne LiDAR data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a new approach for mapping wetland inundation change using Landsat and LiDAR intensity data. In this approach, LiDAR data were used to derive highly accurate reference subpixel inundation percentage (SIP) maps at the 30-m resolution. The reference SIP maps were then used to est...

  14. What Do Maps Show?

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

  15. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  16. Oil Exploration Mapping

    NASA Technical Reports Server (NTRS)

    1994-01-01

    After concluding an oil exploration agreement with the Republic of Yemen, Chevron International needed detailed geologic and topographic maps of the area. Chevron's remote sensing team used imagery from Landsat and SPOT, combining images into composite views. The project was successfully concluded and resulted in greatly improved base maps and unique topographic maps.

  17. Applications of Concept Mapping

    ERIC Educational Resources Information Center

    De Simone, Christina

    2007-01-01

    This article reviews three major uses of the concept-mapping strategies for postsecondary learning: the external representation of concept maps as an external scratch pad to represent major ideas and their organization, the mental construction of concept maps when students are seeking a time-efficient tool, and the electronic construction and…

  18. Reading Angles in Maps

    ERIC Educational Resources Information Center

    Izard, Véronique; O'Donnell, Evan; Spelke, Elizabeth S.

    2014-01-01

    Preschool children can navigate by simple geometric maps of the environment, but the nature of the geometric relations they use in map reading remains unclear. Here, children were tested specifically on their sensitivity to angle. Forty-eight children (age 47:15-53:30 months) were presented with fragments of geometric maps, in which angle sections…

  19. Mapping with Young Children.

    ERIC Educational Resources Information Center

    Sunal, Cynthia Szymanski; Warash, Bobbi Gibson

    Techniques for encouraging young children to discover the purpose and use of maps are discussed. Motor activity and topological studies form a base from which the teacher and children can build a mapping program of progressive sophistication. Concepts important to mapping include boundaries, regions, exteriors, interiors, holes, order, point of…

  20. Using maps in genealogy

    USGS Publications Warehouse

    ,

    1999-01-01

    Maps are one of many sources you may need to complete a family tree. In genealogical research, maps can provide clues to where our ancestors may have lived and where to look for written records about them. Beginners should master basic genealogical research techniques before starting to use topographic maps.

  1. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map, with intensities normalized and encoded in color space via false-color mapping. The system then increases or decreases the reference intensity following the map data, using a given optimization algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining spectral shape, made it possible to change the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing integration time with automatic adjustment of the reference intensity. We believe these findings can significantly aid the optimization of SNR and sensitivity for optical coherence tomography systems.
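    The adjust-and-recheck behaviour described can be sketched as a simple feedback loop. This is a toy model: the scalar "tilt" variable, the linear detector response, and the step size are all invented for illustration and are not the authors' algorithm:

    ```python
    def adjust_reference(read_intensity, target, tol=0.02, step=0.05,
                         max_iter=100):
        """Step a mirror-tilt setting (modelled as a scalar in [0, 1] that
        scales the reference arm intensity) until the measured intensity
        is within tol of the target. read_intensity(tilt) -> measurement."""
        tilt = 0.5
        for _ in range(max_iter):
            meas = read_intensity(tilt)
            err = target - meas
            if abs(err) <= tol:
                break
            tilt += step if err > 0 else -step
            tilt = min(max(tilt, 0.0), 1.0)
        return tilt, read_intensity(tilt)

    # Hypothetical detector model: measured intensity ∝ tilt.
    tilt, meas = adjust_reference(lambda t: 0.9 * t, target=0.6)
    ```

    In the actual system the "read" step would sample the false-color intensity map rather than a closed-form model.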

  2. Mapping Wildfires In Nearly Real Time

    NASA Technical Reports Server (NTRS)

    Nichols, Joseph D.; Parks, Gary S.; Denning, Richard F.; Ibbott, Anthony C.; Scott, Kenneth C.; Sleigh, William J.; Voss, Jeffrey M.

    1993-01-01

    Airborne infrared-sensing system flies over wildfire as infrared detector in system and navigation subsystem generate data transmitted to firefighters' camp. There, data plotted in form of map of fire, including approximate variations of temperature. System, called Firefly, reveals position of fires and approximate thermal intensities of regions within fires. Firefighters use information to manage and suppress fires. Used for other purposes with minor modifications, such as to spot losses of heat in urban areas and to map disease and pest infestation in vegetation.

  3. Mapping Hydrogen in the Galaxy, Galactic Halo, and Local Group with ALFA: The GALFA-H I Survey Starting with TOGS

    NASA Astrophysics Data System (ADS)

    Gibson, S. J.; Douglas, K. A.; Heiles, C.; Korpela, E. J.; Peek, J. E. G.; Putman, M. E.; Stanimirović, S.

    2008-08-01

    Radio observations of gas in the Milky Way and Local Group are vital for understanding how galaxies function as systems. The unique sensitivity of Arecibo's 305 m dish, coupled with the 7-beam Arecibo L-Band Feed Array (ALFA), provides an unparalleled tool for investigating the full range of interstellar phenomena traced by the H I 21 cm line. The GALFA (Galactic ALFA) H I Survey is mapping the entire Arecibo sky over a velocity range of -700 to +700 km s-1 with 0.2 km s-1 velocity channels and an angular resolution of 3.4'. We present highlights from the TOGS (Turn On GALFA Survey) portion of GALFA-H I, which is covering thousands of square degrees in commensal drift scan observations with the ALFALFA and AGES extragalactic ALFA surveys. This work is supported in part by the National Astronomy and Ionosphere Center, operated by Cornell University under cooperative agreement with the National Science Foundation.

  4. Linkage Analysis and QTL Mapping Using SNP Dosage Data in a Tetraploid Potato Mapping Population

    PubMed Central

    Hackett, Christine A.; McLean, Karen; Bryan, Glenn J.

    2013-01-01

    New sequencing and genotyping technologies have enabled researchers to generate high density SNP genotype data for mapping populations. In polyploid species, SNP data usually contain a new type of information, the allele dosage, which is not used by current methodologies for linkage analysis and QTL mapping. Here we extend existing methodology to use dosage data on SNPs in an autotetraploid mapping population. The SNP dosages are inferred from allele intensity ratios using normal mixture models. The steps of the linkage analysis (testing for distorted segregation, clustering SNPs, calculation of recombination fractions and LOD scores, ordering of SNPs and inference of parental phase) are extended to use the dosage information. For QTL analysis, the probability of each possible offspring genotype is inferred at a grid of locations along the chromosome from the ordered parental genotypes and phases and the offspring dosages. A normal mixture model is then used to relate trait values to the offspring genotypes and to identify the most likely locations for QTLs. These methods are applied to analyse a tetraploid potato mapping population of parents and 190 offspring, genotyped using an Infinium 8300 Potato SNP Array. Linkage maps for each of the 12 chromosomes are constructed. The allele intensity ratios are mapped as quantitative traits to check that their position and phase agrees with that of the corresponding SNP. This analysis confirms most SNP positions, and eliminates some problem SNPs to give high-density maps for each chromosome, with between 74 and 152 SNPs mapped and between 100 and 300 further SNPs allocated to approximate bins. Low numbers of double reduction products were detected. Overall 3839 of the 5378 polymorphic SNPs can be assigned putative genetic locations. This methodology can be applied to construct high-density linkage maps in any autotetraploid species, and could also be extended to higher autopolyploids. PMID:23704960
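The dosage-calling step (inferring a tetraploid SNP dosage from an allele intensity ratio via a normal mixture) can be sketched as follows. This is a simplified illustration with fixed component means and a common standard deviation; the authors' procedure fits the mixture to the data rather than assuming these values.

```python
# Illustrative sketch (not the authors' code): classify allele intensity
# ratios into tetraploid dosages 0-4 using a five-component normal
# mixture whose means sit at the expected ratios 0, 1/4, 1/2, 3/4, 1.
import math

MEANS = [0.0, 0.25, 0.5, 0.75, 1.0]   # expected ratio for dosage k
SIGMA = 0.05                           # assumed common component s.d.

def dosage_posteriors(ratio, weights=None):
    """Posterior probability of each dosage class given a ratio."""
    w = weights or [0.2] * 5
    dens = [wk * math.exp(-(ratio - m) ** 2 / (2 * SIGMA ** 2))
            for wk, m in zip(w, MEANS)]
    total = sum(dens)
    return [d / total for d in dens]

def call_dosage(ratio):
    """Most probable dosage (maximum a posteriori)."""
    post = dosage_posteriors(ratio)
    return max(range(5), key=lambda k: post[k])
```

A ratio of 0.27, for instance, falls closest to the dosage-1 component and is called accordingly.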

  5. Map projections for larger-scale mapping

    NASA Technical Reports Server (NTRS)

    Snyder, J. P.

    1982-01-01

    For the U.S. Geological Survey maps at 1:1,000,000-scale and larger, the most common projections are conformal, such as the Transverse Mercator and Lambert Conformal Conic. Projections for these scales should treat the Earth as an ellipsoid. In addition, the USGS has conceived and designed some new projections, including the Space Oblique Mercator, the first map projection designed to permit low-distortion mapping of the Earth from satellite imagery, continuously following the groundtrack. The USGS has programmed nearly all pertinent projection equations for inverse and forward calculations. These are used to plot maps or to transform coordinates from one projection to another. The projections in current use are described.

  6. Sodium Velocity Maps on Mercury

    NASA Technical Reports Server (NTRS)

    Potter, A. E.; Killen, R. M.

    2011-01-01

    The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line, corrected for the radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. The spectrograph slit was then stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensities, and Earthward velocities over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak Earthward velocity occurs in the equatorial region and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a velocity map that is similar.
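The underlying velocity computation (converting a Doppler shift of the sodium emission centroid to a line-of-sight velocity) amounts to a one-line formula. The sketch below is illustrative; the D2 rest wavelength is a rounded literature value, not a number taken from this abstract.

```python
# Sketch of the line-of-sight velocity computation implied above: a
# Doppler shift of the Na D2 centroid converted to velocity.
# Wavelengths in Angstroms; values are illustrative.
C_KM_S = 299792.458          # speed of light, km/s
NA_D2 = 5889.95              # approximate rest wavelength of Na D2, Angstrom

def doppler_velocity(observed, rest=NA_D2):
    """Earthward velocity (km/s) implied by a measured centroid shift."""
    return C_KM_S * (observed - rest) / rest

# A 7 milliAngstrom shift (the sampling interval quoted in the abstract)
# corresponds to roughly 0.36 km/s:
v = doppler_velocity(NA_D2 + 0.007)
```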

  7. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  8. On genetic map functions

    SciTech Connect

    Zhao, Hongyu; Speed, T.P.

    1996-04-01

    Various genetic map functions have been proposed to infer the unobservable genetic distance between two loci from the observable recombination fraction between them. Some map functions were found to fit data better than others. When there are more than three markers, multilocus recombination probabilities cannot be uniquely determined by the defining property of map functions, and different methods have been proposed to permit the use of map functions to analyze multilocus data. If for a given map function, there is a probability model for recombination that can give rise to it, then joint recombination probabilities can be deduced from this model. This provides another way to use map functions in multilocus analysis. In this paper we show that stationary renewal processes give rise to most of the map functions in the literature. Furthermore, we show that the interevent distributions of these renewal processes can all be approximated quite well by gamma distributions. 43 refs., 4 figs.
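Two of the classic map functions this literature builds on, Haldane's and Kosambi's, are easy to state in code. A reference sketch, not code from the paper:

```python
# The two classic genetic map functions: d is map distance in Morgans,
# r the recombination fraction (0 <= r < 0.5).
import math

def haldane(r):
    """Haldane map function (no interference): d = -ln(1 - 2r)/2."""
    return -math.log(1.0 - 2.0 * r) / 2.0

def kosambi(r):
    """Kosambi map function: d = ln((1 + 2r)/(1 - 2r))/4."""
    return math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r)) / 4.0

def haldane_inverse(d):
    """Recombination fraction implied by distance d under Haldane."""
    return (1.0 - math.exp(-2.0 * d)) / 2.0
```

For a given recombination fraction Kosambi yields a shorter map distance than Haldane, reflecting its allowance for interference.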

  9. Effect of noise intensity and illumination intensity on visual performance.

    PubMed

    Lin, Chin-Chiuan

    2014-10-01

    The results of Experiment 1 indicated that noise and illumination intensity have a significant effect on character identification performance, which was better at 30 dBA than at 60 and 90 dBA, and better at 500 and 800 lux than at 200 lux. However, the interaction of noise and illumination intensity did not significantly affect visual performance. The results of Experiment 2 indicated that noise and illumination intensity also had a significant effect on reading comprehension performance, which was better at 30 dBA than at 60 and 90 dBA, and better at 500 lux than at 200 and 800 lux. Furthermore, reading comprehension performance was better at 500 lux lighting and 30 dBA noise than with 800 lux and 90 dBA. High noise intensity impaired visual performance, and visual performance at normal illumination intensity was better than at other illumination intensities. The interaction of noise and illumination had a significant effect on reading comprehension. These results indicate that noise intensity lower than 30 dBA and illumination intensity approximately 500 lux might be the optimal conditions for visual work.

  11. Intensity attenuation in the Pannonian Basin

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Szanyi, Gyöngyvér

    2015-04-01

    Ground motion prediction equations play a key role in seismic hazard assessment. Earthquake hazard has to be expressed in macroseismic intensities in the case of seismic risk estimations, where a direct relation to the damage associated with ground shaking is needed. It can also be necessary for shake map generation, where the map is used for prompt notification of the public, disaster management officers, and insurance companies. Although only few instrumental strong motion data have been recorded in the Pannonian Basin, there are numerous historical reports of past earthquakes since the 1763 Komárom earthquake. Knowing the intensity attenuation and comparing it with relations for other areas - where instrumental strong motion data also exist - can help us choose from the existing instrumental ground motion prediction equations. The aim of this work is to determine an intensity attenuation formula for the inner part of the Pannonian Basin, which can be further used to find an adaptable ground motion prediction equation for the area. The crust below the Pannonian Basin is thin and warm and is overlain by thick sediments; thus the attenuation of seismic waves here differs from the attenuation in the Alp-Carpathian mountain belt. Therefore we have collected intensity data only from the inner part of the Pannonian Basin and defined the boundaries of the studied area by a crustal thickness of 30 km (Windhoffer et al., 2005). 90 earthquakes from 1763 until 2014 have a sufficient number of macroseismic data. Magnitudes of the events vary from 3.0 to 6.6. We have used individual intensity points to eliminate the subjectivity of drawing isoseismals; the number of available intensity data points is more than 3000. Careful quality control has been performed on the dataset. The different magnitude types in the earthquake catalogue used have been converted to local and moment magnitudes using relations determined for the Pannonian Basin. 
We applied the attenuation formula by Sorensen

  12. Principles of electroanatomic mapping.

    PubMed

    Bhakta, Deepak; Miller, John M

    2008-01-01

    Electrophysiologic testing and radiofrequency ablation have evolved as curative measures for a variety of rhythm disturbances. As experience in this field has grown, ablation is progressively being used to address more complex rhythm disturbances. Paralleling this trend are technological advancements to facilitate these efforts, including electroanatomic mapping (EAM). At present, several different EAM systems utilizing various technologies are available to facilitate mapping and ablation. Use of these systems has been shown to reduce fluoroscopic exposure and radiation dose, with less significant effects on procedural duration and success rates. Among the data provided by EAM are chamber reconstruction, tagging of important anatomic landmarks and ablation lesions, display of diagnostic and mapping catheters without using fluoroscopy, activation mapping, and voltage (or scar) mapping. Several EAM systems have specialized features, such as an enhanced ability to map non-sustained or hemodynamically unstable arrhythmias, the ability to display diagnostic as well as mapping catheter positions, and wide compatibility with a variety of catheters. Each EAM system has its strengths and weaknesses, and the system chosen must depend upon what data are required for procedural success (activation mapping, substrate mapping, cardiac geometry), the anticipated arrhythmia, the compatibility of the system with adjunctive tools (i.e. diagnostic and ablation catheters), and the operator's familiarity with the selected system. While EAM systems can offer significant assistance during an EP procedure, their incorrect or inappropriate application can substantially hamper mapping efforts and procedural success, and should not replace careful interpretation of data and strict adherence to electrophysiologic principles.

  13. Relationships between peak ground acceleration, peak ground velocity, and modified mercalli intensity in California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.

    1999-01-01

    We have developed regression relationships between Modified Mercalli Intensity (Imm) and peak ground acceleration (PGA) and velocity (PGV) by comparing horizontal peak ground motions to observed intensities for eight significant California earthquakes. For the limited range of Modified Mercalli intensities, we find that for peak acceleration with V ≤ Imm ≤ VIII, Imm = 3.66 log(PGA) - 1.66, and for peak velocity with V ≤ Imm ≤ IX, Imm = 3.47 log(PGV) + 2.35. From comparison with observed intensity maps, we find that a combined regression based on peak velocity for intensity > VII and on peak acceleration for intensity < VII is most suitable for reproducing observed Imm patterns, consistent with high intensities being related to damage (proportional to ground velocity) and with lower intensities determined by felt accounts (most sensitive to higher-frequency ground acceleration). These new Imm relationships are significantly different from the Trifunac and Brady (1975) correlations, which have been used extensively in loss estimation.
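The two regressions, plus the combined rule described above, can be expressed directly. The crossover logic below paraphrases the abstract and is a sketch, with PGA assumed in cm/s^2 (gal) and PGV in cm/s:

```python
# Sketch of the Wald et al. (1999) intensity regressions quoted in the
# abstract, combined as described: acceleration-based Imm for weaker
# shaking, velocity-based Imm once the acceleration estimate reaches VII.
import math

def mmi_from_pga(pga):
    """Imm from peak ground acceleration (cm/s^2)."""
    return 3.66 * math.log10(pga) - 1.66

def mmi_from_pgv(pgv):
    """Imm from peak ground velocity (cm/s)."""
    return 3.47 * math.log10(pgv) + 2.35

def combined_mmi(pga, pgv):
    """Use the PGA relation below Imm VII and the PGV relation above it."""
    imm_acc = mmi_from_pga(pga)
    return imm_acc if imm_acc < 7.0 else mmi_from_pgv(pgv)
```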

  14. SMOS sea surface salinity maps of the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Gabarro, Carolina; Olmedo, Estrella; Turiel, Antonio; Ballabrera-Poy, Joaquim; Martinez, Justino; Portabella, Marcos

    2016-04-01

    Salinity and temperature gradients drive the thermohaline circulation of the oceans and play a key role in the ocean-atmosphere coupling. The strong and direct interaction between the ocean and the cryosphere (primarily through sea ice and ice shelves) is also a key ingredient of the thermohaline circulation. The ESA Soil Moisture and Ocean Salinity (SMOS) mission, launched in 2009, has the objective of measuring soil moisture over the continents and sea surface salinity over the oceans. Although the mission was originally conceived for hydrological and oceanographic studies [1], SMOS is also making inroads in cryospheric monitoring. SMOS carries an innovative L-band (1.4 GHz, or 21-cm wavelength) passive interferometric radiometer (the so-called MIRAS) that measures the electromagnetic radiation emitted by the Earth's surface at about 50 km spatial resolution across a wide (1200 km) swath, with a 3-day revisit time at the equator and a more frequent one at the poles. Although the SMOS operating frequency offers almost the maximum sensitivity of the brightness temperature (TB) to sea surface salinity (SSS) variations, this sensitivity is still rather low: 90% of ocean SSS values span a range of brightness temperatures of only 5 K at L-band. The sensitivity is particularly low in cold waters, which implies that SSS retrieval requires high radiometric performance. Since the SMOS launch, SSS Level 3 maps have been distributed by several expert laboratories, including the Barcelona Expert Centre (BEC). However, since the TB sensitivity to SSS decreases with decreasing sea surface temperature (SST), large retrieval errors had been reported when retrieving salinity values at latitudes above 50°N. Two new processing algorithms, recently developed at BEC, have led to a considerable improvement of the SMOS data, allowing for the first time to derive SSS maps in cold waters. The first one is to empirically characterize and correct the systematic biases with six

  15. Macroseismic Intensities from the 2015 Gorkha, Nepal, Earthquake

    NASA Astrophysics Data System (ADS)

    Martin, S. S.; Hough, S. E.; Gahalaut, V. K.; Hung, C.

    2015-12-01

    The Mw 7.8 Gorkha, Nepal, earthquake, the largest central Himalayan earthquake in eighty-one years, yielded few instrumental recordings of strong motion. To supplement these, we collected 3800 detailed media and first-person accounts of macroseismic effects that included sufficiently detailed information to assign intensities. Our resultant macroseismic intensity map reveals the distribution of shaking in Nepal and the adjacent Gangetic basin. A key observation was that only in rare instances did near-field shaking intensities exceed intensity 8 on the European Macroseismic Scale (EMS), a level that corresponds to heavy damage or total collapse of many unengineered masonry structures. Within the Kathmandu Valley, intensities were generally 6-7 EMS, with generally lower intensities in the center of the valley than along the edges and foothills. This surprising (and fortunate) result can be explained by the nature of the mainshock ground motions, which were dominated by energy at periods significantly longer than the resonant periods of vernacular structures throughout Kathmandu. Outside the Kathmandu Valley the earthquake took a heavy toll on a number of remote villages, where many especially vulnerable masonry houses collapsed catastrophically in shaking equivalent to 7-8 EMS. Intensities were also generally higher along ridges and small hills, suggesting that topographic amplification played a significant role in controlling damage. The spatially rich intensity data set provides an opportunity to consider several key issues, including amplification of shaking in the Ganges basin and the distribution of shaking across the rupture zone. Of note, relatively higher intensities within the near-field region are found to correlate with zones of enhanced high-frequency source radiation imaged by teleseismic back-projection (Avouac et al., 2015). 
We further reconsider intensities from a sequence of earthquakes on 26 August 1833, and conclude the largest of these ruptured

  16. Catfish production using intensive aeration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For the last 3 years, researchers at UAPB and NWAC have been monitoring and verifying production yields in intensively aerated catfish ponds with aeration rates greater than 6 hp/acre. We now have three years of data on commercial catfish production in intensively aerated ponds. With stocking densi...

  17. Accelerators for Intensity Frontier Research

    SciTech Connect

    Derwent, Paul; /Fermilab

    2012-05-11

    In 2008, the Particle Physics Project Prioritization Panel identified three frontiers for research in high energy physics, the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. In this paper, I will describe how Fermilab is configuring and upgrading the accelerator complex, prior to the development of Project X, in support of the Intensity Frontier.

  18. Correction of multi-spectral MRI intensity non-uniformity via spatially regularized feature condensing

    NASA Astrophysics Data System (ADS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2003-05-01

    In MRI, image intensity non-uniformity is an adverse phenomenon that increases inter-tissue overlapping. The aim of this study was to provide a novel general framework, named regularized feature condensing (RFC), for condensing the distribution of image features, and to apply it to correct intensity non-uniformity via spatial regularization. The proposed RFC method is an iterative procedure consisting of four basic steps. First, creation of a feature space, which consists of multi-spectral image intensities and corresponding second derivatives. Second, estimation of the intensity condensing map in feature space, i.e. estimation of the increase of feature probability densities by a well-established mean shift procedure. Third, regularization of the intensity condensing map in image space, which yields the estimate of intensity non-uniformity. Fourth, applying the estimated non-uniformity correction to the input image. In this way, the intensity distributions of distinct tissues are gradually condensed via spatial regularization. The method was tested on simulated and real MR brain images for which gold standard segmentations were available. The results showed that the method did not induce additional intensity variations in simulated uniform images and efficiently removed intensity non-uniformity in real MR brain images. The proposed RFC method is a powerful, fully automated intensity non-uniformity correction method that makes no a priori assumptions on the image intensity distribution and provides non-parametric non-uniformity correction.
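The mean shift procedure at the heart of the second step can be illustrated in one dimension with a flat kernel. This toy sketch (not the RFC implementation, which operates in a multi-spectral feature space) shows how points "condense" onto local density maxima:

```python
# Minimal 1-D flat-kernel mean shift: each point repeatedly moves to the
# mean of its neighbors within the bandwidth, so the distribution
# condenses onto local density maxima (cluster modes).
def mean_shift(points, bandwidth=1.0, iters=20):
    pts = list(points)
    for _ in range(iters):
        new = []
        for x in pts:
            nbrs = [y for y in pts if abs(y - x) <= bandwidth]
            new.append(sum(nbrs) / len(nbrs))   # flat-kernel mean shift step
        pts = new
    return pts

# Two well-separated groups collapse onto their respective means.
modes = mean_shift([0.0, 0.1, 0.2, 5.0, 5.1], bandwidth=0.5)
```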

  19. An Extraordinary Magnetic Field Map of Mars

    NASA Technical Reports Server (NTRS)

    Connerney, J. E. P.; Acuna, M. H.; Ness, N. F.; Mitchell, D. L.; Lin, R. P.

    2004-01-01

    The Mars Global Surveyor spacecraft has completed two Mars years in a nearly circular polar orbit at a nominal altitude of 400 km. The Mars crust is at least an order of magnitude more intensely magnetized than that of the Earth [1], and intriguing in both its global distribution and geometric properties [2,3]. Measurements of the vector magnetic field have been used to map the magnetic field of crustal origin to high accuracy [4]. We present here a new map of the magnetic field with an order-of-magnitude increase in sensitivity to crustal magnetization. The map is assembled from more than two full years of MGS night-side observations and uses along-track filtering to greatly reduce noise due to external field variations.

  20. Segmentation and leaf sequencing for intensity modulated arc therapy

    SciTech Connect

    Gladwish, Adam; Oliver, Mike; Craig, Jeff; Chen, Jeff; Bauman, Glenn; Fisher, Barbara; Wong, Eugene

    2007-05-15

    A common method of generating intensity modulated radiation therapy (IMRT) plans consists of a three-step process: an optimized fluence intensity map (IM) for each beam is generated via inverse planning, this IM is then segmented into discrete levels, and finally the segmented map is translated into a set of MLC apertures via a leaf sequencing algorithm. To date, limited work has been done on this approach as it pertains to intensity modulated arc therapy (IMAT), specifically with regard to the latter two steps. Two determining factors separate IMAT segmentation and leaf sequencing from their IMRT equivalents: (1) the intrinsic 3D nature of the intensity maps (standard 2D maps plus the angular component), and (2) the requirement that the dynamic multileaf collimator (MLC) constraints be met using a minimum number of arcs. In this work, we illustrate a technique to create an IMAT plan that replicates Tomotherapy deliveries by applying IMAT-specific segmentation and leaf-sequencing algorithms to Tomotherapy output sinograms. We propose and compare two alternative segmentation techniques, a clustering method and a bottom-up segmentation (BUS) method. We also introduce a novel IMAT leaf-sequencing algorithm that explicitly takes leaf movement constraints into consideration. These algorithms were tested with 51 angular projections of the output leaf-open sinograms generated on the Hi-ART II treatment planning system (Tomotherapy Inc.). We present two geometric phantoms and two clinical scenarios as sample test cases. In each case 12 IMAT plans were created, ranging from 2 to 7 intensity levels, half generated using the BUS segmentation and half with the clustering method. We report on the number of arcs produced as well as differences between Tomotherapy output sinograms and segmented IMAT intensity maps. For each case, one plan for each segmentation method is chosen for full Monte Carlo dose calculation (NumeriX LLC) and dose volume histograms (DVH) are calculated.
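The clustering style of segmentation, reducing a continuous intensity map to a few discrete levels, can be illustrated with a simple 1-D k-means over beamlet intensities. The paper's own algorithms are more involved, so treat this as a sketch:

```python
# Sketch of clustering-based intensity-level segmentation: a plain 1-D
# k-means that quantizes beamlet intensities into k discrete levels.
def segment_levels(intensities, k=3, iters=50):
    lo, hi = min(intensities), max(intensities)
    # initialize cluster centers evenly across the intensity range
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in intensities:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)
        # recompute each center as the mean of its assigned intensities
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# Six beamlet intensities quantized into three discrete levels.
levels = segment_levels([0.0, 0.1, 0.5, 0.55, 1.0, 0.95], k=3)
```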

  1. Heat Capacity Mapping Mission

    NASA Technical Reports Server (NTRS)

    Nilsson, C. S.; Andrews, J. C.; Scully-Power, P.; Ball, S.; Speechley, G.; Latham, A. R. (Principal Investigator)

    1980-01-01

    The Tasman Front was delineated by an airborne expendable bathythermograph survey, and a Heat Capacity Mapping Mission (HCMM) IR image from the same day shows the same principal features as determined from ground truth. It is clear that digital enhancement of HCMM images is necessary to map ocean surface temperatures; when this is done, the Tasman Front and other oceanographic features can be mapped by this method, even through considerable scattered cloud cover.

  2. Regularity of mappings inverse to Sobolev mappings

    SciTech Connect

    Vodop'yanov, Sergei K

    2012-10-31

    For homeomorphisms φ: Ω → Ω′ between Euclidean domains in R^n, n ≥ 2, necessary and sufficient conditions ensuring that the inverse mapping belongs to a Sobolev class are investigated. The result obtained is used to describe a new two-index scale of homeomorphisms in some Sobolev class such that their inverses also form a two-index scale of mappings, in another Sobolev class. This scale involves quasiconformal mappings and also homeomorphisms in the Sobolev class W^1_{n-1} such that rank Dφ(x) ≤ n - 2 almost everywhere on the zero set of the Jacobian det Dφ(x). Bibliography: 65 titles.

  3. Film Dosimetry for Intensity Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Benites-Rengifo, J.; Martínez-Dávalos, A.; Celis, M.; Lárraga, J.

    2004-09-01

    Intensity Modulated Radiation Therapy (IMRT) is an oncology treatment technique that employs non-uniform beam intensities to deliver highly conformal radiation to the targets while minimizing doses to normal tissues and critical organs. A key element for a successful clinical implementation of IMRT is establishing a dosimetric verification process that can ensure that delivered doses are consistent with calculated ones for each patient. To this end we are developing a fast quality control procedure, based on film dosimetry techniques, to be applied to the 6 MV Novalis linear accelerator used for IMRT at the Instituto Nacional de Neurología y Neurocirugía (INNN) in Mexico City. The procedure includes measurements of individual fluence maps for a limited number of fields and dose distributions in 3D using extended dose-range radiographic film. However, the film response to radiation might depend on depth, energy, and field size, and therefore compromise the accuracy of measurements. In this work we present a study of the dependence of Kodak EDR2 film's response on depth, field size, and energy, compared with that of Kodak XV2 film. The first aim is to devise a fast and accurate method to determine the calibration curve of the film (optical density vs. dose), commonly called a sensitometric curve. This was accomplished by using three types of irradiation techniques: step-and-shoot, dynamic, and static fields.

  4. Chaotic Polynomial Maps

    NASA Astrophysics Data System (ADS)

    Zhang, Xu

    This paper introduces a class of polynomial maps in Euclidean spaces, investigates the conditions under which there exist Smale horseshoes and uniformly hyperbolic invariant sets, studies the chaotic dynamical behavior and strange attractors, and shows that some maps are chaotic in the sense of Li-Yorke or Devaney. This type of maps includes both the Logistic map and the Hénon map. For some diffeomorphisms with the expansion dimension equal to one or two in three-dimensional spaces, the conditions under which there exist Smale horseshoes and uniformly hyperbolic invariant sets on which the systems are topologically conjugate to the two-sided fullshift on finite alphabet are obtained; for some expanding maps, the chaotic region is analyzed by using the coupled-expansion theory and the Brouwer degree theory. For three types of higher-dimensional polynomial maps with degree two, the conditions under which there are Smale horseshoes and uniformly hyperbolic invariant sets are given, and the topological conjugacy between the maps on the invariant sets and the two-sided fullshift on finite alphabet is obtained. Some interesting maps with chaotic attractors and positive Lyapunov exponents in three-dimensional spaces are found by using computer simulations. In the end, two examples are provided to illustrate the theoretical results.
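The two named examples, the Logistic map and the Hénon map, can be iterated in a few lines; a toy illustration of the kinds of maps the paper generalizes:

```python
# The Logistic and Henon maps cited in the abstract, with standard
# chaotic parameter choices (r = 4 for Logistic; a = 1.4, b = 0.3 for Henon).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

def orbit(n, x0=0.2):
    """First n iterates of the Logistic map starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

xs = orbit(3)   # a short Logistic-map orbit from x0 = 0.2
```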

  5. BOREAS Hardcopy Maps

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nelson, Elizabeth; Newcomer, Jeffrey A.

    2000-01-01

    Boreal Ecosystem-Atmosphere Study (BOREAS) hardcopy maps are a collection of approximately 1,000 hardcopy maps representing the physical, climatological, and historical attributes of areas covering primarily the Manitoba and Saskatchewan provinces of Canada. These maps were collected by BOREAS Information System (BORIS) and Canada Centre for Remote Sensing (CCRS) staff to provide basic information about site positions, manmade features, topography, geology, hydrology, land cover types, fire history, climate, and soils of the BOREAS study region. These maps are not available for distribution through the BOREAS project but may be used as an on-site resource. Information is provided within this document for individuals who want to order copies of these maps from the original map source. Note that the maps are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of the maps that are available. This inventory listing is available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). For hardcopies of the individual maps, contact the sources provided.

  6. Intense low energy positron beams

    SciTech Connect

    Lynn, K.G.; Jacobsen, F.M.

    1993-12-31

    Intense positron beams are under development or being considered at several laboratories. Already today a few accelerator-based high-intensity, low-brightness e⁺ beams exist, producing of the order of 10⁸-10⁹ e⁺/sec. Several laboratories are aiming at high-intensity, high-brightness e⁺ beams with intensities greater than 10⁹ e⁺/sec and current densities of the order of 10¹³-10¹⁴ e⁺ s⁻¹ cm⁻². Intense e⁺ beams can be realized in two ways (or in a combination thereof): either through the development of more efficient β⁺ moderators or by increasing the available activity of β⁺ particles. In this review we shall mainly concentrate on the latter approach. In atomic physics the main thrust for these developments is to be able to measure differential and high-energy cross-sections in e⁺ collisions with atoms and molecules. Within solid state physics, high-intensity, high-brightness e⁺ beams are in demand in areas such as the re-emission e⁺ microscope, two-dimensional angular correlation of annihilation radiation, low-energy e⁺ diffraction, and other fields. Intense e⁺ beams are also important for the development of positronium beams, as well as exotic experiments such as Bose condensation and Ps liquid studies.

  7. Seismicity map of the State of Illinois

    USGS Publications Warehouse

    Stover, C.W.; Reagor, B.G.; Algermissen, S.T.

    1979-01-01

    The earthquake data shown on this map and listed in table 1 are a list of earthquakes that were originally used in preparing the Seismic Risk Studies in the United States (Algermissen, 1969) which have been recompiled and updated through 1977. The data have been reexamined and intensities assigned where none had been assigned before, on the basis of available data. Other intensity values were updated from new and additional data sources that were not available at the time of original compilation. Some epicenters were relocated on the basis of new information. The data shown in table 1 are estimates of the most accurate epicenter, magnitude, and intensity of each earthquake, on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed but are incomplete in many instances, especially for the ones that occurred before seismic instruments were in universal usage.

  8. Maps and Map Learning in Social Studies

    ERIC Educational Resources Information Center

    Bednarz, Sarah Witham; Acheson, Gillian; Bednarz, Robert S.

    2006-01-01

    Maps and other graphic representations have become increasingly important to geography and geographers. This is due to the development and widespread diffusion of geographic (spatial) technologies. As computers and silicon chips have become more capable and less expensive, geographic information systems (GIS), global positioning satellite…

  9. Lyman Alpha Mapping Project (LAMP) Brightness Maps

    NASA Astrophysics Data System (ADS)

    Retherford, Kurt D.; Gladstone, G.; Stern, S.; Egan, A. F.; Miles, P. F.; Parker, J. W.; Greathouse, T. K.; Davis, M. W.; Slater, D. C.; Kaufmann, D. E.; Versteeg, M. H.; Feldman, P. D.; Hurley, D. M.; Pryor, W. R.; Hendrix, A. R.

    2010-10-01

    The Lyman Alpha Mapping Project (LAMP) is an ultraviolet (UV) spectrograph on the Lunar Reconnaissance Orbiter (LRO) that is designed to map the lunar albedo at far-UV wavelengths. LAMP primarily measures interplanetary Hydrogen Lyman-alpha sky-glow and far-UV starlight reflected from the night-side lunar surface, including permanently shadowed regions (PSRs) near the poles. Dayside observations are also obtained. Brightness maps sorted by wavelength (including the Lyman-alpha wavelength of 121.6 nm) are reported for the polar regions, with a few regions of interest reported in more detail. LAMP's spectral range of 58 nm to 196 nm includes a water ice spectral feature near 160 nm, which provides a diagnostic tool for detecting water on the lunar surface that is complementary to recent discoveries using infrared and radio frequency techniques. Progress towards producing far-UV albedo maps and searching for water ice signatures will be reported. We'll discuss how LAMP data may address questions regarding how water is formed on the moon, transported through the lunar atmosphere, and deposited in the PSRs.

  10. Maps--Map Reading and Aerial Photography. [2 Units].

    ERIC Educational Resources Information Center

    Haakonsen, Harry O., Ed.

    Included in this set of materials are two units: (1) Maps and Map Reading and (2) Aerial Photography. Each unit includes student guide sheets, reference material, and tape script. A set of 35mm slides and audiotapes are usually used with the materials. The unit on Maps and Map Reading is designed to develop map reading skills and the use of these…

  11. Mapping Symbolic Development.

    ERIC Educational Resources Information Center

    Perry, Martha Davis; Wolf, Dennie Palmer

    In an investigation of the development of mapping as distinct from drawing, 39 middle and lower class Cambridge, Massachusetts children in kindergarten and first- and second-grades were shown a small three-dimensional model town, asked to make a smaller, three-dimensional copy of the model, and then asked to make a map showing each item in the…

  12. Mapping Microbial Biodiversity

    SciTech Connect

    Stoner, Daphne Lisabet; Micah C. Geary; White, Luke James; Lee, Randy Dean; Brizzee, Julie Ann; Rodman, A. C.; Rope, Ronald C

    2001-09-01

    We report the development of a prototype database that "maps" microbial diversity in the context of the geochemical and geological environment and geographic location. When it is fully implemented, scientists will be able to conduct database searches, construct maps containing the information of interest, download files, and enter data over the Internet.

  13. Chizu Task Mapping Tool

    SciTech Connect

    2014-07-01

    Chizu is a tool for mapping MPI processes or tasks to physical processors or nodes for optimizing communication performance. It takes the communication graph of a High Performance Computing (HPC) application and the interconnection topology of a supercomputer as input. It outputs a new MPI rank-to-processor mapping, which can be used when launching the HPC application.
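
    The input/output relationship described above can be illustrated with a toy greedy mapper. This is a hypothetical sketch, not Chizu's actual algorithm: it visits task pairs in order of decreasing traffic and tries to co-locate heavy communicators, with an assumed round-robin capacity rule:

```python
def greedy_mapping(comm, n_nodes):
    """comm: dict mapping (task_i, task_j) -> message volume.
    Returns a task -> node assignment that co-locates heavily
    communicating pairs until a node reaches its capacity."""
    tasks = sorted({t for pair in comm for t in pair})
    cap = -(-len(tasks) // n_nodes)          # tasks per node, rounded up
    load = [0] * n_nodes
    assign = {}
    # Visit task pairs in order of decreasing communication volume.
    for (a, b), _w in sorted(comm.items(), key=lambda kv: -kv[1]):
        for t in (a, b):
            if t in assign:
                continue
            # Prefer the partner's node if it still has room.
            partner = b if t == a else a
            node = assign.get(partner)
            if node is None or load[node] >= cap:
                node = min(range(n_nodes), key=lambda n: load[n])
            assign[t] = node
            load[node] += 1
    return assign

# Four tasks, two nodes: pairs (0,1) and (2,3) talk heavily.
comm = {(0, 1): 100, (2, 3): 90, (0, 2): 5, (1, 3): 1}
print(greedy_mapping(comm, 2))  # → {0: 0, 1: 0, 2: 1, 3: 1}
```

    The heavy pairs end up on the same node, which is the effect a topology-aware mapping tool aims for on real interconnects.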

  14. Map Skills with Meaning.

    ERIC Educational Resources Information Center

    Hamilton, Paula; And Others

    1993-01-01

    Presents hands-on activities to help teach elementary students map skills during Geography Awareness Week. The map skills are made fun by being incorporated into meaningful activities like learning about global resources, tracking the progress of sports teams, and conducting climate experiments in faraway places. (SM)

  15. MAP3K1

    PubMed Central

    Pham, Trang T.; Angus, Steven P.

    2013-01-01

    MAP3K1 is a member of the mitogen-activated protein kinase kinase kinase (MAP3K) family of serine/threonine kinases. MAP3K1 regulates JNK activation and is unique among human kinases in that it also encodes an E3 ligase domain that ubiquitylates c-Jun and ERK1/2. Full length MAP3K1 regulates cell migration and contributes to pro-survival signaling while its caspase 3-mediated cleavage generates a C-terminal kinase domain that promotes apoptosis. The critical function of MAP3K1 in cell fate decisions suggests that it may be a target for deregulation in cancer. Recent large-scale genomic studies have revealed that MAP3K1 copy number loss and somatic missense or nonsense mutations are observed in a significant number of different cancers, being most prominent in luminal breast cancer. The alteration of MAP3K1 in diverse cancer types demonstrates the importance of defining phenotypes for possible therapeutic targeting of tumor cell vulnerabilities created when MAP3K1 function is lost or gained. PMID:24386504

  16. Site and Watershed Mapping.

    ERIC Educational Resources Information Center

    Institute for Environmental Education, Cleveland, OH.

    Presented as part of a larger unit on watershed investigations are a slideshow script and a map and compass unit intended to help high school students better visualize the relationship between a water sampling site, the entire stream, community, and watershed. The script discusses features of a topographical map, shows how to read one, and…

  17. Temporal mapping and analysis

    NASA Technical Reports Server (NTRS)

    O'Hara, Charles G. (Inventor); Shrestha, Bijay (Inventor); Vijayaraj, Veeraraghavan (Inventor); Mali, Preeti (Inventor)

    2011-01-01

    A compositing process for selecting spatial data collected over a period of time, creating temporal data cubes from the spatial data, and processing and/or analyzing the data using temporal mapping algebra functions. In some embodiments, the process includes creating a masked cube from the temporal data cubes and computing a composite from the masked cube using temporal mapping algebra.
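
    The masked-cube compositing step described in the claim can be illustrated with NumPy. This is a generic sketch of the idea, not the patented implementation: stack time slices into a (time, y, x) cube, mask invalid pixels, and reduce along the time axis:

```python
import numpy as np

# Three 2x2 "scenes" collected over time, stacked into a (time, y, x) cube;
# NaN marks invalid (e.g. cloud-contaminated) pixels.
cube = np.array([[[1.0, 5.0], [2.0, 9.0]],
                 [[3.0, np.nan], [4.0, 1.0]],
                 [[2.0, 4.0], [np.nan, 2.0]]])

# Mask the invalid pixels, then composite with a per-pixel temporal
# reduction (here a maximum-value composite).
masked = np.ma.masked_invalid(cube)
composite = masked.max(axis=0)
print(composite)  # per-pixel maxima: [[3, 5], [4, 9]]
```

    Other temporal mapping algebra functions (mean, median, last-valid) are the same pattern with a different reduction along axis 0.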

  18. Acoustic mapping velocimetry

    NASA Astrophysics Data System (ADS)

    Muste, M.; Baranya, S.; Tsubaki, R.; Kim, D.; Ho, H.; Tsai, H.; Law, D.

    2016-05-01

    Knowledge of sediment dynamics in rivers is of great importance for various practical purposes. Despite its high relevance in riverine environment processes, the monitoring of sediment rates remains a major and challenging task for both suspended and bed load estimation. While the measurement of suspended load is currently an active area of testing with nonintrusive technologies (optical and acoustic), bed load measurement does not mark a similar progress. This paper describes an innovative combination of measurement techniques and analysis protocols that establishes the proof-of-concept for a promising technique, labeled herein Acoustic Mapping Velocimetry (AMV). The technique estimates bed load rates in rivers developing bed forms using a nonintrusive measurement approach. The raw information for AMV is collected with acoustic multibeam technology that in turn provides maps of the bathymetry over longitudinal swaths. As long as the acoustic maps can be acquired relatively quickly and the repetition rate for the mapping is commensurate with the movement of the bed forms, successive acoustic maps capture the progression of the bed form movement. Two-dimensional velocity maps associated with the bed form migration are obtained by applying algorithms typically used in particle image velocimetry to acoustic maps converted into gray-level images. Furthermore, use of the obtained acoustic and velocity maps in conjunction with analytical formulations (e.g., Exner equation) enables estimation of multidirectional bed load rates over the whole imaged area. This paper presents a validation study of the AMV technique using a set of laboratory experiments.
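
    The PIV-style step at the heart of AMV, finding the displacement of bed forms between two successive acoustic maps by cross-correlation, can be sketched in one dimension. This is an illustration of the correlation idea only, with a synthetic bed profile; real AMV works on two-dimensional gray-level images of multibeam bathymetry:

```python
def best_shift(a, b, max_shift):
    """Return the integer shift of b relative to a that maximizes
    their overlap correlation (the core of a PIV interrogation step)."""
    def corr(shift):
        pairs = [(a[i], b[i + shift]) for i in range(len(a))
                 if 0 <= i + shift < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)

# Synthetic bed elevation profile, and the same profile after the bed
# form migrated 3 cells downstream between two acoustic maps.
profile  = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0, 0, 0]
migrated = [0, 0, 0, 0, 0, 1, 3, 5, 3, 1, 0, 0]
dx = best_shift(profile, migrated, max_shift=5)
print(dx)  # → 3 cells; migration velocity = dx * cell_size / dt
```

    Dividing the recovered displacement by the map repetition interval gives the bed form celerity that feeds the Exner-equation bed load estimate.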

  19. The Map Corner.

    ERIC Educational Resources Information Center

    Cheyney, Arnold B.; Capone, Donald L.

    This teaching resource is aimed at helping students develop the skills necessary to locate places on the earth. Designed as a collection of map skill exercises rather than a sequential program of study, this program expects that students have access to and some knowledge of how to use globes, maps, atlases, and encyclopedias. The volume contains 6…

  20. What do maps show?

    USGS Publications Warehouse

    ,

    1994-01-01

    The purpose of the teaching package is to help students understand and use maps. The U.S. Geological Survey (USGS) has provided the package as a service to educators so that more Americans will learn to understand the world of information on maps. Everything in the package teaches and reinforces geographic skills that are required in your curriculum.

  1. Multipole expansions and intense fields

    NASA Astrophysics Data System (ADS)

    Reiss, Howard R.

    1984-02-01

    In the context of two-body bound-state systems subjected to a plane-wave electromagnetic field, it is shown that high field intensity introduces a distinction between long-wavelength approximation and electric dipole approximation. This distinction is gauge dependent, since it is absent in Coulomb gauge, whereas in "completed" gauges of Göppert-Mayer type the presence of high field intensity makes electric quadrupole and magnetic dipole terms of importance equal to electric dipole at long wavelengths. Another consequence of high field intensity is that multipole expansions lose their utility in view of the equivalent importance of a number of low-order multipole terms and the appearance of large-magnitude terms which defy multipole categorization. This loss of the multipole expansion is gauge independent. Also gauge independent is another related consequence of high field intensity, which is the intimate coupling of center-of-mass and relative coordinate motions in a two-body system.

  2. Intensity patterns in eastern Asia.

    USGS Publications Warehouse

    Evernden, J.F.

    1983-01-01

    Investigation of the intensity patterns of earthquakes of E Asia indicates a strong regional pattern of attenuation parameter k and systematic correlation of this pattern with topography, P residuals, and level of seismicity as in the USA.-Author

  3. Neutral particle beam intensity controller

    DOEpatents

    Dagenhart, W.K.

    1984-05-29

    The neutral beam intensity controller is based on selected magnetic defocusing of the ion beam prior to neutralization. The defocused portion of the beam is dumped onto a beam dump disposed perpendicular to the beam axis. Selective defocusing is accomplished by means of a magnetic field generator disposed about the neutralizer so that the field is transverse to the beam axis. The magnetic field intensity is varied to provide the selected partial beam defocusing of the ions prior to neutralization. The desired focused neutral beam portion passes along the beam path through a defining aperture in the beam dump, thereby controlling the desired fraction of neutral particles transmitted to a utilization device without altering the kinetic energy level of the desired neutral particle fraction. By proper selection of the magnetic field intensity, virtually zero through 100% intensity control of the neutral beam is achieved.

  4. Seismic Hazard Assessment of Tehran Based on Arias Intensity

    SciTech Connect

    Amiri, G. Ghodrati; Mahmoodi, H.; Amrei, S. A. Razavian

    2008-07-08

    In this paper, probabilistic seismic hazard assessment of Tehran for the Arias intensity parameter is performed. Tehran is the capital and most populated city of Iran. From economical, political, and social points of view, Tehran is the most significant city of Iran. Since catastrophic earthquakes have occurred in Tehran and its vicinity in previous centuries, probabilistic seismic hazard assessment of this city for the Arias intensity parameter is useful. Iso-intensity contour maps of Tehran on the basis of different attenuation relationships for different earthquake periods are plotted. Maps of iso-intensity points in the Tehran region are presented using proportional attenuation relationships for rock and soil beds for two hazard levels of 10% and 2% in 50 years. Seismicity parameters, on the basis of historical and instrumental earthquakes for a time period that begins in the 4th century BC and ends at the present time, are calculated using two methods. For the calculation of seismicity parameters, an earthquake catalogue with a radius of 200 km around Tehran has been used. The SEISRISK III software has been employed. Effects of different parameters, such as seismicity parameters, fault rupture length relationships, and attenuation relationships, are considered using a logic tree.
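
    The probabilistic core of such an assessment, combining an earthquake recurrence law with a Poisson exceedance model over an exposure time, can be sketched as follows. This is a generic PSHA fragment with made-up parameter values, not the attenuation relationships or logic-tree weights of the Tehran study:

```python
import math

def annual_exceedance_rate(a, b, m_threshold):
    # Gutenberg-Richter recurrence: log10 N(>= m) = a - b * m.
    return 10 ** (a - b * m_threshold)

def exceedance_probability(rate, years):
    # Poisson occurrence model: P(at least one event in t years).
    return 1.0 - math.exp(-rate * years)

# Hypothetical seismicity parameters, for illustration only.
rate = annual_exceedance_rate(a=4.0, b=1.0, m_threshold=6.5)
p50 = exceedance_probability(rate, years=50)
print(round(p50, 3))  # → 0.146
```

    Hazard levels such as "10% in 50 years" come from inverting this relation: finding the ground-motion level whose annual exceedance rate gives the target probability over the exposure period.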

  5. Geologic map of Mars

    USGS Publications Warehouse

    Tanaka, Kenneth L.; Skinner, James A.; Dohm, James M.; Irwin, Rossman P.; Kolb, Eric J.; Fortezzo, Corey M.; Platz, Thomas; Michael, Gregory G.; Hare, Trent M.

    2014-01-01

    This global geologic map of Mars, which records the distribution of geologic units and landforms on the planet's surface through time, is based on unprecedented variety, quality, and quantity of remotely sensed data acquired since the Viking Orbiters. These data have provided morphologic, topographic, spectral, thermophysical, radar sounding, and other observations for integration, analysis, and interpretation in support of geologic mapping. In particular, the precise topographic mapping now available has enabled consistent morphologic portrayal of the surface for global mapping (whereas previously used visual-range image bases were less effective, because they combined morphologic and albedo information and, locally, atmospheric haze). Also, thermal infrared image bases used for this map tended to be less affected by atmospheric haze and thus are reliable for analysis of surface morphology and texture at even higher resolution than the topographic products.

  6. Bodily maps of emotions

    PubMed Central

    Nummenmaa, Lauri; Glerean, Enrico; Hari, Riitta; Hietanen, Jari K.

    2014-01-01

    Emotions are often felt in the body, and somatosensory feedback has been proposed to trigger conscious emotional experiences. Here we reveal maps of bodily sensations associated with different emotions using a unique topographical self-report method. In five experiments, participants (n = 701) were shown two silhouettes of bodies alongside emotional words, stories, movies, or facial expressions. They were asked to color the bodily regions whose activity they felt increasing or decreasing while viewing each stimulus. Different emotions were consistently associated with statistically separable bodily sensation maps across experiments. These maps were concordant across West European and East Asian samples. Statistical classifiers distinguished emotion-specific activation maps accurately, confirming independence of topographies across emotions. We propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps. Perception of these emotion-triggered bodily changes may play a key role in generating consciously felt emotions. PMID:24379370

  7. Iconicity as structure mapping

    PubMed Central

    Emmorey, Karen

    2014-01-01

    Linguistic and psycholinguistic evidence is presented to support the use of structure-mapping theory as a framework for understanding effects of iconicity on sign language grammar and processing. The existence of structured mappings between phonological form and semantic mental representations has been shown to explain the nature of metaphor and pronominal anaphora in sign languages. With respect to processing, it is argued that psycholinguistic effects of iconicity may only be observed when the task specifically taps into such structured mappings. In addition, language acquisition effects may only be observed when the relevant cognitive abilities are in place (e.g. the ability to make structural comparisons) and when the relevant conceptual knowledge has been acquired (i.e. information key to processing the iconic mapping). Finally, it is suggested that iconicity is better understood as a structured mapping between two mental representations than as a link between linguistic form and human experience. PMID:25092669

  8. Iconicity as structure mapping.

    PubMed

    Emmorey, Karen

    2014-09-19

    Linguistic and psycholinguistic evidence is presented to support the use of structure-mapping theory as a framework for understanding effects of iconicity on sign language grammar and processing. The existence of structured mappings between phonological form and semantic mental representations has been shown to explain the nature of metaphor and pronominal anaphora in sign languages. With respect to processing, it is argued that psycholinguistic effects of iconicity may only be observed when the task specifically taps into such structured mappings. In addition, language acquisition effects may only be observed when the relevant cognitive abilities are in place (e.g. the ability to make structural comparisons) and when the relevant conceptual knowledge has been acquired (i.e. information key to processing the iconic mapping). Finally, it is suggested that iconicity is better understood as a structured mapping between two mental representations than as a link between linguistic form and human experience.

  9. Bodily maps of emotions.

    PubMed

    Nummenmaa, Lauri; Glerean, Enrico; Hari, Riitta; Hietanen, Jari K

    2014-01-14

    Emotions are often felt in the body, and somatosensory feedback has been proposed to trigger conscious emotional experiences. Here we reveal maps of bodily sensations associated with different emotions using a unique topographical self-report method. In five experiments, participants (n = 701) were shown two silhouettes of bodies alongside emotional words, stories, movies, or facial expressions. They were asked to color the bodily regions whose activity they felt increasing or decreasing while viewing each stimulus. Different emotions were consistently associated with statistically separable bodily sensation maps across experiments. These maps were concordant across West European and East Asian samples. Statistical classifiers distinguished emotion-specific activation maps accurately, confirming independence of topographies across emotions. We propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps. Perception of these emotion-triggered bodily changes may play a key role in generating consciously felt emotions.

  10. Gamma radiation field intensity meter

    DOEpatents

    Thacker, L.H.

    1994-08-16

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode. 4 figs.

  11. Gamma radiation field intensity meter

    DOEpatents

    Thacker, Louis H.

    1994-01-01

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode.

  12. Gamma radiation field intensity meter

    DOEpatents

    Thacker, Louis H.

    1995-01-01

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode.

  13. High intensity protons in RHIC

    SciTech Connect

    Montag, C.; Ahrens, L.; Blaskiewicz, M.; Brennan, J. M.; Drees, K. A.; Fischer, W.; Huang, H.; Minty, M.; Robert-Demolaize, G.; Thieberger, P.; Yip, K.

    2012-01-05

    During the 2012 summer shutdown a pair of electron lenses will be installed in RHIC, allowing the beam-beam parameter to be increased by roughly 50 percent. To realize the corresponding luminosity increase, bunch intensities have to be increased by 50 percent, to 2.5 × 10¹¹ protons per bunch. We list the various RHIC subsystems that are most affected by this increase, and propose beam studies to ensure their readiness. The proton luminosity in RHIC is presently limited by the beam-beam effect. To overcome this limitation, electron lenses will be installed in IR10. With the help of these devices, the head-on beam-beam kick experienced during proton-proton collisions will be partially compensated, allowing for a larger beam-beam tune shift at these collision points, and therefore increasing the luminosity. This will be accomplished by increasing the proton bunch intensity from the presently achieved 1.65 × 10¹¹ protons per bunch in 109 bunches per beam to 2.5 × 10¹¹, thus roughly doubling the luminosity. In a further upgrade we aim for bunch intensities up to 3 × 10¹¹ protons per bunch. With RHIC originally being designed for a bunch intensity of 1 × 10¹¹ protons per bunch in 56 bunches, this six-fold increase in the total beam intensity by far exceeds the design parameters of the machine, and therefore potentially of its subsystems. In this note, we present a list of major subsystems that are of potential concern regarding this intensity upgrade, show their demonstrated performance at present intensities, and propose measures and beam experiments to study their readiness for the projected future intensities.
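
    Since head-on luminosity scales with the product of the colliding bunch intensities (N squared for equal bunches), the "roughly doubling" figure quoted above can be checked with a back-of-envelope estimate; this sketch ignores any accompanying change in emittance or beam size:

```python
# Per-bunch luminosity scales as N1 * N2 for head-on collisions;
# with equal bunches in both rings, that is N**2.
n_old, n_new = 1.65e11, 2.5e11
gain = (n_new / n_old) ** 2
print(round(gain, 2))  # → 2.3, i.e. "roughly doubling" the luminosity
```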

  14. Gamma radiation field intensity meter

    DOEpatents

    Thacker, L.H.

    1995-10-17

    A gamma radiation intensity meter measures dose rate of a radiation field. The gamma radiation intensity meter includes a tritium battery emitting beta rays generating a current which is essentially constant. Dose rate is correlated to an amount of movement of an electroscope element charged by the tritium battery. Ionizing radiation decreases the voltage at the element and causes movement. A bleed resistor is coupled between the electroscope support element or electrode and the ionization chamber wall electrode. 4 figs.

  15. Did you feel it? Community-made earthquake shaking maps

    USGS Publications Warehouse

    Wald, D.J.; Wald, L.A.; Dewey, J.W.; Quitoriano, Vince; Adams, Elisabeth

    2001-01-01

    Since the early 1990's, the magnitude and location of an earthquake have been available within minutes on the Internet. Now, as a result of work by the U.S. Geological Survey (USGS) and with the cooperation of various regional seismic networks, people who experience an earthquake can go online and share information about its effects to help create a map of shaking intensities and damage. Such 'Community Internet Intensity Maps' (CIIM's) contribute greatly in quickly assessing the scope of an earthquake emergency, even in areas lacking seismic instruments.

  16. Seismicity map of the state of North Carolina

    USGS Publications Warehouse

    Reagor, B.G.; Stover, C.W.; Algermissen, S.T.

    1987-01-01

    The latitude and longitude coordinates of each epicenter were rounded to the nearest tenth of a degree and sorted so that all identical locations were grouped and counted. These locations are represented on the map by a triangle. The number of earthquakes at each location is shown on the map by the arabic number to the right of the triangle. A Roman numeral to the left of a triangle is the maximum Modified Mercalli intensity (Wood and Neumann, 1931) of all earthquakes at that geographic location. The absence of an intensity value indicates that no intensities have been assigned to earthquakes at that location. The year shown below each triangle is the latest year for which the maximum intensity was recorded.
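
    The grouping procedure described above, rounding coordinates to the nearest tenth of a degree, counting events per location, and keeping the maximum intensity with the latest year it was recorded, can be sketched directly. The event tuples below are hypothetical, not taken from the map's catalog:

```python
from collections import defaultdict

def group_epicenters(events):
    """events: list of (lat, lon, modified_mercalli_or_None, year).
    Returns {(lat, lon): (count, max_intensity, latest_year_of_max)}."""
    groups = defaultdict(lambda: [0, None, None])
    for lat, lon, mmi, year in events:
        key = (round(lat, 1), round(lon, 1))   # nearest tenth of a degree
        g = groups[key]
        g[0] += 1                              # event count at this location
        if mmi is not None and (g[1] is None or mmi > g[1] or
                                (mmi == g[1] and year > g[2])):
            g[1], g[2] = mmi, year             # track max MMI and its latest year
    return {k: tuple(v) for k, v in groups.items()}

events = [(35.91, -82.07, 5, 1916), (35.94, -82.12, 5, 1928),
          (35.91, -82.07, 4, 1940)]
print(group_epicenters(events))  # → {(35.9, -82.1): (3, 5, 1928)}
```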

  17. Parametric Mapping of Contrasted Ovarian Transvaginal Sonography

    PubMed Central

    Korhonen, Katrina; Moore, Ryan; Lyshchik, Andrej; Fleischer, Arthur C.

    2014-01-01

    The purpose of this study was to assess the accuracy of parametric analysis of transvaginal contrast-enhanced ultrasound (TV-CEUS) for distinguishing benign versus malignant ovarian masses. A total of 48 ovarian masses (37 benign and 11 borderline/malignant) were examined with TV-CEUS (Definity, Lantheus, North Billerica, MA; Philips iU22, Bothell, WA). Parametric images were created offline with quantification software (Bracco Suisse SA, Geneva, Switzerland), with map color scales adjusted such that abnormal hemodynamics were represented by the color red; the presence of any red color could then be used to differentiate benign from malignant tumors. Using these map color scales, low values of the perfusion parameter were coded in blue, and intermediate values of the perfusion parameter were coded in yellow. Additionally, for each individual color (red, blue, or yellow), a darker shade of that color indicated a higher intensity value. Our study found that the parametric mapping method was considerably more sensitive than standard ROI analysis for the detection of malignant tumors but was also less specific than standard ROI analysis. Parametric mapping allows for stricter cut-off criteria, as hemodynamics are visualized on a finer scale than in ROI analyses, and as such, parametric maps are a useful addition to TV-CEUS analysis by allowing ROIs to be limited to areas of highest malignant potential. PMID:26002525

  18. Mental Mapping: A Classroom Strategy

    ERIC Educational Resources Information Center

    Solomon, Les

    1978-01-01

    Examines potential uses of mental maps in the classroom by reviewing research efforts, providing an example of the differences between mental maps of two student groups, and suggesting how to use mental maps in the geography curriculum. Mental mapping (or cognitive mapping) refers to individuals' processes of collecting, storing, and retrieving…

  19. Quasi-periodic Intensity Disturbances in Polar Plumes

    NASA Astrophysics Data System (ADS)

    JIAO, F.; Xia, L.; Li, B.; Li, X.; Mou, C.; Fu, H.

    2013-12-01

    Polar coronal plumes are known to appear as hazy, ray-like structures. Quasi-periodic disturbances in polar plumes are often observed in extreme-ultraviolet (EUV) images and are identified as alternating slanted ridges in distance-time maps, with periods of 10-30 minutes. Usually, their propagating speeds range from 60 to 150 km/s. We analyse the intensity variation above polar coronal holes with data from three SDO/AIA bandpass channels (171Å, 193Å, 304Å) by using wavelet analysis and FFT, and produce intensity power distribution images. We find slender radial structures (which we call fine structures) in these images; their widths are only a few arcsec in the plume and inter-plume regions. We propose that these fine structures could depict the orientation of the magnetic field in the polar coronal hole. Similar to previous research, intensity disturbances along fine structures have periods of 15-20 min. Besides, the propagating speed of intensity disturbances along the fine structures ranges from more than a dozen kilometers per second just above the solar limb to 150 km/s around 140'' above the limb. It is easy to identify the 304Å jets in the power images. We find that the intensity variation of jets obtained from the distance-time maps of the 304Å line is often inversely correlated with that obtained from the 171Å line (which is formed at 0.8 MK) at the same position, which suggests that cool jets may be the driving source of intensity disturbances in hotter lines. This study may contribute to our understanding of the fine structures of plumes, the magnetic fields in the polar coronal hole, and the acceleration of the fast solar wind.
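
    The period-finding step, an FFT of an intensity time series extracted from a distance-time map, can be sketched with a synthetic signal. The numbers here are illustrative assumptions (a 17-minute oscillation sampled at a 12 s cadence, roughly AIA-like), not the paper's data:

```python
import numpy as np

np.random.seed(0)                    # reproducible synthetic noise
dt = 12.0                            # sampling cadence in seconds
t = np.arange(0.0, 7200.0, dt)      # two hours of samples
period_true = 17 * 60.0              # hypothetical 17-minute oscillation
signal = np.sin(2 * np.pi * t / period_true) + 0.3 * np.random.randn(t.size)

# Power spectrum of the mean-subtracted series; skip the zero-frequency bin.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
peak = freqs[1:][np.argmax(power[1:])]
print(round(1 / peak / 60, 1))       # recovered period in minutes (~17)
```

    In practice a wavelet transform is preferred when, as here, the disturbances are intermittent, since it localizes the periodic power in time as well as frequency.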

  20. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems exist when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation and the columns represent the verification. The diagonal elements represent the correct classifications; the remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
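
    The accuracy quantities described above can be read directly off an error matrix. A minimal sketch with a hypothetical 3-category matrix, using the row/column convention stated in the abstract (rows = interpretation, columns = verification):

```python
import numpy as np

# Hypothetical 3-category classification error matrix:
# rows = interpretation, columns = verification (reference data).
m = np.array([[45,  3,  2],
              [ 4, 50,  6],
              [ 1,  7, 32]])

overall = np.trace(m) / m.sum()                # proportion correctly classified
commission = 1 - np.diag(m) / m.sum(axis=1)    # errors of commission, per row
omission = 1 - np.diag(m) / m.sum(axis=0)      # errors of omission, per column

print(f"overall accuracy: {overall:.3f}")      # -> 0.847
print("commission:", np.round(commission, 3))
print("omission:  ", np.round(omission, 3))
```

    The diagonal proportions (1 minus these error rates) are the per-category values that would feed the second contingency table when comparing mapping variables.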

  1. Mapping the Orion Molecular Cloud Complex in Radio Frequencies

    NASA Astrophysics Data System (ADS)

    Castelaz, Michael W.; Lemly, C.

    2013-01-01

    The purpose of this research project was to create a large-scale intensity map of the Orion Molecular Cloud Complex at a radio frequency of 1420 MHz. A mapping frequency of 1420 MHz was chosen because neutral hydrogen, which is the primary component of the Orion Molecular Complex, naturally emits radio waves at this frequency. The radio spectral data for this project were gathered using a 4.6-m radio telescope whose spectrometer was tuned to 1420 MHz and whose beam width was 2.7 degrees. The map created for this project consisted of an eight-by-eight grid centered on M42 spanning 21.6 degrees per side. The grid consisted of 64 individual squares spanning 2.7 degrees per side (corresponding to the beam width of the telescope). Radio spectra were recorded for each of these individual squares at an IF gain of 18. Each spectrum consisted of intensity on an arbitrary scale from 0 to 10 plotted as a function of frequency ranging from -400 kHz to +100 kHz relative to 1420 MHz. The data from all 64 radio spectra were imported into Wolfram Alpha, which was used to fit Gaussian functions to the data. The peak intensity and the frequency at which this peak intensity occurs could then be extracted from the Gaussian functions. Other helpful quantities that could be calculated from the Gaussian functions include flux (integral of the Gaussian function over the frequency range), average intensity (flux integral divided by the frequency range), and the half maximum of intensity. Because all of the radio spectra were redshifted, the velocities of the hydrogen gas clouds of the Orion Molecular Cloud Complex could be calculated using the Doppler equation. The data extracted from the Gaussian functions were then imported into Mathcad to create 2D grayscale maps with right ascension (RA) on the x-axis, declination on the y-axis, and intensity (or flux, etc.) represented on a scale from black to white (with white representing the highest intensities).
These 2D maps were then imported
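
    The fit-and-Doppler step described above can be sketched as follows. This is not the authors' Wolfram/Mathcad workflow; it is a minimal SciPy version on a synthetic spectrum, and the -120 kHz peak offset and 40 kHz width are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KM_S = 299792.458      # speed of light in km/s
F_REST_HZ = 1420.406e6   # rest frequency of the neutral-hydrogen 21-cm line

def gaussian(f, amp, f0, sigma):
    """Gaussian line profile as a function of frequency offset f."""
    return amp * np.exp(-((f - f0) ** 2) / (2 * sigma ** 2))

# Synthetic spectrum: intensity vs. frequency offset (kHz) from 1420 MHz,
# spanning the -400 to +100 kHz window described in the abstract.
offsets_khz = np.linspace(-400, 100, 200)
spectrum = gaussian(offsets_khz, 8.0, -120.0, 40.0)

(amp, f0_khz, sigma), _ = curve_fit(gaussian, offsets_khz, spectrum,
                                    p0=[5.0, -100.0, 50.0])

flux = amp * abs(sigma) * np.sqrt(2 * np.pi)   # integral of the Gaussian
# Non-relativistic Doppler shift: v = -c * (f_obs - f_rest) / f_rest,
# so a peak at lower frequency (negative offset) is a receding cloud.
velocity_km_s = -C_KM_S * (f0_khz * 1e3) / F_REST_HZ

print(f"peak at {f0_khz:.1f} kHz -> v = {velocity_km_s:.1f} km/s")
```

    On real spectra one would also fit a baseline and propagate the covariance returned by curve_fit into velocity uncertainties.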

  2. State hydrologic unit maps

    USGS Publications Warehouse

    Seaber, P.R.; Kapinos, F.P.; Knapp, G.L.

    1984-01-01

    A set of maps depicting approved boundaries of, and numerical codes for, river-basin units of the United States has been developed by the U.S. Geological Survey. These 'State Hydrologic Unit Maps' are four-color maps that present information on drainage, culture, hydrography, and hydrologic boundaries and codes: (1) the 21 major water-resources regions and the 222 subregions designated by the U.S. Water Resources Council; (2) the 352 accounting units of the U.S. Geological Survey's National Water Data Network; and (3) the 2,149 cataloging units of the U.S. Geological Survey's Catalog of Information on Water Data. The maps are plotted on the Geological Survey State base-map series at a scale of 1:500,000 and, except for Alaska, depict hydrologic unit boundaries for all drainage basins greater than 700 square miles (1,813 square kilometers). A complete list of all the hydrologic units, along with their drainage areas, their names, and the names of the States or outlying areas in which they reside, is contained in the report. These maps and associated codes provide a standardized base for use by water-resources organizations in locating, storing, retrieving, and exchanging hydrologic data. The Hydrologic Unit Codes shown on the maps have been approved as a Federal Information Processing Standard for use by the Federal establishment. (USGS)

  3. Color on emergency mapping

    NASA Astrophysics Data System (ADS)

    Jiang, Lili; Qi, Qingwen; Zhang, An

    2007-06-01

    Emergencies such as typhoons, tsunamis, earthquakes, fires, floods, and epidemics are a constant presence in daily life, and they cost people their lives and belongings. How to handle emergencies and how to reduce the harm they cause are therefore matters of wide concern. Mapping an emergency accurately, before or after it occurs, is helpful both to emergency researchers and to the people living in the affected area. Before an emergency, a map can support prediction: when and where the event may happen, and where people can take refuge. After a disaster, maps make it easier to assess losses, investigate causes, and reduce future damage. The primary purpose of such mapping is to offer information to those concerned with the emergency and to the researchers who study it. Mapping allows viewers to get a spatial sense of the hazard and provides clues for studying the relationships among phenomena in an emergency. Color, as a basic element of the map, can simplify and clarify the phenomena being shown. Color also affects the general perceptibility of the map and elicits subjective reactions from the reader; that is, structure, readability, and the reader's psychological reactions are all affected by the use of color.

  4. Coastal mapping handbook

    USGS Publications Warehouse

    ,; ,; Ellis, Melvin Y.

    1978-01-01

    Passage of the Coastal Zone Management Act of 1972 focused attention on the Nation's coastal land and water areas. As plans for more effective management of the coastal zone evolved, it soon became apparent that improved maps and charts of these areas were needed. This handbook was prepared with the requirements of the entire coastal community in mind, giving greatest attention to the needs of coastal zone managers and planners at the State and local levels. Its principal objective is to provide general information and guidance; it is neither a textbook nor a technical manual, but rather a primer on coastal mapping. This handbook should help planners and managers of coastal programs to determine their mapping requirements, select the best maps and charts for their particular needs, and to deal effectively with personnel who gather data and prepare maps. The sections on "Sources of Assistance and Advice" and "Product and Data Sources" should be especially useful to all involved in mapping the coastal zone. Brief summaries of the mapping efforts of several State coastal zone management programs are included. "Future outlook" discusses anticipated progress and changes in mapping procedures and techniques. Illustrations are inserted, where appropriate, to illustrate the products and equipment discussed. Because of printing restrictions, the colors in map illustrations may vary from those in the original publication. The appendixes include substantial material which also should be of interest. In addition a glossary and an index are included to provide easy and quick access to the terms and concepts used in the text. For those interested in more technical detail than is provided in this handbook, the "Selected references" will be useful. Also, the publications of the professional societies listed in appendix 4 will provide technical information in detail.

  5. Topographic map symbols

    USGS Publications Warehouse

    ,

    2005-01-01

    Interpreting the colored lines, areas, and other symbols is the first step in using topographic maps. Features are shown as points, lines, or areas, depending on their size and extent. For example, individual houses may be shown as small black squares. For larger buildings, the actual shapes are mapped. In densely built-up areas, most individual buildings are omitted and an area tint is shown. On some maps, post offices, churches, city halls, and other landmark buildings are shown within the tinted area.

  6. Mapping the Baby Universe

    NASA Technical Reports Server (NTRS)

    Wanjek, Christopher

    2003-01-01

    In June, NASA plans to launch the Microwave Anisotropy Probe (MAP) to survey the ancient radiation in unprecedented detail. MAP will map slight temperature fluctuations within the microwave background, which vary by only 0.00001 C across a radiation field that now averages a chilly 2.73 C above absolute zero. The temperature differences today point back to density differences in the fiery baby universe, in which there was a little more matter here and a little less matter there. Areas of slightly enhanced density had stronger gravity than low-density areas. The high-density areas pulled back on the background radiation, making it appear slightly cooler in those directions.

  7. The CHUVA Lightning Mapping Campaign

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Bailey, Jeffrey C.; Carey, Lawrence D.; Hoeller, Hartmut; Albrecht, Rachel I.; Morales, Carlos; Pinto, Osmar; Saba, Marcelo M.; Naccarato, Kleber; Hembury, Nikki; Nag, Amitabh; Heckman, Stan; Holzworth, Robert H.; Rudlosky, Scott D.; Betz, Hans-Dieter; Said, Ryan; Rauenzahn, Kim

    2011-01-01

    The primary science objective for the CHUVA lightning mapping campaign is to combine measurements of total lightning activity, lightning channel mapping, and detailed information on the locations of cloud charge regions of thunderstorms with the planned observations of the CHUVA (Cloud processes of tHe main precipitation systems in Brazil: A contribUtion to cloud resolVing modeling and to the GPM (GlobAl Precipitation Measurement)) field campaign. The lightning campaign takes place during the CHUVA intensive observation period October-December 2011 in the vicinity of São Luiz do Paraitinga with Brazilian, US, and European government, university and industry participants. Total lightning measurements that can be provided by ground-based regional 2-D and 3-D total lightning mapping networks coincident with overpasses of the Tropical Rainfall Measuring Mission Lightning Imaging Sensor (LIS) and the SEVIRI (Spinning Enhanced Visible and Infrared Imager) on the Meteosat Second Generation satellite in geostationary earth orbit will be used to generate proxy data sets for the next generation US and European geostationary satellites. Proxy data, which play an important role in the pre-launch mission development and in user readiness preparation, are used to develop and validate algorithms so that they will be ready for operational use quickly following the planned launch of the GOES-R Geostationary Lightning Mapper (GLM) in 2015 and the Meteosat Third Generation Lightning Imager (LI) in 2017. To date there is no well-characterized total lightning data set coincident with the imagers. 
Therefore, to take the greatest advantage of this opportunity to collect detailed and comprehensive total lightning data sets, test and validate multi-sensor nowcasting applications for the monitoring, tracking, warning, and prediction of severe and high impact weather, and to advance our knowledge of thunderstorm physics, extensive measurements from lightning mapping networks will be collected

  8. Human cDNA mapping using fluorescence in situ hybridization

    SciTech Connect

    Korenberg, J.R.

    1993-03-04

    Genetic mapping is approached using the techniques of high resolution fluorescence in situ hybridization (FISH). This technology and the results of its application are designed to rapidly generate a whole-genome tool box of expressed sequences to speed the identification of human disease genes. The results of this study are intended to dovetail with and to link the results of existing technologies for creating backbone YAC and genetic maps. In the first eight months, this approach generated 60-80% of the expressed sequence map, with the remainder expected to be derived through more long-term, labor-intensive, regional chromosomal gene searches or sequencing. The laboratory has made significant progress in the set-up phase, in mapping fetal and adult brain and other cDNAs, in testing a model system for directly linking genetic and physical maps using FISH with small fragments, in setting up a database, and in establishing the validity and throughput of the system.

  9. Uniqueness of the momentum map

    NASA Astrophysics Data System (ADS)

    Esposito, Chiara; Nest, Ryszard

    2016-08-01

    We give a detailed discussion of the existence and uniqueness of the momentum map associated with Poisson Lie actions, which was defined by Lu. We introduce a weaker notion of momentum map, called the infinitesimal momentum map, which is defined on one-forms, and we analyze its integrability to Lu's momentum map. Finally, the uniqueness of Lu's momentum map is studied by explicitly describing the tangent space to the space of momentum maps.

  10. The distribution of modified mercalli intensity in the 18 April 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.

    2008-01-01

    We analyze Boatwright and Bundock's (2005) modified Mercalli intensity (MMI) map for the 18 April 1906 San Francisco earthquake, reviewing their interpretation of the MMI scale and testing their correlation of 1906 cemetery damage with MMI intensity. We consider in detail four areas of the intensity map where Boatwright and Bundock (2005) added significantly to the intensity descriptions compiled by Lawson (1908). We show that the distribution of off-fault damage in Sonoma County suggests that the rupture velocity approached the P-wave velocity along Tomales Bay. In contrast, the falloff of intensity with distance from the fault appears approximately constant throughout Mendocino County. The intensity in Humboldt County appears somewhat higher than the intensity in Mendocino County, suggesting that the rupture process at the northern end of the rupture was relatively energetic and that there was directivity consistent with a subsonic rupture velocity on the section of the fault south of Shelter Cove. Finally, we show that the intensity sites added in Santa Cruz County change the intensity distribution so that it decreases gradually along the southeastern section of rupture from Corralitos to San Juan Bautista and implies that the stress release on this section of rupture was relatively low.

  11. Contracting for intensive care services.

    PubMed

    Dorman, S

    1996-01-01

    Purchasers will increasingly expect clinical services in the NHS internal market to provide objective measures of their benefits and cost effectiveness in order to maintain or develop current funding levels. There is limited scientific evidence to demonstrate the clinical effectiveness of intensive care services in terms of mortality/morbidity. Intensive care is a high-cost service and studies of cost-effectiveness need to take account of case-mix variations, differences in admission and discharge policies, and other differences between units. Decisions over development or rationalisation of intensive care services should be based on proper outcome studies of well defined patient groups. The purchasing function itself requires development in order to support effective contracting. PMID:9873335

  12. Intensity of tennis match play

    PubMed Central

    Fernandez, J; Mendez‐Villanueva, A; Pluim, B M

    2006-01-01

    This review focuses on the characteristics of tennis players during match play and provides a greater insight into the energy demands of tennis. A tennis match often lasts longer than an hour and in some cases more than five hours. During a match there is a combination of periods of maximal or near maximal work and longer periods of moderate and low intensity activity. Match intensity varies considerably depending on the players' level, style, and sex. It is also influenced by factors such as court surface and ball type. This has important implications for the training of tennis players, which should resemble match intensity and include interval training with appropriate work to rest ratios. PMID:16632566

  13. DAM - detection and mapping

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Integrated set of manual procedures, computer programs, and graphic devices processes multispectral scanner data from orbiting Landsat into precisely registered and formatted maps of surface water and other resources at variety of scales, sheet formats, and tick intervals.

  14. Pictometry digital video mapping

    NASA Astrophysics Data System (ADS)

    Ciampa, John A.

    1995-09-01

    Pictometry is a proprietary digital imaging process which computationally maps each pixel of a digital land image to actual geographic coordinates, so that features in a mosaic of land images may be located and/or measured.

  15. Interpreting Weather Maps.

    ERIC Educational Resources Information Center

    Smith, P. Sean; Ford, Brent A.

    1994-01-01

    Presents a brief introduction to our atmosphere, a guide to reading and interpreting weather maps, and a set of activities to help teachers enhance student understanding of the Earth's atmosphere. (ZWH)

  16. Barrier Island Hazard Mapping.

    ERIC Educational Resources Information Center

    Pilkey, Orrin H.; Neal, William J.

    1980-01-01

    Describes efforts to evaluate and map the susceptibility of barrier islands to damage from storms, erosion, rising sea levels and other natural phenomena. Presented are criteria for assessing the safety and hazard potential of island developments. (WB)

  17. Obesity Prevalence Maps

    MedlinePlus


  18. Mapping Earth Science Concepts.

    ERIC Educational Resources Information Center

    McDuffie, Thomas E., Jr.; Van Dine, William E.

    1978-01-01

    Presents two experiments concerned with mapping skills. Directions are given for calculating the circumference of the earth and for developing a model of the solar system using familiar territory as a frame of reference. (MA)

  19. Dating the Vinland Map

    ScienceCinema

    None

    2016-07-12

    Scientists from Brookhaven National Laboratory, the University of Arizona, and the Smithsonian Institution used carbon-dating technology to determine the age of a controversial parchment that might be the first-ever map of North America.

  20. Irrigation on Topographic Maps.

    ERIC Educational Resources Information Center

    Raitz, Karl B.

    1979-01-01

    Describes how study of irrigation practices on topographic maps can help students in introductory high school and college geography courses understand man and land relationships to geography. (Author/DB)

  1. enceladus_stress_map

    NASA Video Gallery

    This is a map of the changing stress on the surface of Enceladus' icy crust from the wobble and gravitational tides. Blue lines show the direction of forces pulling the crust apart, and red lines s...

  2. Pitfalls in homozygosity mapping.

    PubMed

    Miano, M G; Jacobson, S G; Carothers, A; Hanson, I; Teague, P; Lovell, J; Cideciyan, A V; Haider, N; Stone, E M; Sheffield, V C; Wright, A F

    2000-11-01

    There is much interest in use of identity-by-descent (IBD) methods to map genes, both in Mendelian and in complex disorders. Homozygosity mapping provides a rapid means of mapping autosomal recessive genes in consanguineous families by identifying chromosomal regions that show homozygous IBD segments in pooled samples. In this report, we point out some potential pitfalls that arose during the course of homozygosity mapping of the enhanced S-cone syndrome gene, resulting from (1) unexpected allelic heterogeneity, so that the region containing the disease locus was missed as a result of pooling; (2) identification of a homozygous IBD region unrelated to the disease locus; and (3) the potential for inflation of LOD scores as a result of underestimation of the extent of inbreeding, which Broman and Weber suggest may be quite common.
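
    The core step of homozygosity mapping, scanning genotypes for long stretches of consecutive homozygous markers, can be sketched as a toy run-finder. The two-letter genotype encoding and the run-length threshold are illustrative assumptions, not the authors' method:

```python
def homozygous_runs(genotypes, min_len=5):
    """Return (start, end) index pairs of runs of at least min_len
    consecutive homozygous markers; genotypes are strings like
    'AA', 'AB', 'BB'."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        hom = len(set(g)) == 1                 # both alleles identical
        if hom and start is None:
            start = i                          # a run begins
        elif not hom and start is not None:
            if i - start >= min_len:
                runs.append((start, i))        # a long-enough run ends
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))   # run reaching the last marker
    return runs

markers = ['AB', 'AA', 'AA', 'AA', 'AA', 'AA', 'AB', 'BB', 'BB', 'AB']
print(homozygous_runs(markers, min_len=4))     # -> [(1, 6)]
```

    The pitfalls in the abstract are precisely about interpreting such runs: a run may be homozygous by chance or unrelated to the disease locus, and pooling can hide a true run under allelic heterogeneity.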

  3. MAPPING IN MICRONESIA.

    USGS Publications Warehouse

    Olsen, Randle W.; Swinnerton, J.R.

    1984-01-01

    The U. S. Geological Survey has recently completed a series of new topographic maps of Micronesia in cooperation with the Trust Territory of the Pacific Islands, the Federal agency administering the islands. Monocolor 1:10,000-scale manuscripts were compiled, from which 1:25,000-scale metric quadrangles were derived with symbology consistent with USGS quadrangle mapping. The publication of these new maps coincides with the impending political changes resulting from self-determination referendums held in Micronesia. Local sources have helped considerably with field logistics and resolution of geographic name controversies. Technical aspects of this project included development of tropical feature symbology, location of cadastral subdivisions and associated boundaries and mapping of many outlying coral reefs.

  4. How to Map Noise.

    PubMed

    Hinton, John

    2002-01-01

    Noise mapping is a method of presenting complex noise information in a clear and simple way either on a physical map or in a database. This mapping information can be either calculated or measured using a variety of techniques and methods. Furthermore, the results of such exercises can be presented in many different ways and used for a number of different purposes. This paper attempts to examine these issues in the light of the "mapping requirements" outlined in the recently proposed Directive of the European Parliament and of the Council, relating to the Assessment and Management of Environmental Noise (Comm (2000) 468 final). This proposed Directive was laid before the Parliament and Council in the autumn of 2000. The First Reading of the proposal was successfully negotiated just before Christmas 2000. The Second Reading is likely to commence shortly.

  5. Reading angles in maps.

    PubMed

    Izard, Véronique; O'Donnell, Evan; Spelke, Elizabeth S

    2014-01-01

    Preschool children can navigate by simple geometric maps of the environment, but the nature of the geometric relations they use in map reading remains unclear. Here, children were tested specifically on their sensitivity to angle. Forty-eight children (age 47:15-53:30 months) were presented with fragments of geometric maps, in which angle sections appeared without any relevant length or distance information. Children were able to read these map fragments and compare two-dimensional to three-dimensional angles. However, this ability appeared both variable and fragile among the youngest children of the sample. These findings suggest that 4-year-old children begin to form an abstract concept of angle that applies both to two-dimensional and three-dimensional displays and that serves to interpret novel spatial symbols. PMID:23647223

  6. DMR 'Map of the Early Universe.'

    NASA Technical Reports Server (NTRS)

    2002-01-01

    DMR 'Map of the Early Universe.' This false-color image shows tiny variations in the intensity of the cosmic microwave background measured in four years of observations by the Differential Microwave Radiometers on NASA's Cosmic Background Explorer (COBE). The cosmic microwave background is widely believed to be a remnant of the Big Bang; the blue and red spots correspond to regions of greater or lesser density in the early Universe. These 'fossilized' relics record the distribution of matter and energy in the early Universe before the matter became organized into stars and galaxies. While the initial discovery of variations in the intensity of the CMB (made by COBE in 1992) was based on a mathematical examination of the data, this picture of the sky from the full four-year mission gives an accurate visual impression of the data. The features traced in this map stretch across the visible Universe: the largest features seen by optical telescopes, such as the 'Great Wall' of galaxies, would fit neatly within the smallest feature in this map. (See Bennett et al. 1996, ApJ, 464, L1 and references therein for details.)

  7. Project of Near-Real-Time Generation of ShakeMaps and a New Hazard Map in Austria

    NASA Astrophysics Data System (ADS)

    Jia, Yan; Weginger, Stefan; Horn, Nikolaus; Hausmann, Helmut; Lenhardt, Wolfgang

    2016-04-01

    Target-orientated prevention and effective crisis management can reduce or avoid damage and save lives in the case of a strong earthquake. To achieve this goal, a project for automatically generated ShakeMaps (maps of ground motion and shaking intensity) and for updating the Austrian hazard map was started at ZAMG (Zentralanstalt für Meteorologie und Geodynamik) in 2015. The first goal of the project is the near-real-time generation of ShakeMaps following strong earthquakes in Austria, to provide rapid, accurate and official information in support of governmental crisis management. Using methods and software newly developed by SHARE (Seismic Hazard Harmonization in Europe) and GEM (Global Earthquake Model), which allow a transnational analysis at the European level, a new generation of Austrian hazard maps will ultimately be calculated. This presentation will give further information and the current status of the project.

  8. Irreversible quantum baker map.

    PubMed

    Łoziński, Artur; Pakoński, Prot; Zyczkowski, Karol

    2002-12-01

    We propose a generalization of the model of classical baker map on the torus, in which the images of two parts of the phase space do overlap. This transformation is irreversible and cannot be quantized by means of a unitary Floquet operator. A corresponding quantum system is constructed as a completely positive map acting in the space of density matrices. We investigate spectral properties of this superoperator and their link with the increase of the entropy of initially pure states.
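
    For background, the classical (reversible) baker map that the abstract generalizes can be written in a few lines; the irreversible, overlapping variant of the paper is not reproduced here:

```python
def baker_map(x, y):
    """One step of the classical baker map on the unit square:
    stretch by 2 in x, compress by 2 in y, cut, and stack."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

# Iterate an initial point; the area-preserving orbit stays in the square.
pt = (0.3, 0.7)
for _ in range(10):
    pt = baker_map(*pt)
    assert 0 <= pt[0] < 1 and 0 <= pt[1] < 1
print(pt)
```

    In the paper's generalization the images of the two halves overlap, so this point map no longer has an inverse and the quantum version must act as a completely positive map on density matrices rather than as a unitary Floquet operator.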

  9. Hydrologic unit maps

    USGS Publications Warehouse

    Seaber, Paul R.; Kapinos, F. Paul; Knapp, George L.

    1987-01-01

    A set of maps depicting approved boundaries of, and numerical codes for, river-basin units of the United States has been developed by the U.S. Geological Survey. These 'Hydrologic Unit Maps' are four-color maps that present information on drainage, culture, hydrography, and hydrologic boundaries and codes of (1) the 21 major water-resources regions and the 222 subregions designated by the U.S. Water Resources Council, (2) the 352 accounting units of the U.S. Geological Survey's National Water Data Network, and (3) the 2,149 cataloging units of the U.S. Geological Survey's 'Catalog of Information on Water Data.' The maps are plotted on the Geological Survey State base-map series at a scale of 1:500,000 and, except for Alaska, depict hydrologic unit boundaries for all drainage basins greater than 700 square miles (1,813 square kilometers). A complete list of all the hydrologic units, along with their drainage areas, their names, and the names of the States or outlying areas in which they reside, is contained in the report. These maps and associated codes provide a standardized base for use by water-resources organizations in locating, storing, retrieving, and exchanging hydrologic data; in indexing and inventorying hydrologic data and information; in cataloging water-data acquisition activities; and in a variety of other applications. Because the maps have undergone extensive review by all principal Federal, regional, and State water-resource agencies, they are widely accepted for use in planning and describing water-use and related land-use activities, and in geographically organizing hydrologic data. Examples of these uses are given in the report. The hydrologic unit codes shown on the maps have been approved as a Federal Information Processing Standard for use by the Federal establishment.
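
    The four nested levels described above form an 8-digit code in which each level prefixes the next (2 digits for the region, 4 for the subregion, 6 for the accounting unit, 8 for the cataloging unit). A short sketch; the sample code '02070010' is used purely as an illustrative cataloging unit:

```python
def parse_huc(code):
    """Split an 8-digit hydrologic unit code into its four nested
    levels: region (2 digits), subregion (4), accounting unit (6),
    and cataloging unit (8)."""
    if len(code) != 8 or not code.isdigit():
        raise ValueError("expected an 8-digit hydrologic unit code")
    return {
        "region": code[:2],
        "subregion": code[:4],
        "accounting_unit": code[:6],
        "cataloging_unit": code,
    }

# Each level is a prefix of the one below it.
for level, value in parse_huc("02070010").items():
    print(f"{level}: {value}")
```

    Because the levels nest by prefix, grouping or indexing hydrologic data at any level reduces to a string-prefix comparison.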

  10. Mapping the human genome

    SciTech Connect

    Annas, G.C.; Elias, S.

    1992-01-01

    This article is a review of the book Mapping the Human Genome: Using Law and Ethics as Guides, edited by George C. Annas and Sherman Elias. The book is a collection of essays on the subject of using ethics and law as guides for human gene mapping. It addresses specific issues such as problems related to eugenics, patents, and insurance, as well as broad issues such as societal definitions of normality.

  11. Wind Resource Maps (Postcard)

    SciTech Connect

    Not Available

    2011-07-01

    The U.S. Department of Energy's Wind Powering America initiative provides high-resolution wind maps and estimates of the wind resource potential that would be possible from development of the available windy land areas after excluding areas unlikely to be developed. This postcard is a marketing piece that stakeholders can provide to interested parties; it will guide them to Wind Powering America's online wind energy resource maps.

  12. Flame Speed and Spark Intensity

    NASA Technical Reports Server (NTRS)

    Randolph, D W; Silsbee, F B

    1925-01-01

    This report describes a series of experiments undertaken to determine whether or not the electrical characteristics of the igniting spark have any effect on the rapidity of flame spread in the explosive gas mixtures it ignites. The results show very clearly that no such effect exists. The flame velocity in carbon-monoxide oxygen, acetylene oxygen, and gasoline-air mixtures was found to be unaffected by changes in spark intensity, from sparks which were barely able to ignite the mixture up to intense condenser discharge sparks having fifty times this energy. (author)

  13. Underwater measurements of muon intensity

    NASA Technical Reports Server (NTRS)

    Fedorov, V. M.; Pustovetov, V. P.; Trubkin, Y. A.; Kirilenkov, A. V.

    1985-01-01

    Experimental measurements of cosmic ray muon intensity deep underwater, aimed at determining a muon absorption curve, are of considerable interest, as they allow the muon energy spectrum at sea level to be reproduced independently. Comparing the muon absorption curve in sea water with that in rock makes it possible to determine the muon energy losses caused by nuclear interactions. The data available on muon absorption in water and in rock are not equivalent: underground measurements are numerous and have been carried out down to depths of approximately 15 km w.e., whereas underwater muon intensity has been measured only twice, and only down to depths of approximately 3 km.

  14. Geologic Mapping of Mars

    NASA Astrophysics Data System (ADS)

    Price, Katherine H.

    1998-05-01

    Planetary geologic mapping involves integrating a terrestrial-based understanding of surface and subsurface processes and mapping principles to investigate scientific questions. Mars mappers must keep in mind that physical processes, such as wind and flowing water on Mars, are or were different from terrestrial processes because the planetary atmospheres have evolved differently over time. Geologic mapping of Mars has traditionally been done by hand using overlays on photomosaics of Viking Orbiter and Mariner images. Photoclinometry and shadow measurements have been used to determine elevations, and the distribution and size of craters have been used to determine the relative ages of surfaces: more densely cratered surfaces are older. Some mappers are now using computer software (ranging from Photoshop to ArcInfo) to facilitate mapping, though their applications must be carefully executed so that registration of the images remains true. Images and some mapping results are now available on the internet, and new data from recent missions to Mars (Pathfinder and Surveyor) will offer clarifying information to mapping efforts. This paper consists chiefly of pictures and diagrams.
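    The crater-count dating principle mentioned above can be sketched in a few lines: normalize the number of counted craters by the mapped area and rank surfaces by density. This is a minimal illustration only; the unit names, counts, and areas below are hypothetical, not actual Mars mapping data.

    ```python
    # Hedged sketch of crater-count relative dating: more densely
    # cratered surfaces are relatively older. All values are invented.

    def crater_density(n_craters, area_km2):
        """Cumulative crater density, in craters per 10^6 km^2."""
        return n_craters / area_km2 * 1e6

    # (surface unit, craters larger than some cutoff diameter, mapped area in km^2)
    surfaces = [
        ("unit A", 120, 4.0e5),
        ("unit B", 15, 2.0e5),
        ("unit C", 300, 3.0e5),
    ]

    # Sort from most densely cratered (oldest) to least (youngest).
    ranked = sorted(surfaces, key=lambda s: crater_density(s[1], s[2]), reverse=True)
    oldest = ranked[0][0]    # "unit C" (1000 craters per 10^6 km^2)
    youngest = ranked[-1][0]  # "unit B" (75 craters per 10^6 km^2)
    ```

    Real chronologies bin craters by diameter and fit production functions, but the relative ordering step reduces to exactly this comparison.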

  15. Creating Heliophysics Concept Maps

    NASA Astrophysics Data System (ADS)

    Ali, N. A.; Peticolas, L. M.; Paglierani, R.; Mendez, B. J.

    2011-12-01

    The Center for Science Education at University of California Berkeley's Space Sciences Laboratory is creating concept maps for Heliophysics and would like to get input from scientists. The purpose of this effort is to identify key concepts related to Heliophysics and map their progression to show how students' understanding of Heliophysics might develop from Kindergarten through higher education. These maps are meant to tie into the AAAS Project 2061 Benchmarks for Scientific Literacy and the National Science Education Standards. It is hoped that the results of this effort will be useful for curriculum designers developing Heliophysics-related curriculum materials and classroom teachers using Heliophysics materials. The need for concept maps was identified as a result of product analysis undertaken by the NASA Heliophysics Forum Team. The NASA Science Education and Public Outreach Forums have as two of their goals to improve the characterization of the contents of the Science Mission Directorate Education and Public Outreach (SMD E/PO) portfolio (Objective 2.1) and to assist SMD in addressing gaps in the portfolio of SMD E/PO products and project activities (Objective 2.2). An important part of this effort is receiving feedback from solar scientists regarding the inclusion of key concepts and their progression in the maps. This session will introduce the draft concept maps and elicit feedback from scientists.

  16. Global Water Maps

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.; Salas, F.; Teng, W. L.

    2013-12-01

    A global water map is a coverage of the earth that describes the state of water circulation in a phase of the hydrologic cycle. This information can be published as a map showing the state of the water variable at a particular point in time, or charted as a time series showing the temporal variation of that variable at a point in space. Such maps can be created through the NASA Land Data Assimilation System (LDAS) for precipitation, evaporation, soil moisture, and other parameters describing the vertical exchange of water between the land and atmosphere, through a combination of observations and simulation modeling. Point observations of water variables such as precipitation and streamflow are carried out by local hydrologic measurement agencies associated with a particular area. These point observations are now being published as web services in the WaterML language and federated using the Global Earth Observing System of Systems to enable the publication of water observations maps for these variables. By combining water maps derived from LDAS with those from federated point observations, a deeper understanding of global water conditions and movement can be created. This information should be described in a Hydrologic Data Book that specifies the information content of each of these map layers so that they can be appropriately used and combined.

  17. Interactive Metro Map Editing.

    PubMed

    Wang, Yu-Shuen; Peng, Wan-Yu

    2016-02-01

    Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.
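    The octilinear step described above, rotating each edge to extend vertically, horizontally, or diagonally, amounts to snapping each edge's direction to the nearest multiple of 45 degrees. The sketch below illustrates that single step under simplifying assumptions (rotation about the edge's start point, length preserved); it is not the authors' optimization system, and the function name is ours.

    ```python
    import math

    def snap_octilinear(x1, y1, x2, y2):
        """Rotate the edge (x1,y1)-(x2,y2) about its start point so it
        points along the nearest of the eight octilinear directions
        (multiples of 45 degrees), preserving its length."""
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        angle = math.atan2(dy, dx)
        # Round the direction to the nearest multiple of 45 degrees.
        step = math.pi / 4
        snapped = round(angle / step) * step
        return x1 + length * math.cos(snapped), y1 + length * math.sin(snapped)

    # An edge at roughly 40 degrees snaps to the 45-degree diagonal.
    x, y = snap_octilinear(0.0, 0.0, 1.0, 0.84)
    ```

    A full layout engine would jointly optimize all edges against the station positions users pin down, rather than snapping edges independently as done here.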

  18. Mars synthetic topographic mapping

    USGS Publications Warehouse

    Wu, S.S.C.

    1978-01-01

    Topographic contour maps of Mars are compiled by the synthesis of data acquired from various scientific experiments of the Mariner 9 mission, including S-band radio occultation, the ultraviolet spectrometer (UVS), the infrared radiometer (IRR), the infrared interferometer spectrometer (IRIS), and television imagery, as well as Earth-based radar information collected at Goldstone, Haystack, and Arecibo Observatories. The entire planet is mapped at a scale of 1:25,000,000 using Mercator, Lambert, and polar stereographic map projections. For the computation of map projections, a biaxial spheroid figure is adopted. The semimajor and semiminor axes are 3393.4 and 3375.7 km, respectively, with a polar flattening of 0.0052. For the computation of elevations, a topographic datum is defined by a gravity field described in terms of spherical harmonics of fourth order and fourth degree combined with a 6.1-mbar occultation pressure surface. This areoid can be approximated by a triaxial ellipsoid with semimajor axes of A = 3394.6 km and B = 3393.3 km and a semiminor axis of C = 3376.3 km. The semimajor axis A intersects the Martian surface at longitude 105° W. The dynamic flattening of Mars is 0.00525. The contour interval of the maps is 1 km. For some prominent features where overlapping pictures from Mariner 9 are available, local contour maps at relatively larger scales were also compiled by photogrammetric methods on stereo plotters. © 1978.
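    The triaxial-ellipsoid approximation of the areoid quoted above can be evaluated directly. For a triaxial ellipsoid with semiaxes A, B, C, where A lies in the equatorial plane at longitude 105° W, the radius in the direction of areocentric latitude φ and longitude λ satisfies 1/r² = (cos φ cos Δλ / A)² + (cos φ sin Δλ / B)² + (sin φ / C)², with Δλ measured from the A axis. The sketch below is a minimal illustration using the semiaxes from the abstract; the function name is ours.

    ```python
    import math

    # Semiaxes of the triaxial areoid from the abstract (km); the
    # semimajor axis A intersects the surface at longitude 105 deg W.
    A, B, C = 3394.6, 3393.3, 3376.3
    LON_A = -105.0  # longitude of the A axis, degrees (west negative)

    def areoid_radius(lat_deg, lon_deg):
        """Radius (km) of the triaxial ellipsoid in the direction of
        the given areocentric latitude and longitude."""
        phi = math.radians(lat_deg)
        dlam = math.radians(lon_deg - LON_A)
        inv_r2 = ((math.cos(phi) * math.cos(dlam) / A) ** 2
                  + (math.cos(phi) * math.sin(dlam) / B) ** 2
                  + (math.sin(phi) / C) ** 2)
        return 1.0 / math.sqrt(inv_r2)

    # Along the A axis the radius equals A; toward the poles it tends to C.
    ```

    Elevations on the maps are then heights of the measured surface above this reference areoid rather than above a sphere.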

  19. The National Map - Orthoimagery

    USGS Publications Warehouse

    Mauck, James; Brown, Kim; Carswell, William J.

    2009-01-01

    Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to:

      * Measure distance
      * Calculate areas
      * Determine shapes of features
      * Calculate directions
      * Determine accurate coordinates
      * Determine land cover and use
      * Perform change detection
      * Update maps

    The standard digital orthoimage is a 1-m or finer resolution, natural color or color infra-red product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.
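    The terrain-relief displacement that orthorectification removes has a classical closed form for an idealized vertical aerial photograph: a point at radial distance r from the nadir, imaging terrain at height h above the datum, is displaced outward by d = r·h/H, where H is the flying height above the datum. The sketch below illustrates just this textbook relation, not the actual production workflow; variable names are ours.

    ```python
    def relief_displacement(r_image, terrain_height, flying_height):
        """Radial displacement of an image point caused by terrain
        relief, for an idealized vertical aerial photograph:
        d = r * h / H (same units as r_image)."""
        return r_image * terrain_height / flying_height

    # A point 80 mm from nadir, imaging terrain 500 m above the datum,
    # photographed from 5000 m above the datum: displaced 8 mm outward.
    d = relief_displacement(80.0, 500.0, 5000.0)  # mm
    ```

    Production orthorectification inverts this kind of displacement per pixel using a digital elevation model and the full sensor geometry, but the formula shows why displacement grows both with relief and with distance from the image center.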

  20. Global geological map of Venus

    NASA Astrophysics Data System (ADS)

    Ivanov, Mikhail A.; Head, James W.

    2011-10-01

    determined with the available data sets) involved intense deformation and building of regions of thicker crust (tessera). This was followed by the Guineverian Period. Distributed deformed plains, mountain belts, and regional interconnected groove belts characterize the first part of this period, and the vast majority of coronae began to form during this time. The second part of the Guineverian Period involved global emplacement of vast and mildly deformed plains of volcanic origin. A period of global wrinkle ridge formation largely followed the emplacement of these plains. The third phase (Atlian Period) involved the formation of prominent rift zones and fields of lava flows unmodified by wrinkle ridges that are often associated with large shield volcanoes and, in places, with earlier-formed coronae. Atlian volcanism may continue to the present. About 70% of the exposed surface of Venus was resurfaced during the Guineverian Period and only about 16% during the Atlian Period. Estimates of model absolute ages suggest that the Atlian Period was about twice as long as the Guineverian and, thus, characterized by significantly reduced rates of volcanism and tectonism. The three major phases of activity documented in the global stratigraphy and geological map, and their interpreted temporal relations, provide a basis for assessing the geodynamical processes operating earlier in Venus history that led to the preserved record.